Azure DevOps Pipelines: YAML Guide

Struggling to get your CI/CD pipelines right? It can feel like you’re lost in a maze of scripts and settings. But what if there was a way to bring clarity and order to your deployments? There is. Azure DevOps Pipelines, with its powerful YAML configuration, lets you define your workflows as code. This approach brings numerous benefits to the table. Let’s explore how you can use YAML to craft robust and reliable pipelines, and make your deployment process a breeze.

Azure DevOps Pipelines: The YAML Way

Azure DevOps Pipelines offers you a way to automate your software builds, tests, and deployments. It’s a core component of Azure DevOps, a suite of tools designed to help teams collaborate and deliver software. You can define these pipelines in two ways: with a classic visual editor, or with YAML. While the classic editor is great for getting started, YAML provides a more scalable and maintainable solution, especially for complex workflows. Think of YAML as a way to describe your pipeline’s logic in a text file, which gives you the power to version control it, share it, and even replicate it across projects.

Why YAML?

YAML, which stands for “YAML Ain’t Markup Language,” is a human-readable data serialization format. It’s perfect for defining pipelines because it’s clear, concise, and easy to understand. With YAML, you gain some significant advantages over traditional GUI-based tools:

  • Version Control: Because it’s code, you can store your pipeline definitions in a version control system like Git. This makes it easy to track changes, collaborate with teammates, and revert to previous versions if needed.
  • Reproducibility: With YAML, you have a blueprint of your pipeline. This makes it easy to replicate the pipeline in different environments or projects, ensuring consistency across the board.
  • Automation: Since the process is codified, changes to the pipelines themselves can be automated: as your workflow needs evolve, you can script those changes instead of clicking through a UI.
  • Maintainability: YAML files are easy to read and modify. This simplifies the process of managing and updating your pipelines as your project evolves.
  • Flexibility: YAML allows you to describe your pipelines in a fine-grained way. This means you can create pipelines that do exactly what you need them to do, and no more.
  • Code Reuse: By breaking down a pipeline into reusable components, you can save yourself from repeating the same tasks for different applications, thereby saving time and effort.

Understanding the Structure of a YAML Pipeline

Before you dive into writing a YAML pipeline, it’s important to understand its basic structure. A YAML pipeline in Azure DevOps is made up of several key elements:

  • Triggers: Triggers are what cause your pipeline to run. They can be anything from code commits to scheduled events.
  • Variables: Variables allow you to store data that can be reused throughout your pipeline. This can be credentials, paths, or any other configuration value you need.
  • Stages: Stages are a way to organize your pipeline into logical phases, like build, test, and deploy.
  • Jobs: Jobs are the execution units of a stage. They define the steps and environment in which you want the tasks to run.
  • Tasks: Tasks are pre-built or custom actions that you want your job to perform. These can be anything from compiling code to deploying applications to Azure.
  • Steps: Steps are the ordered actions executed within a job; each step is either a script or a task.
  • Pools: Pools specify the agent on which the jobs will run. The agent is the machine that executes the steps.

Let’s take a look at these components in more detail.

Triggers: Starting Your Pipeline

Triggers are what kick off your pipeline. You define triggers to start the pipeline when certain events happen in your repository or on a schedule. Triggers help automate your deployment process. Without triggers, your pipelines would have to be manually started every time, which defeats much of the purpose of automation.

Types of Triggers

  • Branch-Based Triggers: These triggers start the pipeline when a code change is pushed to a specific branch of your repository. This is useful if you have separate branches for development, testing, and production: the pipeline could deploy changes from the development branch to the development environment, and from the production branch to the production environment.

    trigger:
      branches:
        include:
          - main
          - develop

    This tells Azure DevOps to run the pipeline whenever changes are pushed to the main or develop branches.

  • Path-Based Triggers: Path-based triggers are useful when you need the pipeline to start only when changes happen in a specific directory. For example, a pipeline that deploys a website may only need to run when something in the website’s folder changes.

    trigger:
      paths:
        include:
          - webapp/*

    This starts the pipeline when changes are detected in the webapp directory, or any of its subdirectories.

  • Scheduled Triggers: Scheduled triggers start your pipeline on a set schedule. This can be useful for running nightly builds, database backups, or anything else that needs to happen at regular intervals.

    schedules:
      - cron: "0 0 * * *"
        displayName: Daily at Midnight
        branches:
          include:
            - main

    This pipeline will run every day at midnight on the main branch. The cron expression uses standard cron syntax to schedule tasks.

  • Pull Request Triggers: Pull request triggers are started when a pull request is created or updated, letting you make sure the changes work before they’re actually merged into the target branch.

    pr:
      branches:
        include:
          - main

    This trigger makes the pipeline run for all pull requests targeting the main branch.
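
These filters can also be combined in a single trigger block. As a sketch (the branch and path names here are just placeholders), the following batches rapid pushes into fewer runs, restricts CI to the main branch, and skips documentation-only changes:

trigger:
  batch: true
  branches:
    include:
      - main
  paths:
    exclude:
      - docs/*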

Variables: Adding Dynamic Values

Variables let you define values that can be used throughout your pipeline. This makes your pipelines more flexible and reusable. You can specify variables at the pipeline level, stage level, job level, or even step level. They can be used to store sensitive information, such as secrets or credentials.

Defining Variables

You can define variables within the pipeline YAML file, or in the pipeline settings using the Azure DevOps UI.

  • Pipeline Variables: These variables are available throughout the entire pipeline. They’re set at the top level of the YAML file.

    variables:
      buildConfiguration: 'Release'
      environmentName: 'Production'

    In this example, we’ve defined the variables buildConfiguration and environmentName, which can be used in all stages, jobs, and steps.

  • Stage Variables: Variables can also be defined within individual stages, making them accessible only within that stage.

    stages:
      - stage: Build
        variables:
          buildPath: 'src/build'

    Here, the buildPath variable will only exist within the Build stage.

  • Job Variables: Variables defined in jobs will only be available in the job that defined them.

    jobs:
      - job: BuildJob
        variables:
          buildArgs: '--verbose'

    The variable buildArgs will only be available in the BuildJob.

Using Variables

Variables are referenced using the $(variableName) syntax:

jobs:
  - job: DeployJob
    steps:
      - task: AzureWebApp@1
        inputs:
          appName: 'myWebApp-$(environmentName)'

In this task, the app name resolves to the current value of the environmentName variable; for example, myWebApp-Production when environmentName is set to “Production”.

  • Secret Variables: For sensitive data, you can define secret variables that aren’t shown in the pipeline logs. These are set using the Azure DevOps UI.

    variables:
      mySecret: $(mySecretVariable)

    In this example, mySecret is using a secret variable mySecretVariable that you’ll have to configure via the UI, and the secret value will not appear in the pipeline output logs.
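
One caveat worth knowing: secret variables are not automatically exposed to scripts as environment variables; you have to map them explicitly on the step. A minimal sketch (the script and variable names are illustrative):

steps:
  - script: ./deploy.sh
    env:
      DEPLOY_TOKEN: $(mySecretVariable)

Inside the script, the value is available as the DEPLOY_TOKEN environment variable, and Azure DevOps masks it if it appears in the logs.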

Stages: Structuring Your Workflow

Stages are used to logically organize your pipeline into distinct phases. Common stages include Build, Test, and Deploy. Stages let you track the overall progress of your pipeline and help keep its structure clean.

stages:
  - stage: Build
    displayName: Build Stage
    jobs:
    - job: BuildJob
      displayName: Build the application
      steps:
        - script: dotnet build
          displayName: 'Build the application'
  - stage: Deploy
    displayName: Deploy Stage
    dependsOn: Build
    jobs:
      - job: DeployJob
        displayName: Deploy the application
        steps:
          - task: AzureWebApp@1
            displayName: 'Deploy to Azure'
            inputs:
              azureSubscription: 'myAzureSubscription'
              appName: 'myWebApp'

  • The stages keyword defines a list of stages.
  • The stage keyword defines the name of the stage.
  • The displayName keyword specifies the name that will show up in the Azure DevOps UI for that stage.
  • The dependsOn keyword specifies that a stage is dependent on another, making the stages run sequentially in the order specified.
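
Stages can also be gated with conditions, not just ordered with dependsOn. As a sketch, a deployment stage that only runs when the preceding stage succeeded and the build came from the main branch might look like this:

stages:
  - stage: Deploy
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployJob
        steps:
          - script: echo "Deploying"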

Jobs: Defining Execution Units

Jobs are the units of work that run within a stage. Jobs define the environment where tasks will run. You can have multiple jobs within a single stage, allowing you to perform several tasks in parallel or sequentially.

jobs:
  - job: BuildJob
    displayName: Build the application
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - script: dotnet build
        displayName: 'Build the application'
  - job: TestJob
    displayName: Run tests
    dependsOn: BuildJob
    pool:
      vmImage: 'windows-latest'
    steps:
      - script: dotnet test
        displayName: 'Run tests'

  • The jobs keyword defines the jobs to run within the scope it’s defined.
  • The job keyword defines the name of the job.
  • The pool keyword defines the agent where the job will run.
  • The vmImage keyword defines a pre-built image that the agent will use.
  • The dependsOn keyword can specify that a job is dependent on another job, forcing the jobs to run sequentially in the order specified.
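
To run the same steps across several environments in parallel, a job can use a matrix strategy, which expands one job definition into one run per configuration. A sketch that runs the tests on both Linux and Windows agents:

jobs:
  - job: TestMatrix
    strategy:
      matrix:
        linux:
          imageName: 'ubuntu-latest'
        windows:
          imageName: 'windows-latest'
    pool:
      vmImage: $(imageName)
    steps:
      - script: dotnet test
        displayName: 'Run tests on $(imageName)'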

Tasks: Performing Actions

Tasks are pre-built or custom actions you want your pipeline to perform. These can range from simple scripts to complex deployments. Azure DevOps ships a wide variety of built-in tasks for common scenarios, but you can also use custom tasks.

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build the application'
    inputs:
      command: 'build'
      projects: '**/*.csproj'
      arguments: '--configuration $(buildConfiguration)'
  - task: CopyFiles@2
    displayName: 'Copy files to artifact directory'
    inputs:
      SourceFolder: '$(System.DefaultWorkingDirectory)'
      Contents: |
        **/*.dll
        **/*.pdb
      TargetFolder: '$(Build.ArtifactStagingDirectory)'
  - task: PublishBuildArtifacts@1
    displayName: 'Publish artifacts'
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'myBuildArtifacts'

  • The steps keyword defines a list of tasks to be executed.
  • The task keyword specifies the name of the task.
  • The displayName keyword displays the name of the task in the pipeline UI.
  • The inputs keyword specifies the input parameters for the task.

Common Built-in Tasks

Here are some common built-in tasks you can make use of:

  • Script: Runs a command-line script.
  • PowerShell: Runs a PowerShell script.
  • Bash: Runs a Bash script.
  • Copy Files: Copies files from one location to another.
  • Publish Build Artifacts: Publishes build outputs so that later stages or release pipelines can consume them.
  • AzureWebApp: Deploys an application to Azure Web App Service.
  • Docker: Builds, pushes, and manages Docker images.
  • Install SSH Key: Installs an SSH key on the agent.

Custom Tasks

When the built-in tasks aren’t enough, you can create your own custom tasks by writing extensions for Azure DevOps. Custom tasks can do just about anything you need, and once written, they can be reused across your projects.

Pools: Defining Agents

Pools specify the agent that will be used to run your jobs. An agent is the machine that executes the steps in your pipeline. There are two types of agents: Microsoft-hosted and self-hosted.

pool:
  vmImage: 'ubuntu-latest'

  • Microsoft-Hosted Agents: These are agents managed by Microsoft that are available on demand. They come preconfigured with common tools and software, and you don’t have to manage the underlying machines yourself.
  • Self-Hosted Agents: These are agents that you manage yourself. This means you have complete control of the underlying environment. You’re responsible for the machine itself. This is useful if your pipeline requires specific software, hardware, or access to internal resources that aren’t available in Microsoft-hosted agents.
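
In YAML, a Microsoft-hosted agent is chosen with vmImage, as above, while a self-hosted agent is chosen by pool name. A sketch (the pool name and demand are placeholders for whatever you’ve registered):

pool:
  name: 'MySelfHostedPool'
  demands:
    - docker

The demands list restricts the job to agents in the pool that advertise the listed capabilities.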

Writing Your First YAML Pipeline

Now let’s put all this together and write a simple YAML pipeline. This example will build a .NET application, run tests, and then deploy it to an Azure Web App.

trigger:
  branches:
    include:
      - main

variables:
  buildConfiguration: 'Release'
  azureSubscription: 'myAzureSubscription'
  appName: 'myWebApp'

stages:
  - stage: Build
    displayName: Build Stage
    jobs:
    - job: BuildJob
      displayName: Build the application
      pool:
        vmImage: 'ubuntu-latest'
      steps:
        - task: DotNetCoreCLI@2
          displayName: 'Build the application'
          inputs:
            command: 'build'
            projects: '**/*.csproj'
            arguments: '--configuration $(buildConfiguration)'
        - task: CopyFiles@2
          displayName: 'Copy files to artifact directory'
          inputs:
            SourceFolder: '$(System.DefaultWorkingDirectory)'
            Contents: |
              **/*.dll
              **/*.pdb
            TargetFolder: '$(Build.ArtifactStagingDirectory)'
        - task: PublishBuildArtifacts@1
          displayName: 'Publish artifacts'
          inputs:
            PathtoPublish: '$(Build.ArtifactStagingDirectory)'
            ArtifactName: 'myBuildArtifacts'

  - stage: Test
    displayName: Test Stage
    dependsOn: Build
    jobs:
    - job: TestJob
      displayName: Run tests
      pool:
        vmImage: 'ubuntu-latest'
      steps:
        - task: DotNetCoreCLI@2
          displayName: 'Run tests'
          inputs:
            command: 'test'
            projects: '**/*Tests.csproj'
            arguments: '--configuration $(buildConfiguration)'
  - stage: Deploy
    displayName: Deploy Stage
    dependsOn: Test
    jobs:
      - job: DeployJob
        displayName: Deploy the application
        pool:
          vmImage: 'windows-latest'
        steps:
          # Build artifacts aren't downloaded automatically in a later
          # stage's job, so fetch them before deploying
          - task: DownloadBuildArtifacts@0
            displayName: 'Download build artifacts'
            inputs:
              buildType: 'current'
              downloadType: 'single'
              artifactName: 'myBuildArtifacts'
              downloadPath: '$(System.ArtifactsDirectory)'
          - task: AzureWebApp@1
            displayName: 'Deploy to Azure'
            inputs:
              azureSubscription: '$(azureSubscription)'
              appName: '$(appName)'
              package: '$(System.ArtifactsDirectory)/myBuildArtifacts'

This pipeline has three stages: Build, Test, and Deploy.

  • Build Stage: This stage builds the .NET application and publishes the build artifacts.
    • It defines a build job that specifies the steps for this task.
    • The DotNetCoreCLI@2 task builds the .NET application using the command dotnet build.
    • The CopyFiles@2 task copies the DLL and PDB files to an artifact staging directory.
    • The PublishBuildArtifacts@1 task publishes the artifacts under the name myBuildArtifacts.
  • Test Stage: This stage runs the automated tests for your application, which can catch any defects earlier.
    • It defines a TestJob that runs the tests.
    • It uses the DotNetCoreCLI@2 task again, but this time to execute the tests.
  • Deploy Stage: This stage deploys the built application to an Azure Web App.
    • It defines a deploy job that runs the deployment steps.
    • The AzureWebApp@1 task deploys the application using the specified Azure subscription, web app name, and package.

Best Practices for YAML Pipelines

To make the most out of YAML pipelines, consider the following best practices:

  • Keep It Modular: Divide your pipeline into reusable components. This makes your pipelines easier to maintain and update. By creating separate templates for common tasks, you don’t have to keep repeating yourself, and your overall pipeline logic can become easier to manage.
  • Use Meaningful Names: Use clear and descriptive names for stages, jobs, and tasks. This makes it easy to understand the purpose of each part of your pipeline. This is important for readability, which makes troubleshooting a whole lot easier.
  • Version Control Everything: Store your YAML files in a version control system and treat them like code. This is key for collaboration, auditing, and reproducibility, as you want your pipelines to be repeatable over and over.
  • Use Variables Effectively: Define variables for values that change between environments. This makes your pipeline configurations more adaptable and manageable. And storing passwords or secret keys as variables, instead of directly in the pipeline definition, provides much better security.
  • Use Templates: Templates enable you to reuse code blocks across multiple pipelines. They are a good way to reduce duplication. For instance, if all your pipelines need a common set of setup tasks, those tasks can be defined in a template and be re-used everywhere.
  • Validate Your YAML: Before you commit your changes, make sure to validate your YAML syntax. Azure DevOps offers you a built-in validator. You can use other online tools or code editors for this, too. Catching syntax errors early is always better than being caught in a production error.
  • Test Thoroughly: Before pushing pipeline changes to production, run them in a development environment first to make sure everything works as expected, avoiding potential disruptions.
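
To make the template advice concrete, here is a sketch of a reusable steps template and a pipeline that consumes it (the file name and parameter are illustrative):

# build-steps.yml
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

steps:
  - script: dotnet build --configuration ${{ parameters.buildConfiguration }}
    displayName: 'Build (${{ parameters.buildConfiguration }})'

# azure-pipelines.yml (consuming pipeline)
steps:
  - template: build-steps.yml
    parameters:
      buildConfiguration: 'Debug'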

Debugging YAML Pipelines

Even with the best planning, issues might come up. So it’s essential to know how to debug your pipelines.

  • Check Logs: Azure DevOps provides detailed logs for each step in your pipeline. These logs often contain valuable information about errors, warnings, and other issues. Reading the output of each task can often help figure out what went wrong.
  • Use Debug Mode: You can get much more detailed log output by enabling debug mode, either by checking “Enable system diagnostics” when queuing a run or by setting the system.debug variable to true. The extra detail, such as variable values and the internal commands each task runs, can prove useful when hunting bugs.
  • Run Steps Locally: If you’re having trouble with a script or task, try running it locally on your machine first, to rule out any issues that aren’t specific to the pipeline environment.
  • Isolate the Problem: If you encounter an issue, try to isolate the problem to a specific stage, job, or step to make debugging easier. By eliminating areas that are known to work, it becomes easier to find the one that’s not working.
  • Use Breakpoint-Style Logging: Pipeline runs can’t be paused for an interactive debugger, so the closest thing to a breakpoint in a script step is injecting print/echo statements at key points, letting you capture variable values and the state of things in the logs.
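
As a companion to the debug-mode tip above, verbose logging can also be switched on for every run directly in the YAML by setting the reserved system.debug variable:

variables:
  system.debug: 'true'

Each task then writes ##[debug] lines to its log, at the cost of much noisier output, so it’s best turned off again once the problem is found.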

Azure DevOps YAML Pipeline vs Classic Editor

While YAML pipelines offer a host of advantages, it’s worth acknowledging when you might choose the classic editor instead. The classic editor lets you define pipelines through a visual, GUI-based interface, which is a good way for new users to get started and see things in action. It’s also an adequate solution for less complex scenarios, or when an end user doesn’t want to deal with code. However, as your needs become more advanced, YAML becomes a much better way forward: the classic editor doesn’t support code reuse or version control, which makes a pipeline much harder to maintain once it grows big and complex.

Here’s a side by side comparison:

| Feature | YAML Pipeline | Classic Editor |
| --- | --- | --- |
| Code | Defined as code in YAML files | Defined visually via the user interface |
| Version Control | Supported; pipeline definitions can be version controlled | Not supported; pipeline changes are hard to track |
| Reproducibility | Easy to reproduce by using the YAML definition | Hard to reproduce; settings must be manually duplicated |
| Collaboration | Easy to collaborate using Git | Harder to collaborate; changes are not in code |
| Maintainability | Easier to maintain by code reuse and templating | More difficult as the pipeline complexity increases |
| Automation | Changes can be automated | Requires manual changes |
| Complexity | Suitable for complex pipelines | Better suited for less complex pipelines |
| Learning Curve | Steeper learning curve for YAML syntax | Easier learning curve for initial use |

YAML pipelines are the preferred way to create and manage pipelines because they bring the power of code and all of its benefits to the world of deployment pipelines.

What You Need to Know to Get Started

Now that you’ve made it this far, you’ve covered all the basic elements you need to start defining your own Azure DevOps YAML pipelines.

Here’s a summary of key points you must keep in mind:

  • Understand the basic structure of a YAML pipeline, including triggers, variables, stages, jobs, tasks, steps, and pools.
  • Learn basic YAML syntax, including how to define key/value pairs, lists, and mappings.
  • Make use of variables for configuration: Don’t hard-code paths, credentials, or other values into your pipeline definitions.
  • Use templates to reuse and share your pipeline logic between different projects, teams, and applications.
  • Version control your YAML files to make sure you always have a way to review changes, revert if necessary, and collaborate with your team.
  • Take advantage of the built-in tasks provided by Azure DevOps. There’s a task available for almost everything, so always check before writing your own.
  • Always test in development first, before deploying to production environments.
  • Debug methodically, by using the logs, turning debug mode on, and isolating the problem areas.
  • Use clear, descriptive names to improve the readability and understanding of your pipelines.
  • Stay up-to-date with the latest changes to Azure DevOps and YAML pipelines, to ensure you’re always making the most out of the available features.

Should You Embrace YAML for Your Pipelines?

After all this information, you may be asking yourself if you should switch to YAML for your pipelines. The answer depends on your goals and current state. For simple pipelines, the classic editor may be the fastest path. But for any team with medium or complex scenarios, YAML pipelines are a must. They bring the benefits of version control and code reuse to your CI/CD workflow, something the classic editor simply can’t match. The ability to define pipelines in code gives you the control you always wanted over your deployments, taking you from pipeline novice to real power user.