Isolated environments for feature branching in Azure

Azure, Azure DevOps, feature branching

Feature branching is pretty standard these days; perhaps you are doing it, or perhaps you are considering it. Unfortunately, there is a downside to feature branches.

Testing them.

Typically, there are a finite number of environments that you can work on, i.e., Dev, Test, PIT, UIT, SIT, PreProd, etc. Every company I’ve worked for has its own set of environments and naming conventions for them, but that’s a bit irrelevant here. As developers, we have localhost, which doesn’t quite work the same as it does when deployed; there’s always a gotcha, e.g., if you are using Azure API Management, you won’t be using that locally, and there are many other things like it. So feature branching works great locally for the most part, but when a branch is pushed out to an environment for testing, we come up against an issue.

Firstly, that environment is now dedicated to that feature. Any schema or data changes specific to that feature are now locked in; we can’t just deploy another feature branch, as the data and/or schema would be out of sync. Then there are testing resources: you may have several testers within your team, but now that the environment has been occupied by a feature, only the tester(s) involved can work on it; the others are stuck waiting.

Sure, we can have Dev1, Dev2, etc., or, if your sprint team has a name, Nomad1, Nomad2, corpo4, etc. That gives some flexibility.

But what if you could create an environment that is set up for isolated testing of just your work, regardless of data changes? Multiple features in test at any given time, a dev/test feedback cycle isolated to just your changes, and no undue pressure to get your work done other than the sprint deadline, rather than “oh, $$$$, I’ve got to release this environment so Y can get their work on it”.

Having an environment that exists just long enough for the work to be done, tested, and then merged back into the main development branch? Wouldn’t that save you money over spinning up some number of environments which may or may not be utilized, just sat waiting doing nothing, depending entirely on the throughput of the team? One sprint you may need four environments, the next only two, leaving two fully-fledged environments costing your company money for no reason.

Well, you can. You can utilize your Azure Pipelines and your ARM/Terraform scripts to generate an environment isolated to just your changes. Not only that, but you can do so for free, or “almost free”, by using the free tiers of Azure’s products and sharing the ones that cost money. The sharing aspect might not be ideal for every situation, but even then, having one or two fully-fledged shared environments is still a darn sight cheaper than having a range of environments sat about for no reason, with people waiting for environments to free up.

How it’s done

I’ll show you an example of how I’ve accomplished it and how it worked in the teams. It won’t be a fully-fledged tutorial with all of the ARM templates hanging about; I can’t see the value in that. Your setup will be unique to you, so there will be some work to pull apart a chosen environment and see where you can split it out into isolated per-deployment environments.

I will show you how to trigger a build that identifies the branches to build from.

Basically, I’ll show you how to enable this, but this isn’t a copy/paste exercise; it will require work on your side. I would suggest wrapping all the components into a single resource group for each feature branch.

i.e. rg-featurebranch-{env}

This allows easy removal of a feature branch environment once it’s finished with: we just delete the entire resource group, which takes care of removing the unique resources.
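As a sketch of what that clean-up can look like (assuming the Az PowerShell module, and an illustrative environment name that matches the resource group suffix):

# Tear down a feature branch environment by deleting its resource group.
# The environment name here is illustrative and follows the convention above.
$environmentName = "1021"
Remove-AzResourceGroup -Name "rg-featurebranch-$environmentName" -Force -AsJob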

Consul

In the past, I’ve done the same using Consul, updating the Consul config to point to a different deployed service rather than the official release for that environment. As an aside, a neat trick is to use technology such as Mountebank, which can mock away entire services, allowing your UI to get responses similar to those it would get from a real service. This enables your front-end team to work on the front end while the backend team creates the backend. A really handy feature of Mountebank is that you can simulate things going wrong, so the UI can specifically handle errors that aren’t easily replicated when the backend is present; I’ve even used it to simulate intermittent backend failures. You end up building a very robust front end when you utilize technologies such as these.
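For instance, Mountebank exposes a REST API for creating mock services (“imposters”). A minimal sketch, assuming Mountebank is running locally on its default port 2525, with an illustrative path and port for the imposter:

# Create an HTTP imposter on port 4545 that returns a 500 for a specific path,
# letting the front end exercise its error handling without a real backend.
$imposter = @{
    port     = 4545
    protocol = "http"
    stubs    = @(
        @{
            predicates = @(@{ equals = @{ path = "/api/orders" } })
            responses  = @(@{ is = @{ statusCode = 500; body = "simulated backend failure" } })
        }
    )
}
Invoke-RestMethod -Method Post -Uri "http://localhost:2525/imposters" `
    -ContentType "application/json" -Body ($imposter | ConvertTo-Json -Depth 10)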

Triggering a feature branch build via PowerShell

The majority of the work is going to be handled by your Azure Pipelines. Still, we need a way to tell Azure Pipelines to generate a build against a specific branch and to give it an environment name, i.e., the name of your feature branch. For example, if you are using JIRA or any other issue/work tracking system, there is usually some form of unique identifier for the ticket you are working on. This is a fantastic identifier to use, as it links the agile/issue tracking board item to your environment. You can also have your build server update the ticket with information about the environment.

Such as the Cosmos DB access token for the environment (should you use Cosmos).

To do this, we’ll need to create a PowerShell script that can be run. This script will need a few pieces of information.

  • the user’s email address
  • the user’s access token for Azure DevOps, which can be generated here: https://dev.azure.com/{organization}/_usersSettings/tokens
  • the name of the branch the user wishes to build
  • the environment name
  • (optional extra) the issue tracking ticket reference. This is so the build server can update the ticket with the build info; it’s optional as you’ll need to implement the call to your issue tracker.

The PowerShell script will also need to know the project GUID reference to trigger a build within; this can be found by calling this URL: https://dev.azure.com/{organization}/_apis/projects?api-version=5.0-preview.3. Within the JSON response, you’ll see id and url fields; both hold the GUID you are looking for.
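If you’d rather grab that from the command line, here’s a quick sketch using the same PAT-based basic auth as the script below (email address and token are placeholders):

# List the projects in the organisation along with their GUIDs.
# Replace {organization} and supply your own email address and PAT.
$pair = "{0}:{1}" -f "me@example.com", "<your PAT>"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$projects = Invoke-RestMethod -Uri "https://dev.azure.com/{organization}/_apis/projects?api-version=5.0-preview.3" `
    -Headers @{ Authorization = "Basic $auth" }
$projects.value | Select-Object name, id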

Here’s an example PowerShell script:

param(
    [string] $emailAddress,
    [string] $token,
    [string] $branch,
    [ValidateLength(1,5)]
    [string] $environmentName,
    [string] $ticketRef
)
$ErrorActionPreference = "Stop"
if (!$emailAddress) {
    Write-Error -Message "email address required -emailAddress"
}
if (!$token) {
    Write-Error -Message "Please provide azure devops access token: https://dev.azure.com/{organisation}/_usersSettings/tokens (Token needs Build Read & execute permissions) -token"
}
if (!$branch) {
    Write-Error -Message "Please provide the branch name to build -branch"
}
if (!$environmentName) {
    Write-Error -Message "Please provide an environment name -environmentName"
}

if (!$ticketRef) {
    Write-Output "No ticket reference provided. Unable to update ticket with feature branch information"
    $ticketRef = "NA"
}



$environmentName = $environmentName.ToLower()

$body = '
{
    "stagesToSkip": [],
    "resources": {
        "repositories": {
            "self": {
                "refName": "refs/heads/' + $branch + '"
            }
        }
    },
    "templateParameters": {
        "ticketRef": "'+ $ticketRef + '",
        "EnvironmentName": "' + $environmentName + '"
    },
    "variables": {}
}
'
# Round-trip through ConvertFrom-Json/ConvertTo-Json to validate and normalise the payload
$bodyJson = $body | ConvertFrom-Json
Write-Output $bodyJson
$bodyString = $bodyJson | ConvertTo-Json -Depth 100
Write-Output $bodyString
$user = $emailAddress
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $token)))

$Uri = "https://dev.azure.com/{organisation}/{projectGuid}/_apis/pipelines/{definitionId}/runs?api-version=5.1-preview"  # get project guid from: https://dev.azure.com/{organisation}/_apis/projects?api-version=5.0-preview.3 - Definition ID can be found in the url of the build you are triggering.
$buildresponse = Invoke-RestMethod -Method Post -UseDefaultCredentials -ContentType application/json -Uri $Uri -Body $bodyString -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
Write-Output $buildresponse
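
Saved as, say, trigger-feature-build.ps1 (the file name is up to you), it can be invoked like this:

# Example invocation - all values here are illustrative.
.\trigger-feature-build.ps1 `
    -emailAddress "me@example.com" `
    -token "<PAT with Build read & execute scope>" `
    -branch "feature/1021-cache-resetter" `
    -environmentName "1021" `
    -ticketRef "PROJ-1021"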

So, we’ve got a PowerShell script that will trigger a build via the Azure Pipelines REST API. But what should we build?

The build

This will be unique to you, but what I suggest is that this triggers your standard build template; the easiest solution would be to duplicate your main dev pipeline and update the copy. Firstly, you won’t want this pipeline to be triggered by anything (e.g., set trigger: none and pr: none at the top of the YAML), as triggering will be handled via the PowerShell script.

You’ll want the build to write out some files about this feature branch build, specifically the environment name, which will be vital for the release process. Here’s a small script (write-BuildParameterToFile.ps1) that does that:

param(
    [string] $ParameterKey,
    [string] $ParameterValue,
    [string] $directory
)

# Make sure the output directory exists before writing the file
New-Item -ItemType Directory -Force -Path $directory | Out-Null
Write-Output $ParameterValue.Trim() | Out-File "$directory/$ParameterKey.txt"

And here are the pipeline steps that call it and publish the result as an artifact:

  - task: PowerShell@2
    displayName: Write EnvironmentName to a file
    inputs:
      targetType: 'filePath'
      filePath: $(Build.SourcesDirectory)\build\build_pipeline_scripts\write-BuildParameterToFile.ps1
      arguments: -ParameterKey "EnvironmentName" -ParameterValue "${{ parameters.EnvironmentName }}" -directory "$(Build.ArtifactStagingDirectory)/environment-info"
    env: 
      SYSTEM_ACCESSTOKEN: $(system.accesstoken)
  - task: PublishPipelineArtifact@1
    displayName: Package environment-info artifact 
    inputs:
      targetPath: '$(Build.ArtifactStagingDirectory)/environment-info'
      artifact: 'environment-info'

The rest of the build process will most likely mirror your existing pipeline, perhaps with a step here or there missing, such as not running code analysis on the build.

This can now trigger your feature branch environment release.

The release

One of the differences we’ll make to the release is that it will update its variables beforehand to include the feature branch environment name; we saved this in a folder called environment-info, in a file named EnvironmentName.txt.
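A minimal sketch of that step, run as an inline PowerShell task in the release (the artifact path is an assumption):

# Read the environment name written by the build and expose it to later release steps.
# $(Pipeline.Workspace) is Azure Pipelines macro syntax, expanded before the script runs.
$environmentName = (Get-Content "$(Pipeline.Workspace)/environment-info/EnvironmentName.txt" -Raw).Trim()
Write-Host "##vso[task.setvariable variable=ENVIRONMENT_NAME]$environmentName"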

  • Step 1 - Deploy the infrastructure (Cosmos, storage accounts, queues, etc.)
  • Step 2 - Deploy code-based things such as function apps
  • Step 3 - Provide access tokens
  • Step 4 - API Management and the like

Step 1

So perhaps you have a storage account name variable in the main pipeline; I suggest this step updates that variable to include the environment by running a script: Write-Host "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]yournamingconvention$env"

Do this for all the infrastructure that doesn’t need to be shared, e.g., function apps, Cosmos, storage, queues, etc.
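A sketch of such an inline step (the variable and naming conventions are illustrative, and ENVIRONMENT_NAME is assumed to have been set as shown earlier; $(ENVIRONMENT_NAME) is again pipeline macro syntax):

# Suffix each per-feature resource name with the environment so the ARM/Terraform
# deployment creates unique instances rather than touching the shared dev ones.
$suffix = "$(ENVIRONMENT_NAME)"
Write-Host "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME]stmyproduct$suffix"
Write-Host "##vso[task.setvariable variable=COSMOS_ACCOUNT_NAME]cosmos-myproduct-$suffix"
Write-Host "##vso[task.setvariable variable=FUNCTION_APP_NAME]func-cacheresetter-$suffix"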

It’s then a case of letting the release happen as it normally would; the only differences here are that the variables have been updated to include the environment, so each deployment will be unique, plus any updates to the ticket via the ticketing system API.

Step 2

As with step one, the main difference here is that we’ll update the variables to include the environment name; this is so we know where to deploy the code, and it also makes sure that the software services have unique and identifiable names, i.e., FUNCAPPCACHERESETTER in dev would be FUNCAPPCACHERESETTER_1021 in your feature branch, assuming the ticket reference was 1021.
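As a rough sketch (cmdlet from the Az module; the names, package path, and hyphenated naming are assumptions, since Azure app names can’t contain underscores), deploying a packaged function app to the per-feature instance could look like:

# Zip-deploy the function app package to the uniquely named per-feature app.
# $(ENVIRONMENT_NAME) and $(Pipeline.Workspace) are expanded by Azure Pipelines when run inline.
Publish-AzWebApp -ResourceGroupName "rg-featurebranch-$(ENVIRONMENT_NAME)" `
                 -Name "func-cacheresetter-$(ENVIRONMENT_NAME)" `
                 -ArchivePath "$(Pipeline.Workspace)/drop/cache-resetter.zip" `
                 -Force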

Again, this release pipeline step will match your development one after the variables have been updated, aside from any updates to the ticket via the ticketing system API.

Step 3

Once the deployments in steps 1 and 2 have been successful, we can use Azure PowerShell to get the access tokens from storage accounts, Cosmos, function apps, etc. This information can then be published on Teams, Slack, etc., or added to your ticket within the ticketing system.
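A minimal sketch using the Az modules (resource names follow the illustrative convention from step 1, and the Teams webhook URL is a placeholder):

# Pull the keys for the per-feature resources so they can be shared with the team.
$rg = "rg-featurebranch-$(ENVIRONMENT_NAME)"
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name "stmyproduct$(ENVIRONMENT_NAME)")[0].Value
$cosmosKey  = (Get-AzCosmosDBAccountKey -ResourceGroupName $rg -Name "cosmos-myproduct-$(ENVIRONMENT_NAME)" -Type "Keys").PrimaryMasterKey

# Post them somewhere useful, e.g. a Teams incoming webhook.
$message = @{ text = "Feature environment $(ENVIRONMENT_NAME) ready. Storage key: $storageKey, Cosmos key: $cosmosKey" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://outlook.office.com/webhook/your-webhook-id" `
    -ContentType "application/json" -Body $message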

As we now have an environment in Azure, all contained in its own resource group, any access tokens can also be picked up by going to the individual resource. You may have Application Insights or Log Analytics enabled for your feature; if so, you can access those logs within your resource group.

Within the pipeline, you may have acceptance tests triggered against dev; if so, just as we’ve modified the infrastructure, we can modify these to take in various variables, allowing you to run the acceptance tests against your version of the deployed assets. That should give you a high level of confidence that everything is working as it was before you merge your changes into develop, master, or whatever branch you use. It also allows your testers (or yourself?) to add further acceptance tests to cover the feature and run them against your deployed environment.
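For example, assuming a .NET acceptance test project that reads its base URL from an environment variable (both names are illustrative), the pipeline could run something like:

# Point the acceptance tests at the feature environment rather than dev.
# $(ENVIRONMENT_NAME) is expanded by Azure Pipelines when run as an inline step.
$env:API_BASE_URL = "https://func-cacheresetter-$(ENVIRONMENT_NAME).azurewebsites.net"
dotnet test ./tests/AcceptanceTests --logger trx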

Step 4

If you use something like API Management, then you’ll want a way of calling your APIs for your feature branch, bypassing the development ones. For example, this can be as simple as adding a header such as “x-featurebranch”, which can be used to route APIM to your APIs; you may also need to pass headers that hold the relevant access token keys for each function.
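From the caller’s point of view (URL, path, and subscription key are illustrative), hitting the feature deployment through the shared APIM instance could look like this, with your APIM policy inspecting x-featurebranch and rewriting the backend URL accordingly:

# Call the shared APIM endpoint but ask it to route to the feature branch backend.
Invoke-RestMethod -Method Post -Uri "https://myorg-apim.azure-api.net/cache/reset" -Headers @{
    "x-featurebranch"           = "1021"
    "Ocp-Apim-Subscription-Key" = "<your APIM subscription key>"
}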

The rest of your APIM policy will continue as is, though you may have things you need to change to work with your unique environment.

Wrap Up

When I started to write this post, I thought it would be much longer, but really, this builds on top of your very unique build and release pipelines. It turned into a “here’s an idea of how to” rather than a “here’s how to do it!” I’m also not suggesting that this approach is the best, just an approach; there may be better ways to do this using Azure Blueprints or something entirely different.

The core concept that I hope I’ve shared is that, thanks to Azure, AWS, GCP, and the like, it’s possible to deploy infrastructure and code repeatably and uniquely. I.e., I can have a Development environment and an “Adam’s Testing Badger Land” environment which look exactly the same but run on their own instances of infrastructure and code. Because of this, we don’t need to keep environments hanging around, getting dirty, leaving us wondering why something suddenly isn’t working when it was yesterday; turns out someone changed a file.

No, we can deploy from scratch each time and do apples-to-apples comparisons, ensuring that tests pass repeatably. It also means that should we want to introduce load testing, we can: fire up a clean environment, load it with data, and hammer it. We’ll get predictable results each time, give or take the various networking and cloud platform niggles.
