Every CI/CD product requires build agents, which do the actual hard work. A build agent can be a virtual or physical machine with all the software required for the tasks in your build / delivery pipeline. However, setting up such a machine is not as easy as it seems. Suppose you are starting a new software project and want to set up continuous integration:

In the past we managed build agents manually. Whenever a new project started, we set up a new build agent.

  1. First we set up a new virtual machine in our network.
  2. Then we started the long process of installing Visual Studio, SDKs, development tools, Git and other tools on the server.
  3. Finally we installed the build agent and connected it to VSTS/TFS.

Then, a month later, a new version of Visual Studio or the Android SDK was released, so we spent time installing the new versions again. Later we installed the Java SDK because one of the microservices required it. And after a few iterations, no one knew what was actually installed on the build agent.

I guess you can see the problems we dealt with:

  • Installation and maintenance took a lot of effort.
  • It was not clear what was installed and how it was configured.
  • Build agents were not uniform.
  • When anything went wrong with a build agent, repairing or reinstalling it took a long time.
  • When anyone made a change to a build agent, it was not easy to find out what was changed and when.
  • All these problems were multiplied by the fact that development tools (like MS Visual Studio) are released every few weeks or months. Those companies are becoming agile too.

Then we found a solution: Infrastructure as Code.


To run the PowerShell code in this article, you need to have the Azure PowerShell module installed.

We found a lot of help when we took a closer look at VSTS Hosted Build Agents.

Visual Studio Team Services Hosted Build Agent

Have you ever wondered how a VSTS Hosted Build Agent works? This is what happens when a new build is queued on a hosted agent.

  1. VSTS starts a new virtual machine from the build agent image. Actually, the machine is already prepared and running, so the process is faster.
  2. The build is executed on the build agent.
  3. After the build finishes and all artifacts and test results are saved, the virtual machine is disposed of.

The last step is very important, because hosted agents are shared between different VSTS users and accounts, and no one wants their build artifacts to still be available on the virtual machine when the next build is run by a different user.

However, the first step is very interesting too. Recently Microsoft open-sourced the scripts they use to create hosted build agent images. They are available on GitHub and can be used to manage your own build agent images and instances.

Create build agent image

Microsoft uses Packer to build the agent image. Packer is a nice tool for building virtual machine images. It works with Azure, AWS, Google Cloud, Hyper-V, VMware and many more. Basically, it performs the following steps:

  1. Starts a new virtual machine.
  2. Executes the steps specified in a JSON template.
  3. Stops the virtual machine and saves the image.
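Put together, a Packer template is a single JSON document with three top-level sections; the fragments quoted later in this article fit into a skeleton like this (a simplified outline, not a complete working template):

```json
{
    "variables": {
        "comment": "input values, typically read from environment variables"
    },
    "builders": [
        {
            "comment": "where and how the temporary build VM is started, e.g. type azure-arm"
        }
    ],
    "provisioners": [
        {
            "comment": "the shell/PowerShell steps executed inside the VM before the image is captured"
        }
    ]
}
```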

The GitHub repo contains three images:

  • Hosted VS2017 (images/win/vs2017-Server2016-Azure.json) – a Windows image with Visual Studio 2017 and other tools (Java, Maven, etc.) installed
  • Hosted VS (images/win/vs2015-Server2012R2-Azure.json) – a Windows image with Visual Studio 2015 installed
  • Linux (images/linux/linux.json) – a Linux image with the .NET Core SDK installed

For the purposes of this article we use Hosted VS2017 as the base image. First we fork the whole repository on GitHub, where we will manage our changes. This way, when Microsoft releases a new version of the template, we can easily merge it into our repository. Now we can make changes in the template. The first section defines where the image is created: in Microsoft Azure, in the specified subscription and resource group. We will provide this information later.

    "variables": {
        "client_id": "{{env `ARM_CLIENT_ID`}}",
        "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
        "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
        "tenant_id": "{{env `ARM_TENANT_ID`}}",
        "object_id": "{{env `ARM_OBJECT_ID`}}",
        "resource_group": "{{env `ARM_RESOURCE_GROUP`}}",
        "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}",
        "location": "{{env `ARM_RESOURCE_LOCATION`}}",
        "ssh_password": "{{env `SSH_PASSWORD`}}",
        "vm_size": "Standard_DS4_v2",

        "image_folder": "C:\\image",
        "commit_file": "C:\\image\\commit.txt",
        "metadata_file": "C:\\image\\metadata.txt",
        "helper_script_folder": "C:\\Program Files\\WindowsPowerShell\\Modules\\",
        "commit_id": "LATEST",
        "install_user": "installer",
        "install_password": "P@ssw0rd1"
    },
    "builders": [
        {
            "name": "vhd",
            "type": "azure-arm",
            "client_id": "{{user `client_id`}}",
            "client_secret": "{{user `client_secret`}}",
            "subscription_id": "{{user `subscription_id`}}",
            "object_id": "{{user `object_id`}}",
            "tenant_id": "{{user `tenant_id`}}",

            "location": "{{user `location`}}",
            "vm_size": "{{user `vm_size`}}",
            "resource_group_name": "{{user `resource_group`}}",
            "storage_account": "{{user `storage_account`}}",
            "capture_container_name": "images",
            "capture_name_prefix": "packer",
            "os_type": "Windows",
            "image_publisher": "MicrosoftWindowsServer",
            "image_offer": "WindowsServer",
            "image_sku": "2016-Datacenter",
            "communicator": "winrm",
            "winrm_use_ssl": "true",
            "winrm_insecure": "true",
            "winrm_timeout": "4h",
            "winrm_username": "packer"
        }
    ]

I suggest making the install_password value be passed from an environment variable too, so we can provide it later. However, keep in mind that the template uses the old net command to create the user. This command accepts passwords with a maximum length of 16 characters. We could rewrite it using the newer PowerShell cmdlets New-LocalUser and Add-LocalGroupMember, but I leave that as homework for the reader.

    "variables": {
        "install_password": "{{env `INSTALL_PASSWORD`}}"
    }


    "provisioners": [
        {
            "type": "windows-shell",
            "inline": [
                "net user {{user `install_user`}} {{user `install_password`}} /add /passwordchg:no /passwordreq:yes /active:yes",
                "net localgroup Administrators {{user `install_user`}} /add",
                "winrm set winrm/config/service/auth @{Basic=\"true\"}",
                "winrm get winrm/config/service/auth"
            ]
        }
    ]

As you can see, the template simply defines steps that execute PowerShell scripts, which do the actual hard work. For example, let's have a look at the script Install-VS2017.ps1, which installs Visual Studio 2017. The current script installs all workloads and components. That installation can take a long time and is not needed for our project, so we update the list of components to install.

## File: Install-VS2017.ps1
## Team: CI-Build
## Desc: Install Visual Studio 2017

Function InstallVS
{
    Param
    (
        [String] $WorkLoads,
        [String] $Sku,
        [String] $VSBootstrapperURL
    )

    $exitCode = -1

    try
    {
        Write-Host "Downloading Bootstrapper ..."
        Invoke-WebRequest -Uri $VSBootstrapperURL -OutFile "${env:Temp}\vs_$Sku.exe"

        $FilePath = "${env:Temp}\vs_$Sku.exe"
        $Arguments = ('/c', $FilePath, $WorkLoads, '--quiet', '--norestart', '--wait', '--nocache')

        Write-Host "Starting Install ..."
        $process = Start-Process -FilePath cmd.exe -ArgumentList $Arguments -Wait -PassThru
        $exitCode = $process.ExitCode

        # 3010 means installation succeeded but a restart is required
        if ($exitCode -eq 0 -or $exitCode -eq 3010)
        {
            Write-Host -Object 'Installation successful'
            return $exitCode
        }
        else
        {
            Write-Host -Object "Non zero exit code returned by the installation process : $exitCode."

            # this won't work because of log size limitation in extension manager
            # Get-Content $customLogFilePath | Write-Host

            exit $exitCode
        }
    }
    catch
    {
        Write-Host -Object "Failed to install Visual Studio. Check the logs for details in $customLogFilePath"
        Write-Host -Object $_.Exception.Message
        exit -1
    }
}

$WorkLoads = '--add Microsoft.VisualStudio.Workload.CoreEditor ' + `
             '--add Microsoft.VisualStudio.Workload.ManagedDesktop ' + `
             '--add Microsoft.Net.ComponentGroup.TargetingPacks.Common ' + `
             '--add Microsoft.VisualStudio.Component.Debugger.JustInTime ' + `
             '--add Microsoft.Net.Component.4.7.SDK ' + `
             '--add Microsoft.Net.Component.4.7.TargetingPack ' + `
             '--add Microsoft.Net.ComponentGroup.4.7.DeveloperTools ' + `
             '--add Microsoft.Net.Component.4.7.1.SDK ' + `
             '--add Microsoft.Net.Component.4.7.1.TargetingPack ' + `
             '--add Microsoft.Net.ComponentGroup.4.7.1.DeveloperTools ' + `
             '--add Microsoft.VisualStudio.Workload.NetWeb ' + `
             '--add Microsoft.VisualStudio.Component.Web ' + `
             '--add Microsoft.VisualStudio.Workload.Universal ' + `
             '--add Microsoft.VisualStudio.Component.Windows10SDK.15063.UWP ' + `
             '--add Microsoft.VisualStudio.Workload.NetCrossPlat ' + `
             '--add Component.Android.SDK25 ' + `
             '--add Component.JavaJDK ' + `
             '--add Component.Xamarin ' + `
             '--add Component.Xamarin.SdkManager '

$Sku = 'Enterprise'
$VSBootstrapperURL = ''

$ErrorActionPreference = 'Stop'

# Install VS
$exitCode = InstallVS -WorkLoads $WorkLoads -Sku $Sku -VSBootstrapperURL $VSBootstrapperURL

# Find the version of VS installed for this instance
# Only supports a single instance
$vsProgramData = Get-Item -Path "C:\ProgramData\Microsoft\VisualStudio\Packages\_Instances"
$instanceFolders = Get-ChildItem -Path $vsProgramData.FullName

if ($instanceFolders -is [array])
{
    Write-Host "More than one instance installed"
    exit 1
}

$catalogContent = Get-Content -Path ($instanceFolders.FullName + '\catalog.json')
$catalog = $catalogContent | ConvertFrom-Json
Write-Host "Visual Studio version" $catalog.info.id "installed"

# Updating content of MachineState.json file to disable autoupdate of VSIX extensions
$newContent = '{"Extensions":[{"Key":"1e906ff5-9da8-4091-a299-5c253c55fdc9","Value":{"ShouldAutoUpdate":false}},{"Key":"Microsoft.VisualStudio.Web.AzureFunctions","Value":{"ShouldAutoUpdate":false}}],"ShouldAutoUpdate":false,"ShouldCheckForUpdates":false}'
Set-Content -Path "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\MachineState.json" -Value $newContent

exit $exitCode

When the template is ready, it's time to build the image. First, it is necessary to create:

  1. An Azure Service Principal – the authentication secret that Packer will use to access Azure resources.
  2. An Azure Storage Account – this is where the image will be stored.

We simply use the following PowerShell script.

Note: The first command is Login-AzureRmAccount, which asks you to log in to your Azure account. When you use the scripts from this article, make sure you are logged in using this command.

param (
    [string] $subscriptionId,
    [string] $rgName,
    [string] $location,
    [string] $storageAccountName,
    [string] $spDisplayName,
    [string] $spClientSecret
)

Set-AzureRmContext -Subscription $subscriptionId
New-AzureRmResourceGroup -Name $rgName -Location $location
New-AzureRmStorageAccount -ResourceGroupName $rgName -AccountName $storageAccountName -Location $location -SkuName "Standard_LRS"
$sp = New-AzureRmADServicePrincipal -DisplayName $spDisplayName -Password (ConvertTo-SecureString $spClientSecret -AsPlainText -Force)
$spAppId = $sp.ApplicationId
$spClientId = $sp.ApplicationId
$spObjectId = $sp.Id
Start-Sleep 40
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $spAppId
$sub = Get-AzureRmSubscription -SubscriptionId $subscriptionId
$tenantId = $sub.TenantId
$result = @(
    "Note this variable-setting script for running Packer with these Azure resources in the future:"
    "`$spClientId = '$spClientId'"
    "`$spClientSecret = '$spClientSecret'"
    "`$subscriptionId = '$subscriptionId'"
    "`$tenantId = '$tenantId'"
    "`$spObjectId = '$spObjectId'"
    "`$location = '$location'"
    "`$rgName = '$rgName'"
    "`$storageAccountName = '$storageAccountName'"
)

Write-Output $result

The following example creates a new service principal 'MyVS2017sp' and a resource group 'MyVS2017BuildAgent' with the 'myvs2017buildagent' storage account located in the West Europe data center. The script writes out the new service principal ID and other values, which must be passed to Packer.

$subscriptionId = "f827df89-4f1a-45df-9178-d0efdb9d01f6"
$rgName = "MyVS2017BuildAgent"
$location = "westeurope"
$storageAccountName = "myvs2017buildagent"
$spDisplayName = "MyVS2017sp"
$spClientSecret = "MySecretPassword"

.\SetupPacker.ps1 -subscriptionId $subscriptionId -rgName $rgName -location $location -storageAccountName $storageAccountName -spDisplayName $spDisplayName -spClientSecret $spClientSecret

Now we are ready to run Packer. Well, first it must be downloaded; it's a single executable. After downloading, don't forget to open the file properties and check Unblock. Then we simply run Packer with the correct variables (see the output of the previous script) and the specified template.

$spClientId = "660c3be0-8697-4988-9ce7-a141336539fa"
$spClientSecret = "MySecretPassword"
$subscriptionId = "f827df89-4f1a-45df-9178-d0efdb9d01f6"
$tenantId = "f724e2f8-5dd4-4725-a012-1377c1f31379"
$spObjectId = "79f2353a-d9c3-407e-8905-14d519cf0529"
$location = "westeurope"
$rgName = "MyVS2017BuildAgent"
$storageAccountName = "myvs2017buildagent"
$installPassword = "MyInstallPwd" # cannot be longer than 16 chars

.\packer.exe build -var "client_id=$($spClientId)" -var "client_secret=$($spClientSecret)" -var "subscription_id=$($subscriptionId)" -var "tenant_id=$($tenantId)" -var "object_id=$($spObjectId)" -var "location=$($location)" -var "resource_group=$($rgName)" -var "storage_account=$($storageAccountName)" -var "install_password=$($installPassword)" vs2017-Server2016-Azure.json

I suggest running this script on a temporary Azure virtual machine. That machine won't sleep, though I am going to, because this process takes about 8 hours.

Create virtual machine from image

The next morning the image was (hopefully) created successfully, and I got the following output from Packer:

StorageAccountLocation: westeurope

The OSDiskUri value is the one that is important for us. An Azure Resource Manager template was also created, but we will not use it. We will use PowerShell, of course.

param (
    [string] $rgName,
    [string] $location,
    [string] $vhdUri,
    [string] $VMName,
    [PSCredential] $cred,
    [string] $VMSize = 'Standard_B2S'
)

# Create private key for WinRM
$fullDnsName = "$VMName.$location.cloudapp.azure.com"
$tempPath = [System.IO.Path]::GetTempPath()
$privateKeyPath = Join-Path $tempPath "WinRM.pfx"
$privateKeyPasswordPlain = (New-Guid).ToString('n')
$privateKeyPassword = ConvertTo-SecureString -String $privateKeyPasswordPlain -AsPlainText -Force
$privateKey = New-SelfSignedCertificate -DnsName $fullDnsName -CertStoreLocation 'Cert:\CurrentUser\My'
Export-PfxCertificate -Cert $privateKey -FilePath $privateKeyPath -Password $privateKeyPassword -Force
Remove-Item "Cert:\CurrentUser\My\$($privateKey.Thumbprint)" -Force

# Store private key in Azure Key Vault
$args = @{
    VaultName = $VMName + 'KeyVault'
    ResourceGroupName = $rgName
    Location = $location
    EnabledForDeployment = $true
}
$keyVault = New-AzureRmKeyVault @args
Write-Output "Created Key Vault: $($keyVault.ResourceId)"

$privateKeyBytes = Get-Content $privateKeyPath -Encoding Byte
$privateKeyBase64 = [System.Convert]::ToBase64String($privateKeyBytes)
$privateKeyJson = @{
    data = $privateKeyBase64
    dataType = 'pfx'
    password = $privateKeyPasswordPlain
}
$privateKeyJson = ConvertTo-Json -InputObject $privateKeyJson
$privateKeyBytes = [System.Text.Encoding]::UTF8.GetBytes($privateKeyJson)
$privateKeyBase64 = [System.Convert]::ToBase64String($privateKeyBytes)
$privateKeySecret = ConvertTo-SecureString -String $privateKeyBase64 -AsPlainText -Force

$keyVaultKeyName = $VMName + '-WinRM'
$keyVaultWinRM = Set-AzureKeyVaultSecret -VaultName $keyVault.VaultName -Name $keyVaultKeyName -SecretValue $privateKeySecret
Write-Output "Added Key Vault key: $($keyVaultWinRM.Id)"

Remove-Item $privateKeyPath -Force

# Create a subnet configuration
$args = @{
    Name = $VMName + 'Subnet'
    AddressPrefix = ''
}
$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig @args

# Create a virtual network
$args = @{
    Name = $VMName + 'Net'
    ResourceGroupName = $rgName
    Location = $location
    AddressPrefix = ''
    Subnet = $subnetConfig
}
$vnet = New-AzureRmVirtualNetwork @args
Write-Output "Created Virtual Network: $($vnet.Id)"

# Create a public IP address and specify a DNS name
$args = @{
    Name = $VMName + 'PublicIP'
    ResourceGroupName = $rgName
    Location = $location
    AllocationMethod = 'Dynamic'
    IdleTimeoutInMinutes = 4
}
$publicIP = New-AzureRmPublicIpAddress @args
Write-Output "Created Public IP: $($publicIP.Id)"

# Create an inbound network security group rule for port 5986 - WinRM: HTTPS
$args = @{
    Name = 'WinRM'
    Protocol = 'Tcp'
    Direction = 'Inbound'
    SourceAddressPrefix = '*'
    SourcePortRange = '*'
    DestinationAddressPrefix = '*'
    DestinationPortRange = 5986
    Access = 'Allow'
    Priority = 1001
}
$nsgRuleWRM = New-AzureRmNetworkSecurityRuleConfig @args

# Create a network security group
$args = @{
    Name = $VMName + 'NSG'
    ResourceGroupName = $rgName
    Location = $location
    SecurityRules = $nsgRuleWRM
}
$nsg = New-AzureRmNetworkSecurityGroup @args
Write-Output "Created Network Security Group: $($nsg.Id)"

# Create a virtual network card and associate with public IP address and NSG
$args = @{
    Name = $VMName + 'NIC'
    ResourceGroupName = $rgName
    Location = $location
    SubnetId = $vnet.Subnets[0].Id
    NetworkSecurityGroupId = $nsg.Id
    PublicIpAddressId = $publicIP.Id
}
$nic = New-AzureRmNetworkInterface @args
Write-Output "Created Network Interface: $($nic.Id)"

# Define the image created by Packer
$imageConfig = New-AzureRmImageConfig -Location $location
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $vhdUri -StorageAccountType Premium_LRS
$imageName = $VMName + 'Image'
$image = New-AzureRmImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
Write-Output "Created Image: $($image.Id)"

# Create a virtual machine configuration
$vmConfig = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize
$vmConfig = $vmConfig | Set-AzureRmVMOperatingSystem -Windows -ComputerName $VMName -Credential $cred -ProvisionVMAgent -WinRMHttps -WinRMCertificateUrl $keyVaultWinRM.Id
$vmConfig = $vmConfig | Set-AzureRmVMSourceImage -Id $image.Id
$vmConfig = $vmConfig | Add-AzureRmVMSecret -SourceVaultId $keyVault.ResourceId -CertificateStore 'My' -CertificateUrl $keyVaultWinRM.Id
$vmConfig = $vmConfig | Add-AzureRmVMNetworkInterface -Id $nic.Id
$vmConfig = $vmConfig | Add-AzureRmVMDataDisk -DiskSizeInGB 64 -CreateOption Empty -Lun 0

New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vmConfig
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $VMName
Write-Output "Created Virtual Machine: $($vm.Id)"

The previous script creates a new Azure virtual machine from the specified image, but most importantly it also creates a self-signed certificate and sets up WinRM remoting. The script can be run with the following arguments, for example:

$rgName = "MyVS2017BuildAgent"
$location = "westeurope"
$vhdUri = ''
$VMName = 'VS2017Build'
$cred = Get-Credential

.\CreateVM.ps1 -rgName $rgName -location $location -vhdUri $vhdUri -VMName $VMName -cred $cred

The script asks for a user name and password for the new administrator user created in the virtual machine. Then it asks for Azure credentials, so it can actually create the virtual machine in Azure. It takes a few minutes until the new virtual machine is started.

Install VSTS build agent

After the virtual machine has started, it should be possible to log in using a PowerShell remote session. It is easy to verify: first I check the VM's public IP address in the Azure portal, then I simply run the Enter-PSSession command.

$cred = Get-Credential
$option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Enter-PSSession -ComputerName 'VM_IP_ADDRESS' -UseSSL -Credential $cred -SessionOption $option

Type gci and you should see the remote files. Type exit to return to the local session.

The following script downloads, installs and configures the VSTS build agent on the target machine.

param (
    [string] $rgName,
    [string] $location,
    [string] $VMName,
    [PSCredential] $cred,
    [string] $VSTSAccount,
    [string] $PAT,
    [string] $VSTSAgentUrl,
    [string] $AgentPool = 'default'
)

# Get VM IP address
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $VMName
$nicId = $vm.NetworkProfile.NetworkInterfaces.Id
$nic = Get-AzureRmNetworkInterface -ResourceGroupName $rgName | Where-Object { $_.Id -eq $nicId }
$publicIpId = $nic.IpConfigurations.PublicIpAddress.Id
$publicIp = Get-AzureRmPublicIpAddress -ResourceGroupName $rgName | Where-Object { $_.Id -eq $publicIpId }
$vmAddress = $publicIp.IpAddress
Write-Output "Connecting to VM $($publicIp.IpAddress)"

$sessionOption = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$session = New-PSSession -ComputerName $vmAddress -UseSSL -Credential $cred -SessionOption $sessionOption
$remoteArgs = @($VSTSAgentUrl, $VSTSAccount, $PAT, $VMName, $AgentPool)
Invoke-Command -Session $session -ArgumentList $remoteArgs -ScriptBlock {
    param (
        [string] $VSTSAgentUrl,
        [string] $VSTSAccount,
        [string] $PAT,
        [string] $VMName,
        [string] $AgentPool
    )

    function GetRandomPassword {
        $sourceChars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$^&(){}[],.'
        $max = $sourceChars.Length
        $result = ''
        for ($i = 0; $i -lt 20; $i += 1) {
            $index = Get-Random -Minimum 0 -Maximum $max
            $c = $sourceChars[$index]
            $result += $c
        }
        return $result
    }

    # Create and format partition G:
    $disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }
    Initialize-Disk -InputObject $disk -PartitionStyle GPT
    $partition = New-Partition -InputObject $disk -UseMaximumSize -DriveLetter 'G'
    Format-Volume -Partition $partition -FileSystem NTFS -NewFileSystemLabel 'BUILD' -Confirm:$false
    Write-Output 'Formatted volume G:'

    # Create user svcBuild
    $serviceUserName = 'svcBuild'
    $servicePassword = GetRandomPassword
    $serviceSecurePassword = ConvertTo-SecureString $servicePassword -AsPlainText -Force
    $args = @{
        Name = $serviceUserName
        Password = $serviceSecurePassword
        FullName = 'BuildService'
        AccountNeverExpires = $true
        PasswordNeverExpires = $true
    }
    $serviceUser = New-LocalUser @args
    $administratorsGroup = Get-LocalGroup -Name 'Administrators'
    Add-LocalGroupMember -Group $administratorsGroup -Member $serviceUser
    Write-Output 'Created user svcBuild'

    # Disable the 'installer' user that installed Visual Studio 2017
    Disable-LocalUser -Name 'installer'
    Write-Host 'Disabled user installer'

    Set-Location 'G:\'

    # Download VSTS Agent
    $vstsAgentZipPath = ""
    Invoke-WebRequest -Uri $VSTSAgentUrl -UseBasicParsing -OutFile $vstsAgentZipPath
    Write-Output 'Downloaded'

    # Unzip VSTS Agent
    $buildFolder = New-Item -Path 'Build' -ItemType Directory
    Expand-Archive -Path $vstsAgentZipPath -DestinationPath $buildFolder.FullName
    Set-Location $buildFolder.FullName
    Write-Output 'Extracted'

    $serviceUserQualifiedName = "$VMName\$serviceUserName"
    & .\config.cmd --unattended --url "https://$VSTSAccount.visualstudio.com" --auth pat --token "$PAT" --pool "$AgentPool" --agent "$VMName" --runAsService --windowsLogonAccount "$serviceUserQualifiedName" --windowsLogonPassword "$servicePassword"
    Write-Output 'VSTS Build Agent configured.'
}

Remove-PSSession -Session $session

The previous script installs the build agent on the newly formatted data disk that we added in the previous script. It also creates a new user svcBuild with administrative rights, and the VSTS build agent is configured to run as this user. This is not required in all cases. For example, builds of .NET Framework or .NET Core projects can easily run as the default NETWORK_SERVICE user. However, a full user is required for building Universal Windows Platform or Xamarin Android applications. Microsoft also configures their hosted build agents to run with administrative privileges.

Actually, for .NET Framework or .NET Core projects, installing Visual Studio Build Tools should be sufficient, and they can run inside a Docker container. But that is for another article.

Now we can run the script with the following parameters:

$rgName = "MyVS2017BuildAgent"
$location = "westeurope"
$VMName = 'VS2017Build'
$cred = Get-Credential
$VSTSAccount = "erni"
$PAT = "{Get private access token from VSTS}"
$VSTSAgentUrl = ""

.\InstallAgent.ps1 -rgName $rgName -location $location -VMName $VMName -cred $cred -VSTSAccount $VSTSAccount -PAT $PAT -VSTSAgentUrl $VSTSAgentUrl

The PAT must be obtained before installing the build agent, as described in Deploy an agent on Windows.

And now the agent should be visible in default queue.
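If you prefer to verify this from the command line rather than the VSTS web UI, the agent pool can also be queried over the VSTS REST API, authenticating with the same PAT (a hedged sketch; the pool name 'default' and api-version are assumptions based on the values used above):

```powershell
# List the agents registered in the 'default' pool via the VSTS REST API.
$VSTSAccount = "erni"
$PAT = "{Get private access token from VSTS}"
$base64Pat = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$PAT"))
$headers = @{ Authorization = "Basic $base64Pat" }

# Look up the id of the 'default' pool ...
$pools = Invoke-RestMethod -Uri "https://$VSTSAccount.visualstudio.com/_apis/distributedtask/pools?api-version=4.1" -Headers $headers
$pool = $pools.value | Where-Object { $_.name -eq 'default' }

# ... and list its agents; the newly installed agent should appear here with status 'online'.
$agents = Invoke-RestMethod -Uri "https://$VSTSAccount.visualstudio.com/_apis/distributedtask/pools/$($pool.id)/agents?api-version=4.1" -Headers $headers
$agents.value | Select-Object name, status
```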


This way we installed a new VSTS/TFS build agent simply by running a few PowerShell scripts. If we need another identical build agent, or the current agent breaks down, it is very easy to set up a new one by running two PowerShell scripts: one to create the new virtual machine and one to install the VSTS build agent.

And when a new version of Visual Studio is available, we simply create a new image by running a single script.

Finally, I would like to say that I am glad the Packer templates for installing development tools are available on GitHub, so we don't have to start from scratch.
