Remove locks from Azure resources

In my previous blog post Lock Azure resources to prevent accidental deletion, I showed how to add a lock to a resource with an ARM template to protect it from accidental deletion. When you want to delete the resource, you first need to remove the lock. A lock cannot be removed with an ARM template. To remove the lock you can use:

  • PowerShell
  • REST API
  • The Azure portal

When deploying with ARM templates, the deployment will not remove locks. This protects your resources from accidental deletion in an Infrastructure as Code scenario when deploying from VSTS.

Only the roles Owner and User Access Administrator can delete the locks on the resources. After the lock is removed, the resource can be deleted like any other resource.

Removing locks with PowerShell
PowerShell has the following cmdlets to manage locks:

  • New-AzureRmResourceLock – sets a new lock on a resource
  • Get-AzureRmResourceLock – shows all the locks in your subscription
  • Set-AzureRmResourceLock – changes an existing lock
  • Remove-AzureRmResourceLock – removes a lock

Only the roles Owner and User Access Administrator can manage the locks on the resources. The following PowerShell command removes all the locks within the specified resource group:

$rgName = "rgwithlocksname"
Get-AzureRmResourceLock | Where-Object ResourceGroupName -eq $rgName | Remove-AzureRmResourceLock -Force

Removing locks with the REST API
Locks can also be managed with the REST API; see the Microsoft documentation on the Management Locks REST API.
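If you want to script this without the AzureRM cmdlets, the lock can be deleted with a plain HTTP call. Below is a minimal PowerShell sketch of what that could look like; the subscription id, resource group, lock name and bearer token are placeholders you have to supply yourself:

# Placeholder values; acquire the bearer token through Azure AD first.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "rgwithlocksname"
$lockName       = "myLock"
$token          = "<bearer token>"

$uri = "https://management.azure.com/subscriptions/$subscriptionId" +
       "/resourceGroups/$resourceGroup/providers/Microsoft.Authorization" +
       "/locks/$lockName" + "?api-version=2016-09-01"

# DELETE removes the lock; the same endpoint supports PUT (create) and GET (read).
Invoke-RestMethod -Method Delete -Uri $uri -Headers @{ Authorization = "Bearer $token" }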

Removing locks from the Portal
Finally, you can also remove locks from the portal. To do this, go to the resource and open the Locks tab under Settings. If you try to delete a resource group that contains a locked resource, the portal shows an error and no resources are deleted.

Conclusion
When deploying resources with ARM templates, locks can be helpful to protect your critical resources from accidental deletion. Locks can't be deleted with an ARM template (even one that deploys in complete mode). If your contributor users do not have the rights to manage locks in the portal, only the subscription owner will be able to delete the locked resources.

Lock Azure resources to prevent accidental deletion

How a lock can prevent users from accidentally deleting a resource.

In some cases you want to protect critical resources from accidental deletion. Some examples are a storage account with source data for processing, a Key Vault with disk encryption keys, or another key component in your infrastructure. When you lose a resource that is key in your infrastructure, recovery can be hard and slow. Resource Manager locks enable you to protect these critical resources from deletion.

Resource Manager locks
Resource Manager locks apply to management operations on the locked resources; the locks do not have any impact on the normal functioning of the resource. There are two possible types of locks on a resource:

  • CannotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.
  • ReadOnly means authorized users can read a resource, but they can’t delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.

Locking down a resource can save your contributors from accidentally deleting a critical resource. An ‘oops… I deleted the wrong resource’ moment should be a thing of the past.

In practice, users or service principals have the Contributor role on a resource. This role allows them to delete the resource. A lock on the resource prevents a user with the Contributor role from deleting it. Only the roles Owner and User Access Administrator can change the locks on the resources.

When deploying a lock from a VSTS release pipeline, the service principal needs the User Access Administrator role on the resource group.

Deploying Resource Manager locks
Deploying locks can be done with ARM templates or PowerShell. I prefer to add them to my ARM template and deploy them with my release pipeline. A simple template that adds a lock looks like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "lockedResource": {
      "type": "string"
    }
  },
  "resources": [
    {
      "name": "[concat(parameters('lockedResource'), '/Microsoft.Authorization/myLock')]",
      "type": "Microsoft.Storage/storageAccounts/providers/locks",
      "apiVersion": "2015-01-01",
      "properties": {
        "level": "CannotDelete",
        "notes": "prevent resource from accidental deletion"
      }
    }
  ]
}

The lockedResource parameter should contain the name of the resource you want to lock; in the case of locking down a storage account it is the storage account name, which the template expands to ‘<storage account name>/Microsoft.Authorization/myLock’.
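If you take the PowerShell route instead of the template, creating the same lock is a single cmdlet call. A minimal sketch, assuming a storage account; all names are placeholders:

# Placeholder names; CanNotDelete is the cmdlet's spelling of the CannotDelete level.
New-AzureRmResourceLock -LockName "myLock" `
    -LockLevel CanNotDelete `
    -LockNotes "prevent resource from accidental deletion" `
    -ResourceName "mystorageaccount" `
    -ResourceType "Microsoft.Storage/storageAccounts" `
    -ResourceGroupName "my-resource-group"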

When you delete a resource group in the portal that contains a locked resource, the deletion is prevented and the following message is shown to the user:

After removing the lock from the storage account, you will be able to remove the resource group.

Conclusion
Locking critical resources can save you from accidental and hard-to-recover downtime. Applying locks from within your ARM template is very easy and enables you to manage them like any other resource.

Fixing ARM deployment errors for Linux disk encryption

When running ARM templates to deploy Linux VMs with disk encryption on Azure, I encountered a few errors. The errors appeared when I ran the same template multiple times. In this post I explain the errors and how I fixed them.

Error: … is not a valid versioned Key Vault Secret URL
The first error I ran into was related to the return message of the encryption extension (.instanceView.statuses[0].message). The result message contains the Key Vault Secret URL if the encryption is successful. When running the same template over and over again, the following messages popped up:

"message": "{\r\n  \"error\": {\r\n    \"code\": \"InvalidParameter\",\r\n    \"target\": \"encryptionSettings.diskEncryptionKey.secretUrl\",\r\n    \"message\": \"Encryption succeeded for all volumes is not a valid versioned Key Vault Secret URL. It should be in the format https://<vaultEndpoint>/secrets/<secretName>/<secretVersion>.\"\r\n  }\r\n}

"message": "{\r\n  \"error\": {\r\n    \"code\": \"InvalidParameter\",\r\n    \"target\": \"encryptionSettings.diskEncryptionKey.secretUrl\",\r\n    \"message\": \"Encryption succeeded for data volumes is not a valid versioned Key Vault Secret URL. It should be in the format https://<vaultEndpoint>/secrets/<secretName>/<secretVersion>.\"\r\n  }\r\n}"

The cause of the message is that the return message is used for two different purposes:

  1. returning the Key Vault Secret URL
  2. returning verbose messages

The first time the extension runs, the return value is the Key Vault Secret URL. Once the encryption is finished, the extension returns the verbose message that the encryption succeeded. That verbose message breaks the next ARM template run that applies the encryption information to the VM configuration.

Idempotent provisioning and configuration
Creating an idempotent provisioning and configuration for provisioning will enable you to rerun your releases at any time. ARM Templates are idempotent. This means that every time they will be executed the result will be exactly the same. The configuration is set to what you have configured in your definitions. Because the definitions are declarative, you do not have to think about the steps on how to get there; the system will figure this out for you.

See Infrastructure as Code and VSTS

What I want the extension to do is give me the same result when I run the template multiple times (idempotency). Because of the verbose message, I had to find a solution that lets me reapply my templates without getting error messages about successful disk encryption. The solution turned out to be setting the sequenceVersion parameter of the disk encryption extension to a new value every time the ARM template runs.

sequenceVersion: Sequence version of the BitLocker operation. Increment this version number every time a disk encryption operation is performed on the same VM (documentation)

Set the sequenceVersion
The documentation on the sequenceVersion suggests that it is a number; however, it can be anything, as long as it differs from the previous value. Unfortunately you can’t generate a random string in an ARM template, as that would break the idempotency property. To solve this, I use a little trick to generate a unique string in my deployment template without adding an extra parameter. Each deployment in our VSTS pipeline gets a unique name, and I use that name to generate a unique string:

"value":"[uniqueString(resourceGroup().id, deployment().name)]"

The deployment name is different on each deployment (it is set by the resource group deployment task in VSTS), for example xxxx-20161208-1436, and this can be utilized to generate unique strings. The function deployment().name returns the name of the current template, so when using linked templates you have to call it in the top-level template and pass the value down from there.
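For comparison, when you apply the encryption with PowerShell instead of an ARM template, the sequence version is a plain cmdlet parameter and any fresh value will do. A hedged sketch; the resource and Key Vault names are placeholders:

# Placeholder names; the new GUID plays the same role as uniqueString() in the template.
$sequenceVersion = [Guid]::NewGuid().ToString()

Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "my-resource-group" `
    -VMName "my-linux-vm" `
    -AadClientID "<aad application id>" `
    -AadClientSecret "<aad application secret>" `
    -DiskEncryptionKeyVaultUrl "https://myvault.vault.azure.net/" `
    -DiskEncryptionKeyVaultId "/subscriptions/<guid>/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/myvault" `
    -VolumeType All `
    -SequenceVersion $sequenceVersion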

Error: Failed to get status file [Errno 2] No such file or directory: …
When you rerun the template with a new sequenceVersion before you have restarted the VM (when a restart is needed), you will get a new error message:

\"message\": \"VM has reported a failure when processing extension 'AzureDiskEncryptionForLinux'. Error message: \\\"Failed to get status file [Errno 2] No such file or directory: '/var/lib/waagent/Microsoft.Azure.Security.AzureDiskEncryptionForLinux-0.1.0.999239/status/1.status'\\\"

To solve this error, as the message implies, you need to restart the VM. Go to this post to see how to accomplish this.
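If you prefer to script the restart as well, a one-liner does it; the names are placeholders:

# Restart the VM so the disk encryption extension can write a fresh status file.
Restart-AzureRmVM -ResourceGroupName "my-resource-group" -Name "my-linux-vm"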

Conclusion
After applying the unique string to the sequenceVersion, the deployments keep working. The only thing you have to be keen on is restarting your VMs when they are in the VMRestartPending state.

Make a .NET Core CLI Extension

.NET Core comes with a new tool chain for software development. These tools run from the CLI (Command Line Interface). Out of the box you have command-line restore, build, etc. These tools are the primary tools on which higher-level tools, such as Integrated Development Environments (IDEs), editors and build orchestrators, can build. The tool set is extendable at project level. That means that you can add tools in the context of your project by adding them to your project file. The tool you want to run from the CLI is called a verb (dotnet-verb). Running a verb is done with: dotnet verb.

No more “it works on my machine”!

Not all tools you need are installed out of the box. The .NET Core CLI comes with an extension model that enables you to add your own tools. On project level you can add tools distributed with NuGet. Add the NuGet package configuration to your project file and run dotnet restore to get the tool on your system. The tool will be installed the same way the NuGet packages for your project are installed. The tool is now available in the scope of your project, and it will be installed on all the machines where the project is developed. This way you have the same tools on your build server as on your developer workstation. With the new model you do not need administrative privileges to install tooling for your project.

Create a simple extension
To show how this works, I created an example project. It can be found on GitHub: dotnet-allversions.

using System;
using System.Diagnostics;
using System.Linq;
using System.IO;
using Microsoft.DotNet.PlatformAbstractions;

namespace dotnetallversion
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Installed .Net versions are:");

            string clipath = Path.GetDirectoryName(new Microsoft.DotNet.Cli.Utils.Muxer().MuxerPath);
            DirectoryInfo di = new DirectoryInfo(Path.Combine(clipath,"sdk"));

            Console.WriteLine("Active version: " + Microsoft.DotNet.Cli.Utils.Product.Version);
            Console.WriteLine();

            foreach (var item in di.GetDirectories()){
                string versionfile = Path.Combine(item.FullName,".version");
                if (IsVersionFilePresent(versionfile))
                {
                    string version = item.Name;
                    string hash = GetHash(versionfile);

                    string template = $@"Product Information:
 Version:            {version}
 Commit SHA-1 hash:  {hash}
";
                    Console.WriteLine(template);
                }
            }
            Console.WriteLine($@"Runtime Environment:
 OS Name:     {RuntimeEnvironment.OperatingSystem}
 OS Version:  {RuntimeEnvironment.OperatingSystemVersion}
 OS Platform: {RuntimeEnvironment.OperatingSystemPlatform}
 RID:         {RuntimeEnvironment.GetRuntimeIdentifier()}");
        }

        static string GetHash(string versionfile)
        {
            var lines = File.ReadAllLines(versionfile);
            return lines[0].Substring(0,8);
        }

        static bool IsVersionFilePresent(string versionfile){
            return File.Exists(versionfile);
        }
    }
}

To make a .NET CLI extension from the program, you have to create a NuGet package. This can be done by restoring the needed packages and then running the pack command:

dotnet restore
dotnet pack

Now you have to upload the NuGet package to a NuGet server to be used in your projects.
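Uploading can also be done from the CLI. A sketch; the exact .nupkg file name depends on the version you packed, and the feed URL and API key are placeholders:

dotnet nuget push dotnet-allversions.0.1.1.nupkg --source https://your-nuget-server/v3/index.json --api-key <your-api-key>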

Add the extension to your project
Now you can add the tool to your project by adding the following configuration to your project file:

<ItemGroup>
    <DotNetCliToolReference Include="dotnet-allversions">
         <Version>0.1.1-*</Version>
    </DotNetCliToolReference>
</ItemGroup>

Then run:

dotnet restore

This will download and install the tool from NuGet. Next you can run the program in your project scope:

dotnet allversions

Conclusion
.NET CLI extensions are a powerful way of extending the commands within your project. For example, you can enforce that all developers and build servers use the same tool versions. Distributing the tools with NuGet gives all team members access to the same set of tools everywhere.

Infrastructure as Code and VSTS

Written by Pascal Naber and Peter Groenewegen for the Xpirit Magazine

Your team is in the process of developing a new application feature, and the infrastructure has to be adapted. The first step is to change a file in your source control system that describes your infrastructure. When the changed definition file is saved in your source control system, it triggers a new build and release. Your new infrastructure is deployed to your test environment, and the whole process of getting the new infrastructure deployed took minutes, while you only changed a definition file and did not touch the infrastructure itself.

Does this sound like a dream? It is called Infrastructure as Code. In this article we will explain what Infrastructure as Code (IaC) is, the problems it solves and how to apply it with Visual Studio Team Services (VSTS).

Infrastructure in former times
We have radically changed the way our infrastructure is treated. Before the change to IaC it looked like this: our Operations team was responsible for the infrastructure of the application. That team was very busy because of all its responsibilities, so we had to request changes to the infrastructure well ahead of time.

The infrastructure for the DTAP environment was partially created by hand and partly by using seven PowerShell scripts. The order in which the scripts were executed was important, and only one IT pro had the required knowledge. The PowerShell scripts were distributed over multiple people and partly saved on local machines. The other part of the scripts was stored on a network share so every IT pro could access it. Over time, many different versions of the PowerShell scripts were created, depending on the person who executed them and the project they were executed for.

Figure 1: A typical network share

The configuration of the environment is also done by hand.

This process creates the following problems:

  • Changes take too long before being applied.
  • The creation of the environment takes a long time and carries high risk, not only because manual steps can easily be forgotten: the order of the PowerShell scripts is important, but only a single person knows about this order.
  • What’s more, the scripts are executed at a particular point in time and they are updated regularly. However, it is unclear whether the environment will be the same when created again.
  • Some scripts are on the work machine of the IT pro, sometimes because it’s the person’s area of expertise, and sometimes because the scripts are not production code. In either case, nobody else has access to them.
  • Some scripts are shared, but many versions of the same script are created over time. It’s not clear what has changed, why it was changed and who changed it.
  • It’s also not clear what the latest version of the script is (see figure 1).
  • The PowerShell scripts contain a lot of code; they do not only create resources, but also check whether resources already exist and update them if required.
  • The whole process of deploying infrastructure is pretty much trial and error.

As you can see, the creation of infrastructure is an error-prone and risky operation that needs to change in order to deliver high quality, reproducible infrastructure.

Definition of Infrastructure as Code
Infrastructure as Code is the process of managing and provisioning computing infrastructure and its configuration through machine-processable definition files. It treats the infrastructure as a software system, applying software engineering practices to manage changes to the system in a structured, safe way.

Infrastructure as Code characteristics
Our infrastructure deployment example has the following infrastructure provisioning characteristics, which will be explained in the following paragraphs:

  • Declarative
  • Single source of truth
  • Increase repeatability and testability
  • Decrease provisioning time
  • Rely less on availability of persons to perform tasks
  • Use proven software development practices for deploying infrastructure
  • Idempotent provisioning and configuration

Declarative

Figure 2: Schematic visualization of Imperative vs Declarative

A practice in Infrastructure as Code is to write your definitions in a declarative way versus an imperative way. You define the state of the infrastructure you want to have and let the system do the work of getting there. In the Azure cloud, the way to use declarative definition files is ARM templates. Besides the native tooling you can use a third-party tool like Terraform to deploy declarative files to classic Azure and to AzureRM. PowerShell scripts use an imperative way: in PowerShell you specify how you want to reach your goals.
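To make the contrast concrete, here is a small imperative PowerShell sketch that has to do its own existence checking, which is exactly the bookkeeping a declarative ARM template takes off your hands (the names are placeholders):

# Imperative style: you spell out the steps and the existence checks yourself.
$rgName = "my-resource-group"
if (-not (Get-AzureRmResourceGroup -Name $rgName -ErrorAction SilentlyContinue)) {
    New-AzureRmResourceGroup -Name $rgName -Location "West Europe"
}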

Single source of truth
The infrastructure declaration files are placed in a source control repository. This is the single source of truth. All team members can see and work on the files and start their own version of the infrastructure. They can test it, and then commit changes to source control. All changes are under version control and can be linked to work items. The source control repository gives insight into what is changed and by whom.
The link to the work item can tell you why it was changed. It’s also clear what the latest version of the file is. Team members can easily work together on the same file.

Increase repeatability and testability
When a change to source control is pushed, this initiates a build that can test the change and after that publish an artifact. That will trigger a release which deploys your infrastructure. Infrastructure as Code makes your process repeatable and testable. After deploying your infrastructure, you can run standard tests to see if the deployment is correct. Changes can be deployed and tested in a DTAP pipeline.
This makes your process of deploying infrastructure reliable, and when you redeploy, you will get the same environment time after time.

Decrease provisioning time
Everything is automated to create the infrastructure. This results in short provisioning times. In many cases a deployment to a cloud environment has a lead time of 5 to 10 minutes, compared to a deployment time of days, weeks or even months.
This is accomplished by skipping manual tasks and waiting time, in combination with high-quality, proven templates. The automation creates an environment that should not be touched by hand. It handles your servers like cattle instead of pets*. In case of problems there is no need to log on to the infrastructure to see what is going wrong and try to find and fix the problem. Just delete the environment and redeploy the infrastructure to get the original working version back.

Rely less on availability of persons to perform tasks
In our team, everybody can change and deploy the infrastructure. This removes the dependency on a separate operations team. By having a shared responsibility, the whole team cares and is able to optimize the infrastructure for the application.

This will result in more efficient usage of the infrastructure deployed by the team. Operations is now spending more time on developing software than on configuring infrastructure by hand. Operations is moving more to DevOps.

Pets vs Cattle
A widely used metaphor for how IT operations should handle servers in the cloud.
Servers are like pets: you name them, and when they get sick, you nurse them back to health.
Servers are like cattle: you number them, and when they get sick, you get another one.

Use proven software development practices for deploying infrastructure
When applying Infrastructure as Code you can use proven software development practices for deploying infrastructure. Handling your infrastructure the same way you handle your code helps you to streamline the whole process. You can build and test your infrastructure on each change. Using source control as a team is a must, and the sources it contains should always be in a state in which they can be executed. This results in the need for tests such as unit tests.

Idempotent provisioning and configuration
Creating an idempotent provisioning and configuration for provisioning will enable you to rerun your releases at any time. ARM Templates are idempotent. This means that every time they will be executed the result will be exactly the same. The configuration is set to what you have configured in your definitions. Because the definitions are declarative, you do not have to think about the steps on how to get there; the system will figure this out for you.

Creating an Infrastructure as Code pipeline with VSTS
There are many tools you can use to create an Infrastructure as Code pipeline. In this sample we will show you how to create a pipeline that deploys an ARM template with a Visual Studio Team Services (VSTS) build and release pipeline. The ARM template will be placed in a Git repository in VSTS. When you change the template, a build is triggered, and the build will publish the ARM template as an artifact. Subsequently, the release will deploy or apply the template to an Azure resource group.

Figure 3: VSTS source control, build and release

Prerequisite
To start building Infrastructure as Code with VSTS you need a VSTS account. If you don’t have a VSTS account, you can create one at https://www.visualstudio.com. This is free for up to 5 users. Within the VSTS Account you create, you then create a new project with a Git repository. The next step is to get some infrastructure definition pushed to the repository.

ARM template
ARM templates are a declarative way of describing your infrastructure. They are JSON files that describe your infrastructure and can contain four sections: parameters, variables, resources and outputs. To get started with ARM templates you can read the Resource Manager Template Walkthrough.

It is possible to create ARM templates yourself by choosing the project type Cloud → Azure Resource Group in Visual Studio. The community has already created a lot of templates that you can reuse or take as a good starting point. The community ARM templates can be found in the Azure Quickstart Templates. ARM templates are supported on Azure and also on-premises with Microsoft Azure Stack.

In our example we want to deploy a Web App with a SQL Server database. The files for this configuration are called 201-web-app-sql-database. Download the ARM template and parameter files and push them to the Git source control repository in your VSTS project.

VSTS Build
Now you are ready to create the build. Navigate to the build tab in VSTS and add a new build. Use your Git repository as the source. Make sure you have Continuous Integration turned on. This will start the build when code is pushed into the Git repository. As a minimum, the build has to publish your files to an artifact called drop. To do this, add a Copy Publish Artifact step to your build and configure it like this:

Figure 4: ARM template in Git
Figure 5: Copy Publish Artifact configuration

VSTS Release
The next step is to use VSTS Release for deploying your infrastructure to Azure. To do so, navigate to Release and add a new release definition. Rename the first environment to Development and add the task Azure Resource Group Deployment to the Development environment. This task can deploy your ARM template to an Azure resource group. To configure the task, you need to add an ARM Service Endpoint to VSTS. You can read how to do this in the following blog post: http://xpir.it/mg3-iac4. Now you can fill in the remaining information, i.e. the name of the ARM template and the name of the parameters file (fig. 6):

Figure 6: Azure Resource Group deployment configuration
Figure 7: Clone an environment in Release

DTAP
At this point you only have a Development environment. Now you are going to add a Test, Acceptance and Production environment. The first step is to create the other environments in the release definition. Add environments by clicking the Add environment button or by cloning the Development environment.

Infrastructure as Code will help you to create a robust and reliable infrastructure in a minimum of time.

Each environment needs separate parameters, so you need to create a parameters JSON file per DTAP environment, as sketched below. Each environment gets its own azuredeploy.{environment}.parameters.json file, where {environment} stands for development, test, acceptance or production.
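The release task is doing what you could also do from PowerShell. A hedged sketch for a single environment, assuming one resource group per environment and the file naming convention above:

# Deploy the template with the parameters file of one specific DTAP environment.
$environment = "test"
New-AzureRmResourceGroupDeployment -ResourceGroupName "myproject-$environment" `
    -TemplateFile "azuredeploy.json" `
    -TemplateParameterFile "azuredeploy.$environment.parameters.json"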

Figure 8: Configure each environment to a different parameters file

The deployment can be changed to meet your wishes, for example by deploying to a separate resource group in Azure per DTAP environment. Now you have the first version of an Infrastructure as Code deployment pipeline. The pipeline can be extended in multiple ways. The build can be extended with tests to make sure the infrastructure is configured the way it is supposed to be. The release can be extended by adding approvers, which makes sure that an environment will only be deployed after the approval of one or more persons.

Conclusion
Infrastructure as Code will help you to create a robust and reliable infrastructure in a minimum of time. Each time you deploy, the infrastructure will be exactly the same. You can easily change the resources you are using by changing code and not by changing infrastructure.

When you apply Infrastructure as Code, everything should be automated, which saves a lot of time and avoids manual configuration and errors. All configurations are the same, and there are no more surprises when you release your application to production. All changes in the infrastructure are accessible in source control.

Source control gives great insight into what is changed, why it was changed and by whom. A DevOps team that applies Infrastructure as Code is self-contained in running its application. The team is responsible for all aspects of the environment they are using. All team members have the same power and responsibilities in keeping everything up and running, and everybody is able to quickly fix, test and deploy changes.


This article was published in the Xpirit Magazine #3, get your soft or hard copy here.

VSTS task clean resource group

When testing the deployment of resources in release pipelines, the resource groups need to be cleaned after you are done testing. In many scenarios you do not want to remove the resource group itself, or you have no rights to do so. To remove the resources in the resource group you can use the VSTS task Clean Resources. This task removes all resources in a resource group.
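In essence the task does what the following PowerShell sketch does; the resource group name is a placeholder:

# Remove every resource in the group but leave the group itself in place.
# Dependencies between resources may require running this more than once.
$rgName = "my-test-resource-group"
Get-AzureRmResource |
    Where-Object ResourceGroupName -eq $rgName |
    ForEach-Object { Remove-AzureRmResource -ResourceId $_.ResourceId -Force }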

Keep your ARM deployment secrets in the Key Vault

Keep your deployment secrets secure in the Key Vault when using ARM templates to deploy into Azure

When creating new resources in Azure that have secrets like passwords or SSL certificates, you can securely save those secrets in the Key Vault and retrieve them from the Key Vault when you deploy. Only the people who need access to the secrets can read and write them to the Key Vault. In an infrastructure as code scenario the secrets are supplied when deploying your templates to Azure; the code itself will be free of secrets.

To accomplish this you need to do the following:

  • Deploy a Key Vault
  • Add the secret to the Key Vault
  • Create an ARM template that uses the secret on deployment

Deploy a Key Vault
When you need to access the Key Vault from your deployment, you need to set enabledForTemplateDeployment to true. The following ARM template creates a Key Vault that is enabled for template deployments:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVaultName": {
      "type": "string",
      "metadata": {
        "description": "Name of the vault"
      }
    },
    "tenantId": {
      "type": "string",
      "metadata": {
        "description": "Tenant Id for the subscription and use assigned access to the vault. Available from the Get-AzureRMSubscription PowerShell cmdlet"
      }
    }
  },
  "variables":{
     "skuFamily": "A",
     "skuName": "standard"
  },
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults",
      "name": "[parameters('keyVaultName')]",
      "apiVersion": "2015-06-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "sku": {
          "name": "[variables('skuName')]",
          "family": "[variables('skuFamily')]"
        },
"accessPolicies": [
],
        "tenantId": "[parameters('tenantId')]",
        "enabledForDeployment": false,
        "enabledForTemplateDeployment": true,
        "enabledForVolumeEncryption": false
      }
    }
  ]
}
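The same vault can also be created with a single PowerShell command; a sketch with placeholder names:

# -EnabledForTemplateDeployment allows ARM deployments to read secrets from this vault.
New-AzureRmKeyVault -VaultName "myvault" `
    -ResourceGroupName "my-resource-group" `
    -Location "West Europe" `
    -EnabledForTemplateDeployment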

Add a secret to the Key Vault
Secrets can be added to the Key Vault with an ARM template, with PowerShell, or in the portal. The following snippet adds a secret as a nested resource of the vault:

{
  "type": "secrets",
  "name": "[parameters('secretName')]",
  "apiVersion": "2015-06-01",
  "properties": {
    "value": "[parameters('secretValue')]"
  },
  "dependsOn": [
    "[concat('Microsoft.KeyVault/vaults/', parameters('keyVaultName'))]"
  ]
}
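Adding the same secret with PowerShell could look like this; the vault and secret names are placeholders:

# The secret value must be a SecureString; here it is entered interactively.
$secretValue = Read-Host -Prompt "Secret value" -AsSecureString
Set-AzureKeyVaultSecret -VaultName "myvault" -Name "adminPassword" -SecretValue $secretValue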

Create an ARM template that uses the secret on deployment
The last step is using the secrets in your ARM templates. This can be done by making a reference to your Key Vault in the parameters file:

{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "password": {
        "reference": {
          "keyVault": {
            "id": "/subscriptions/{guid}/resourceGroups/{group-name}/providers/Microsoft.KeyVault/vaults/{vault-name}"
          },
          "secretName": "adminPassword"
        }
      },
      "username": {
        "value": "exampleadmin"
      }
    }
}

This way of referencing the password is too static for an infrastructure as code scenario. The next step is to get the secret dynamically from the Key Vault in the environment you are deploying into. This can be done by passing the keyVaultResourceGroup, vaultName and secretName as parameters.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "vaultName": {
        "type": "string"
      },
      "secretName": {
        "type": "string"
      },
      "keyVaultResourceGroup": {
        "type": "string"
      }
    },
    "resources": [
    {
      "apiVersion": "2015-01-01",
      "name": "nestedTemplate",
      "type": "Microsoft.Resources/deployments",
      "properties": {
        "mode": "incremental",
        "templateLink": {
          "uri": "		<linked template uri>",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "password": {
            "reference": {
              "keyVault": {
                "id": "[concat(subscription().id,'/resourceGroups/',parameters('keyVaultResourceGroup'), '/providers/Microsoft.KeyVault/vaults/', parameters('vaultName'))]"
              },
              "secretName": "[parameters('secretName')]"
            }
          }
        }
      }
    }],
    "outputs": {}
}

This way you are able to get the secrets from the Key Vault in the subscription you are deploying into.
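Starting such a deployment then only requires the coordinates of the secret, never the secret itself. A hedged sketch; the names are placeholders, and the cmdlet exposes the template parameters as dynamic parameters:

# Only the vault coordinates travel through the pipeline; the secret stays in the Key Vault.
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" `
    -TemplateFile "azuredeploy.json" `
    -keyVaultResourceGroup "keyvault-rg" `
    -vaultName "myvault" `
    -secretName "adminPassword"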

Some extra information can be found at: resource-manager-keyvault-parameter