Subsections of Usage Guide

Concepts

Note

This page is a work in progress and will be updated as we improve & finalize the content. Please check back regularly for updates.

When developing an Azure solution using AVM modules, there are several aspects to consider. This page covers important concepts and provides guidance on the technical decisions involved. Each concept/topic referenced here is further detailed in the corresponding Bicep or Terraform specific guidance.

Language-agnostic concepts

Topics/concepts that are relevant and applicable for both Bicep and Terraform.

Module Sourcing

Public Registry

Leveraging the public registries (i.e., the Bicep Public Registry or the Terraform Public Registry) is the most common and recommended approach.

This allows you to leverage the latest and greatest features of the AVM modules, as well as the latest security updates. While there aren’t any prerequisites for using the public registry - no extra software component or service needs to be installed and no configuration is needed - the client machine from which the deployment is initiated will need access to the public registry.
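
For example, in Bicep, referencing a module from the public registry is a single line in your template. The module path and version below are illustrative - check the module index for current values:

// Pull an AVM module straight from the Bicep Public Registry.
// No extra tooling is required beyond Bicep itself.
module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
  name: 'storageAccountDeployment'
  params: {
    name: 'mystorageaccountname' // must be globally unique
  }
}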

Private Registry (synced)

A private registry - hosted in your own environment - can store modules originating from the public registry. Using a private registry still gives you the latest versions of AVM modules while allowing you to review each version of each module before admitting it to your private registry. You also control who can access your private registry. Note that using a private registry means that you’re still using each module as is, without making any changes.
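
In Bicep, for instance, a private registry is typically an Azure Container Registry (ACR) into which module versions are synced; your template then references the private copy with a br: path. The registry name below is an illustrative placeholder:

// Reference the synced copy of an AVM module from a private Azure Container Registry.
// Replace 'myregistry' with your own registry name.
module storageAccount 'br:myregistry.azurecr.io/avm/res/storage/storage-account:0.19.0' = {
  name: 'storageAccountDeployment'
  params: {
    name: 'mystorageaccountname'
  }
}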

Inner-sourcing

Inner-sourcing AVM means maintaining your own synchronized copy of AVM modules in your own internal private registry, repositories, or other storage options. Customers normally look to inner-source AVM modules when they have strict security and compliance requirements, or when they want to publish their own lightly wrapped versions of the modules to meet their specific needs; for example, changing some allowed or default values for parameter or variable inputs.
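
As a minimal sketch of the “lightly wrapped” approach, an internal Bicep module might pin organization-mandated defaults and pass the remaining inputs through to the underlying AVM module (the names and defaults here are purely illustrative):

// storage-account-wrapper.bicep - illustrative internal wrapper around an AVM module
@description('Required. The name of the Storage Account.')
param name string

@description('Optional. Location for the resource. Defaults to the resource group location.')
param location string = resourceGroup().location

module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
  name: 'wrappedStorageAccountDeployment'
  params: {
    name: name
    location: location
    // Organization-enforced defaults:
    skuName: 'Standard_GRS'
    allowBlobPublicAccess: false
  }
}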

This is a more complex approach that can be beneficial in certain scenarios. However, it should not be the default approach: it introduces significant overhead and requires considerable skills and resources to set up and maintain.

There are many ways to approach inner-sourcing AVM modules for both Bicep and Terraform. The AVM team will be publishing guidance on this topic, based on customer experience and learnings.

Tip

You can see the AVM team talking about inner-sourcing on the AVM February 2025 community call on YouTube.

Solution Development

This section provides advanced guidance for developing solutions using Azure Verified Modules (AVM). It covers technical decisions and concepts that are important for building and deploying Azure solutions using AVM modules.

Planning your solution

When implementing infrastructure in Azure using IaaS and PaaS services, there are multiple deployment options. In this article, we assume that a decision has been made to implement your solution using Infrastructure-as-Code (IaC). IaC provides programmatic, declarative control of the target infrastructure and is ideal for projects that require repeatability and idempotency.

Choosing an Infrastructure-as-Code language

There are multiple language choices when implementing your solution using IaC in Azure. The Azure Verified Modules project currently supports Bicep and Terraform. The following guidance summarizes considerations that can help you choose the option that best suits your requirements.

Reasons to choose Bicep

Bicep is Microsoft’s first-party offering for IaC deployments. It supports Generally Available (GA) and preview features for all Azure resources and allows for modular composition of resources and solution templates. Its simplified syntax makes IaC development intuitive, and the Bicep extension for VS Code provides IntelliSense and syntax validation to assist with coding. Finally, Bicep is well suited for infrastructure projects and teams that don’t need to manage other cloud platforms or services outside of Azure. For a more detailed read on reasons to choose Bicep, read this article from the Bicep documentation.

Reasons to choose Terraform

HashiCorp’s Terraform is an extensible third-party platform that can be used across multiple cloud and on-premises environments through its many provider plugins. It has widespread adoption due to its simplified, human-readable configuration files, common functionality, and the ability to allow a project to span multiple provider spaces.

In Azure, support is provided through two primary providers: AzureRM and AzAPI. The default provider for many Azure use cases is AzureRM, which is co-developed by Microsoft and HashiCorp. It includes support for generally available (GA) features, while support for new and preview features might be slightly delayed following their initial release. AzAPI is developed exclusively by Microsoft and supports all preview and GA features, at the cost of being more complex to use due to its more direct interaction with Azure’s APIs. While it is possible to use both providers in a single project as needed, the best practice is to standardize on a single provider as much as is reasonable.

Projects typically choose Terraform when they bridge multiple cloud infrastructure platforms or when the development team has previous experience coding in Terraform. Modern Integrated Development Environments (IDEs) - such as Visual Studio Code - include extension support for Terraform features as well as additional Azure-specific extensions. These extensions enable syntax validation and highlighting, as well as code formatting and HashiCorp Cloud Platform (HCP) integration for HashiCorp Cloud customers. For a more detailed read on reasons to choose Terraform, read this article from the Terraform on Azure documentation.

Architecture design

Before starting the process of codifying infrastructure, it is important to develop a detailed architecture of what will be created. This should include details for:

  1. Organizational elements such as management groups, subscriptions, and resource groups, as well as any tagging and Role-Based Access Control (RBAC) configurations for each.
  2. Infrastructure services that will be created, along with key configuration details such as SKU values, network CIDR range sizes, or other solution-specific configuration.
  3. Any relationships between services that will be codified as part of the deployment.
  4. Inputs to your solution, for designs that are intended to be used as templates.

Note

For a production grade solution, you need to

  • follow the recommendations of the Cloud Adoption Framework (CAF) and have your platform and application landing zones defined, as per Azure Landing Zones (ALZ);
  • follow the recommendations of the Azure Well-Architected Framework (WAF) to ensure that your solution is compliant with and integrates into your organization’s policies and standards. This includes considerations for security, identity, networking, monitoring, cost management, and governance.

Sourcing content for deployment

Once the architecture is agreed upon, it is time to plan the development of your IaC code. There are several key decision points that should be considered during this phase.

Content creation methods

The two primary methods used to create your solution’s modules are:

  1. Using base resources (“vanilla resources”) from scratch or
  2. Leveraging pre-created modules from the AVM library to minimize the time to value during development.

The trade-off between the two options is primarily around control vs. speed. AVM works to provide the best of both options by providing modules with opinionated and recommended practice defaults while allowing for more detailed configuration as needed. In our sample exercise we’ll be using AVM modules to demonstrate building the example solution.

AVM module type considerations

When using AVM modules for your solution, there is an additional choice to consider: the AVM library includes both pattern and resource module types. If your architecture includes or follows a well-known pattern, then a pattern module may be the right option for you. If you determine this is the case, search the module index for pattern modules in your chosen language to see if an option exists for your scenario. Otherwise, using resource modules from the library will be your best option.

In cases where an AVM resource or pattern module isn’t available for use, review the Bicep or Terraform provider documentation to identify how to augment AVM modules with standalone resources. If you feel that additional resource or pattern modules would be useful, you can request their creation by opening a module proposal issue in the AVM GitHub repository.

Module source considerations

Once the decision has been made to use AVM modules to help accelerate solution development, the next key decision is where those modules will be sourced from. A detailed exploration of the different sourcing options can be found in the Module Sourcing section of the Concepts page. Take a moment to review the options discussed there.

For our solution we will leverage the Public Registry option by sourcing AVM modules directly from the respective Terraform and Bicep public registries. This will avoid the need to fork copies of the modules for private use.

Subsections of Solution Development

Bicep - Solution Development

Introduction

Azure Verified Modules (AVM) for Bicep are a powerful tool that leverage the Bicep domain-specific language (DSL), industry knowledge, and an Open Source community, which altogether enable developers to quickly deploy Azure resources that follow Microsoft’s recommended practices for Azure.
In this article, we will walk through the Bicep-specific considerations and recommended practices for developing your solution leveraging Azure Verified Modules. We’ll review some of the design features and trade-offs and include sample code to illustrate each discussion point.

In this tutorial, we will:

  • Deploy a basic Virtual Machine architecture into Azure
  • Explore recommended practices related to Bicep template development
  • Demonstrate the ease with which you can deploy AVM modules
  • Describe each of the development and deployment steps in detail

After completing this tutorial, you’ll have a working knowledge of:

  • How to discover and add AVM modules to your Bicep template
  • How to reference and use outputs across AVM modules
  • Recommended practices for parameterization and structure of your Bicep file
  • Configuration of AVM modules to meet Microsoft’s Well-Architected Framework (WAF) principles
  • How to deploy your Bicep template into an Azure subscription from your local machine

Let’s get started!

Prerequisites

You will need a few tools and components to complete this guide. Before you begin, make sure they are installed in your development environment.

Solution Architecture

Before we begin coding, it is important to have details about what the infrastructure architecture will include. For our example, we will be building a solution that hosts a simple application on a Linux virtual machine (VM). The solution must be secure and auditable: the VM must not be accessible from the internet, its logs should be easily accessible, and all Azure services should utilize logging tools for auditing purposes.

Azure VM Solution Architecture

Develop the Solution Code

Creating the main.bicep file

The architecture diagram shows all components needed for a successful solution deployment. Rather than building the complete solution at once, this tutorial takes an incremental approach, building the Bicep file piece by piece and testing the deployment at each stage. This approach allows for discussion of each design decision along the way.

The development will start with core platform components: first the backend logging services (Log Analytics) and then the virtual network.

Let’s begin by creating our folder structure along with a main.bicep file. Your folder structure should be as follows:

VirtualMachineAVM_Example1/
└── main.bicep

After you have your folder structure and main.bicep file, we can proceed with our first AVM resources!

Log Analytics

Let’s start by adding a logging service to our main.bicep since all other deployed resources will use this service for their logs.

Tip

Always begin template development by adding resources that create dependencies for other downstream services. This approach simplifies referencing these dependencies within your other modules as you develop them. For example, starting with Logging and Virtual Network services makes sense since all other services will depend on these.

The logging solution depicted in our Architecture Diagram shows we will be using a Log Analytics workspace. Let’s add that to our template! Open your main.bicep file and add the following:

1module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
2  name: 'logAnalyticsWorkspace'
3  params: {
4    // Required parameters
5    name: 'VM-AVM-Ex1-law'
6    // Non-required parameters
7    location: 'westus2'
8  }
9}
Note

Always use the “Copy to clipboard” button in the top right corner of the code sample area, so that the line numbers are not included in the copied code.

You now have a fully functional Bicep template that will deploy a working Log Analytics workspace! If you would like to try it, run the following in your console:

Note

To keep the example below simple, we are using the traditional deployment commands, e.g., az deployment group create or New-AzResourceGroupDeployment. However, we encourage you to look into using Deployment Stacks instead, by replacing these commands with az stack group create or New-AzResourceGroupDeploymentStack, along with the other required input parameters as shown here and sketched after the commands below.

Deployment Stacks allow you to deploy a Bicep file as a stack: a collection of resources that are deployed together. This allows you to manage the lifecycle of the stack as a single unit, making it easier to deploy, update, and now even delete resources via Bicep. You can also implement RBAC Deny Assignments on your stack’s deployed resources to prevent changes or specific actions on the resources by all but an excluded list of users, groups, or other principals.

Deploy with Azure PowerShell:

# Log in to Azure
Connect-AzAccount

# Select your subscription
Set-AzContext -SubscriptionId '<subscriptionId>'

# Create a resource group
New-AzResourceGroup -Name 'avm-bicep-vmexample1' -Location '<location>'

# Invoke your deployment
New-AzResourceGroupDeployment -DeploymentName 'avm-bicep-vmexample1-deployment' -ResourceGroupName 'avm-bicep-vmexample1' -TemplateFile '/<path-to>/VirtualMachineAVM_Example1/main.bicep'

Deploy with Azure CLI:

# Log in to Azure
az login

# Select your subscription
az account set --subscription '<subscriptionId>'

# Create a resource group
az group create --name 'avm-bicep-vmexample1' --location '<location>'

# Invoke your deployment
az deployment group create --name 'avm-bicep-vmexample1-deployment' --resource-group 'avm-bicep-vmexample1' --template-file '/<path-to>/VirtualMachineAVM_Example1/main.bicep'

The above commands log you in to Azure, select a subscription to use, create a resource group, and then deploy the main.bicep template to that resource group.
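
If you would like to try the Deployment Stacks equivalent instead, the commands below are a sketch; the stack name and the unmanage/deny-settings values shown are examples, so pick the ones that fit your scenario:

Deploy with Azure PowerShell:

# Deploy the template as a deployment stack
New-AzResourceGroupDeploymentStack -Name 'avm-bicep-vmexample1-stack' -ResourceGroupName 'avm-bicep-vmexample1' -TemplateFile '/<path-to>/VirtualMachineAVM_Example1/main.bicep' -ActionOnUnmanage 'DeleteAll' -DenySettingsMode 'None'

Deploy with Azure CLI:

# Deploy the template as a deployment stack
az stack group create --name 'avm-bicep-vmexample1-stack' --resource-group 'avm-bicep-vmexample1' --template-file '/<path-to>/VirtualMachineAVM_Example1/main.bicep' --action-on-unmanage 'deleteAll' --deny-settings-mode 'none'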

AVM makes the deployment of Azure resources incredibly easy. Many of the parameters you would normally be required to define are taken care of by the AVM module itself. In fact, the location parameter is not even needed in your template: when omitted, all AVM modules deploy by default to the location in which your target Resource Group exists.

Now we have a Log Analytics workspace in our resource group which doesn’t do a whole lot of good on its own. Let’s take our template a step further by adding a Virtual Network that integrates with the Log Analytics workspace.

Virtual Network

We will now add a Virtual Network to our main.bicep file. This VNet will contain subnets and Network Security Groups (NSGs) for any of the resources we deploy that require IP addresses.

In your main.bicep file, add the following:

 1module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 2  name: 'logAnalyticsWorkspace'
 3  params: {
 4    // Required parameters
 5    name: 'VM-AVM-Ex1-law'
 6    // Non-required parameters
 7    location: 'westus2'
 8  }
 9}
10
11module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
12  name: 'virtualNetworkDeployment'
13  params: {
14    // Required parameters
15    addressPrefixes: [
16      '10.0.0.0/16'
17    ]
18    name: 'VM-AVM-Ex1-vnet'
19    // Non-required parameters
20    location: 'westus2'
21  }
22}

Again, the Virtual Network AVM module requires only two things: a name and an addressPrefixes parameter.

Configure Diagnostics Settings

There is an additional parameter available in most AVM modules named diagnosticSettings. This parameter allows you to configure your resource to send its logs to any suitable logging service. In our case, we are using a Log Analytics workspace.

Let’s update our main.bicep file to have our VNet send all of its logging data to our Log Analytics workspace:

 1module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 2  name: 'logAnalyticsWorkspace'
 3  params: {
 4    // Required parameters
 5    name: 'VM-AVM-Ex1-law'
 6    // Non-required parameters
 7    location: 'westus2'
 8  }
 9}
10
11module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
12  name: 'virtualNetworkDeployment'
13  params: {
14    // Required parameters
15    addressPrefixes: [
16      '10.0.0.0/16'
17    ]
18    name: 'VM-AVM-Ex1-vnet'
19    // Non-required parameters
20    location: 'westus2'
21    diagnosticSettings: [
22      {
23        name: 'vNetDiagnostics'
24        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
25      }
26    ]
27  }
28}

Notice how the diagnosticSettings parameter needs a workspaceResourceId? All you need to do is add a reference to the built-in resourceId output of the logAnalyticsWorkspace AVM module. That’s it! Our VNet has now integrated its logging with our Log Analytics workspace. All AVM modules come with a set of built-in outputs that can be easily referenced by other modules within your template.

Info

All AVM modules have built-in outputs which can be referenced using the <moduleName>.outputs.<outputName> syntax.

When using plain Bicep, many of these outputs require multiple lines of code or knowledge of the correct object ID references to get at the desired output. AVM modules do much of this heavy lifting for you by taking care of these complex tasks within the module itself, then exposing them to you through the module’s outputs. Find out more about Bicep Outputs.
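
You can also surface a module’s outputs as outputs of your own template, which makes them available to scripts or other deployments afterwards; a minimal sketch:

// Expose the Log Analytics workspace resource ID as an output of main.bicep
output logAnalyticsWorkspaceResourceId string = logAnalyticsWorkspace.outputs.resourceId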

Add a Subnet and NAT Gateway

We can’t use a Virtual Network without subnets, so let’s add a subnet next. According to our architecture, we will have three subnets: one for the Virtual Machine, one for the Bastion host, and one for Private Endpoints. We can start with the VM subnet for now. While we’re at it, let’s also add the NAT Gateway and the NAT Gateway’s Public IP, and attach the NAT Gateway to the VM subnet.

Add the following to your main.bicep:

 1module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 2  name: 'logAnalyticsWorkspace'
 3  params: {
 4    // Required parameters
 5    name: 'VM-AVM-Ex1-law'
 6    // Non-required parameters
 7    location: 'westus2'
 8  }
 9}
10
11module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
12  name: 'natGwPublicIpDeployment'
13  params: {
14    // Required parameters
15    name: 'VM-AVM-Ex1-natgwpip'
16    // Non-required parameters
17    location: 'westus2'
18    diagnosticSettings: [
19      {
20        name: 'natGwPublicIpDiagnostics'
21        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
22      }
23    ]
24  }
25}
26
27module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
28  name: 'natGatewayDeployment'
29  params: {
30    // Required parameters
31    name: 'VM-AVM-Ex1-natGw'
32    zone: 1
33    // Non-required parameters
34    publicIpResourceIds: [
35      natGwPublicIp.outputs.resourceId
36    ]
37  }
38}
39
40module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
41  name: 'virtualNetworkDeployment'
42  params: {
43    // Required parameters
44    addressPrefixes: [
45      '10.0.0.0/16'
46    ]
47    name: 'VM-AVM-Ex1-vnet'
48    // Non-required parameters
49    location: 'westus2'
50    diagnosticSettings: [
51      {
52        name: 'vNetDiagnostics'
53        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
54      }
55    ]
56    subnets: [
57      {
58        name: 'VMSubnet'
59        addressPrefix: cidrSubnet('10.0.0.0/16', 24, 0) // first subnet in address space
60        natGatewayResourceId: natGateway.outputs.resourceId
61      }
62    ]
63  }
64}

The modification adds a subnets property to our virtualNetwork module. The AVM network/virtual-network module supports the creation of subnets directly within the module itself. We can also link our NAT Gateway directly to the subnet within the same module.

A nice feature of Bicep is its set of built-in functions. Here we use the cidrSubnet() function to declare CIDR blocks without having to calculate them ourselves.
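
For example, with the /16 address space used in this template, cidrSubnet() carves out consecutive /24 ranges by index:

// cidrSubnet(addressSpace, newPrefixLength, index) returns the index-th
// subnet of the requested size within the given address space:
var vmSubnetPrefix     = cidrSubnet('10.0.0.0/16', 24, 0) // '10.0.0.0/24'
var secondSubnetPrefix = cidrSubnet('10.0.0.0/16', 24, 1) // '10.0.1.0/24'
var thirdSubnetPrefix  = cidrSubnet('10.0.0.0/16', 24, 2) // '10.0.2.0/24'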

Switch to Parameters and Variables

See how we are reusing the same CIDR block 10.0.0.0/16 in multiple locations? You may have noticed we are defining the same location in two different spots as well. We’re now at a point in the development where we should leverage one of our first recommended practices: using parameters and variables!

Tip

Use Bicep variables to define values that will be constant and reused within your template; use parameters anywhere you may need a modifiable value.

Let’s enhance the template by adding variables for the CIDR block and prefix, then use a location parameter with a default value. We’ll then reference those in the module:

 1param location string = 'westus2'
 2
 3var addressPrefix = '10.0.0.0/16'
 4var prefix = 'VM-AVM-Ex1'
 5
 6module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 7  name: 'logAnalyticsWorkspace'
 8  params: {
 9    // Required parameters
10    name: '${prefix}-law'
11    // Non-required parameters
12    location: location
13  }
14}
15
16module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
17  name: 'natGwPublicIpDeployment'
18  params: {
19    // Required parameters
20    name: '${prefix}-natgwpip'
21    // Non-required parameters
22    location: location
23    diagnosticSettings: [
24      {
25        name: 'natGwPublicIpDiagnostics'
26        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
27      }
28    ]
29  }
30}
31
32module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
33  name: 'natGatewayDeployment'
34  params: {
35    // Required parameters
36    name: '${prefix}-natgw'
37    zone: 1
38    // Non-required parameters
39    publicIpResourceIds: [
40      natGwPublicIp.outputs.resourceId
41    ]
42  }
43}
44
45module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
46  name: 'virtualNetworkDeployment'
47  params: {
48    // Required parameters
49    addressPrefixes: [
50      addressPrefix
51    ]
52    name: '${prefix}-vnet'
53    // Non-required parameters
54    location: location
55    diagnosticSettings: [
56      {
57        name: 'vNetDiagnostics'
58        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
59      }
60    ]
61    subnets: [
62      {
63        name: 'VMSubnet'
64        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
65        natGatewayResourceId: natGateway.outputs.resourceId
66      }
67    ]
68  }
69}

We now have a good basis for the infrastructure to be utilized by the rest of the resources in our Architecture. We will come back to our networking in a future step, once we are ready to create some Network Security Groups. For now, let’s move on to other modules.

Key Vault

Key Vaults are a core component of most Azure architectures, as they provide a place where you can save and reference secrets in a secure manner (“secrets” in the general sense, as opposed to the secret object type in Key Vaults). The Key Vault AVM module makes it very simple to store secrets generated in your template. In this tutorial, we will use one of the most secure methods of storing and retrieving secrets by leveraging this Key Vault in our Bicep template.

The first step is easy: add the Key Vault AVM module to our main.bicep file. In addition, let’s also ensure it’s hooked into our Log Analytics workspace (we will do this for every new module from here on out).

 1param location string = 'westus2'
 2
 3var addressPrefix = '10.0.0.0/16'
 4var prefix = 'VM-AVM-Ex1'
 5
 6module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 7  name: 'logAnalyticsWorkspace'
 8  params: {
 9    // Required parameters
10    name: '${prefix}-law'
11    // Non-required parameters
12    location: location
13  }
14}
15
16module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
17  name: 'natGwPublicIpDeployment'
18  params: {
19    // Required parameters
20    name: '${prefix}-natgwpip'
21    // Non-required parameters
22    location: location
23    diagnosticSettings: [
24      {
25        name: 'natGwPublicIpDiagnostics'
26        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
27      }
28    ]
29  }
30}
31
32module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
33  name: 'natGatewayDeployment'
34  params: {
35    // Required parameters
36    name: '${prefix}-natgw'
37    zone: 1
38    // Non-required parameters
39    publicIpResourceIds: [
40      natGwPublicIp.outputs.resourceId
41    ]
42  }
43}
44
45module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
46  name: 'virtualNetworkDeployment'
47  params: {
48    // Required parameters
49    addressPrefixes: [
50      addressPrefix
51    ]
52    name: '${prefix}-vnet'
53    // Non-required parameters
54    location: location
55    diagnosticSettings: [
56      {
57        name: 'vNetDiagnostics'
58        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
59      }
60    ]
61    subnets: [
62      {
63        name: 'VMSubnet'
64        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
65        natGatewayResourceId: natGateway.outputs.resourceId
66      }
67    ]
68  }
69}
70
71module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
72  name: 'keyVaultDeployment'
73  params: {
74    // Required parameters
75    name: '${uniqueString(resourceGroup().id)}-kv'
76    // Non-required parameters
77    location: location
78    diagnosticSettings: [
79      {
80        name: 'keyVaultDiagnostics'
81        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
82      }
83    ]
84  }
85}

The name of the Key Vault we will deploy uses the uniqueString() Bicep function, as Key Vault names must be globally unique. We will therefore deviate from our naming convention thus far and make an exception for the Key Vault. Note how we still add a suffix to the Key Vault name so that it remains recognizable; you can use a combination of unique strings, prefixes, and suffixes to follow your own naming standard preferences.

When we generate our unique string, we pass in resourceGroup().id as the seed for the uniqueString() function, so that every time you deploy this main.bicep to the same resource group, it will use the same generated name for your Key Vault (since resourceGroup().id will be the same).

Tip

Bicep has many built-in functions available. We used two here: uniqueString() and resourceGroup(). The resourceGroup(), subscription(), and deployment() functions are very useful when seeding uniqueString() or guid() functions. Just be cautious about name length limitations for each Azure service! Visit this page to learn more about Bicep functions.
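
For example, since Key Vault names are limited to 24 characters, you could combine uniqueString() with take() to guard against overly long names (a sketch; adapt it to your own naming standard):

// uniqueString() returns a deterministic 13-character hash of its seed,
// so redeployments to the same resource group produce the same name.
// take() caps the result at Key Vault's 24-character name limit.
var keyVaultName = take('${uniqueString(resourceGroup().id)}-kv', 24)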

We will use this Key Vault later on when we create a VM and need to store its password. Now that we have the Key Vault, a Virtual Network, a subnet, and Log Analytics prepared, we have everything we need to deploy a Virtual Machine!

Info

In the future, we will update this guide to show how to generate and store a certificate in the Key Vault, then use that certificate to authenticate into the Virtual Machine.

Virtual Machine

Warning

The AVM Virtual Machine module enables the EncryptionAtHost feature by default. You must enable this feature within your Azure subscription to successfully deploy this example code. To do so, run the following:

Deploy with Azure PowerShell:

# Wait a few minutes after running the command to allow it to propagate
Register-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"

Deploy with Azure CLI:

az feature register --namespace Microsoft.Compute --name EncryptionAtHost

# Propagate the change
az provider register --namespace Microsoft.Compute

For our Virtual Machine (VM) deployment, we need to add the following to our main.bicep file:

  1param location string = 'westus2'
  2
  3// START add-password-param
  4@description('Required. A password for the VM admin user.')
  5@secure()
  6param vmAdminPass string
  7// END add-password-param
  8
  9var addressPrefix = '10.0.0.0/16'
 10var prefix = 'VM-AVM-Ex1'
 11
 12module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 13  name: 'logAnalyticsWorkspace'
 14  params: {
 15    // Required parameters
 16    name: '${prefix}-law'
 17    // Non-required parameters
 18    location: location
 19  }
 20}
 21
 22module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 23  name: 'natGwPublicIpDeployment'
 24  params: {
 25    // Required parameters
 26    name: '${prefix}-natgwpip'
 27    // Non-required parameters
 28    location: location
 29    diagnosticSettings: [
 30      {
 31        name: 'natGwPublicIpDiagnostics'
 32        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 33      }
 34    ]
 35  }
 36}
 37
 38module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 39  name: 'natGatewayDeployment'
 40  params: {
 41    // Required parameters
 42    name: '${prefix}-natgw'
 43    zone: 1
 44    // Non-required parameters
 45    publicIpResourceIds: [
 46      natGwPublicIp.outputs.resourceId
 47    ]
 48  }
 49}
 50
 51module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 52  name: 'virtualNetworkDeployment'
 53  params: {
 54    // Required parameters
 55    addressPrefixes: [
 56      addressPrefix
 57    ]
 58    name: '${prefix}-vnet'
 59    // Non-required parameters
 60    location: location
 61    diagnosticSettings: [
 62      {
 63        name: 'vNetDiagnostics'
 64        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 65      }
 66    ]
 67    subnets: [
 68      {
 69        name: 'VMSubnet'
 70        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 71        natGatewayResourceId: natGateway.outputs.resourceId
 72      }
 73    ]
 74  }
 75}
 76
 77module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
 78  name: 'keyVaultDeployment'
 79  params: {
 80    // Required parameters
 81    name: '${uniqueString(resourceGroup().id)}-kv'
 82    // Non-required parameters
 83    location: location
 84    diagnosticSettings: [
 85      {
 86        name: 'keyVaultDiagnostics'
 87        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 88      }
 89    ]
 90    // START add-keyvault-secret
 91    secrets: [
 92      {
 93        name: 'vmAdminPassword'
 94        value: vmAdminPass
 95      }
 96    ]
 97    // END add-keyvault-secret
 98  }
 99}
100
101module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.13.1' = {
102  name: 'linuxVirtualMachineDeployment'
103  params: {
104    // Required parameters
105    adminUsername: 'localAdminUser'
106    adminPassword: vmAdminPass
107    imageReference: {
108      offer: '0001-com-ubuntu-server-jammy'
109      publisher: 'Canonical'
110      sku: '22_04-lts-gen2'
111      version: 'latest'
112    }
113    name: '${prefix}-vm1'
114    // START vm-subnet-reference
115    nicConfigurations: [
116      {
117        ipConfigurations: [
118          {
119            name: 'ipconfig01'
120            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
121          }
122        ]
123        nicSuffix: '-nic-01'
124      }
125    ]
126    // END vm-subnet-reference
127    osDisk: {
128      caching: 'ReadWrite'
129      diskSizeGB: 128
130      managedDisk: {
131        storageAccountType: 'Standard_LRS'
132      }
133    }
134    osType: 'Linux'
135    vmSize: 'Standard_B2s_v2'
136    zone: 0
137    // Non-required parameters
138    location: location
139  }
140}

The VM module is one of the more complex modules in AVM—behind the scenes, it takes care of a lot of heavy lifting that, without AVM, would require multiple Bicep resources to be deployed and referenced.

For example, look at the nicConfigurations parameter: normally, you would need to deploy a separate NIC resource, which itself also requires an IP resource, then attach them to each other, and finally, attach them all to your VM.

With the AVM VM module, the nicConfigurations parameter accepts an array of objects, allowing you to create any number of NICs to attach to your VM from within the VM resource deployment itself. It handles all the naming, creates the other necessary dependencies, and attaches them all together, so you don’t have to. The osDisk parameter is similar, though slightly less complex. There are many more parameters within the VM module that you can leverage if needed, all offering similar ease of use.

Since this is the real highlight of our main.bicep file, we need to take a closer look at some of the other changes that were made.

  • VM Admin Password Parameter

    1@description('Required. A password for the VM admin user.')
    2@secure()
    3param vmAdminPass string

    First, we added a new parameter. The value of this will be provided when the main.bicep template is deployed. We don’t want any passwords stored as text in code; for our purposes, the safest way to do this is to prompt the end user for the password at the time of deployment.

    Warning

    The supplied password must be between 6 and 72 characters long and must satisfy at least 3 of the following complexity requirements: contains an uppercase character; contains a lowercase character; contains a numeric digit; contains a special character. Control characters are not allowed. (For a way to enforce the length limits with Bicep decorators, see the sketch after this list.)

    Also note how we are using the @secure() decorator on the password parameter. This ensures the value of the password is never displayed in any of the deployment logs or in Azure. We have also added the @description() decorator and started the description with “Required.” It’s a good habit and recommended practice to document your parameters in Bicep. This ensures that VS Code’s built-in Bicep linter can provide end users with insightful information when deploying your Bicep templates.

    Info

    Always use the @secure() decorator when creating a parameter that will hold sensitive data!

  • Add the VM Admin Password to Key Vault

    1    secrets: [
    2      {
    3        name: 'vmAdminPassword'
    4        value: vmAdminPass
    5      }
    6    ]

    The next thing we have done is save the value of our vmAdminPass parameter to our Key Vault. We have done this by adding a secrets parameter to the Key Vault module. Adding secrets to Key Vaults is very simple when using the AVM module.

    By adding our password to the Key Vault, we ensure that the password is never lost and is stored securely. As long as a user has appropriate permissions on the vault, the password can be fetched easily.

  • Reference the VM Subnet

     1    nicConfigurations: [
     2      {
     3        ipConfigurations: [
     4          {
     5            name: 'ipconfig01'
     6            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
     7          }
     8        ]
     9        nicSuffix: '-nic-01'
    10      }
    11    ]

    Here, we reference another built-in output, this time from the AVM Virtual Network module. This example shows how to use an output that is part of an array. When the Virtual Network module creates subnets, it automatically creates a set of pre-defined outputs for them, one of which is an array that contains each subnet’s subnetResourceId. Our VM Subnet was the first one created, which is position [0] in the array.

    Other AVM modules may make use of arrays to store outputs. If you are unsure what type of outputs a module provides, you can always reference the Outputs section of each module’s README.md.
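
Building on the password rules called out in the warning above, you can also have Bicep enforce the length limits at deployment time with the @minLength() and @maxLength() decorators; a minimal sketch:

@description('Required. A password for the VM admin user.')
@secure()
@minLength(6)  // VM admin passwords must be 6-72 characters long
@maxLength(72)
param vmAdminPass string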

Storage Account

The last major component we need to add is a Storage Account. Because this Storage Account will be used as backend storage holding blobs for the hypothetical application that runs on our VM, we’ll also create a blob container within it using the same AVM Storage Account module.

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61
 62        name: 'vNetDiagnostics'
 63        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 64      }
 65    ]
 66    subnets: [
 67      {
 68        name: 'VMSubnet'
 69        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 70        natGatewayResourceId: natGateway.outputs.resourceId
 71      }
 72    ]
 73  }
 74}
 75
 76module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
 77  name: 'keyVaultDeployment'
 78  params: {
 79    // Required parameters
 80    name: '${uniqueString(resourceGroup().id)}-kv'
 81    // Non-required parameters
 82    location: location
 83    diagnosticSettings: [
 84      {
 85        name: 'keyVaultDiagnostics'
 86        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 87      }
 88    ]
 89    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
 90    secrets: [
 91      {
 92        name: 'vmAdminPassword'
 93        value: vmAdminPass
 94      }
 95    ]
 96  }
 97}
 98
 99module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.13.1' = {
100  name: 'linuxVirtualMachineDeployment'
101  params: {
102    // Required parameters
103    adminUsername: 'localAdminUser'
104    adminPassword: vmAdminPass
105    imageReference: {
106      offer: '0001-com-ubuntu-server-jammy'
107      publisher: 'Canonical'
108      sku: '22_04-lts-gen2'
109      version: 'latest'
110    }
111    name: '${prefix}-vm1'
112    nicConfigurations: [
113      {
114        ipConfigurations: [
115          {
116            name: 'ipconfig01'
117            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
118          }
119        ]
120        nicSuffix: '-nic-01'
121      }
122    ]
123    osDisk: {
124      caching: 'ReadWrite'
125      diskSizeGB: 128
126      managedDisk: {
127        storageAccountType: 'Standard_LRS'
128      }
129    }
130
131    osType: 'Linux'
132    vmSize: 'Standard_B2s_v2'
133    zone: 0
134    // Non-required parameters
135    location: location
136  }
137}
138
139module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
140  name: 'storageAccountDeployment'
141  params: {
142    // Required parameters
143    name: '${uniqueString(resourceGroup().id)}sa'
144    // Non-required parameters
145    location: location
146    skuName: 'Standard_LRS'
147    diagnosticSettings: [
148      {
149        name: 'storageAccountDiagnostics'
150        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
151      }
152    ]
153    blobServices: {
154      containers: [
155        {
156          name: 'vmstorage'
157          publicAccess: 'None'
158        }
159      ]
160    }
161  }
162}

We now have all the major components of our Architecture diagram built!

The last steps we need to take to meet our requirements are to ensure that our networking resources are secure and that we use least-privilege access by leveraging Role-Based Access Control (RBAC). Let’s get to it!
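
For the RBAC part, most AVM modules expose a standardized roleAssignments parameter. As an illustrative sketch (the principal ID is a placeholder), granting a principal data-plane access to the Storage Account could look like this:

module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
  name: 'storageAccountDeployment'
  params: {
    name: '${uniqueString(resourceGroup().id)}sa'
    roleAssignments: [
      {
        roleDefinitionIdOrName: 'Storage Blob Data Contributor'
        principalId: '<principalObjectId>' // placeholder, e.g. a managed identity's object ID
        principalType: 'ServicePrincipal'
      }
    ]
  }
}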

Network Security Groups

We’ll add a Network Security Group (NSG) to our VM subnet. This will act as a layer 3 and layer 4 firewall for networked resources. This implementation includes an appropriate inbound rule to allow SSH traffic from the Bastion host:

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61        name: 'vNetDiagnostics'
 62        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 63      }
 64    ]
 65    subnets: [
 66      {
 67        name: 'VMSubnet'
 68        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 69        natGatewayResourceId: natGateway.outputs.resourceId
 70        networkSecurityGroupResourceId: nsgVM.outputs.resourceId
 71      }
 72    ]
 73  }
 74}
 75
 76module nsgVM 'br/public:avm/res/network/network-security-group:0.5.1' = {
 77  name: 'nsgVmDeployment'
 78  params: {
 79    name: '${prefix}-NSG-VM'
 80    location: location
 81    securityRules: [
 82      {
 83        name: 'AllowBastionSSH'
 84        properties: {
 85          access: 'Allow'
 86          direction: 'Inbound'
 87          priority: 100
 88          protocol: 'Tcp'
 89          sourceAddressPrefix: 'VirtualNetwork'
 90          sourcePortRange: '*'
 91          destinationAddressPrefix: '*'
 92          destinationPortRange: '22'
 93        }
 94      }
 95    ]
 96  }
 97}
 98
 99module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
100  name: 'keyVaultDeployment'
101  params: {
102    // Required parameters
103    name: '${uniqueString(resourceGroup().id)}-kv'
104    // Non-required parameters
105    location: location
106    diagnosticSettings: [
107      {
108        name: 'keyVaultDiagnostics'
109        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
110      }
111    ]
112    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
113    secrets: [
114      {
115        name: 'vmAdminPassword'
116        value: vmAdminPass
117      }
118    ]
119  }
120}
121
122module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.13.1' = {
123  name: 'linuxVirtualMachineDeployment'
124  params: {
125    // Required parameters
126    adminUsername: 'localAdminUser'
127    adminPassword: vmAdminPass
128    imageReference: {
129      offer: '0001-com-ubuntu-server-jammy'
130      publisher: 'Canonical'
131      sku: '22_04-lts-gen2'
132      version: 'latest'
133    }
134    name: '${prefix}-vm1'
135    nicConfigurations: [
136      {
137        ipConfigurations: [
138          {
139            name: 'ipconfig01'
140            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
141          }
142        ]
143        nicSuffix: '-nic-01'
144      }
145    ]
146    osDisk: {
147      caching: 'ReadWrite'
148      diskSizeGB: 128
149      managedDisk: {
150        storageAccountType: 'Standard_LRS'
151      }
152    }
153
154    osType: 'Linux'
155    vmSize: 'Standard_B2s_v2'
156    zone: 0
157    // Non-required parameters
158    location: location
159  }
160}
161
162module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
163  name: 'storageAccountDeployment'
164  params: {
165    // Required parameters
166    name: '${uniqueString(resourceGroup().id)}sa'
167    // Non-required parameters
168    location: location
169    skuName: 'Standard_LRS'
170    diagnosticSettings: [
171      {
172        name: 'storageAccountDiagnostics'
173        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
174      }
175    ]
176    blobServices: {
177      containers: [
178        {
179          name: 'vmstorage'
180          publicAccess: 'None'
181        }
182      ]
183    }
184  }
185}

Disable Public Access to Storage Account

Since the Storage Account serves as a backend resource exclusively for the Virtual Machine, it will be secured as much as possible. This involves adding a Private Endpoint and disabling public internet access. AVM makes creation and assignment of Private Endpoints to resources incredibly easy. Take a look:

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61        name: 'vNetDiagnostics'
 62        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 63      }
 64    ]
 65    subnets: [
 66      {
 67        name: 'VMSubnet'
 68        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 69        natGatewayResourceId: natGateway.outputs.resourceId
 70        networkSecurityGroupResourceId: nsgVM.outputs.resourceId
 71      }
 72      {
 73        name: 'PrivateEndpointSubnet'
 74        addressPrefix: cidrSubnet(addressPrefix, 24, 1) // second subnet in address space
 75      }
 76    ]
 77  }
 78}
 79
 80module nsgVM 'br/public:avm/res/network/network-security-group:0.5.1' = {
 81  name: 'nsgVmDeployment'
 82  params: {
 83    name: '${prefix}-NSG-VM'
 84    location: location
 85    securityRules: [
 86      {
 87        name: 'AllowBastionSSH'
 88        properties: {
 89          access: 'Allow'
 90          direction: 'Inbound'
 91          priority: 100
 92          protocol: 'Tcp'
 93          sourceAddressPrefix: 'VirtualNetwork'
 94          sourcePortRange: '*'
 95          destinationAddressPrefix: '*'
 96          destinationPortRange: '22'
 97        }
 98      }
 99    ]
100  }
101}
102
103module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
104  name: 'keyVaultDeployment'
105  params: {
106    // Required parameters
107    name: '${uniqueString(resourceGroup().id)}-kv'
108    // Non-required parameters
109    location: location
110    diagnosticSettings: [
111      {
112        name: 'keyVaultDiagnostics'
113        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
114      }
115    ]
116    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
117    secrets: [
118      {
119        name: 'vmAdminPassword'
120        value: vmAdminPass
121      }
122    ]
123  }
124}
125
126module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.14.0' = {
127  name: 'linuxVirtualMachineDeployment'
128  params: {
129    // Required parameters
130    adminUsername: 'localAdminUser'
131    adminPassword: vmAdminPass
132    imageReference: {
133      offer: '0001-com-ubuntu-server-jammy'
134      publisher: 'Canonical'
135      sku: '22_04-lts-gen2'
136      version: 'latest'
137    }
138    name: '${prefix}-vm1'
139    nicConfigurations: [
140      {
141        ipConfigurations: [
142          {
143            name: 'ipconfig01'
144            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
145          }
146        ]
147        nicSuffix: '-nic-01'
148      }
149    ]
150    osDisk: {
151      caching: 'ReadWrite'
152      diskSizeGB: 128
153      managedDisk: {
154        storageAccountType: 'Standard_LRS'
155      }
156    }
157    osType: 'Linux'
158    vmSize: 'Standard_B2s_v2'
159    zone: 0
160    // Non-required parameters
161    location: location
162  }
163}
164
165module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
166  name: 'storageAccountDeployment'
167  params: {
168    // Required parameters
169    name: '${uniqueString(resourceGroup().id)}sa'
170    // Non-required parameters
171    location: location
172    skuName: 'Standard_LRS'
173    diagnosticSettings: [
174      {
175        name: 'storageAccountDiagnostics'
176        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
177      }
178    ]
179    publicNetworkAccess: 'Disabled'
180    allowBlobPublicAccess: false
181    blobServices: {
182      containers: [
183        {
184          name: 'vmstorage'
185          publicAccess: 'None'
186        }
187      ]
188    }
189    privateEndpoints: [
190      {
191        service: 'Blob'
192        subnetResourceId: virtualNetwork.outputs.subnetResourceIds[1] // Private Endpoint Subnet
193        privateDnsZoneGroup: {
194          privateDnsZoneGroupConfigs: [
195            {
196              privateDnsZoneResourceId: privateDnsBlob.outputs.resourceId
197            }
198          ]
199        }
200      }
201    ]
202  }
203}
204
205module privateDnsBlob 'br/public:avm/res/network/private-dns-zone:0.7.1' = {
206  name: '${prefix}-privatedns-blob'
207  params: {
208    name: 'privatelink.blob.${environment().suffixes.storage}'
209    location: 'global'
210    virtualNetworkLinks: [
211      {
212        name: '${virtualNetwork.outputs.name}-vnetlink'
213        virtualNetworkResourceId: virtualNetwork.outputs.resourceId
214      }
215    ]
216  }
217}

This implementation adds a dedicated subnet for Private Endpoints following the recommended practice of isolating Private Endpoints in their own subnet.

The addition of just a few lines of code in the privateEndpoints parameter handles the complex tasks of creating the Private Endpoint, associating it with the VNet, and attaching it to the resource. AVM drastically simplifies the creation of Private Endpoints for just about every Azure Resource that supports them.

The implementation also disables all public network connectivity to the Storage Account, ensuring it only accepts traffic via the Private Endpoint.

Finally, a Private DNS zone is added and linked to the VNet, enabling the VM to resolve the Private IP address associated with the Storage Account.

Bastion

To securely access the Virtual Machine without exposing its SSH port to the public internet, we’ll create an Azure Bastion host. The Bastion host requires a subnet with the exact name AzureBastionSubnet, which cannot contain anything other than Bastion hosts.

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61        name: 'vNetDiagnostics'
 62        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 63      }
 64    ]
 65    subnets: [
 66      {
 67        name: 'VMSubnet'
 68        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 69        natGatewayResourceId: natGateway.outputs.resourceId
 70        networkSecurityGroupResourceId: nsgVM.outputs.resourceId
 71      }
 72      {
 73        name: 'PrivateEndpointSubnet'
 74        addressPrefix: cidrSubnet(addressPrefix, 24, 1) // second subnet in address space
 75      }
 76      {
 77        name: 'AzureBastionSubnet' // Azure Bastion Host requires this subnet to be named exactly "AzureBastionSubnet"
 78        addressPrefix: cidrSubnet(addressPrefix, 24, 2) // third subnet in address space
 79      }
 80    ]
 81  }
 82}
 83
 84module nsgVM 'br/public:avm/res/network/network-security-group:0.5.1' = {
 85  name: 'nsgVmDeployment'
 86  params: {
 87    name: '${prefix}-NSG-VM'
 88    location: location
 89    securityRules: [
 90      {
 91        name: 'AllowBastionSSH'
 92        properties: {
 93          access: 'Allow'
 94          direction: 'Inbound'
 95          priority: 100
 96          protocol: 'Tcp'
 97          sourceAddressPrefix: 'VirtualNetwork'
 98          sourcePortRange: '*'
 99          destinationAddressPrefix: '*'
100          destinationPortRange: '22'
101        }
102      }
103    ]
104  }
105}
106
107module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
108  name: 'keyVaultDeployment'
109  params: {
110    // Required parameters
111    name: '${uniqueString(resourceGroup().id)}-kv'
112    // Non-required parameters
113    location: location
114    diagnosticSettings: [
115      {
116        name: 'keyVaultDiagnostics'
117        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
118      }
119    ]
120    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
121    secrets: [
122      {
123        name: 'vmAdminPassword'
124        value: vmAdminPass
125      }
126    ]
127  }
128}
129
130module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.14.0' = {
131  name: 'linuxVirtualMachineDeployment'
132  params: {
133    // Required parameters
134    adminUsername: 'localAdminUser'
135    adminPassword: vmAdminPass
136    imageReference: {
137      offer: '0001-com-ubuntu-server-jammy'
138      publisher: 'Canonical'
139      sku: '22_04-lts-gen2'
140      version: 'latest'
141    }
142    name: '${prefix}-vm1'
143    nicConfigurations: [
144      {
145        ipConfigurations: [
146          {
147            name: 'ipconfig01'
148            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
149          }
150        ]
151        nicSuffix: '-nic-01'
152      }
153    ]
154    osDisk: {
155      caching: 'ReadWrite'
156      diskSizeGB: 128
157      managedDisk: {
158        storageAccountType: 'Standard_LRS'
159      }
160    }
161    osType: 'Linux'
162    vmSize: 'Standard_B2s_v2'
163    zone: 0
164    // Non-required parameters
165    location: location
166  }
167}
168
169module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
170  name: 'storageAccountDeployment'
171  params: {
172    // Required parameters
173    name: '${uniqueString(resourceGroup().id)}sa'
174    // Non-required parameters
175    location: location
176    skuName: 'Standard_LRS'
177    diagnosticSettings: [
178      {
179        name: 'storageAccountDiagnostics'
180        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
181      }
182    ]
183    publicNetworkAccess: 'Disabled'
184    allowBlobPublicAccess: false
185    blobServices: {
186      containers: [
187        {
188          name: 'vmstorage'
189          publicAccess: 'None'
190        }
191      ]
192    }
193    privateEndpoints: [
194      {
195        service: 'Blob'
196        subnetResourceId: virtualNetwork.outputs.subnetResourceIds[1] // Private Endpoint Subnet
197        privateDnsZoneGroup: {
198          privateDnsZoneGroupConfigs: [
199            {
200              privateDnsZoneResourceId: privateDnsBlob.outputs.resourceId
201            }
202          ]
203        }
204      }
205    ]
206  }
207}
208
209module privateDnsBlob 'br/public:avm/res/network/private-dns-zone:0.7.1' = {
210  name: '${prefix}-privatedns-blob'
211  params: {
212    name: 'privatelink.blob.${environment().suffixes.storage}'
213    location: 'global'
214    virtualNetworkLinks: [
215      {
216        name: '${virtualNetwork.outputs.name}-vnetlink'
217        virtualNetworkResourceId: virtualNetwork.outputs.resourceId
218      }
219    ]
220  }
221}
222
223// Note: Deploying a Bastion Host will automatically create a Public IP and use the subnet named "AzureBastionSubnet"
224// within our VNet. This subnet is required and must be named exactly "AzureBastionSubnet" for the Bastion Host to work.
225module bastion 'br/public:avm/res/network/bastion-host:0.6.1' = {
226  name: 'bastionDeployment'
227  params: {
228    name: '${prefix}-bastion'
229    virtualNetworkResourceId: virtualNetwork.outputs.resourceId
230    skuName: 'Basic'
231    location: location
232    diagnosticSettings: [
233      {
234        name: 'bastionDiagnostics'
235        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
236      }
237    ]
238  }
239}

This simple addition of the bastion-host AVM module completes the secure access component of our architecture. You can now access the Virtual Machine by way of the Bastion Host in the Azure Portal.

Role-Based Access Control

To complete our solution, we have one final task: to apply Role-Based Access Control (RBAC) restrictions on our services, namely the Key Vault and Storage Account. The goal is to explicitly allow only the Virtual Machine to have Create, Read, Update, or Delete (CRUD) permissions on these two services.

This is accomplished by enabling a System-assigned Managed Identity on the Virtual Machine, then granting the VM’s Managed Identity appropriate permissions on the Storage Account and Key Vault:

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61        name: 'vNetDiagnostics'
 62        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 63      }
 64    ]
 65    subnets: [
 66      {
 67        name: 'VMSubnet'
 68        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 69        natGatewayResourceId: natGateway.outputs.resourceId
 70        networkSecurityGroupResourceId: nsgVM.outputs.resourceId
 71      }
 72      {
 73        name: 'PrivateEndpointSubnet'
 74        addressPrefix: cidrSubnet(addressPrefix, 24, 1) // second subnet in address space
 75      }
 76      {
 77        name: 'AzureBastionSubnet' // Azure Bastion Host requires this subnet to be named exactly "AzureBastionSubnet"
 78        addressPrefix: cidrSubnet(addressPrefix, 24, 2) // third subnet in address space
 79      }
 80    ]
 81  }
 82}
 83
 84module nsgVM 'br/public:avm/res/network/network-security-group:0.5.1' = {
 85  name: 'nsgVmDeployment'
 86  params: {
 87    name: '${prefix}-NSG-VM'
 88    location: location
 89    securityRules: [
 90      {
 91        name: 'AllowBastionSSH'
 92        properties: {
 93          access: 'Allow'
 94          direction: 'Inbound'
 95          priority: 100
 96          protocol: 'Tcp'
 97          sourceAddressPrefix: 'VirtualNetwork'
 98          sourcePortRange: '*'
 99          destinationAddressPrefix: '*'
100          destinationPortRange: '22'
101        }
102      }
103    ]
104  }
105}
106
107module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
108  name: 'keyVaultDeployment'
109  params: {
110    // Required parameters
111    name: '${uniqueString(resourceGroup().id)}-kv'
112    // Non-required parameters
113    location: location
114    diagnosticSettings: [
115      {
116        name: 'keyVaultDiagnostics'
117        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
118      }
119    ]
120    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
121    secrets: [
122      {
123        name: 'vmAdminPassword'
124        value: vmAdminPass
125      }
126    ]
127    roleAssignments: [
128      {
129        principalId: virtualMachine.outputs.systemAssignedMIPrincipalId
130        principalType: 'ServicePrincipal'
131        roleDefinitionIdOrName: 'Key Vault Secrets User' // Allows read access to secrets
132      }
133    ]
134  }
135}
136
137module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.14.0' = {
138  name: 'linuxVirtualMachineDeployment'
139  params: {
140    // Required parameters
141    adminUsername: 'localAdminUser'
142    adminPassword: vmAdminPass
143    imageReference: {
144      offer: '0001-com-ubuntu-server-jammy'
145      publisher: 'Canonical'
146      sku: '22_04-lts-gen2'
147      version: 'latest'
148    }
149    name: '${prefix}-vm1'
150    nicConfigurations: [
151      {
152        ipConfigurations: [
153          {
154            name: 'ipconfig01'
155            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
156          }
157        ]
158        nicSuffix: '-nic-01'
159      }
160    ]
161    osDisk: {
162      caching: 'ReadWrite'
163      diskSizeGB: 128
164      managedDisk: {
165        storageAccountType: 'Standard_LRS'
166      }
167    }
168    osType: 'Linux'
169    vmSize: 'Standard_B2s_v2'
170    zone: 0
171    // Non-required parameters
172    location: location
173    managedIdentities: {
174      systemAssigned: true
175    }
176  }
177}
178
179module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
180  name: 'storageAccountDeployment'
181  params: {
182    // Required parameters
183    name: '${uniqueString(resourceGroup().id)}sa'
184    // Non-required parameters
185    location: location
186    skuName: 'Standard_LRS'
187    diagnosticSettings: [
188      {
189        name: 'storageAccountDiagnostics'
190        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
191      }
192    ]
193    publicNetworkAccess: 'Disabled'
194    allowBlobPublicAccess: false
195    blobServices: {
196      containers: [
197        {
198          name: 'vmstorage'
199          publicAccess: 'None'
200        }
201      ]
202      roleAssignments: [
203        {
204          principalId: virtualMachine.outputs.systemAssignedMIPrincipalId
205          principalType: 'ServicePrincipal'
206          roleDefinitionIdOrName: 'Storage Blob Data Contributor' // Allows read/write/delete on blob containers
207        }
208      ]
209    }
210    privateEndpoints: [
211      {
212        service: 'Blob'
213        subnetResourceId: virtualNetwork.outputs.subnetResourceIds[1] // Private Endpoint Subnet
214        privateDnsZoneGroup: {
215          privateDnsZoneGroupConfigs: [
216            {
217              privateDnsZoneResourceId: privateDnsBlob.outputs.resourceId
218            }
219          ]
220        }
221      }
222    ]
223  }
224}
225
226module privateDnsBlob 'br/public:avm/res/network/private-dns-zone:0.7.1' = {
227  name: '${prefix}-privatedns-blob'
228  params: {
229    name: 'privatelink.blob.${environment().suffixes.storage}'
230    location: 'global'
231    virtualNetworkLinks: [
232      {
233        name: '${virtualNetwork.outputs.name}-vnetlink'
234        virtualNetworkResourceId: virtualNetwork.outputs.resourceId
235      }
236    ]
237  }
238}
239
240// Note: Deploying a Bastion Host will automatically create a Public IP and use the subnet named "AzureBastionSubnet"
241// within our VNet. This subnet is required and must be named exactly "AzureBastionSubnet" for the Bastion Host to work.
242module bastion 'br/public:avm/res/network/bastion-host:0.6.1' = {
243  name: 'bastionDeployment'
244  params: {
245    name: '${prefix}-bastion'
246    virtualNetworkResourceId: virtualNetwork.outputs.resourceId
247    skuName: 'Basic'
248    location: location
249    diagnosticSettings: [
250      {
251        name: 'bastionDiagnostics'
252        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
253      }
254    ]
255  }
256}
Info

The Azure Subscription owner will have CRUD permissions for the Storage Account but not for the Key Vault. The Key Vault requires explicit RBAC permissions assigned to a user to grant them access: Provide access to Key Vaults using RBAC. Important: at this point, the Storage Account can only be reached from inside the virtual network (e.g., from the VM via the Bastion Host). Remember, public internet access has been disabled!

The RBAC policies have been successfully applied using a System-assigned Managed Identity on the Virtual Machine. This identity has been granted permissions on both the Key Vault and the Storage Account. The VM can now read secrets from the Key Vault and read, create, or delete blobs in the Storage Account.

In a real production environment, the principle of least privilege should be applied, providing only the exact permissions each service needs to carry out its functions. Learn more about Microsoft’s recommendations for identity and access management.

Conclusion

In this tutorial, we’ve explored how to leverage Azure Verified Modules (AVM) to build a secure, well-architected solution in Azure. AVM modules significantly simplify the deployment of Azure resources by abstracting away much of the complexity involved in configuring individual resources.

Your final, deployable Bicep template file should now look like this:

  1param location string = 'westus2'
  2
  3@description('Required. A password for the VM admin user.')
  4@secure()
  5param vmAdminPass string
  6
  7var addressPrefix = '10.0.0.0/16'
  8var prefix = 'VM-AVM-Ex1'
  9
 10module logAnalyticsWorkspace 'br/public:avm/res/operational-insights/workspace:0.11.1' = {
 11  name: 'logAnalyticsWorkspace'
 12  params: {
 13    // Required parameters
 14    name: '${prefix}-law'
 15    // Non-required parameters
 16    location: location
 17  }
 18}
 19
 20module natGwPublicIp 'br/public:avm/res/network/public-ip-address:0.8.0' = {
 21  name: 'natGwPublicIpDeployment'
 22  params: {
 23    // Required parameters
 24    name: '${prefix}-natgwpip'
 25    // Non-required parameters
 26    location: location
 27    diagnosticSettings: [
 28      {
 29        name: 'natGwPublicIpDiagnostics'
 30        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 31      }
 32    ]
 33  }
 34}
 35
 36module natGateway 'br/public:avm/res/network/nat-gateway:1.2.2' = {
 37  name: 'natGatewayDeployment'
 38  params: {
 39    // Required parameters
 40    name: '${prefix}-natgw'
 41    zone: 1
 42    // Non-required parameters
 43    publicIpResourceIds: [
 44      natGwPublicIp.outputs.resourceId
 45    ]
 46  }
 47}
 48
 49module virtualNetwork 'br/public:avm/res/network/virtual-network:0.6.1' = {
 50  name: 'virtualNetworkDeployment'
 51  params: {
 52    // Required parameters
 53    addressPrefixes: [
 54      addressPrefix
 55    ]
 56    name: '${prefix}-vnet'
 57    // Non-required parameters
 58    location: location
 59    diagnosticSettings: [
 60      {
 61        name: 'vNetDiagnostics'
 62        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
 63      }
 64    ]
 65    subnets: [
 66      {
 67        name: 'VMSubnet'
 68        addressPrefix: cidrSubnet(addressPrefix, 24, 0) // first subnet in address space
 69        natGatewayResourceId: natGateway.outputs.resourceId
 70        networkSecurityGroupResourceId: nsgVM.outputs.resourceId
 71      }
 72      {
 73        name: 'PrivateEndpointSubnet'
 74        addressPrefix: cidrSubnet(addressPrefix, 24, 1) // second subnet in address space
 75      }
 76      {
 77        name: 'AzureBastionSubnet' // Azure Bastion Host requires this subnet to be named exactly "AzureBastionSubnet"
 78        addressPrefix: cidrSubnet(addressPrefix, 24, 2) // third subnet in address space
 79      }
 80    ]
 81  }
 82}
 83
 84module nsgVM 'br/public:avm/res/network/network-security-group:0.5.1' = {
 85  name: 'nsgVmDeployment'
 86  params: {
 87    name: '${prefix}-NSG-VM'
 88    location: location
 89    securityRules: [
 90      {
 91        name: 'AllowBastionSSH'
 92        properties: {
 93          access: 'Allow'
 94          direction: 'Inbound'
 95          priority: 100
 96          protocol: 'Tcp'
 97          sourceAddressPrefix: 'VirtualNetwork'
 98          sourcePortRange: '*'
 99          destinationAddressPrefix: '*'
100          destinationPortRange: '22'
101        }
102      }
103    ]
104  }
105}
106
107module keyVault 'br/public:avm/res/key-vault/vault:0.12.1' = {
108  name: 'keyVaultDeployment'
109  params: {
110    // Required parameters
111    name: '${uniqueString(resourceGroup().id)}-kv'
112    // Non-required parameters
113    location: location
114    diagnosticSettings: [
115      {
116        name: 'keyVaultDiagnostics'
117        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
118      }
119    ]
120    enablePurgeProtection: false // disable purge protection for this example so we can more easily delete it
121    secrets: [
122      {
123        name: 'vmAdminPassword'
124        value: vmAdminPass
125      }
126    ]
127    roleAssignments: [
128      {
129        principalId: virtualMachine.outputs.systemAssignedMIPrincipalId
130        principalType: 'ServicePrincipal'
131        roleDefinitionIdOrName: 'Key Vault Secrets User' // Allows read access to secrets
132      }
133    ]
134  }
135}
136
137module virtualMachine 'br/public:avm/res/compute/virtual-machine:0.14.0' = {
138  name: 'linuxVirtualMachineDeployment'
139  params: {
140    // Required parameters
141    adminUsername: 'localAdminUser'
142    adminPassword: vmAdminPass
143    imageReference: {
144      offer: '0001-com-ubuntu-server-jammy'
145      publisher: 'Canonical'
146      sku: '22_04-lts-gen2'
147      version: 'latest'
148    }
149    name: '${prefix}-vm1'
150    nicConfigurations: [
151      {
152        ipConfigurations: [
153          {
154            name: 'ipconfig01'
155            subnetResourceId: virtualNetwork.outputs.subnetResourceIds[0] // VMSubnet
156          }
157        ]
158        nicSuffix: '-nic-01'
159      }
160    ]
161    osDisk: {
162      caching: 'ReadWrite'
163      diskSizeGB: 128
164      managedDisk: {
165        storageAccountType: 'Standard_LRS'
166      }
167    }
168    osType: 'Linux'
169    vmSize: 'Standard_B2s_v2'
170    zone: 0
171    // Non-required parameters
172    location: location
173    managedIdentities: {
174      systemAssigned: true
175    }
176  }
177}
178
179module storageAccount 'br/public:avm/res/storage/storage-account:0.19.0' = {
180  name: 'storageAccountDeployment'
181  params: {
182    // Required parameters
183    name: '${uniqueString(resourceGroup().id)}sa'
184    // Non-required parameters
185    location: location
186    skuName: 'Standard_LRS'
187    diagnosticSettings: [
188      {
189        name: 'storageAccountDiagnostics'
190        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
191      }
192    ]
193    publicNetworkAccess: 'Disabled'
194    allowBlobPublicAccess: false
195    blobServices: {
196      containers: [
197        {
198          name: 'vmstorage'
199          publicAccess: 'None'
200        }
201      ]
202      roleAssignments: [
203        {
204          principalId: virtualMachine.outputs.systemAssignedMIPrincipalId
205          principalType: 'ServicePrincipal'
206          roleDefinitionIdOrName: 'Storage Blob Data Contributor' // Allows read/write/delete on blob containers
207        }
208      ]
209    }
210    privateEndpoints: [
211      {
212        service: 'Blob'
213        subnetResourceId: virtualNetwork.outputs.subnetResourceIds[1] // Private Endpoint Subnet
214        privateDnsZoneGroup: {
215          privateDnsZoneGroupConfigs: [
216            {
217              privateDnsZoneResourceId: privateDnsBlob.outputs.resourceId
218            }
219          ]
220        }
221      }
222    ]
223  }
224}
225
226module privateDnsBlob 'br/public:avm/res/network/private-dns-zone:0.7.1' = {
227  name: '${prefix}-privatedns-blob'
228  params: {
229    name: 'privatelink.blob.${environment().suffixes.storage}'
230    location: 'global'
231    virtualNetworkLinks: [
232      {
233        name: '${virtualNetwork.outputs.name}-vnetlink'
234        virtualNetworkResourceId: virtualNetwork.outputs.resourceId
235      }
236    ]
237  }
238}
239
240// Note: Deploying a Bastion Host will automatically create a Public IP and use the subnet named "AzureBastionSubnet"
241// within our VNet. This subnet is required and must be named exactly "AzureBastionSubnet" for the Bastion Host to work.
242module bastion 'br/public:avm/res/network/bastion-host:0.6.1' = {
243  name: 'bastionDeployment'
244  params: {
245    name: '${prefix}-bastion'
246    virtualNetworkResourceId: virtualNetwork.outputs.resourceId
247    skuName: 'Basic'
248    location: location
249    diagnosticSettings: [
250      {
251        name: 'bastionDiagnostics'
252        workspaceResourceId: logAnalyticsWorkspace.outputs.resourceId
253      }
254    ]
255  }
256}

AVM modules provide several key advantages over writing raw Bicep templates:

  1. Simplified Resource Configuration: AVM modules handle much of the complex configuration work behind the scenes
  2. Built-in Recommended Practices: The modules implement many of Microsoft’s recommended practices by default
  3. Consistent Outputs: Each module exposes a consistent set of outputs that can be easily referenced
  4. Reduced Boilerplate Code: What would normally require hundreds of lines of Bicep code can be accomplished in a fraction of the space

As you continue your journey with Azure and AVM, remember that this approach can be applied to more complex architectures as well. The modular nature of AVM allows you to mix and match components to build solutions that meet your specific needs while adhering to Microsoft’s Well-Architected Framework.

By using AVM modules as building blocks, you can focus more on your solution architecture and less on the intricacies of individual resource configurations, ultimately leading to faster development cycles and more reliable deployments.

Clean up your environment

When you are ready, you can remove the infrastructure deployed in this example. Deleted Key Vaults are retained in a soft-delete state, so you will also need to purge the one we created in order to fully delete it. The following commands will remove all resources created by your deployment:

Clean up with Azure PowerShell:

# Delete the resource group
Remove-AzResourceGroup -Name "avm-bicep-vmexample1" -Force

# Purge the Key Vault
Remove-AzKeyVault -VaultName "<keyVaultName>" -Location "<location>" -InRemovedState -Force

Clean up with Azure CLI:

# Delete the resource group
az group delete --name 'avm-bicep-vmexample1' --yes --no-wait

# Purge the Key Vault
az keyvault purge --name '<keyVaultName>' --no-wait

Congratulations, you have successfully leveraged AVM Bicep modules to deploy resources in Azure!

Tip

We welcome your contributions and feedback to help us improve the AVM modules and the overall experience for the community!

Terraform - Solution Development

Introduction

Azure Verified Modules (AVM) for Terraform combine the Terraform domain-specific language (DSL), industry knowledge, and an open-source community to enable developers to quickly deploy Azure resources that follow Microsoft’s recommended practices for Azure.
In this article, we will walk through the Terraform-specific considerations and recommended practices for developing your solution with Azure Verified Modules. We’ll review some of the design features and trade-offs and include sample code to illustrate each discussion point.

Prerequisites

You will need several tools and components to complete this guide. Before you begin, ensure they are installed in your development environment.

Planning

Good module development should start with a good plan. Let’s first review the architecture and module design prior to developing our solution.

Solution Architecture

Before we begin coding, it is important to have details about what the infrastructure architecture will include. For our example, we will be building a solution that will host a simple application on a Linux virtual machine (VM).

In our design, the resource group for our solution will require appropriate tagging to comply with our corporate standards. Resources that support Diagnostic Settings must also send metric data to a Log Analytics workspace, so that the infrastructure support teams can get metric telemetry. The virtual machine will require outbound internet access to allow the application to properly function. A Key Vault will be included to store any secrets and key artifacts, and we will include a Bastion instance to allow support personnel to access the virtual machine if needed. Finally, the VM is intended to run without interaction, so we will auto-generate an SSH private key and store it in the Key Vault for the rare event of someone needing to log into the VM.

Based on this narrative, we will create the following resources:

  • A resource group to contain all the resources with tagging
  • A random string resource for use in resources with global naming (Key Vault)
  • A Log Analytics workspace for diagnostic data
  • A Key Vault with:
    • Role-Based Access Control (RBAC) to allow data access
    • Logging to the Log Analytics workspace
  • A virtual network with:
    • A virtual machine subnet
    • A Bastion subnet
    • Network Security Group on the VM subnet allowing SSH traffic
    • Logging to the Log Analytics workspace
  • A NAT Gateway for enabling outbound internet access
    • Associated with the virtual machine subnet
  • A Bastion service for secure remote access to the Virtual Machine
    • Logging to the Log Analytics workspace
  • A virtual machine resource with:
    • A single private IPv4 interface attached to the VM subnet
    • A randomly generated admin account private key stored in the Key Vault
    • Metrics sent to the Log Analytics workspace
(Figure: Azure VM Solution Architecture)

Solution template (root module) design

Since our solution template (root module) is intended to be deployed multiple times, we want to develop it in a way that provides flexibility while minimizing the input needed to deploy the solution. For these reasons, we will create our module with a small set of variables that allow each deployment to be differentiated, while solution-specific defaults keep the required input small. We will also separate our content into variables.tf, outputs.tf, terraform.tf, and main.tf files to simplify future maintenance.

Based on this, our file system will take the following structure:

  • Module Directory
    • terraform.tf - This file holds the provider definitions and versions.
    • variables.tf - This file contains the input variable definitions and defaults.
    • outputs.tf - This file contains the outputs and their descriptions for use by any external modules calling this root module.
    • main.tf - This file contains the core module code for creating the solution’s infrastructure.
    • development.tfvars - This file will contain the inputs for the instance of the module that is being deployed. Content in this file will vary from instance to instance.
Note

Terraform will merge content from any file ending in a .tf extension in the module folder to create the full module content. Because of this, using different files is not required. We encourage file separation to allow for organizing code in a way that makes it easier to maintain. While the naming structure we’ve used is common, there are many other valid file naming and organization options that can be used.

In our example, we will use the following variables as inputs to allow for customization:

  • location - The location where our infrastructure will be deployed.
  • name_prefix - This will be used to preface all of the resource naming.
  • virtual_network_cidr - This will be used to ensure IP uniqueness for the deployment.
  • tags - The custom tags to use for each deployment.

Finally, we will export the following outputs:

  • resource_group_name - This will allow for finding this deployment if there are multiples.
  • virtual_machine_name - This can be used to find and log in to the VM if needed.
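As a preview, a minimal outputs.tf exposing these two values could look like the sketch below. The module labels and their name outputs are assumptions based on the AVM modules used later in this guide:

output "resource_group_name" {
  description = "The name of the resource group, to help locate this deployment among multiples."
  value       = module.avm-res-resources-resourcegroup.name # assumed module label and output
}

output "virtual_machine_name" {
  description = "The name of the virtual machine, to help find and log in to it if needed."
  value       = module.avm-res-compute-virtualmachine.name # assumed module label and output
}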

Identifying AVM modules that match our solution

Now that we’ve determined our architecture and module configurations, we need to see what AVM modules exist for use in our solution. To do this, we will open the AVM Terraform pattern module index and check whether any existing pattern modules match our requirements. In this case, no pattern modules fit our needs. If this were a common pattern, we could open an issue on the AVM GitHub repository to get assistance from the AVM project to create a pattern module matching our requirements. Since our architecture isn’t common, we’ll continue to the next step.

When a pattern module fitting our needs doesn’t exist for a solution, leveraging AVM resource modules to build our own solution is the next best option. Review the AVM Terraform published resource module index for each of the resource types included in your architecture. For each AVM module, capture a link to the module to allow for a review of the documentation details on the Terraform Registry website.

Note

Some of the published pattern modules cover multi-resource configurations that can sometimes be interpreted as a single resource. Be sure to check the pattern index for groups of resources that may be part of your architecture and that don’t exist in the resource module index. (e.g., Virtual WAN)

For our sample architecture, we have the following AVM resource modules at our disposal. Click on each module to explore its documentation on the Terraform Registry.

Develop the Solution Code

We can now begin coding our solution. We will create each element individually so that we can test our deployment as we build it out. This also allows us to correct bugs incrementally, rather than troubleshooting a large number of resources at the end.

Creating the terraform.tf file

Let’s begin by configuring the provider details necessary to build our solution. Since this is a root module, we want to include any provider and Terraform version constraints for this module. We’ll periodically come back and add any needed additional providers if our design includes a resource from a new provider.

Open up your development IDE (Visual Studio Code in our example) and create a file named terraform.tf in your root directory.

Add the following code to your terraform.tf file:

1terraform {
2  required_version = "~> 1.9"
3  required_providers {
4  }
5}
Note

Always click on the “Copy to clipboard” button in the top right corner of the Code sample area in order not to have the line numbers included in the copied code.

This specifies that the Terraform binary used to run your module can be any version from 1.9 up to, but not including, 2.0: the ~> operator pins the major version while allowing newer minor and patch releases. This is a good compromise for allowing a range of binary versions while also ensuring support for any required features that are used as part of the module. This can include things like newly introduced functions or support for new keywords.
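The same ~> operator can pin provider versions once entries are added to required_providers. We leave required_providers empty in this walkthrough and let the AVM modules declare their own constraints, so the azurerm pin below is purely a hypothetical illustration:

terraform {
  required_version = "~> 1.9"

  required_providers {
    # Hypothetical pin: allow any 4.x release of the AzureRM provider.
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}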

Since we are developing our solution incrementally, we should validate our code. To do this, we will take the following steps:

  1. Open up a terminal window if it is not already open. Some IDEs include an integrated terminal you can use for this.
  2. Change directory to the module directory by typing cd and then the path to the module. As an example, if the module directory was named example we would run cd example.
  3. Run terraform init to initialize your provider file.

You should now see a message indicating that Terraform has been successfully initialized. This tells us that our code is error-free and we can continue. If you get errors, examine the provider syntax for typos, missing quotes, or missing brackets.
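If initialization succeeds, the end of the output should include Terraform’s standard confirmation message (abbreviated here):

$ terraform init
Initializing the backend...
Initializing provider plugins...

Terraform has been successfully initialized!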

Creating a variables.tf file

Because our module is intended to be reusable, we want to provide the capability to customize each module call with those items that will differ between them. This is done by using variables to accept inputs into the module. We’ll define these inputs in a separate file named variables.tf.

Go back to the IDE, and create a file named variables.tf in the working directory.

Add the following code to your variables.tf file to configure the inputs for our example:

 1variable "name_prefix" {
 2  description = "Prefix for the name of the resources"
 3  type        = string
 4  default     = "example"
 5}
 6
 7variable "location" {
 8  description = "The Azure location to deploy the resources"
 9  type        = string
10  default     = "East US"
11}
12
13variable "virtual_network_cidr" {
14  description = "The CIDR prefix for the virtual network. This should be at least a /22. Example 10.0.0.0/22"
15  type        = string
16}
17
18variable "tags" {
19  description = "Tags to be applied to all resources"
20  type        = map(string)
21  default     = {}
22}
Note

Note that each variable definition includes a type definition to guide module users on how to properly define an input. Also note that it is possible to set a default value. This allows module consumers to avoid setting a value if they find the default to be acceptable.
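As an optional extension that is not part of this example’s variable definitions, Terraform variable validation blocks can catch malformed inputs at plan time. A minimal sketch for the CIDR input might look like this:

variable "virtual_network_cidr" {
  description = "The CIDR prefix for the virtual network. This should be at least a /22. Example 10.0.0.0/22"
  type        = string

  # Reject values that are not parseable as a CIDR range.
  validation {
    condition     = can(cidrhost(var.virtual_network_cidr, 0))
    error_message = "virtual_network_cidr must be a valid CIDR range, for example 10.0.0.0/22."
  }
}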

We should now test the new content we’ve created for our module. To do this, first re-run terraform init on your command line. Note that nothing has changed and the initialization completes successfully. Since we now have module content, we will attempt to run the plan as the next step of the workflow.

Type terraform plan on your command line. Note that Terraform now asks us to provide a value for the var.virtual_network_cidr variable. This is because we don’t provide a default value for that input, so Terraform must have a valid value before it can continue. Type 10.0.0.0/22 at the prompt and press enter to allow the plan to complete. You should now see a message indicating that Your infrastructure matches the configuration and that no changes are needed.

Creating a development.tfvars file

There are multiple ways to provide input to the module we’re creating. We will create a tfvars file that can be supplied during plan and apply stages to minimize the need for manual input. tfvars files are a nice way to document inputs as well as allow for deploying different versions of your module. This is useful if you have a pipeline where infrastructure code is deployed first for development, and then is deployed for QA, staging, or production with different input values.

In your IDE, create a new file named development.tfvars in your working directory.

Now add the following content to your development.tfvars file.

1location = "westus2"
2name_prefix = "dev"
3virtual_network_cidr = "10.1.0.0/22"
4tags = {
5  environment = "development"
6  owner       = "dev-team"
7}
Note

Note that each variable has a value defined. Although only inputs without default values are required, we include values for all of the inputs for clarity. Consider doing this in your environments so that someone looking at the tfvars file has a full picture of what values are being set.

Re-run the plan, but this time reference the .tfvars file using the following command: terraform plan -var-file=development.tfvars. You should get a successful completion without needing to manually provide inputs.

Creating the main.tf file

Now that we’ve created the supporting files, we can start building the actual infrastructure code in our main file. We will add one AVM resource module at a time so that we can test each as we implement them.

Return to your IDE and create a new file named main.tf.

Add a resource group

In Azure, we need a resource group to hold any infrastructure resources we create. This is a simple resource that typically wouldn’t require an AVM module, but we’ll include the AVM module so we can take advantage of the Role-Based Access Control (RBAC) interface if we need to restrict access to the resource group in future versions.

First, let’s visit the Terraform registry documentation page for the resource group and explore several key sections.

  1. Note the Provision Instructions box on the right-hand side of the page. This contains the module source and version details which allows us to copy the latest version syntax without needing to type everything ourselves.
  2. Now review the Readme tab in the middle of the page. It contains details about all required and optional inputs, resources that are created with the module, and any outputs that are defined. If you want to explore any of these items in detail, each element has a tab that you can review as needed.
  3. Finally, in the middle of the page, there is a drop-down menu named Examples that contains functioning examples for the AVM module. These are a good illustration of using copy/paste to bootstrap module code and then modifying it for your specific purpose.

Now that we’ve explored the registry content, let’s add a resource group to our module.

First, copy the content from the Provision Instructions box into our main.tf file.

1module "avm-res-resources-resourcegroup" {
2  source  = "Azure/avm-res-resources-resourcegroup/azurerm"
3  version = "0.2.1"
4  # insert the 2 required variables here
5}

On the module’s documentation page, go to the Inputs tab. Review the Required Inputs section. These are the values that don’t have defaults and are the minimum required to deploy the module. There are additional inputs in the Optional Inputs section that can be used to configure additional module functionality. Review these inputs and determine which values you would like to define in your AVM module call.

Now, replace the # insert the 2 required variables here comment with the following code to define the module inputs. Our main.tf code should look like the following:

1module "avm-res-resources-resourcegroup" {
2  source  = "Azure/avm-res-resources-resourcegroup/azurerm"
3  version = "0.2.1"
4
5  name     = "${var.name_prefix}-rg"
6  location = var.location
7  tags     = var.tags
8}
Note

Note how we’ve used the name_prefix variable and Terraform interpolation syntax to dynamically name the resource group. This allows for module customization and re-use. Also note that even though we chose to use the default module name of avm-res-resources-resourcegroup, we could modify the name of the module if needed.
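As an aside, the RBAC interface that motivated our choice of this module is available directly on this module call. A hypothetical future revision could grant a group read access by adding the standard AVM role_assignments input; the object ID below is a placeholder:

  role_assignments = {
    support_team_readers = {
      role_definition_id_or_name = "Reader"
      principal_id               = "00000000-0000-0000-0000-000000000000" # placeholder object ID
    }
  }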

After saving the file, we want to test our new content. To do this, return to the command line and first run terraform init. Notice how Terraform has downloaded the module code, as well as providers that the module requires. In this case, you can see the azurerm, random, and modtm providers were downloaded.

Let’s now deploy our resource group. First, let’s run a plan operation to review what will be created. Type terraform plan -var-file=development.tfvars and press enter to initiate the plan.

Add the features block

Notice that we get a Missing required argument error telling us that the azurerm provider needs a features argument. Adding the resource group AVM module means the azurerm provider is now required to provision resources in our module, and this provider requires a features block in its provider definition, which is missing from our configuration.

Return to the terraform.tf file and add the following content to it. Note how the features block is currently empty. If we needed to activate any feature flags in our module, we could add them here.

 1terraform {
 2  required_version = "~> 1.9"
 3  required_providers {
 4  }
 5}
 6
 7provider "azurerm" {
 8  features {
 9  }
10}

Re-run terraform plan -var-file=development.tfvars now that we have updated the features block.

Set the subscription ID

Note that we once again get an error. This time, the error indicates that subscription_id is a required provider property for plan/apply operations. This is a change that was introduced as part of the version 4 release of the AzureRM provider. We need to supply the ID of the deployment subscription where our resources will be created.

First, we need to get the subscription ID value. We will use the portal for this exercise, but using the Azure CLI, PowerShell, or the resource graph will also work to retrieve this value.

  1. Open the Azure portal.
  2. Enter Subscriptions in the search field at the top middle of the page.
  3. Select Subscriptions from the services menu in the search drop-down.
  4. Select the subscription you wish to deploy to, from the list of subscriptions.
  5. Find the Subscription ID field on the overview page and click the copy button to copy it to the clipboard.

Second, we need to update Terraform so that it can use the subscription ID. There are multiple ways to provide a subscription ID to the provider, including adding it to the provider block or using environment variables. For this scenario we’ll use environment variables to set the values so that we don’t have to re-enter them on each run. This also keeps us from storing the subscription ID in our code, since it is considered a sensitive value. Select a command from the list below based on your operating system.

  1. (Linux/macOS) - Run the following command with your subscription ID: export ARM_SUBSCRIPTION_ID=<your ID here>
  2. (Windows cmd) - Run the following command with your subscription ID: set ARM_SUBSCRIPTION_ID=<your ID here>
  3. (Windows PowerShell) - Run the following command with your subscription ID: $env:ARM_SUBSCRIPTION_ID = "<your ID here>"

Finally, we should now be able to complete our plan operation by re-running terraform plan -var-file=development.tfvars. Note that the plan will create three resources, two for telemetry and one for the resource group.

Deploy the resource group

We can complete testing by deploying the resource group. Run terraform apply -var-file=development.tfvars, then type yes and press enter when prompted to accept the changes. Terraform will create the resource group and notify you with an Apply complete message and a summary of the resources that were added, changed, and destroyed.

Deploy the Log Analytics Workspace

We can now continue by adding the Log Analytics Workspace to our main.tf file. We will follow a workflow similar to what we did with the resource group.

  1. Browse to the AVM Log Analytics Workspace module page in the Terraform Registry.
  2. Copy the module content from the Provision Instructions portion of the page into the main.tf file.

This time, instead of manually supplying module inputs, we will copy module content from one of the examples to minimize the amount of typing required. In most examples, the AVM module call is located at the bottom of the example.

  1. Navigate to the Examples drop-down menu in the documentation and select the default example from the menu. You will see a fully functioning example code which includes the module and any supporting resources. Since we only care about the workspace resource from this example, we can scroll to the bottom of the code block and find the module "log_analytics_workspace" line.
  2. Copy the content between the module brackets, with the exception of the line defining the module source. Because these examples are part of the testing methodology for the module, they use a relative path (../..) for the module source value, which will not work in our module call. To work around this, we copied those values from the Provision Instructions section of the module documentation in a previous step.
  3. Update the location and resource group name values to reference outputs from the resource group module. Using implicit references such as these allows Terraform to determine the order in which resources should be built.
  4. Update the name field using the name_prefix variable to allow for customization, following a similar pattern to what we used on the resource group.

The Log Analytics module content should look like the following code block. For simplicity, you can also copy this directly to avoid multiple copy/paste actions.

 1module "avm-res-operationalinsights-workspace" {
 2  source  = "Azure/avm-res-operationalinsights-workspace/azurerm"
 3  version = "0.4.2"
 4
 5  enable_telemetry                          = true
 6  location                                  = module.avm-res-resources-resourcegroup.resource.location
 7  resource_group_name                       = module.avm-res-resources-resourcegroup.name
 8  name                                      = "${var.name_prefix}-law"
 9  log_analytics_workspace_retention_in_days = 30
10  log_analytics_workspace_sku               = "PerGB2018"
11}

Again, we will need to run terraform init to allow Terraform to initialize a copy of the AVM Log Analytics module.

Now, we can deploy the Log Analytics workspace by running terraform apply -var-file=development.tfvars, typing yes, and pressing enter. Note that Terraform will only create the new Log Analytics resources since the resource group already exists. This is one of the key benefits of deploying using Infrastructure as Code (IaC) tools like Terraform.

Note

Note that we ran the terraform apply command without first running terraform plan. Because terraform apply runs a plan before prompting for the apply, we opted to shorten the instructions by skipping the explicit plan step. If you are testing in a live environment, you may want to run the plan step and save the plan as part of your governance or change control processes.
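For example, a saved-plan sequence (optional here, but common in change-controlled pipelines) looks like this:

# Write the plan to a file for review or approval...
terraform plan -var-file=development.tfvars -out=development.tfplan

# ...then apply exactly what was reviewed, with no re-prompting.
terraform apply development.tfplan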

Deploy the Azure Key Vault

Our solution calls for a simple Key Vault implementation to store virtual machine secrets. We’ll follow the same workflow for deploying the Key Vault as we used for the resource group and Log Analytics workspace. However, since Key Vaults require data-plane roles for managing secrets and keys, we will need to use the module’s RBAC interface and a data source to configure Role-Based Access Control (RBAC) during the deployment.

Note

For this exercise, we will provision the deployment user with data rights on the Key Vault. In your environment, you will likely want to either provide additional roles as inputs or statically assign users or groups to the Key Vault data roles. For simplicity, we also leave public access enabled on the Key Vault, since we cannot assume a private deployment environment. In an environment where your deployment machine is on a private network, it is recommended to restrict public access to the Key Vault.

Before we implement the AVM module for the Key Vault, we want to use a data source to read the client details of the identity running the current Terraform deployment.

Add the following line to your main.tf file and save it.

1data "azurerm_client_config" "this" {}

Key Vaults use a global namespace, which means we also need to randomize part of the name to avoid potential naming collisions with other Key Vault deployments. We will use Terraform’s random provider to generate a random string, which we will append to the Key Vault name. Add the following code to your main module to create the random_string resource we will use for naming.

1resource "random_string" "name_suffix" {
2  length  = 4
3  special = false
4  upper   = false
5}

Now we can continue with adding the AVM Key Vault module to our solution.

  1. Browse to the AVM Key Vault resource module page in the Terraform Registry.
  2. Copy the module content from the Provision Instructions portion of the page into the main.tf file.
  3. This time, we’re going to select relevant content from the Create secret example to fill out our module.
  4. Copy the name, location, enable_telemetry, resource_group_name, tenant_id, and role_assignments value content from the example and paste it into the new Key Vault module in your solution.
  5. Update the name value to be "${var.name_prefix}-kv-${random_string.name_suffix.result}"
  6. Update the location and resource_group_name values to the same implicit resource group module references we used in the Log Analytics workspace.
  7. Set the enable_telemetry value to true.
  8. Leave the tenant_id and role_assignments values the same as they are in the example.

Our architecture calls for us to include a diagnostic settings configuration for each resource that supports it. We’ll use the diagnostic-settings example to copy this content.

  1. Return to the documentation page and select the diagnostic-settings option from the examples drop-down.
  2. Locate the Key Vault resource in the example’s code block and copy the diagnostic_settings value and paste it into the Key Vault module block we’re building in main.tf.
  3. Update the name value to use our prefix variable to allow for name customization.
  4. Update the workspace_resource_id value to be an implicit reference to the output from the previously implemented Log Analytics module (module.avm-res-operationalinsights-workspace.resource_id in our code).

Finally, we will allow public access, so that our deployer machine can add secrets to the Key Vault. If your environment doesn’t allow public access for Key Vault deployments, locate the public IP address of your deployer machine (this may be an external NAT IP for your network) and add it to the network_acls.ip_rules list value using CIDR notation.

  1. Set the network_acls input to null in your module block for the Key Vault. (A restricted alternative is sketched below.)
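For reference, a restricted alternative inside the same module block might look like the following sketch. The attribute names mirror the azurerm_key_vault network_acls schema, and the IP address is a documentation-range placeholder:

  network_acls = {
    bypass         = "AzureServices"
    default_action = "Deny"
    ip_rules       = ["203.0.113.10/32"] # your deployer's public IP in CIDR notation
  }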

Your Key Vault module definition should now look like the following:

 1module "avm-res-keyvault-vault" {
 2  source  = "Azure/avm-res-keyvault-vault/azurerm"
 3  version = "0.10.0"
 4
 5  enable_telemetry    = true
 6  location            = module.avm-res-resources-resourcegroup.resource.location
 7  resource_group_name = module.avm-res-resources-resourcegroup.name
 8  name                = "${var.name_prefix}-kv-${random_string.name_suffix.result}"
 9  tenant_id           = data.azurerm_client_config.this.tenant_id
10  network_acls        = null
11
12  diagnostic_settings = {
13    to_la = {
14      name                  = "${var.name_prefix}-kv-diags"
15      workspace_resource_id = module.avm-res-operationalinsights-workspace.resource_id
16    }
17  }
18
19  role_assignments = {
20    deployment_user_kv_admin = {
21      role_definition_id_or_name = "Key Vault Administrator"
22      principal_id               = data.azurerm_client_config.this.object_id
23    }
24  }
25}
Note

One of the core values of AVM is the standard configuration for interfaces across modules. The Role Assignments interface we used as part of the Key Vault deployment is a good example of this.

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Deploy the NAT Gateway

Our architecture calls for a NAT Gateway to allow virtual machines to access the internet. We will use the NAT Gateway resource_id output in future modules to link the virtual machine subnet.

  1. Browse to the AVM NAT Gateway resource module page in the Terraform Registry.
  2. Copy the module definition and source from the Provision Instructions card from the module main page.
  3. Copy the remaining module content from the default example excluding the subnet associations map, as we will do the association when we build the vnet.
  4. Update the location and resource_group_name using implicit references from our resource group module.
  5. Then update each of the name values to use the name_prefix variable.

Review the following code to see each of these changes.

 1module "avm-res-network-natgateway" {
 2  source  = "Azure/avm-res-network-natgateway/azurerm"
 3  version = "0.2.1"
 4
 5  name                = "${var.name_prefix}-natgw"
 6  enable_telemetry    = true
 7  location            = module.avm-res-resources-resourcegroup.resource.location
 8  resource_group_name = module.avm-res-resources-resourcegroup.name
 9
10  public_ips = {
11    public_ip_1 = {
12      name = "${var.name_prefix}-natgw-pip"
13    }
14  }
15}

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Deploy the Network Security Group

Our architecture calls for a Network Security Group (NSG) allowing SSH access to the virtual machine subnet. We will use the NSG AVM resource module to accomplish this task.

  1. Browse to the AVM Network Security Group resource module page in the Terraform Registry.
  2. Copy the module definition and source from the Provision Instructions card from the module main page.
  3. Copy the remaining module content from the example_with_NSG_rule example.
  4. Update the location and resource_group_name using implicit references from our resource group module.
  5. Update the name value using the name_prefix variable interpolation as we did with the other modules.
  6. Copy the map entry labeled rule02 from the locals nsg_rules map and paste it between two curly braces to create the security_rules attribute in the NSG module we’re building.
  7. Make the following updates to the rule details:
    1. Rename the map key from "rule02" to "rule01".
    2. Update the name to use the var.name_prefix interpolation and SSH to describe the rule.
    3. Update the destination_port_ranges list to be ["22"].

Upon completion, the code for the NSG module should be as follows:

 1module "avm-res-network-networksecuritygroup" {
 2  source  = "Azure/avm-res-network-networksecuritygroup/azurerm"
 3  version = "0.4.0"
 4  resource_group_name = module.avm-res-resources-resourcegroup.name
 5  name                = "${var.name_prefix}-vm-subnet-nsg"
 6  location            = module.avm-res-resources-resourcegroup.resource.location
 7
 8  security_rules = {
 9    "rule01" = {
10      name                       = "${var.name_prefix}-ssh"
11      access                     = "Allow"
12      destination_address_prefix = "*"
13      destination_port_ranges    = ["22"]
14      direction                  = "Inbound"
15      priority                   = 200
16      protocol                   = "Tcp"
17      source_address_prefix      = "*"
18      source_port_range          = "*"
19    }
20  }
21}

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Deploy the Virtual Network

We can now continue the build-out of our architecture by configuring the virtual network (vnet) deployment. This will follow a similar pattern as the previous resource modules, but this time, we will also add some network functions to help us customize the subnet configurations.

  1. Browse to the AVM Virtual Network resource module page in the Terraform Registry.
  2. Copy the module definition and source from the Provision Instructions card on the module’s main page.
  3. After looking through the examples, this time, we’ll use the complete example as a source to copy our content.
  4. Copy the resource_group_name, location, name, and address_space lines and replace their values with our deployment specific variables or module references.
  5. We’ll copy the subnets map and duplicate the subnet0 map for each subnet.
  6. Now we will update the map key and name values for each subnet so that they are unique.
  7. Then we’ll use the cidrsubnet function to dynamically generate the CIDR range for each subnet. You can explore the function documentation for more details; a short example follows this list.
  8. We will also populate the nat_gateway object on subnet0 with the resource_id output from our NAT Gateway module.
  9. To configure the NSG on the VM subnet we need to link it. Add a network_security_group attribute to the subnet0 definition and replace the value with the resource_id output from the NSG module.
  10. Finally, we’ll copy the diagnostic settings from the example and update the implicit references to point to our previously deployed Log Analytics workspace.
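To illustrate step 7, cidrsubnet(prefix, newbits, netnum) extends the prefix length by newbits bits and returns the netnum-th resulting subnet. A quick terraform console sketch (the /23 input value here is purely illustrative):

> cidrsubnet("10.0.0.0/23", 1, 0)
"10.0.0.0/24"
> cidrsubnet("10.0.0.0/23", 1, 1)
"10.0.1.0/24"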

After making these changes our virtual network module call code will be as follows:

 1module "avm-res-network-virtualnetwork" {
 2  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
 3  version = "0.8.1"
 4
 5  resource_group_name = module.avm-res-resources-resourcegroup.name
 6  location            = module.avm-res-resources-resourcegroup.resource.location
 7  name                = "${var.name_prefix}-vnet"
 8
 9  address_space = [var.virtual_network_cidr]
10
11  subnets = {
12    subnet0 = {
13      name                            = "${var.name_prefix}-vm-subnet"
14      default_outbound_access_enabled = false
15      address_prefixes = [cidrsubnet(var.virtual_network_cidr, 1, 0)]
16      nat_gateway = {
17        id = module.avm-res-network-natgateway.resource_id
18      }
19      network_security_group = {
20        id = module.avm-res-network-networksecuritygroup.resource_id
21      }
22    }
23    bastion = {
24      name                            = "AzureBastionSubnet"
25      default_outbound_access_enabled = false
26      address_prefixes = [cidrsubnet(var.virtual_network_cidr, 1, 1)]
27    }
28  }
29
30  diagnostic_settings = {
31    sendToLogAnalytics = {
32      name                           = "${var.name_prefix}-vnet-diagnostic"
33      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
34      log_analytics_destination_type = "Dedicated"
35    }
36  }
37}
Note

Note how the Log Analytics workspace reference ends in resource_id. Each AVM module is required to export its Azure resource ID with the resource_id name to allow for consistent references.

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Deploy the Bastion service

We want to allow for secure remote access to the virtual machine for configuration and troubleshooting tasks. We’ll use Azure Bastion to accomplish this objective following a similar workflow to our other resources.

  1. Browse to the AVM Bastion resource module page in the Terraform Registry.
  2. Copy the module definition and source from the Provision Instructions card on the module’s main page.
  3. Copy the remaining module content from the Simple Deployment example.
  4. Update the location and resource_group_name values using implicit references to our resource group module.
  5. Update the name value using the name_prefix variable interpolation as we did with the other modules.
  6. Finally, update the subnet_id value to include an implicit reference to the bastion keyed subnet from our virtual network module.

Our architecture calls for diagnostic settings to be configured on the Azure Bastion resource. In this case, there aren’t any examples that include this configuration. However, since the diagnostic settings interface is one of the standard interfaces in Azure Verified Modules, we can just copy the interface definition from our virtual network module.

  1. Locate the virtual network module and copy the diagnostic_settings value from it.
  2. Paste the diagnostic_settings value into the code for our Bastion module.
  3. Update the diagnostic setting’s name value from vnet to bastion.

The new code we added for the Bastion resource will be as follows:

 1module "avm-res-network-bastionhost" {
 2  source  = "Azure/avm-res-network-bastionhost/azurerm"
 3  version = "0.7.2"
 4
 5  name                = "${var.name_prefix}-bastion"
 6  resource_group_name = module.avm-res-resources-resourcegroup.name
 7  location            = module.avm-res-resources-resourcegroup.resource.location
 8  ip_configuration = {
 9    subnet_id = module.avm-res-network-virtualnetwork.subnets["bastion"].resource_id
10  }
11
12  diagnostic_settings = {
13    sendToLogAnalytics = {
14      name                           = "${var.name_prefix}-bastion-diagnostic"
15      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
16      log_analytics_destination_type = "Dedicated"
17    }
18  }
19}
Note

Pay attention to the subnet_id syntax. In the virtual network module, the subnets are created by a sub-module, which allows us to reference each of them using the map key that was defined in the subnets input. Again, we see the consistent output naming with the sub-module’s resource_id output.

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Deploy the virtual machine

The final step in our deployment will be our application virtual machine. We’ve had good success with our workflow so far, so we’ll use it for this step as well.

  1. Browse to the AVM Virtual Machine resource module page in the Terraform Registry.
  2. Copy the module definition and source from the Provision Instructions card on the module’s main page.
  3. Copy the remaining module content from the linux_default example.
  4. Update the location and resource_group_name values using implicit references to our resource group module.
  5. To comply with Well-Architected Framework guidance, we encourage defining a zone if your region supports it. Update the zone input to 1.
  6. Update the sku_size input to “Standard_D2s_v5”.
  7. Update the name values using the name_prefix variable interpolation as we did with the other modules and include the output from the random_string.name_suffix resource to add uniqueness.
  8. Set the account_credentials.key_vault_configuration.resource_id value to reference the resource_id output from the Key Vault module.
  9. Update the private_ip_subnet_resource_id value to an implicit reference to the subnet0 subnet output from the virtual network module.

Because the default Linux example doesn’t include diagnostic settings, we need to add that content in a different way. Since the diagnostic settings interface has a standard schema, we can copy the diagnostic_settings input from our virtual network module.

  1. Locate the virtual network module in your code and copy the diagnostic_settings map from it.
  2. Paste the diagnostic_settings content into your virtual machine module code.
  3. Update the name value to reflect that it applies to the virtual machine.

The new code we added for the virtual machine resource will be as follows:

 1module "avm-res-compute-virtualmachine" {
 2  source  = "Azure/avm-res-compute-virtualmachine/azurerm"
 3  version = "0.19.1"
 4
 5  enable_telemetry    = true
 6  location            = module.avm-res-resources-resourcegroup.resource.location
 7  resource_group_name = module.avm-res-resources-resourcegroup.name
 8  name                = "${var.name_prefix}-vm"
 9  os_type             = "Linux"
10  sku_size            = "Standard_D2s_v5"
11  zone                = 1
12
13  source_image_reference = {
14    publisher = "Canonical"
15    offer     = "0001-com-ubuntu-server-focal"
16    sku       = "20_04-lts-gen2"
17    version   = "latest"
18  }
19
20  network_interfaces = {
21    network_interface_1 = {
22      name = "${var.name_prefix}-nic-${random_string.name_suffix.result}"
23      ip_configurations = {
24        ip_configuration_1 = {
25          name                          = "${var.name_prefix}-ipconfig-${random_string.name_suffix.result}"
26          private_ip_subnet_resource_id = module.avm-res-network-virtualnetwork.subnets["subnet0"].resource_id
27        }
28      }
29    }
30  }
31
32  diagnostic_settings = {
33    sendToLogAnalytics = {
34      name                           = "${var.name_prefix}-vm-diagnostic"
35      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
36      log_analytics_destination_type = "Dedicated"
37    }
38  }
39}

Continue the incremental testing of your module by running another terraform init and terraform apply -var-file=development.tfvars sequence.

Creating the outputs.tf file

The final piece of our module is to export any values that may need to be consumed by module users. From our architecture, we’ll export the resource group name and the virtual machine resource name.

  1. Create an outputs.tf file in your IDE.
  2. Create an output named resource_group_name and set the value to an implicit reference to the resource group module’s name output. Include a brief description for the output.
  3. Create an output named virtual_machine_name and set the value to an implicit reference to the virtual machine module’s name output. Include a brief description for the output.

The new code we added for the outputs will be as follows:

1output "resource_group_name" {
2  value =  module.avm-res-resources-resourcegroup.name
3  description = "The resource group name where the resources are deployed"
4}
5
6output "virtual_machine_name" {
7    value = module.avm-res-compute-virtualmachine.name
8    description = "The name of the virtual machine"
9}

Because no new modules were created, we don’t need to run terraform init to test this change. Run terraform apply -var-file=development.tfvars to see the new outputs that have been created.
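If this configuration is later consumed as a module itself, a caller can reference these outputs. A minimal, hypothetical sketch (the local path and symbolic names are assumptions for illustration):

module "solution" {
  source = "./modules/avm-solution" # hypothetical path to this configuration

  # ... inputs such as name_prefix, location, and virtual_network_cidr
}

output "vm_name" {
  # surfaces the virtual machine name exported by the solution module
  value = module.solution.virtual_machine_name
}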

Update the terraform.tf file

It is a recommended practice to define the required versions of the providers for your module to ensure consistent behavior when it is run. In this case we are going to be slightly permissive and allow minor and patch versions to fluctuate, since those are not supposed to include breaking changes. In a production environment, you would likely want to pin a specific version to guarantee behavior.
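The pessimistic constraint operator (~>) used below allows only the rightmost specified version component to increase. A quick sketch of the difference (the version numbers are illustrative):

version = "~> 4.27" # allows 4.27, 4.28, and later 4.x releases, but not 5.0
version = "4.27.0"  # exact pin: only this release is accepted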

  1. Run terraform init to review the providers and versions that are currently installed.
  2. Update the required_providers block in your terraform.tf file for each provider listed in the init output.

The updated code we added for the providers in the terraform.tf file will be as follows:

terraform {
  required_version = "~> 1.9"
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "~> 2.3"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.27"
    }
    modtm = {
      source  = "azure/modtm"
      version = "~> 0.3"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.7"
    }
    time = {
      source  = "hashicorp/time"
      version = "~> 0.13"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.1"
    }
  }
}

provider "azurerm" {
  features {}
}

Conclusion

Congratulations on successfully implementing a solution using Azure Verified Modules! You built out the sample architecture using module documentation, taking advantage of features like standard interfaces and pre-defined defaults to simplify the development experience.

Note

This was a long exercise and mistakes can happen. If you’re getting errors or a resource is incomplete and you want to see the final main.tf, expand the following code block to see the full file.

  1module "avm-res-resources-resourcegroup" {
  2  source  = "Azure/avm-res-resources-resourcegroup/azurerm"
  3  version = "0.2.1"
  4
  5  name = "${var.name_prefix}-rg"
  6  location = var.location
  7  tags = var.tags
  8}
  9
 10module "avm-res-operationalinsights-workspace" {
 11  source  = "Azure/avm-res-operationalinsights-workspace/azurerm"
 12  version = "0.4.2"
 13
 14  enable_telemetry                          = true
 15  location                                  = module.avm-res-resources-resourcegroup.resource.location
 16  resource_group_name                       = module.avm-res-resources-resourcegroup.name
 17  name                                      = "${var.name_prefix}-law"
 18  log_analytics_workspace_retention_in_days = 30
 19  log_analytics_workspace_sku               = "PerGB2018"
 20}
 21
 22data "azurerm_client_config" "this" {}
 23
 24resource "random_string" "name_suffix" {
 25  length  = 4
 26  special = false
 27  upper   = false
 28}
 29
 30module "avm-res-keyvault-vault" {
 31  source  = "Azure/avm-res-keyvault-vault/azurerm"
 32  version = "0.10.0"
 33
 34  enable_telemetry    = true
 35  location            = module.avm-res-resources-resourcegroup.resource.location
 36  resource_group_name = module.avm-res-resources-resourcegroup.name
 37  name                = "${var.name_prefix}-kv-${random_string.name_suffix.result}"
 38  tenant_id           = data.azurerm_client_config.this.tenant_id
 39  network_acls        = null
 40
 41  diagnostic_settings = {
 42    to_la = {
 43      name                  = "${var.name_prefix}-kv-diags"
 44      workspace_resource_id = module.avm-res-operationalinsights-workspace.resource_id
 45    }
 46  }
 47
 48  role_assignments = {
 49    deployment_user_kv_admin = {
 50      role_definition_id_or_name = "Key Vault Administrator"
 51      principal_id               = data.azurerm_client_config.this.object_id
 52    }
 53  }
 54}
 55
 56module "avm-res-network-natgateway" {
 57  source  = "Azure/avm-res-network-natgateway/azurerm"
 58  version = "0.2.1"
 59
 60  name                = "${var.name_prefix}-natgw"
 61  enable_telemetry    = true
 62  location            = module.avm-res-resources-resourcegroup.resource.location
 63  resource_group_name = module.avm-res-resources-resourcegroup.name
 64
 65  public_ips = {
 66    public_ip_1 = {
 67      name = "${var.name_prefix}-natgw-pip"
 68    }
 69  }
 70}
 71
 72module "avm-res-network-virtualnetwork" {
 73  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
 74  version = "0.8.1"
 75
 76  resource_group_name = module.avm-res-resources-resourcegroup.name
 77  location            = module.avm-res-resources-resourcegroup.resource.location
 78  name                = "${var.name_prefix}-vnet"
 79
 80  address_space = [var.virtual_network_cidr]
 81
 82  subnets = {
 83    subnet0 = {
 84      name                            = "${var.name_prefix}-vm-subnet"
 85      default_outbound_access_enabled = false
 86      address_prefixes = [cidrsubnet(var.virtual_network_cidr, 1, 0)]
 87      nat_gateway = {
 88        id = module.avm-res-network-natgateway.resource_id
 89      }
 90      network_security_group = {
 91        id = module.avm-res-network-networksecuritygroup.resource_id
 92      }
 93    }
 94    bastion = {
 95      name                            = "AzureBastionSubnet"
 96      default_outbound_access_enabled = false
 97      address_prefixes = [cidrsubnet(var.virtual_network_cidr, 1, 1)]
 98    }
 99  }
100
101  diagnostic_settings = {
102    sendToLogAnalytics = {
103      name                           = "${var.name_prefix}-vnet-diagnostic"
104      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
105      log_analytics_destination_type = "Dedicated"
106    }
107  }
108}
109
110module "avm-res-network-bastionhost" {
111  source  = "Azure/avm-res-network-bastionhost/azurerm"
112  version = "0.7.2"
113
114  name                = "${var.name_prefix}-bastion"
115  resource_group_name = module.avm-res-resources-resourcegroup.name
116  location            = module.avm-res-resources-resourcegroup.resource.location
117  ip_configuration = {
118    subnet_id = module.avm-res-network-virtualnetwork.subnets["bastion"].resource_id
119  }
120
121  diagnostic_settings = {
122    sendToLogAnalytics = {
123      name                           = "${var.name_prefix}-bastion-diagnostic"
124      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
125      log_analytics_destination_type = "Dedicated"
126    }
127  }
128}
129
130module "avm-res-network-networksecuritygroup" {
131  source  = "Azure/avm-res-network-networksecuritygroup/azurerm"
132  version = "0.4.0"
133  resource_group_name = module.avm-res-resources-resourcegroup.name
134  name                = "${var.name_prefix}-vm-subnet-nsg"
135  location            = module.avm-res-resources-resourcegroup.resource.location
136
137  security_rules = {
138    "rule01" = {
139      name                       = "${var.name_prefix}-ssh"
140      access                     = "Allow"
141      destination_address_prefix = "*"
142      destination_port_ranges    = ["22"]
143      direction                  = "Inbound"
144      priority                   = 200
145      protocol                   = "Tcp"
146      source_address_prefix      = "*"
147      source_port_range          = "*"
148    }
149  }
150}
151
152module "avm-res-compute-virtualmachine" {
153  source  = "Azure/avm-res-compute-virtualmachine/azurerm"
154  version = "0.19.1"
155
156  enable_telemetry    = true
157  location            = module.avm-res-resources-resourcegroup.resource.location
158  resource_group_name = module.avm-res-resources-resourcegroup.name
159  name                = "${var.name_prefix}-vm"
160  os_type             = "Linux"
161  sku_size            = "Standard_D2s_v5"
162  zone                = 1
163
164  source_image_reference = {
165    publisher = "Canonical"
166    offer     = "0001-com-ubuntu-server-focal"
167    sku       = "20_04-lts-gen2"
168    version   = "latest"
169  }
170
171  network_interfaces = {
172    network_interface_1 = {
173      name = "${var.name_prefix}-nic-${random_string.name_suffix.result}"
174      ip_configurations = {
175        ip_configuration_1 = {
176          name                          = "${var.name_prefix}-ipconfig-${random_string.name_suffix.result}"
177          private_ip_subnet_resource_id = module.avm-res-network-virtualnetwork.subnets["subnet0"].resource_id
178        }
179      }
180    }
181  }
182
183  diagnostic_settings = {
184    sendToLogAnalytics = {
185      name                           = "${var.name_prefix}-vm-diagnostic"
186      workspace_resource_id          = module.avm-res-operationalinsights-workspace.resource_id
187      log_analytics_destination_type = "Dedicated"
188    }
189  }
190}

AVM modules provide several key advantages over writing raw Terraform templates:

  1. Simplified Resource Configuration: AVM modules handle much of the complex configuration work behind the scenes
  2. Built-in Recommended Practices: The modules implement many of Microsoft’s recommended practices by default
  3. Consistent Outputs: Each module exposes a consistent set of outputs that can be easily referenced
  4. Reduced Boilerplate Code: What would normally require hundreds of lines of Terraform code can be accomplished in a fraction of the space

As you continue your journey with Azure and AVM, remember that this approach can be applied to more complex architectures as well. The modular nature of AVM allows you to mix and match components to build solutions that meet your specific needs while adhering to Microsoft’s Well-Architected Framework.

By using AVM modules as building blocks, you can focus more on your solution architecture and less on the intricacies of individual resource configurations, ultimately leading to faster development cycles and more reliable deployments.

Additional exercises

For additional learning, it can be helpful to experiment with modifying this solution. Here are some ideas you can try if you have time and would like to experiment further.

  1. Use the managed_identities interface to add a system-assigned managed identity to the virtual machine and give it Key Vault Administrator rights on the Key Vault (see the sketch after this list).
  2. Use the tags interface to assign tags directly to one or more resources.
  3. Add an Azure Monitoring Agent extension to the virtual machine resource.
  4. Add additional inputs, like the VM SKU, to your module to make it more customizable. Be sure to update the code and tfvars files to match.
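For the first exercise, a minimal sketch of the two standard interfaces involved follows; the system_assigned_mi_principal_id output name is an assumption you should verify against the virtual machine module’s documentation:

# On the virtual machine module call, enable a system-assigned identity:
managed_identities = {
  system_assigned = true
}

# On the Key Vault module call, grant that identity a role:
role_assignments = {
  vm_kv_admin = {
    role_definition_id_or_name = "Key Vault Administrator"
    principal_id               = module.avm-res-compute-virtualmachine.system_assigned_mi_principal_id # assumed output name
  }
}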

Clean up your environment

Once you have completed this set of exercises, it is a good idea to clean up your resources to avoid incurring costs for them. This can be done by running terraform destroy -var-file=development.tfvars and entering yes when prompted.

Solution Development

Considerations and steps of Solution Development

  • Decide on the IaC language (Bicep or Terraform)
  • Decide on the module sourcing method (public registry, private registry, inner-sourcing)
  • Decide on the orchestration method (template or pipeline)
  • Identify the resources needed for the solution (are they all available in AVM?)
  • Implement, validate, deploy, test the solution

Questions to cover on this page

  • Pick a realistically complex solution and demonstrate how to build it using AVM modules
  • Best practices for coding (link to official language specific guidance AND AVM specs where/if applicable)
  • Best practices for input and output parameters

Next steps

To be covered in separate, future articles.

To make this solution enterprise-ready, you need to consider the following:

  • Deploy with DevOps tools and practices (e.g., CI/CD in Azure DevOps, GitHub Actions, etc.)
  • Deploy into Azure Landing Zones (ALZ)
  • Make sure the solution follows the recommendations of the Well-Architected Framework (WAF) and that it complies with and integrates into your organization’s policies and standards, e.g.:
    • Security & Identity (e.g., RBAC, Entra ID, service principals, secrets management, MFA, etc.)
    • Networking (e.g., Azure Firewall, NSGs, etc.)
    • Monitoring (e.g., Azure Monitor, Log Analytics, etc.)
    • Cost management (e.g., Azure Cost Management, budgets, etc.)
    • Governance (e.g., Azure Policy, etc.)

Other recommendations

  • Don’t use the latest version; pin a specific version of the module
  • Don’t expose secrets in output parameters, on the command line, in logs, etc.
  • Don’t use hard-coded values; use parameters and variables instead
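To illustrate the last point, a minimal Terraform sketch (the variable name and default are illustrative):

# Hard-coded (avoid):
#   location = "westeurope"

# Parameterized (prefer):
variable "location" {
  type        = string
  description = "Azure region to deploy into"
  default     = "westeurope"
}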

Quickstart Guide

This quickstart guide offers step-by-step instructions for integrating Azure Verified Modules (AVM) into your solutions. It includes the initial setup, essential tools, and configurations required to deploy and manage your Azure resources efficiently using AVM.

The AVM Key Vault resource module, used as an example in this chapter, simplifies the deployment and management of Azure Key Vaults, ensuring secure storage and access to your secrets, keys, and certificates.

Leveraging Azure Verified Modules

Using AVM ensures that your infrastructure-as-code deployments follow Microsoft’s best practices and guidelines, providing a consistent and reliable foundation for your cloud solutions. AVM helps accelerate your development process, reduce the risk of misconfigurations, and enhance the security and compliance of your applications.

Using default values

The default values provided by AVM are generally safe, as they follow best practices and ensure a secure and reliable setup. However, it is important to review these values to ensure they meet your specific requirements and compliance needs. Customizing the default values may be necessary to align with your organization’s policies and the specific needs of your solution.

Exploring examples and module features

You can find examples and detailed documentation for each AVM module in their respective code repository’s README.MD file, which details features, input parameters, and outputs. The module’s documentation also provides comprehensive usage examples, covering various scenarios and configurations. Additionally, you can explore the module’s source code repository. This information will help you understand the full capabilities of the module and how to effectively integrate it into your solutions.

Subsections of Quickstart

Bicep Quickstart Guide

Introduction

This guide explains how to use an Azure Verified Module (AVM) in your Bicep workflow. By leveraging AVM modules, you can rapidly deploy and manage Azure infrastructure without having to write extensive code from scratch.

In this guide, you will deploy a Key Vault resource and a Personal Access Token as a secret.

This article is intended for a typical ‘infra-dev’ user (cloud infrastructure professional) who has a basic understanding of Azure and Bicep but is new to Azure Verified Modules and wants to learn how to deploy a module in the easiest way using AVM.

For additional Bicep learning resources use the Bicep documentation on the Microsoft Learn platform, or leverage the Fundamentals of Bicep learning path.

Prerequisites

You will need the following tools and components to complete this guide:

Make sure you have these tools set up before proceeding.

Module Discovery

Find your module

In this scenario, you need to deploy a Key Vault resource and some of its child resources, such as a secret. Let’s find the AVM module that will help us achieve this.

There are two primary ways for locating published Bicep Azure Verified Modules:

  • Option 1 (preferred): Using IntelliSense in the Bicep extension of Visual Studio Code, and
  • Option 2: browsing the AVM Bicep module index.

Option 1: Use the Bicep Extension in VS Code

  1. In VS Code, create a new file called main.bicep.
  2. Start typing module, then give your module a symbolic name, such as myModule.
  3. Use IntelliSense to select br/public.
  4. The list of all AVM modules published in the Bicep Public Registry will show up. Use this to explore the published modules.
    Note

    The Bicep VSCode extension is reading metadata through this JSON file. All modules are added to this file, as part of the publication process. This lists all the modules marked as Published or Orphaned on the AVM Bicep module index pages.

  5. Select the module you want to use and the version you want to deploy. Note how you can type full or partial module names to filter the list.
  6. Right click on the module’s path and select Go to definition or hit F12 to see the module’s source code. You can toggle between the Bicep and the JSON view.
  7. Hover over the module’s symbolic name to view its documentation URL. By clicking on it, you will be directed to the module’s GitHub folder in the bicep-registry-modules (BRM) repository. There, you can access the source code and documentation, as illustrated below.

Option 2: Use the AVM Bicep Module Index

Searching the Azure Verified Modules indexes is the most complete way to discover published as well as planned (proposed) modules. As shown in the video above, use the following steps to locate a specific module on the AVM website:

  1. Open the AVM website in your favorite web browser: https://aka.ms/avm.
  2. Expand the Module Indexes menu item and select the Bicep sub-menu item.
  3. Select the menu item for the module type you are searching for: Resource, Pattern, or Utility.
    Note

    Since the Key Vault module used as an example in this guide is published as an AVM resource module, it can be found under the resource modules section in the AVM Bicep module index.

  4. A detailed description of module classification types can be found under the related section here.
  5. Select the Published modules link from the table of contents at the top of the page.
  6. Use the in-page search feature of your browser. In most Windows browsers you can access it using the CTRL + F keyboard shortcut.
  7. Enter a search term to find the module you are looking for - e.g., Key Vault.
  8. Move through the search results until you locate the desired module. If you are unable to find a published module, return to the table of contents and expand the All modules link to search both published and proposed modules - i.e., modules that are planned, likely in development but not published yet.
  9. After finding the desired module, click on the module’s name. This link will lead you to the module’s folder in the bicep-registry-modules (BRM) repository, where the module’s source code and documentation can be found, including usage examples.

Module details and examples

In the module’s documentation, you can find detailed information about the module’s functionality, components, input parameters, outputs and more. The documentation also provides comprehensive usage examples, covering various scenarios and configurations.

Explore the Key Vault module’s documentation for usage examples and to understand its functionality, input parameters, and outputs.

  1. Note the mandatory and optional parameters in the Parameters section.

  2. Review the Usage examples section. AVM modules include multiple tests that can be found under the tests folder. These tests are also used as the basis of the usage examples ensuring they are always up-to-date and deployable.

In this example, you will deploy a secret in a new Key Vault instance with minimal input. AVM provides default parameter values with security and reliability being core principles. These settings apply the recommendations of the Well-Architected Framework where possible and appropriate.

Note how Example 2 does most of what you need to achieve.

Create your new solution using AVM

In this section, you will develop a Bicep template that references the AVM Key Vault module and its child resources and features. These include secret and role based access control configurations that grant permissions to a user.

  1. Start VSCode (make sure the Bicep extension is installed) and open a folder in which you want to work.
  2. Create a main.bicep and a dev.bicepparam file, which will hold parameters for your Key Vault deployment.
  3. Copy the content below into your main.bicep file. We have included comments to distinguish between the two occurrences of the name property.
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment' // the name of the module's deployment
  params: {
    name: '<keyVaultName>' // the name of the Key Vault instance - length and character limits apply
  }
}
Note

For Azure Key Vaults, the name must be globally unique. When you deploy the Key Vault, ensure you select a name that is alphanumeric, twenty-four characters or less, and unique enough to ensure no one else has used the name for their Key Vault. If the name has been previously taken, you will get an error.

After setting the values for the required properties, the module can be deployed. This minimal configuration automatically applies the security and reliability recommendations of the Well-Architected Framework where possible and appropriate. These settings can be overridden if needed.

Bicep-specific configuration

It is recommended to create a bicepconfig.json file, and enable use-recent-module-versions, which warns you to use the latest available version of the AVM module.

// This is a Bicep configuration file. It can be used to control how Bicep operates and to customize
// validation settings for the Bicep linter. The linter relies on these settings when evaluating your
// Bicep files for best practices. For further information, please refer to the official documentation at:
// https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-config
{
  "analyzers": {
    "core": {
      "rules": {
        "use-recent-module-versions": {
          "level": "warning",
          "message": "The module version is outdated. Please consider updating to the latest version."
        }
      }
    }
  }
}

Define the Key Vault instance

In this scenario - and every other real-world setup - there is more that you need to configure. You can open the module’s documentation by hovering over its symbolic name to see all of the module’s capabilities - including supported parameters.

Note

The Bicep extension facilitates code-completion, enabling you to easily locate and utilize the Azure Verified Module. This feature also provides the necessary properties for a module, allowing you to begin typing and leverage IntelliSense for completion.

  1. Add parameters and values to the main.bicep file to customize your configuration. These parameters are used for passing in the Key Vault name and enabling purge protection. You might not want to enable the latter in a non-production environment, as it makes it harder to delete and recreate resources.

The main.bicep file will now look like this:

// the scope, the deployment deploys resources to
targetScope = 'resourceGroup'

// parameters and default values
param keyVaultName string

@description('Disable for development deployments.')
param enablePurgeProtection bool = true

// the resources to deploy
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment'
  params: {
    name: keyVaultName
    enablePurgeProtection: enablePurgeProtection
    // more properties are not needed, as AVM provides default values
  }
}

Note that the Key Vault instance will be deployed within a resource group scope in our example.

  2. Create a dev.bicepparam file (this is optional) and set parameter values for your environment. You can now pass these values by referencing this file at the time of deployment (using PowerShell or Azure CLI).
using 'main.bicep'

// environment specific values
param keyVaultName = '<keyVaultName>'
param enablePurgeProtection = false

Create a secret and set permissions

Add a secret to the Key Vault instance and grant permissions to a user to work with the secret. Sample role assignments can be found in Example 3: Using large parameter set. See Parameter: roleAssignments for a list of pre-defined roles that you can reference by name instead of a GUID. This is a key benefit of using AVM, as the code is easy to read, which improves maintainability.

You can also leverage user-defined data types (UDTs) to simplify the parameterization of the modules instead of guessing or looking up parameters. To do so, first import the UDTs from the Key Vault and common types modules, then leverage them in your Bicep and parameter files.

A role assignment requires the principal ID that will be granted a role (specified by its name) on the resource. You can find your own ID with az ad signed-in-user show --query id.

// the scope, the deployment deploys resources to
targetScope = 'resourceGroup'

// parameters and default values
param keyVaultName string
// the PAT token is a secret and should not be stored in the Bicep (parameter) file.
// It can be passed via the command line, if you don't use a parameter file.
@secure()
param patToken string = newGuid()

@description('Enabled by default. Disable for development deployments')
param enablePurgeProtection bool = true

import { roleAssignmentType } from 'br/public:avm/utl/types/avm-common-types:0.4.0'
// the role assignments are optional in the Key Vault module
param roleAssignments roleAssignmentType[]?

// the resources to deploy
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment'
  params: {
    name: keyVaultName
    enablePurgeProtection: enablePurgeProtection
    secrets: [
      {
        name: 'PAT'
        value: patToken
      }
    ]
    roleAssignments: roleAssignments
  }
}

The secrets parameter references a UDT (User-defined data type) that is part of the Key Vault module and enables code completion for easy usage. There is no need to look up what attributes the secret object might have. Start typing and tab-complete what you need from the content offered by the Bicep extension’s integration with AVM.

The bicep parameter file now looks like this:

// reference to the Bicep file to set the context
using 'main.bicep'

// environment specific values
param keyVaultName = '<keyVaultName>'
param enablePurgeProtection = false
// for security reasons, the secret value must not be stored in this file.
// You can change it later in the deployed Key Vault instance, where you also renew it after expiration.

param roleAssignments = [
  {
    principalId: '<principalId>'
    // using the name of the role instead of looking up the GUID (which can also be used)
    roleDefinitionIdOrName: 'Key Vault Secrets Officer'
  }
]
Note

The display names for roleDefinitionIdOrName can be acquired the following two ways:

  • From the parameters section of the module’s documentation.
  • From the builtInRoleNames variable in the module’s source code. To get there, hit F12 while the cursor is on the part of the module path starting with br/public:.

Boost your development with VS Code IntelliSense

Leverage the IntelliSense feature in VS Code to speed up your development process. IntelliSense provides code completion, possible parameter values and structure. It helps you write code more efficiently by providing context-aware suggestions as you type.

Here is how quickly you can deliver the solution detailed in this section:

Deploy your solution

Now that your template and parameter file are ready, you can deploy your solution to Azure. Use PowerShell or the Azure CLI to deploy your solution.

Deploy with Azure PowerShell:
# Log in to Azure
Connect-AzAccount

# Select your subscription
Set-AzContext -SubscriptionId '<subscriptionId>'

# Deploy a resource group
New-AzResourceGroup -Name 'avm-quickstart-rg' -Location 'germanywestcentral'

# Invoke your deployment
New-AzResourceGroupDeployment -DeploymentName 'avm-quickstart-deployment' -ResourceGroupName 'avm-quickstart-rg' -TemplateParameterFile 'dev.bicepparam' -TemplateFile 'main.bicep'

Deploy with Azure CLI:

# Log in to Azure
az login

# Select your subscription
az account set --subscription '<subscriptionId>'

# Deploy a resource group
az group create --name 'avm-quickstart-rg' --location 'germanywestcentral'

# Invoke your deployment
az deployment group create --name 'avm-quickstart' --resource-group 'avm-quickstart-rg' --template-file 'main.bicep' --parameters 'dev.bicepparam'

Use the Azure portal, Azure PowerShell, or the Azure CLI to verify that the Key Vault instance and secret have been successfully created with the correct configuration.

Clean up your environment

When you are ready, you can remove the infrastructure deployed in this example. The following commands will remove all resources created by your deployment:

Clean up with Azure PowerShell:
# Delete the resource group
Remove-AzResourceGroup -Name "avm-quickstart-rg" -Force

# Purge the Key Vault
Remove-AzKeyVault -VaultName "<keyVaultName>" -Location "germanywestcentral" -InRemovedState -Force

Clean up with Azure CLI:

# Delete the resource group
az group delete --name 'avm-quickstart-rg' --yes --no-wait

# Purge the Key Vault
az keyvault purge --name '<keyVaultName>' --no-wait

Congratulations, you have successfully leveraged an AVM Bicep module to deploy resources in Azure!

Tip

We welcome your contributions and feedback to help us improve the AVM modules and the overall experience for the community!

Next Steps

For developing a more advanced solution, please see the lab titled “Introduction to using Azure Verified Modules for Bicep”.

Terraform Quickstart Guide

Introduction

This guide explains how to use an Azure Verified Module (AVM) in your Terraform workflow. With AVM modules, you can quickly deploy and manage Azure infrastructure without writing extensive code from scratch.

In this guide, you will deploy a Key Vault resource and generate and store a key.

This article is intended for a typical ‘infra-dev’ user (cloud infrastructure professional) who is new to Azure Verified Modules and wants to learn how to deploy a module in the easiest way using AVM. The user has a basic understanding of Azure and Terraform.

For additional Terraform resources, try a tutorial on the HashiCorp website or study the detailed documentation.

Prerequisites

You will need the following tools and components to complete this guide:

Before you begin, ensure you have these tools installed in your development environment.

Module Discovery

Find your module

In this scenario, you need to deploy a Key Vault resource and some of its child resources, such as a key. Let’s find the AVM module that will help us achieve this.

There are two primary ways for locating published Terraform Azure Verified Modules:

Use the Terraform Registry

The easiest way to find published AVM Terraform modules is by searching the Terraform Registry. Follow these steps to locate a specific module, as shown in the video above.

  • Use your web browser to go to the HashiCorp Terraform Registry
  • In the search bar at the top of the screen type avm. Optionally, append additional search terms to narrow the search results. (e.g., avm key vault for AVM modules with Key Vault in the name.)
  • Select see all to display the full list of published modules matching your search criteria.
  • Find the module you wish to use and select it from the search results.
Note

It is possible to discover other unofficial modules with avm in the name using this search method. Look for the Partner tag in the module title to determine if the module is part of the official set.

Use the AVM Terraform Module Index

Searching the Azure Verified Modules indexes is the most complete way to discover published as well as planned modules - shown as proposed. As presented in the video above, use the following steps to locate a specific module on the AVM website:

  • Use your web browser to open the AVM website at https://aka.ms/avm.
  • Expand the Module Indexes menu item and select the Terraform sub-menu item.
  • Select the menu item for the module type you are searching for: Resource, Pattern, or Utility.
    Note

    Since the Key Vault module used as an example in this guide is published as an AVM resource module, it can be found under the resource modules section in the AVM Terraform module index.

  • A detailed description of each module classification type can be found under the related section here.
  • Select the Published modules link from the table of contents at the top of the page.
  • Use the in-page search feature of your browser (in most Windows browsers you can access it using the CTRL + F keyboard shortcut).
  • Enter a search term to find the module you are looking for - e.g., Key Vault.
  • Move through the search results until you locate the desired module. If you are unable to find a published module, return to the table of contents and expand the All modules link to search both published and proposed modules - i.e., modules that are planned, likely in development but not published yet.
  • After finding the desired module, click on the module’s name. This link will lead you to the official HashiCorp Terraform Registry page for the module where you can find the module’s documentation and examples.

Module details and examples

Once you have identified the AVM module in the Terraform Registry you can find detailed information about the module’s functionality, components, input parameters, outputs and more. The documentation also provides comprehensive usage examples, covering various scenarios and configurations.

Explore the Key Vault module’s documentation and usage examples to understand its functionality, input variables, and outputs.

  • Note the Examples drop-down list and explore each example
  • Review the Readme tab to see module provider minimums, a list of resources and data sources used by the module, a nicely formatted version of the inputs and outputs, and a reference to any submodules that may be called.
  • Explore the Inputs tab and observe how each input has a detailed description and a type definition for you to use when adding input values to your module configuration.
  • Explore the Outputs tab and review each of the outputs that are exported by the AVM module for use by other modules in your deployment.
  • Finally, review the Resources tab to get a better understanding of the resources defined in the module.

In this example, you will deploy a key in a new Key Vault instance without needing to provide many other parameters. The AVM Key Vault resource module provides these capabilities and does so with security and reliability being core principles. The default settings of the module also apply the recommendations of the Well-Architected Framework where possible and appropriate.

Note how the create-key example seems to do what you need to achieve.

Create your new solution using AVM

Now that you have found the module details, you can use the content from the Terraform Registry to speed up your development in the following ways:

  1. Option 1: Create a solution using AVM module examples: duplicate a module example and edit it for your needs. This is useful if you are starting without any existing infrastructure and need to create supporting resources like resource groups as part of your deployment.
  2. Option 2: Create a solution by changing the AVM module input values: add the AVM module to an existing solution that already includes other resources. This method requires some knowledge of the resource(s) being deployed so that you can make choices about optional features configured in your solution’s version of the module.

Each deployment method includes a section below so that you can choose the method which best fits your needs.

Note

For Azure Key Vaults, the name must be globally unique. When you deploy the Key Vault, ensure you select a name that is alphanumeric, twenty-four characters or less, and unique enough to ensure no one else has used the name for their Key Vault. If the name has been used previously, you will get an error.

Option 1: Create a solution using AVM module examples

Use the following steps as a template for bootstrapping your new solution code from a module’s examples. The Key Vault resource module is used here as an example, but in practice you may choose any module that applies to your scenario.

  • Locate and select the Examples drop down menu in the middle of the Key Vault module page.
  • From the drop-down list select an example whose name most closely aligns with your scenario - e.g., create-key.
  • When the example page loads, read the example description to determine if this is the desired example. If it is not, return to the module main page, and select a different example until you are satisfied that the example covers the scenario you are trying to deploy. If you are unable to find a suitable example, leverage the last two steps in the option 2 instructions to modify the inputs of the selected example to match your requirements.
  • Scroll to the code block for the example and select the Copy button on the top right of the block to copy the content to the clipboard.
For reference, here is the sample code from the video:
provider "azurerm" {
  features {}
}

terraform {
  required_version = "~> 1.9"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.71"
    }
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}

module "regions" {
  source  = "Azure/avm-utl-regions/azurerm"
  version = "0.1.0"
}

# This allows us to randomize the region for the resource group.
resource "random_integer" "region_index" {
  max = length(module.regions.regions) - 1
  min = 0
}

# This ensures you have unique, CAF-compliant names for the resources.
module "naming" {
  source  = "Azure/naming/azurerm"
  version = "0.3.0"
}

resource "azurerm_resource_group" "this" {
  location = module.regions.regions[random_integer.region_index.result].name
  name     = module.naming.resource_group.name_unique
}

# Get current IP address for use in KV firewall rules
data "http" "ip" {
  url = "https://api.ipify.org/"
  retry {
    attempts     = 5
    max_delay_ms = 1000
    min_delay_ms = 500
  }
}

data "azurerm_client_config" "current" {}

module "key_vault" {
  source                        = "Azure/avm-res-keyvault-vault/azurerm"
  name                          = module.naming.key_vault.name_unique
  location                      = azurerm_resource_group.this.location
  enable_telemetry              = var.enable_telemetry
  resource_group_name           = azurerm_resource_group.this.name
  tenant_id                     = data.azurerm_client_config.current.tenant_id
  public_network_access_enabled = true
  keys = {
    cmk_for_storage_account = {
      key_opts = [
        "decrypt",
        "encrypt",
        "sign",
        "unwrapKey",
        "verify",
        "wrapKey"
      ]
      key_type = "RSA"
      name     = "cmk-for-storage-account"
      key_size = 2048
    }
  }
  role_assignments = {
    deployment_user_kv_admin = {
      role_definition_id_or_name = "Key Vault Administrator"
      principal_id               = data.azurerm_client_config.current.object_id
    }
  }
  wait_for_rbac_before_key_operations = {
    create = "60s"
  }
  network_acls = {
    bypass   = "AzureServices"
    ip_rules = ["${data.http.ip.response_body}/32"]
  }
}
  • In your IDE - Visual Studio Code in our example - create the main.tf file for your new solution.

  • Paste the content from the clipboard into main.tf.

  • AVM examples frequently use naming and/or region selection AVM utility modules to generate deployment region and/or naming values as well as any default values for required fields. If you want to use a specific region name or other custom resource values, remove the existing region and naming module calls and replace example input values with the new desired custom input values.

  • Once supporting resources such as resource groups have been modified, locate the module call for the AVM module - i.e., module "keyvault".

  • AVM module examples use dot notation for a relative reference that is useful during module testing. However, you will need to replace the relative reference with a source reference that points to the Terraform Registry source location. In most cases, this source reference has been left as a comment in the module example to simplify replacing the existing source dot reference. Perform the following two actions to update the source (a before/after sketch follows this list):

    • Delete the existing source definition that uses a dot reference - i.e., source = "../../".
    • Uncomment the Terraform Registry source reference by deleting the # sign at the start of the commented source line - i.e., source = "Azure/avm-res-keyvault-vault/azurerm".
    Note

    If the module example does not include a commented Terraform Registry source reference, you will need to copy it from the module’s main documentation page. Use the following steps to do so:

    • Use the breadcrumbs to leave the example documentation and return to the module’s primary Terraform Registry documentation page.
    • Locate the Provision Instructions box on the right side of the module’s Terraform Registry page in your web browser.
    • Select the second line that starts with source = from the code block - e.g., source = "Azure/avm-res-keyvault-vault/azurerm". Copy it onto the clipboard.
    • Return to your code solution and Paste the clipboard’s content where you previously deleted the source dot reference - e.g., source = "../../".
  • AVM module examples use a variable to enable or disable telemetry collection. Update the enable_telemetry input value to true or false, e.g., enable_telemetry = true.

  • Save your main.tf file changes and then proceed to the guide section for running your solution code.
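As referenced above, here is a quick before/after sketch of the source swap (the version pin is optional, and the value shown is only illustrative):

# Before (relative reference used during module testing):
#   source = "../../"

# After (Terraform Registry reference):
source  = "Azure/avm-res-keyvault-vault/azurerm"
version = "0.9.1" # optional, but pinning a version is recommended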

Option 2: Create a solution by changing the AVM module input values

For reference, here is the sample code from the video:
module "avm-res-keyvault-vault" {
  source                        = "Azure/avm-res-keyvault-vault/azurerm"
  version                       = "0.9.1"
  name                          = "<custom_name_here>"
  resource_group_name           = azurerm_resource_group.this.name
  location                      = azurerm_resource_group.this.location
  tenant_id                     = data.azurerm_client_config.current.tenant_id

  keys = {
    cmk_for_storage_account = {
      key_opts = [
        "decrypt",
        "encrypt",
        "sign",
        "unwrapKey",
        "verify",
        "wrapKey"
      ]
      key_type = "RSA"
      name     = "cmk-for-storage-account"
      key_size = 2048
    }
  }
  role_assignments = {
    deployment_user_kv_admin = {
      role_definition_id_or_name = "Key Vault Administrator"
      principal_id               = data.azurerm_client_config.current.object_id
    }
  }
  wait_for_rbac_before_key_operations = {
    create = "60s"
  }
}

Use the following steps as a guide for the custom implementation of an AVM Module in your solution code. This instruction path assumes that you have an existing Terraform file that you want to add the AVM module to.

  • Locate the Provision Instructions box on the right side of the module’s Terraform Registry page in your web browser.
  • Select the module template code from the code block and Copy it onto the clipboard.
  • Switch to your IDE and Paste the contents of the clipboard into your solution’s .tf Terraform file - main.tf in our example.
  • Return to the module’s Terraform Registry page in the browser and select the Inputs tab.
  • Review each input and add the inputs with the desired target value to the solution’s code - i.e., name = "custom_name".
  • Once you are satisfied that you have included all required inputs and any optional inputs, Save your file and continue to the next section.

Deploy your solution

After completing your solution development, you can move to the deployment stage. Follow these steps for a basic Terraform workflow:

  • Open the command line and log in to Azure using the Azure CLI

    az login
  • If your account has access to multiple tenants, you may need to modify the command to az login --tenant <tenant id> where “<tenant id>” is the GUID of the target tenant.

  • After logging in, select the target subscription from the list of subscriptions that you have access to.

  • Change the path to the directory where your completed terraform solution files reside.

    Note

    Many AVM modules depend on the AzureRM 4.0 Terraform provider, which mandates that a subscription id is configured. If you receive an error indicating that subscription_id is a required provider property, you will need to set a subscription id value for the provider. On Unix-based systems (Linux or macOS) you can configure this by running export ARM_SUBSCRIPTION_ID=<your subscription guid> on the command line. On Microsoft Windows, you can perform the same operation by running set ARM_SUBSCRIPTION_ID="<your subscription guid>" from the Windows command prompt or $env:ARM_SUBSCRIPTION_ID="<your subscription guid>" from a PowerShell prompt. Replace the “<your subscription guid>” notation in each command with your Azure subscription’s unique id value.

  • Initialize your Terraform project. This command downloads the necessary providers and modules to the working directory.

    terraform init
  • Before applying the configuration, it is good practice to validate it to ensure there are no syntax errors.

    terraform validate
  • Create a deployment plan. This step shows what actions Terraform will take to reach the desired state defined in your configuration.

    terraform plan
  • Review the plan to ensure that only the desired actions are in the plan output.

  • Apply the configuration and create the resources defined in your configuration file. This command will prompt you to confirm the deployment prior to making changes. Type yes to create your solution’s infrastructure.

    terraform apply
    Info

    If you are confident in your changes, you can add the -auto-approve switch to bypass manual approval: terraform apply -auto-approve

  • Once the deployment completes, validate that the infrastructure is configured as desired.

    Info

    A local terraform.tfstate file and a state backup file have been created during the deployment. The use of local state is acceptable for small temporary configurations, but production or long-lived installations should use a remote state configuration where possible. Configuring remote state is out of scope for this guide, but you can find details on using an Azure storage account for this purpose in the Microsoft Learn documentation.

Clean up your environment

When you are ready, you can remove the infrastructure deployed in this example. Use the following command to delete all resources created by your deployment:

terraform destroy
Note

Most Key Vault deployment examples activate soft-delete functionality as a default. The terraform destroy command will remove the Key Vault resource but does not purge a soft-deleted vault. You may encounter errors if you attempt to re-deploy a Key Vault with the same name during the soft-delete retention window. If you wish to purge the soft-delete for this example you can run az keyvault purge -n <keyVaultName> -l <regionName> using the Azure CLI, or Remove-AzKeyVault -VaultName "<keyVaultName>" -Location "<regionName>" -InRemovedState using Azure PowerShell.

Congratulations, you have successfully leveraged Terraform and AVM to deploy resources in Azure!

Tip

We welcome your contributions and feedback to help us improve the AVM modules and the overall experience for the community!

Next Steps

For developing a more advanced solution, please see the lab titled “Introduction to using Azure Verified Modules for Terraform”.