To find out more about future calls and watch the recordings of previous ones, see the Community Calls page!
AVM Site Updates
AVM has just gone through a major website overhaul - Check out our features!
New theme: Change between light and dark mode using the toggle in the bottom left corner, or leave it on dynamic mode to switch automatically based on your system settings.
New navigation option: Use the arrows in the top right corner to navigate back and forth between pages.
New search: Look for a module - e.g., “virtual network” or “vnet” - using the search bar in the top right corner and follow the search results to the related Bicep or Terraform index page.
New TOC menu: Try our new table of contents, now moved to the top left corner of the page, for easier, consistent in-page navigation.
New print functionality: Click the printer icon in the top right corner to generate a PDF or print parts of the documentation. Note that this feature will show all content in the hierarchy below the current page - i.e., when invoked from the home page, it will include the entire AVM documentation.
Other minor updates and bug fixes, such as:
More compact, zoomable mermaid diagrams for better viewing.
Last modified date on each page is now a clickable link to the GitHub commit history for that page.
Minor menu updates, including the default collapsed/expanded configuration.
Try things out and let us know what you think!
Introduction
Value Proposition
Azure Verified Modules (AVM) is an initiative to consolidate and set the standards for what a good Infrastructure-as-Code module looks like.
Modules will then align to these standards across languages (Bicep, Terraform, etc.), be classified as AVMs, and be made available from their respective language-specific registries.
AVM is a common code base, a toolkit for our Customers, our Partners, and Microsoft. It’s an official, Microsoft-driven initiative, with a devolved ownership approach to developing modules, leveraging internal and external communities.
Azure Verified Modules enable and accelerate consistent solution development and delivery of cloud-native or migrated applications and their supporting infrastructure by codifying Microsoft guidance (WAF), with best practice configurations.
Modules
Azure Verified Modules provides two types of modules: Resource and Pattern modules.
AVM modules are used to deploy Azure resources and their extensions, as well as reusable architectural patterns consistently.
Modules are composable building blocks that encapsulate groups of resources dedicated to one task.
Flexible, generalized, multi-purpose
Integrates child resources
Integrates extension resources
AVM improves code quality and provides a unified customer experience.
Important
AVM is owned, developed, and supported by Microsoft. You may raise a GitHub issue on this repository or directly on the module’s repository to get support or log feature requests.
You can also log a support ticket; these will be redirected to the AVM team and the module owner(s).
With Azure Verified Modules (AVM), as “One Microsoft”, we want to provide and define the single definition of what a good IaC module is:
How they should be constructed and built
Enforcing consistency and testing where possible
How they are to be consumed
What they deliver for consumers in terms of resources deployed and configured
And where appropriate aligned across IaC languages (e.g. Bicep, Terraform, etc.).
Mission Statement
Our mission is to deliver a comprehensive Azure Verified Modules library in multiple IaC languages, following the principles of the well-architected framework, serving as the trusted Microsoft source of truth. Supported by Microsoft, AVM will accelerate deployment time for Azure resources and architectural patterns, empowering every person and organization on the planet on their IaC journey.
Definition of “Verified” Summary
The modules are supported by Microsoft, across its many internal organizations, as described in Module Support
Modules are aligned to clear specifications that enforce consistency between all AVM modules. See the ‘Specifications & Definitions’ section in the menu
Modules will continue to stay up-to-date with product/service roadmaps owned by the module owners and contributors
Modules will provide clear documentation alongside examples to promote self-service consumption
Modules will be tested to ensure they comply with the specifications for AVM and their examples deploy as intended
Why Azure Verified Modules?
This effort to create Azure Verified Modules, with a strategy and definition, is required because of the sheer number of existing attempts from all areas across Microsoft to address this same need for our customers and partners. Across Microsoft there are many initiatives, projects and repositories that host and provide IaC modules in several languages, for example Bicep and Terraform. Each of these comes with differing code styling and standards, consumption methods and approaches, testing frameworks, target personas, contribution guidelines, module definitions and, most importantly, support statements from their owners and maintainers.
However, none of these existing attempts have ever made it all the way through to becoming a brand and the go-to place for IaC modules from Microsoft that consumers can trust (mainly around longevity and support), build upon and contribute back to.
Performing this effort now to create a shared, single, aligned strategy and definition for IaC modules from Microsoft, as One Microsoft, will allow us to accelerate existing and future projects, such as Application Landing Zone Accelerators (LZAs), as well as providing the building blocks, via a library of modules in the language of the consumer’s choice, that is consistent, trusted and supported by Microsoft. This all leads to consumers being able to accelerate faster, no matter what stage of their IaC journey they are on.
We also know, from our customers, that well-defined support statements from Microsoft are required for initiatives like this to succeed at scale, especially in larger enterprise customers. We have seen over the past FY that this topic alone is important, and it has led to confusion and frustration for customers consuming modules developed by individuals that, in the end, are not “officially” supported by Microsoft; this unfortunately tends to surface at a critical point in the project being worked on, which amplifies frustrations.
How will we create, support and enforce Azure Verified Modules?
Azure Verified Modules will achieve this, and its mission statement, by implementing and enforcing the following, driven by the AVM Core Team:
Publishing AVM modules to their respective public registries for consumption
While some pipelines can momentarily show as red, a new module version cannot be published without a successful test run. A failing test may indicate a recent platform change that breaks the module, or an intermittent error, such as a periodic test deployment attempting to create a resource with a name already taken in another Azure region.
This page contains various views of the module index (catalog) for Bicep Resource Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This page contains various views of the module index (catalog) for Bicep Pattern Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This page contains various views of the module index (catalog) for Bicep Utility Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This page contains various views of the module index (catalog) for Terraform Resource Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This page contains various views of the module index (catalog) for Terraform Pattern Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This page contains various views of the module index (catalog) for Terraform Utility Modules. To see these views, click on the expandable sections with the “➕” sign below.
To see the full, unfiltered, unformatted module index on GitHub, click here.
Modules listed below that aren’t shown with the status of Module Available 🟢 are currently in development and are not yet available for use. For proposed modules, see the Proposed modules section below.
Published modules - 🟢 & 🟡
➕ Published Modules - Module names, status and owners
This section is mainly intended for module owners and contributors as it contains information important for module development, such as telemetry ID prefix, and GitHub Teams for Owners & Contributors.
Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
➕ All Modules - Module name, Telemetry ID prefix, GitHub Teams for Owners & Contributors
This QuickStart guide offers step-by-step instructions for integrating Azure Verified Modules (AVM) into your solutions. It includes the initial setup, essential tools, and configurations required to deploy and manage your Azure resources efficiently using AVM.
The AVM Key Vault resource module, used as an example in this chapter, simplifies the deployment and management of Azure Key Vaults, ensuring secure storage and access to your secrets, keys, and certificates.
Leveraging Azure Verified Modules
Using AVM ensures that your infrastructure-as-code deployments follow Microsoft’s best practices and guidelines, providing a consistent and reliable foundation for your cloud solutions. AVM helps accelerate your development process, reduce the risk of misconfigurations, and enhance the security and compliance of your applications.
Using default values
The default values provided by AVM are generally safe, as they follow best practices and ensure a secure and reliable setup. However, it is important to review these values to ensure they meet your specific requirements and compliance needs. Customizing the default values may be necessary to align with your organization’s policies and the specific needs of your solution.
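As a rough illustration (the module path, version, and parameter names are taken from the Key Vault example used later in this guide), a module call that only sets the required inputs inherits the module's defaults, and an individual default can be overridden where your requirements differ:
// Illustrative only - module version and name placeholder as used later in this guide.
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment'
  params: {
    // Only the required parameter is set; everything else falls back to the module's
    // secure-by-default values. Override a default only where your requirements differ,
    // e.g. enablePurgeProtection: false for a disposable dev environment.
    name: '<keyVaultName>'
  }
}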
Exploring examples and module features
You can find examples and detailed documentation for each AVM module in their respective code repository’s README.MD file, which details features, input parameters, and outputs. The module’s documentation also provides comprehensive usage examples, covering various scenarios and configurations. Additionally, you can explore the module’s source code repository. This information will help you understand the full capabilities of the module and how to effectively integrate it into your solutions.
Subsections of Quickstart
Bicep Quickstart Guide
Introduction
This guide explains how to use an Azure Verified Module (AVM) in your Bicep workflow. By leveraging AVM modules, you can rapidly deploy and manage Azure infrastructure without having to write extensive code from scratch.
In this guide, you will deploy a Key Vault resource and a Personal Access Token as a secret.
This article is intended for a typical ‘infra-dev’ user (cloud infrastructure professional) who has a basic understanding of Azure and Bicep but is new to Azure Verified Modules and wants to learn how to deploy a module in the easiest way using AVM.
Make sure you have these tools set up before proceeding.
Module Discovery
Find your module
In this scenario, you need to deploy a Key Vault resource and some of its child resources, such as a secret. Let’s find the AVM module that will help us achieve this.
There are two primary ways for locating published Bicep Azure Verified Modules:
Option 1 (preferred): Using IntelliSense in the Bicep extension of Visual Studio Code, and Option 2: Using the AVM Bicep module index.
Start typing module, then give your module a symbolic name, such as myModule.
Use IntelliSense to select br/public.
The list of all AVM modules published in the Bicep Public Registry will show up. Use this to explore the published modules.
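For example, after selecting the Key Vault module (used later in this guide) and a version from the list, the declaration might look like the following sketch; the symbolic name and deployment name are illustrative:
// The path and version come from the IntelliSense list; symbolic and deployment names are illustrative.
module myModule 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'my-module-deployment'
  params: {
    name: '<keyVaultName>' // IntelliSense suggests the module's required and optional parameters here
  }
}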
Note
The Bicep VSCode extension reads metadata from this JSON file. All modules are added to this file as part of the publication process. It lists all the modules marked as Published or Orphaned on the AVM Bicep module index pages.
Select the module you want to use and the version you want to deploy. Note how you can type full or partial module names to filter the list.
Right click on the module’s path and select Go to definition or hit F12 to see the module’s source code. You can toggle between the Bicep and the JSON view.
Hover over the module’s symbolic name to view its documentation URL. By clicking on it, you will be directed to the module’s GitHub folder in the bicep-registry-modules (BRM) repository. There, you can access the source code and documentation, as illustrated below.
Option 2: Use the AVM Bicep Module Index
Searching the Azure Verified Module indexes is the most complete way to discover published as well as planned (proposed) modules. As shown in the video above, use the following steps to locate a specific module on the AVM website:
Expand the Module Indexes menu item and select the Bicep sub-menu item.
Select the menu item for the module type you are searching for: Resource, Pattern, or Utility.
Note
Since the Key Vault module used as an example in this guide is published as an AVM resource module, it can be found under the resource modules section in the AVM Bicep module index.
A detailed description of module classification types can be found under the related section here.
Select the Published modules link from the table of contents at the top of the page.
Use the in-page search feature of your browser. In most Windows browsers you can access it using the CTRL + F keyboard shortcut.
Enter a search term to find the module you are looking for - e.g., Key Vault.
Move through the search results until you locate the desired module. If you are unable to find a published module, return to the table of contents and expand the All modules link to search both published and proposed modules - i.e., modules that are planned, likely in development but not published yet.
In the module’s documentation, you can find detailed information about the module’s functionality, components, input parameters, outputs and more. The documentation also provides comprehensive usage examples, covering various scenarios and configurations.
Explore the Key Vault module’s documentation for usage examples and to understand its functionality, input parameters, and outputs.
Note the mandatory and optional parameters in the Parameters section.
Review the Usage examples section. AVM modules include multiple tests that can be found under the tests folder. These tests are also used as the basis of the usage examples ensuring they are always up-to-date and deployable.
In this example, you will deploy a secret in a new Key Vault instance with minimal input. AVM provides default parameter values with security and reliability being core principles. These settings apply the recommendations of the Well Architected Framework where possible and appropriate.
Note how Example 2 does most of what you need to achieve.
Create your new solution using AVM
In this section, you will develop a Bicep template that references the AVM Key Vault module and its child resources and features. These include a secret and role-based access control configuration that grants permissions to a user.
Start VSCode (make sure the Bicep extension is installed) and open a folder in which you want to work.
Create a main.bicep and a dev.bicepparam file, which will hold parameters for your Key Vault deployment.
Copy the content below into your main.bicep file. We have included comments to distinguish between the two different occurrences of the name attribute.
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment' // the name of the module's deployment
  params: {
    name: '<keyVaultName>' // the name of the Key Vault instance - length and character limits apply
  }
}
Note
For Azure Key Vaults, the name must be globally unique. When you deploy the Key Vault, ensure you select a name that is alphanumeric, twenty-four characters or less, and unique enough to ensure no one else has used the name for their Key Vault. If the name has been previously taken, you will get an error.
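If you prefer not to hand-pick a name, one common approach - shown here purely as a sketch, not something the module requires - is to derive it from a deterministic hash of the resource group ID:
// One possible approach (not required by AVM): derive a compliant, reasonably unique name.
// uniqueString() returns a 13-character hash, keeping the result well under the 24-character limit.
param keyVaultName string = 'kv${uniqueString(resourceGroup().id)}'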
After setting the values for the required properties, the module can be deployed. This minimal configuration automatically applies the security and reliability recommendations of the Well Architected Framework where possible and appropriate. These settings can be overridden if needed.
Bicep-specific configuration
It is recommended to create a bicepconfig.json file and enable the use-recent-module-versions rule, which warns you when a newer version of the AVM module is available.
// This is a Bicep configuration file. It can be used to control how Bicep operates and to customize
// validation settings for the Bicep linter. The linter relies on these settings when evaluating your
// Bicep files for best practices. For further information, please refer to the official documentation at:
// https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-config
{
  "analyzers": {
    "core": {
      "rules": {
        "use-recent-module-versions": {
          "level": "warning",
          "message": "The module version is outdated. Please consider updating to the latest version."
        }
      }
    }
  }
}
Define the Key Vault instance
In this scenario - and every other real-world setup - there is more that you need to configure. You can open the module’s documentation by hovering over its symbolic name to see all of the module’s capabilities - including supported parameters.
Note
The Bicep extension facilitates code-completion, enabling you to easily locate and utilize the Azure Verified Module. This feature also provides the necessary properties for a module, allowing you to begin typing and leverage IntelliSense for completion.
Add parameters and values to the main.bicep file to customize your configuration. These parameters are used for passing in the Key Vault name and enabling purge protection. You might not want to enable the latter in a non-production environment, as it makes it harder to delete and recreate resources.
The main.bicep file will now look like this:
// the scope, the deployment deploys resources to
targetScope = 'resourceGroup'

// parameters and default values
param keyVaultName string

@description('Disable for development deployments.')
param enablePurgeProtection bool = true

// the resources to deploy
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment'
  params: {
    name: keyVaultName
    enablePurgeProtection: enablePurgeProtection
    // more properties are not needed, as AVM provides default values
  }
}
Note that the Key Vault instance will be deployed within a resource group scope in our example.
Create a dev.bicepparam file (this is optional) and set parameter values for your environment. You can now pass these values by referencing this file at the time of deployment (using PowerShell or Azure CLI).
using 'main.bicep'

// environment specific values
param keyVaultName = '<keyVaultName>'
param enablePurgeProtection = false
Create a secret and set permissions
Add a secret to the Key Vault instance and grant permissions to a user to work with the secret. Sample role assignments can be found in Example 3: Using large parameter set. See Parameter: roleAssignments for a list of pre-defined roles that you can reference by name instead of a GUID. This is a key benefit of using AVM, as the code is easier to read and maintain.
You can also leverage User-defined data types and simplify the parameterization of the modules instead of guessing or looking up parameters. Therefore, first import UDTs from the Key Vault and common types module and leverage the UDTs in your Bicep and parameter files.
A role assignment needs the principal ID that will be granted a role (specified by its name) on the resource. You can find your own ID with az ad signed-in-user show --query id.
// the scope, the deployment deploys resources to
targetScope = 'resourceGroup'

// parameters and default values
param keyVaultName string

// the PAT token is a secret and should not be stored in the Bicep (parameter) file.
// It can be passed via the command line, if you don't use a parameter file.
@secure()
param patToken string = newGuid()

@description('Enabled by default. Disable for development deployments')
param enablePurgeProtection bool = true

import { roleAssignmentType } from 'br/public:avm/utl/types/avm-common-types:0.4.0'

// the role assignments are optional in the Key Vault module
param roleAssignments roleAssignmentType[]?

// the resources to deploy
module myKeyVault 'br/public:avm/res/key-vault/vault:0.11.0' = {
  name: 'key-vault-deployment'
  params: {
    name: keyVaultName
    enablePurgeProtection: enablePurgeProtection
    secrets: [
      {
        name: 'PAT'
        value: patToken
      }
    ]
    roleAssignments: roleAssignments
  }
}
The secrets parameter references a UDT (User-defined data type) that is part of the Key Vault module and enables code completion for easy usage. There is no need to look up what attributes the secret object might have. Start typing and tab-complete what you need from the content offered by the Bicep extension’s integration with AVM.
The bicep parameter file now looks like this:
// reference to the Bicep file to set the context
using 'main.bicep'

// environment specific values
param keyVaultName = '<keyVaultName>'
param enablePurgeProtection = false

// for security reasons, the secret value must not be stored in this file.
// You can change it later in the deployed Key Vault instance, where you also renew it after expiration.
param roleAssignments = [
  {
    principalId: '<principalId>'
    // using the name of the role instead of looking up the GUID (which can also be used)
    roleDefinitionIdOrName: 'Key Vault Secrets Officer'
  }
]
Note
The display names for roleDefinitionIdOrName can be acquired in the following two ways:
From the builtInRoleNames variable in the module’s source code. To get there, hit F12 while the cursor is on the part of the module path starting with br/public:.
From the module’s documentation - see the Parameter: roleAssignments section referenced above for the list of pre-defined role names.
Boost your development with VS Code IntelliSense
Leverage the IntelliSense feature in VS Code to speed up your development process. IntelliSense provides code completion, possible parameter values and structure. It helps you write code more efficiently by providing context-aware suggestions as you type.
Here is how quickly you can deliver the solution detailed in this section:
Deploy your solution
Now that your template and parameter file are ready, you can deploy your solution to Azure. Use PowerShell or the Azure CLI to deploy your solution.
Deploy with
# Log in to Azure
Connect-AzAccount

# Select your subscription
Set-AzContext -SubscriptionId '<subscriptionId>'

# Deploy a resource group
New-AzResourceGroup -Name 'avm-quickstart-rg' -Location 'germanywestcentral'

# Invoke your deployment
New-AzResourceGroupDeployment -DeploymentName 'avm-quickstart-deployment' -ResourceGroupName 'avm-quickstart-rg' -TemplateParameterFile 'dev.bicepparam' -TemplateFile 'main.bicep'
# Log in to Azure
az login

# Select your subscription
az account set --subscription '<subscriptionId>'

# Deploy a resource group
az group create --name 'avm-quickstart-rg' --location 'germanywestcentral'

# Invoke your deployment
az deployment group create --name 'avm-quickstart' --resource-group 'avm-quickstart-rg' --template-file 'main.bicep' --parameters 'dev.bicepparam'
Use the Azure portal, Azure PowerShell, or the Azure CLI to verify that the Key Vault instance and secret have been successfully created with the correct configuration.
Clean up your environment
When you are ready, you can remove the infrastructure deployed in this example. The following commands will remove all resources created by your deployment:
Clean up with
# Delete the resource group
Remove-AzResourceGroup -Name "avm-quickstart-rg" -Force

# Purge the Key Vault
Remove-AzKeyVault -VaultName "<keyVaultName>" -Location "germanywestcentral" -InRemovedState -Force
# Delete the resource group
az group delete --name 'avm-quickstart-rg' --yes --no-wait

# Purge the Key Vault
az keyvault purge --name '<keyVaultName>' --no-wait
Congratulations, you have successfully leveraged an AVM Bicep module to deploy resources in Azure!
Tip
We welcome your contributions and feedback to help us improve the AVM modules and the overall experience for the community!
This guide explains how to use an Azure Verified Module (AVM) in your Terraform workflow. With AVM modules, you can quickly deploy and manage Azure infrastructure without writing extensive code from scratch.
In this guide, you will deploy a Key Vault resource and generate and store a key.
This article is intended for a typical ‘infra-dev’ user (cloud infrastructure professional) who is new to Azure Verified Modules and wants to learn how to deploy a module in the easiest way using AVM. The user has a basic understanding of Azure and Terraform.
Before you begin, ensure you have these tools installed in your development environment.
Module Discovery
Find your module
In this scenario, you need to deploy a Key Vault resource and some of its child resources, such as a key. Let’s find the AVM module that will help us achieve this.
There are two primary ways for locating published Terraform Azure Verified Modules:
The easiest way to find published AVM Terraform modules is by searching the Terraform Registry. Follow these steps to locate a specific module, as shown in the video above.
In the search bar at the top of the screen type avm. Optionally, append additional search terms to narrow the search results. (e.g., avm key vault for AVM modules with Key Vault in the name.)
Select see all to display the full list of published modules matching your search criteria.
Find the module you wish to use and select it from the search results.
Note
It is possible to discover other unofficial modules with avm in the name using this search method. Look for the Partner tag in the module title to determine if the module is part of the official set.
Use the AVM Terraform Module Index
Searching the Azure Verified Module indexes is the most complete way to discover published as well as planned modules - shown as proposed. As presented in the video above, use the following steps to locate a specific module on the AVM website:
Expand the Module Indexes menu item and select the Terraform sub-menu item.
Select the menu item for the module type you are searching for: Resource, Pattern, or Utility.
Note
Since the Key Vault module used as an example in this guide is published as an AVM resource module, it can be found under the resource modules section in the AVM Terraform module index.
A detailed description of each module classification type can be found under the related section here.
Select the Published modules link from the table of contents at the top of the page.
Use the in-page search feature of your browser (in most Windows browsers you can access it using the CTRL + F keyboard shortcut).
Enter a search term to find the module you are looking for - e.g., Key Vault.
Move through the search results until you locate the desired module. If you are unable to find a published module, return to the table of contents and expand the All modules link to search both published and proposed modules - i.e., modules that are planned, likely in development but not published yet.
After finding the desired module, click on the module’s name. This link will lead you to the official Hashicorp Terraform Registry page for the module where you can find the module’s documentation and examples.
Module details and examples
Once you have identified the AVM module in the Terraform Registry, you can find detailed information about the module’s functionality, components, input parameters, outputs and more. The documentation also provides comprehensive usage examples, covering various scenarios and configurations.
Explore the Key Vault module’s documentation and usage examples to understand its functionality, input variables, and outputs.
Note the Examples drop-down list and explore each example.
Review the Readme tab to see module provider minimums, a list of resources and data sources used by the module, a nicely formatted version of the inputs and outputs, and a reference to any submodules that may be called.
Explore the Inputs tab and observe how each input has a detailed description and a type definition for you to use when adding input values to your module configuration.
Explore the Outputs tab and review each of the outputs that are exported by the AVM module for use by other modules in your deployment.
Finally, review the Resources tab to get a better understanding of the resources defined in the module.
In this example, you will deploy a key in a new Key Vault instance without needing to provide other parameters. The AVM Key Vault resource module provides these capabilities with security and reliability as core principles. The default settings of the module also apply the recommendations of the Well Architected Framework where possible and appropriate.
Note how the create-key example seems to do what you need to achieve.
Create your new solution using AVM
Now that you have found the module details, you can use the content from the Terraform Registry to speed up your development in the following ways:
Option 1: Create a solution using AVM module examples: duplicate a module example and edit it for your needs. This is useful if you are starting without any existing infrastructure and need to create supporting resources like resource groups as part of your deployment.
Option 2: Create a solution by changing the AVM module input values: add the AVM module to an existing solution that already includes other resources. This method requires some knowledge of the resource(s) being deployed so that you can make choices about optional features configured in your solution’s version of the module.
Each deployment method includes a section below so that you can choose the method which best fits your needs.
Note
For Azure Key Vaults, the name must be globally unique. When you deploy the Key Vault, ensure you select a name that is alphanumeric, twenty-four characters or less, and unique enough to ensure no one else has used the name for their Key Vault. If the name has been used previously, you will get an error.
Option 1: Create a solution using AVM module examples
Use the following steps as a template for bootstrapping your new solution code with module examples. The Key Vault resource module is used here as an example, but in practice you may choose any module that applies to your scenario.
Locate and select the Examples drop down menu in the middle of the Key Vault module page.
From the drop-down list select an example whose name most closely aligns with your scenario - e.g., create-key.
When the example page loads, read the example description to determine if this is the desired example. If it is not, return to the module main page, and select a different example until you are satisfied that the example covers the scenario you are trying to deploy. If you are unable to find a suitable example, leverage the last two steps in the option 2 instructions to modify the inputs of the selected example to match your requirements.
Scroll to the code block for the example and select the Copy button on the top right of the block to copy the content to the clipboard.
➕ Click here to copy the sample code from the video.
provider "azurerm" {
  features {}
}

terraform {
  required_version = "~> 1.9"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.71"
    }
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}

module "regions" {
  source  = "Azure/avm-utl-regions/azurerm"
  version = "0.1.0"
}

# This allows us to randomize the region for the resource group.
resource "random_integer" "region_index" {
  max = length(module.regions.regions) - 1
  min = 0
}

# This ensures you have unique CAF compliant names for our resources.
module "naming" {
  source  = "Azure/naming/azurerm"
  version = "0.3.0"
}

resource "azurerm_resource_group" "this" {
  location = module.regions.regions[random_integer.region_index.result].name
  name     = module.naming.resource_group.name_unique
}

# Get current IP address for use in KV firewall rules
data "http" "ip" {
  url = "https://api.ipify.org/"
  retry {
    attempts     = 5
    max_delay_ms = 1000
    min_delay_ms = 500
  }
}

data "azurerm_client_config" "current" {}

module "key_vault" {
  source                        = "Azure/avm-res-keyvault-vault/azurerm"
  name                          = module.naming.key_vault.name_unique
  location                      = azurerm_resource_group.this.location
  enable_telemetry              = var.enable_telemetry
  resource_group_name           = azurerm_resource_group.this.name
  tenant_id                     = data.azurerm_client_config.current.tenant_id
  public_network_access_enabled = true
  keys = {
    cmk_for_storage_account = {
      key_opts = [
        "decrypt",
        "encrypt",
        "sign",
        "unwrapKey",
        "verify",
        "wrapKey"
      ]
      key_type = "RSA"
      name     = "cmk-for-storage-account"
      key_size = 2048
    }
  }
  role_assignments = {
    deployment_user_kv_admin = {
      role_definition_id_or_name = "Key Vault Administrator"
      principal_id               = data.azurerm_client_config.current.object_id
    }
  }
  wait_for_rbac_before_key_operations = {
    create = "60s"
  }
  network_acls = {
    bypass   = "AzureServices"
    ip_rules = ["${data.http.ip.response_body}/32"]
  }
}
In your IDE - Visual Studio Code in our example - create the main.tf file for your new solution.
Paste the content from the clipboard into main.tf.
AVM examples frequently use naming and/or region selection AVM utility modules to generate deployment region and/or naming values as well as any default values for required fields. If you want to use a specific region name or other custom resource values, remove the existing region and naming module calls and replace example input values with the new desired custom input values.
Once supporting resources such as resource groups have been modified, locate the module call for the AVM module - i.e., module "keyvault".
AVM module examples use dot notation for a relative reference that is useful during module testing. However, you will need to replace the relative reference with a source reference that points to the Terraform Registry source location. In most cases, this source reference has been left as a comment in the module example to simplify replacing the existing source dot reference. Perform the following two actions to update the source:
Delete the existing source definition that uses a dot reference - i.e., source = "../../".
Uncomment the Terraform Registry source reference by deleting the # sign at the start of the commented source line - i.e., source = "Azure/avm-res-keyvault-vault/azurerm".
Note
If the module example does not include a commented Terraform Registry source reference, you will need to copy it from the module’s main documentation page. Use the following steps to do so:
Use the breadcrumbs to leave the example documentation and return to the module’s primary Terraform Registry documentation page.
Locate the Provision Instructions box on the right side of the module’s Terraform Registry page in your web browser.
Select the second line that starts with source = from the code block - e.g., source = "Azure/avm-res-keyvault-vault/azurerm". Copy it onto the clipboard.
Return to your code solution and Paste the clipboard’s content where you previously deleted the source dot reference - e.g., source = "../../".
AVM module examples use a variable to enable or disable telemetry collection. Update the enable_telemetry input value to true or false - e.g., enable_telemetry = true.
Save your main.tf file changes and then proceed to the guide section for running your solution code.
Option 2: Create a solution by changing the AVM module input values
Click here to copy the sample code from the video.
Use the following steps as a guide for the custom implementation of an AVM Module in your solution code. This instruction path assumes that you have an existing Terraform file that you want to add the AVM module to.
Locate the Provision Instructions box on the right side of the module’s Terraform Registry page in your web browser.
Select the module template code from the code block and Copy it onto the clipboard.
Switch to your IDE and Paste the contents of the clipboard into your solution’s .tf Terraform file - main.tf in our example.
Return to the module’s Terraform Registry page in the browser and select the Inputs tab.
Review each input and add the inputs with the desired target value to the solution’s code - i.e., name = "custom_name".
Once you are satisfied that you have included all required inputs and any optional inputs, Save your file and continue to the next section.
Deploy your solution
After completing your solution development, you can move to the deployment stage. Follow these steps for a basic Terraform workflow:
Open the command line and log in to Azure using the Azure CLI.
az login
If your account has access to multiple tenants, you may need to modify the command to az login --tenant <tenant id> where “<tenant id>” is the guid for the target tenant.
After logging in, select the target subscription from the list of subscriptions that you have access to.
Change the path to the directory where your completed terraform solution files reside.
Note
Many AVM modules depend on the AzureRM 4.0 Terraform provider, which mandates that a subscription ID is configured. If you receive an error indicating that subscription_id is a required provider property, you will need to set a subscription ID value for the provider. On Unix-based systems (Linux or macOS) you can configure this by running export ARM_SUBSCRIPTION_ID=<your subscription guid> on the command line. On Microsoft Windows, you can perform the same operation by running set ARM_SUBSCRIPTION_ID="<your subscription guid>" from the Windows command prompt or by running $env:ARM_SUBSCRIPTION_ID="<your subscription guid>" from a PowerShell prompt. Replace the "<your subscription guid>" notation in each command with your Azure subscription's unique ID value.
Initialize your Terraform project. This command downloads the necessary providers and modules to the working directory.
terraform init
Before applying the configuration, it is good practice to validate it to ensure there are no syntax errors.
terraform validate
Create a deployment plan. This step shows what actions Terraform will take to reach the desired state defined in your configuration.
terraform plan
Review the plan to ensure that only the desired actions are in the plan output.
Apply the configuration and create the resources defined in your configuration file. This command will prompt you to confirm the deployment prior to making changes. Type yes to create your solution’s infrastructure.
terraform apply
Info
If you are confident in your changes, you can add the -auto-approve switch to bypass manual approval: terraform apply -auto-approve
Once the deployment completes, validate that the infrastructure is configured as desired.
Info
A local terraform.tfstate file and a state backup file have been created during the deployment. The use of local state is acceptable for small temporary configurations, but production or long-lived installations should use a remote state configuration where possible. Configuring remote state is out of scope for this guide, but you can find details on using an Azure storage account for this purpose in the Microsoft Learn documentation.
Clean up your environment
When you are ready, you can remove the infrastructure deployed in this example. Use the following command to delete all resources created by your deployment:
terraform destroy
Note
Most Key Vault deployment examples activate soft-delete functionality as a default. The terraform destroy command will remove the Key Vault resource but does not purge a soft-deleted vault. You may encounter errors if you attempt to re-deploy a Key Vault with the same name during the soft-delete retention window. If you wish to purge the soft-delete for this example you can run az keyvault purge -n <keyVaultName> -l <regionName> using the Azure CLI, or Remove-AzKeyVault -VaultName "<keyVaultName>" -Location "<regionName>" -InRemovedState using Azure PowerShell.
Congratulations, you have successfully leveraged Terraform and AVM to deploy resources in Azure!
Tip
We welcome your contributions and feedback to help us improve the AVM modules and the overall experience for the community!
The “Module Specifications” section uses tags to dynamically render content based on the selected attributes, such as the IaC language, module classification, category, severity and more. The tags are defined in the hidden header of each specification page.
To make it easier for module owners and contributors to navigate the documentation, the specifications are grouped into distinct pages by the IaC language (Bicep | Terraform) and module classification (resource | pattern | utility). The specifications on each page are further ordered by the category (e.g., Composition, CodeStyle, Testing, etc.), the severity of the requirements (MUST | SHOULD | MAY) and the stage of the module’s lifecycle at which the specification is typically applicable (Initial | BAU | EOL).
To find what you need, simply decide which IaC language you’d like to develop in and what classification your module falls under. Then, navigate to the respective page to find the specifications that are relevant to you.
Specification Tags
The following tags are used to qualify the specifications:
Each tag is a concatenation of exactly one of the keys and one of the values, e.g., Language-Bicep, Class-Resource, Type-Functional, etc. When it’s marked as Multiple, it means that the tag can have multiple values, e.g., Language-Bicep, Language-Terraform, or Persona-Owner, Persona-Contributor, etc. When it’s marked as Single, it means that the tag can have only one value, e.g., Type-Functional, Lifecycle-Initial, etc.
➕ Click here to see the definition of the Severity, Persona, Lifecycle and Validation tags...
Persona
Who is this specification for? The Owner is the module owner, while the Contributor is anyone who contributes to the module.
Lifecycle
When is this specification mostly relevant?
The Initial stage is when the module is being developed first - e.g., naming related specs are labeled with Lifecycle-Initial as the naming of the module only happens once: at the beginning of their life.
The BAU (business as usual) stage is at any time during the module’s typical lifecycle - e.g., specs that describe coding standards are relevant throughout the module’s life, for any time a new module version is released.
The EOL stage is when the module is being decommissioned - e.g., specs describing how a module should be retired are labeled with Lifecycle-EOL.
Validation
How is this specification checked/validated/enforced?
Manual means that the specification is manually enforced at the time of the module review (at the time of the first or any subsequent module version release).
CI/Informational means that the module is checked against the specification by a CI pipeline, but the failure is only informational and doesn’t block the module release.
CI/Enforced means that the specification is automatically enforced by a CI pipeline, and the failure blocks the module release.
Note: the BCP/ or TF/ prefix is required as shared (language-agnostic) specifications may have a different level of validation/enforcement per language - e.g., it is possible that a specification is enforced by a CI pipeline for Bicep modules, while it is manually enforced for Terraform modules.
Why are there language specific specifications?
While every effort is being made to standardize requirements and implementation details across all languages (and most specifications in fact, are applicable to all), it is expected that some of the specifications will be different between their respective languages to ensure we follow the best practices and leverage features of each language.
How to read the specifications?
Important
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.
As you’re developing/maintaining a module as a module owner or contributor, you need to ensure that your module adheres to the specifications outlined in this section. The specifications are designed to ensure that all AVM modules are consistent, secure, and compliant with best practices.
There are 3 levels of specifications:
MUST: These are mandatory requirements that MUST be followed.
SHOULD: These are recommended requirements that SHOULD be followed, unless there are good reasons not to.
MAY: These are optional requirements that MAY be followed at the module owner’s/contributor’s discretion.
Subsections of Module Specifications
Bicep Specifications
Specifications by Category and Module Classification
This chapter details the interfaces/schemas for the AVM Resource Modules features/extension resources as referenced in RMFR4 and RMFR5.
Diagnostic Settings
Important
Allowed values for logs and metric categories or category groups MUST NOT be specified to keep the module implementation evergreen for any new categories or category groups added by RPs, without module owners having to update a list of allowed values and cut a new release of their module.
In the provided example for Diagnostic Settings, both logs and metrics are enabled for the associated resource. However, it is IMPORTANT to note that certain resources may not support both diagnostic setting types/categories. In such cases, the resource configuration MUST be modified accordingly to ensure proper functionality and compliance with system requirements.
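As a rough sketch of how this is typically wired in a module (the full interface schema is the authoritative reference; the shared type name from the avm-common-types utility module is assumed here), the parameter is declared without any allowed-value restriction on categories:
// Sketch only - the full interface schema is authoritative; the shared type from the
// avm-common-types utility module is assumed here.
import { diagnosticSettingFullType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Optional. The diagnostic settings of the service.')
param diagnosticSettings diagnosticSettingFullType[]?

// Note: no @allowed() decorator is applied to log/metric categories or category groups,
// which keeps the module evergreen as resource providers add new categories.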
Role Assignments
// ============== //
//   Parameters   //
// ============== //

import { roleAssignmentType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Optional. Array of role assignments to create.')
param roleAssignments roleAssignmentType[]?

// ============= //
//   Variables   //
// ============= //

var builtInRoleNames = {
  // Add other relevant built-in roles here for your resource as per BCPNFR5
  Contributor: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')
  Owner: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '8e3af657-a8ff-443c-a75c-2fe8c4bcb635')
  Reader: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  'Role Based Access Control Administrator (Preview)': subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'f58310d9-a9f6-439a-9e8d-f62e7b41a168')
  'User Access Administrator': subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '18d7d88d-d35e-4fb5-a5c3-7773c20a72d9')
}

var formattedRoleAssignments = [
  for (roleAssignment, index) in (roleAssignments ?? []): union(roleAssignment, {
    roleDefinitionId: builtInRoleNames[?roleAssignment.roleDefinitionIdOrName] ?? (contains(roleAssignment.roleDefinitionIdOrName, '/providers/Microsoft.Authorization/roleDefinitions/')
      ? roleAssignment.roleDefinitionIdOrName
      : subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleAssignment.roleDefinitionIdOrName))
  })
]

// ============= //
//   Resources   //
// ============= //

resource >singularMainResourceType<_roleAssignments 'Microsoft.Authorization/roleAssignments@2022-04-01' = [
  for (roleAssignment, index) in (formattedRoleAssignments ?? []): {
    name: roleAssignment.?name ?? guid(>singularMainResourceType<.id, roleAssignment.principalId, roleAssignment.roleDefinitionId)
    properties: {
      roleDefinitionId: roleAssignment.roleDefinitionId
      principalId: roleAssignment.principalId
      description: roleAssignment.?description
      principalType: roleAssignment.?principalType
      condition: roleAssignment.?condition
      conditionVersion: !empty(roleAssignment.?condition) ? (roleAssignment.?conditionVersion ?? '2.0') : null // Must only be set if condition is set
      delegatedManagedIdentityResourceId: roleAssignment.?delegatedManagedIdentityResourceId
    }
    scope: >singularMainResourceType<
  }
]
Details on child, extension and cross-referenced resources:
Modules MUST support Role Assignments on child, extension and cross-referenced resources as well as the primary resource via parameters/variables
Resource Locks
// ============== //
//   Parameters   //
// ============== //

import { lockType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Optional. The lock settings of the service.')
param lock lockType?

// ============= //
//   Resources   //
// ============= //

resource >singularMainResourceType<_lock 'Microsoft.Authorization/locks@2020-05-01' = if (!empty(lock ?? {}) && lock.?kind != 'None') {
  name: lock.?name ?? 'lock-${name}'
  properties: {
    level: lock.?kind ?? ''
    notes: lock.?kind == 'CanNotDelete' ? 'Cannot delete resource or child resources.' : 'Cannot delete or modify the resource or child resources.'
  }
  scope: >singularMainResourceType<
}
lock: 'CanNotDelete'
Details on child and extension resources:
Locks SHOULD be able to be set for child resources of the primary resource in resource modules
Details on cross-referenced resources:
Locks MUST be automatically applied to cross-referenced resources if the primary resource has a lock applied.
This MUST also be able to be turned off for each of the cross-referenced resources by the module consumer via a parameter/variable if they desire
An example of this is a Key Vault module that has a Private Endpoints enabled. If a lock is applied to the Key Vault via the lock parameter/variable then the lock should also be applied to the Private Endpoint automatically, unless the privateEndpointLock/private_endpoint_lock (example name) parameter/variable is set to None
Tags
@description('Optional. Tags of the resource.')
param tags object?
Details on child, extension and cross-referenced resources:
Tags MUST be automatically applied to child, extension and cross-referenced resources, if tags are applied to the primary resource.
By default, all tags set for the primary resource will automatically be passed down to child, extension and cross-referenced resources.
This MUST be able to be overridden by the module consumer so they can specify alternate tags for child, extension and cross-referenced resources, if they desire via a parameter/variable
If overridden by the module consumer, no merge/union of tags will take place from the primary resource and only the tags specified for the child, extension and cross-referenced resources will be applied
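As a minimal sketch of the inherit-or-override behaviour described above (the child module path, the children parameter, and its shape are illustrative only, not part of the AVM interface):
@description('Optional. Tags of the resource.')
param tags object?

@description('Optional. Child resource definitions (shape is illustrative).')
param children array?

// Each child/extension/cross-referenced resource uses the consumer-supplied override when
// present; otherwise it inherits the primary resource's tags. No merge/union takes place.
module childResource 'child/main.bicep' = [
  for (child, index) in (children ?? []): {
    name: 'child-${index}'
    params: {
      name: child.name
      tags: child.?tags ?? tags
    }
  }
]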
Managed Identities
// ============== //
//   Parameters   //
// ============== //

import { managedIdentityAllType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Optional. The managed identity definition for this resource.')
param managedIdentities managedIdentityAllType?

// ============= //
//   Variables   //
// ============= //

var formattedUserAssignedIdentities = reduce(map((managedIdentities.?userAssignedResourceIds ?? []), (id) => { '${id}': {} }), {}, (cur, next) => union(cur, next)) // Converts the flat array to an object like { '${id1}': {}, '${id2}': {} }

var identity = !empty(managedIdentities) ? {
  type: (managedIdentities.?systemAssigned ?? false) ? (!empty(managedIdentities.?userAssignedResourceIds ?? {}) ? 'SystemAssigned,UserAssigned' : 'SystemAssigned') : (!empty(managedIdentities.?userAssignedResourceIds ?? {}) ? 'UserAssigned' : null)
  userAssignedIdentities: !empty(formattedUserAssignedIdentities) ? formattedUserAssignedIdentities : null
} : null

// ============= //
//   Resources   //
// ============= //

resource >singularMainResourceType< '>providerNamespace</>resourceType<@>apiVersion<' = {
  name: name
  identity: identity
  properties: {
    ... // other properties
  }
}

// =========== //
//   Outputs   //
// =========== //

@description('The principal ID of the system assigned identity.')
output systemAssignedMIPrincipalId string? = >singularMainResourceType<.?identity.?principalId
Reason for differences in User Assigned data type in languages:
We do not foresee the Managed Identity Resource Provider team ever adding additional properties within the empty object ({}) value required on the input of a User Assigned Managed Identity.
In Bicep, we have therefore removed the need for this to be declared and just converted it to a simple array of Resource IDs.
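For illustration, a consumer could then pass the managed identities as a simple input like the following sketch; the module path, version placeholder, and resource IDs are placeholders only:
// Illustrative consumption only - module path, version placeholder and resource IDs are placeholders.
module myResource 'br/public:avm/res/<provider>/<resource>:<version>' = {
  name: 'my-resource-deployment'
  params: {
    name: '<resourceName>'
    managedIdentities: {
      systemAssigned: true
      userAssignedResourceIds: [
        '<userAssignedManagedIdentityResourceId>'
      ]
    }
  }
}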
Private Endpoints
Private Endpoints
E.g., for services that only have one private endpoint type.
The properties defined in the schema above are the minimum amount of properties expected to be exposed for Private Endpoints in AVM Resource Modules.
A module owner MAY choose to expose additional properties of the Private Endpoint resource
However, module owners considering this SHOULD contact the AVM core team first to consult on how the property should be exposed to avoid future breaking changes to the schema that may be enforced upon them
Module owners MAY choose to define a list of allowed values for the ‘service’ (a.k.a. groupIds) property
However, they should do so with caution as should a new service appear for their resource module, a new release will need to be cut to add this new service to the allowed values
Whereas not specifying allowed values will allow flexibility from day 0 without the need for any changes and releases to be made
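As a rough sketch of the typical parameter declaration (the referenced schema is authoritative; the shared single-service type name from the avm-common-types utility module is assumed here):
// Sketch only - the referenced schema is authoritative; the shared single-service type
// from the avm-common-types utility module is assumed here.
import { privateEndpointSingleServiceType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Optional. Configuration details for private endpoints.')
param privateEndpoints privateEndpointSingleServiceType[]?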
Secrets used inside a module can be exported to a Key Vault reference provided as per the below schema. This implementation provides a secure way around the current limitation of Bicep on providing a secure template output (that can be used for secrets).
The user MUST
provide the resource Id to a Key Vault. The principal used for the deployment MUST be allowed to set secrets in this Key Vault.
provide a name for each secret they want to store (opt-in). The module will suggest which secrets are available via the implemented user-defined type.
The module returns an output table where the key is the name of the secret the user provided, and the value contains both the secret’s resource Id and URI.
Important
The feature MUST be implemented as per the below schema. Diversions are only allowed in places marked as >text< to ensure a consistent user experience across modules.
User Defined Type, Parameter & Resource Example
// ============== //
// Parameters     //
// ============== //
@description('Optional. Key vault reference and secret settings for the module\'s secrets export.')
param secretsExportConfiguration secretsExportConfigurationType?

// ============= //
// Resources     //
// ============= //
module secretsExport 'modules/keyVaultExport.bicep' = if (secretsExportConfiguration != null) {
  name: '${uniqueString(deployment().name, location)}-secrets-kv'
  scope: resourceGroup(
    split((secretsExportConfiguration.?keyVaultResourceId ?? '//'), '/')[2],
    split((secretsExportConfiguration.?keyVaultResourceId ?? '////'), '/')[4]
  )
  params: {
    keyVaultName: last(split(secretsExportConfiguration.?keyVaultResourceId ?? '//', '/'))
    secretsToSet: union(
      [],
      contains(secretsExportConfiguration!, '>secretToExport1<Name')
        ? [
            {
              name: secretsExportConfiguration!.?>secretToExport1<Name
              value: >secretReference1< // e.g., >singularMainResourceType<.listKeys().primaryMasterKey
            }
          ]
        : [],
      contains(secretsExportConfiguration!, '>secretToExport2<Name')
        ? [
            {
              name: secretsExportConfiguration!.?>secretToExport2<Name
              value: >secretReference2< // e.g., >singularMainResourceType<.listKeys().secondaryMasterKey
            }
          ]
        : []
      // (...)
    )
  }
}

// =========== //
// Outputs     //
// =========== //
import { secretsOutputType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('A hashtable of references to the secrets exported to the provided Key Vault. The key of each reference is each secret\'s name.')
output exportedSecrets secretsOutputType = (secretsExportConfiguration != null)
  ? toObject(secretsExport.outputs.secretsSet, secret => last(split(secret.secretResourceId, '/')), secret => secret)
  : {}

// =============== //
// Definitions     //
// =============== //
@export()
type secretsExportConfigurationType = {
  @description('Required. The resource ID of the key vault where to store the secrets of this module.')
  keyVaultResourceId: string

  @description('Optional. The >secretToExport1< secret name to create.')
  >secretToExport1<Name: string?

  @description('Optional. The >secretToExport2< secret name to create.')
  >secretToExport2<Name: string?

  // (...)
}
Input Example with Values
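A hedged illustration of what a consumer's input could look like (the module reference, resource ID and secret names are hypothetical):

```bicep
module myService 'br/public:avm/res/>providerNamespace</>resourceType<:>version<' = {
  name: 'myServiceDeployment'
  params: {
    // (...) other parameters
    secretsExportConfiguration: {
      keyVaultResourceId: '/subscriptions/<subscriptionId>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-kv'
      >secretToExport1<Name: 'myPrimaryKeySecret'
      >secretToExport2<Name: 'mySecondaryKeySecret'
    }
  }
}
```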
[modules/keyVaultExport.bicep] file
// ============== //
// Parameters     //
// ============== //
@description('Required. The name of the Key Vault to set the secrets in.')
param keyVaultName string

import { secretToSetType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('Required. The secrets to set in the Key Vault.')
param secretsToSet secretToSetType[]

// ============= //
// Resources     //
// ============= //
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: keyVaultName
}

resource secrets 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = [
  for secret in secretsToSet: {
    name: secret.name
    parent: keyVault
    properties: {
      value: secret.value
    }
  }
]

// =========== //
// Outputs     //
// =========== //
import { secretSetOutputType } from 'br/public:avm/utl/types/avm-common-types:>version<'

@description('The references to the secrets exported to the provided Key Vault.')
output secretsSet secretSetOutputType[] = [
  #disable-next-line outputs-should-not-contain-secrets // Only returning the references, not a secret value
  for index in range(0, length(secretsToSet ?? [])): {
    secretResourceId: secrets[index].id
    secretUri: secrets[index].properties.secretUri
    secretUriWithVersion: secrets[index].properties.secretUriWithVersion
  }
]
Output Usage Example
When using a module that implements the above interface, you can access its outputs for example in the following ways:
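For instance, a hedged sketch of handing an exported secret's URI on to a dependent module (module and secret names are hypothetical):

```bicep
module dependentWorkload 'dependent.bicep' = {
  name: 'dependentWorkloadDeployment'
  params: {
    // the exportedSecrets output is keyed by the secret name the consumer provided
    primaryKeySecretUri: myService.outputs.exportedSecrets['myPrimaryKeySecret'].secretUri
  }
}
```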
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST only assign GitHub repository permissions to GitHub teams.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren't Microsoft employees) can't be members of the teams described in this chapter; hence, they won't gain any extra permissions on AVM repositories and therefore need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Bicep modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
GitHub Team Name: <hyphenated module name>-module-owners-bicep
Description: AVM Bicep Module Owners - <module name>
Permissions: Write
Permissions granted through: Assignment to the avm-technical-reviewers-bicep parent team.
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners-team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
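Combining the three segments listed above, the entry would be:

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```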
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
GitHub Team Name: <module name>-module-owners-tf
Description: AVM Terraform Module Owners - <module name>
Permissions: Admin
Permissions granted through: Direct assignment to repo
Where to work? Module owner can decide whether they want to work in a branch local to the repo or in a fork.
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module that is used in an AVM Pattern Module was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, test, and, if the issue is not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM module owners having to maintain multiple major release versions.
```shell
# Linux / MacOS
# For Windows replace $PWD with the local path of your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
Invoke-WebRequest -Uri "https://azure.github.io/Azure-Verified-Modules/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v$($gh_version)/gh_$($gh_version)_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
<#
.SYNOPSIS This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
.DESCRIPTION This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.
The AVM labels to be created are documented here: TBC
.NOTES Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.
.COMPONENT You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.
.LINK TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
  throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green
# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubCliAuthenticated -ForegroundColor Red
  throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green
# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
  throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}
# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubRepository -ForegroundColor Red
  throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
$GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
$RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
if ($RemoveExistingLabelsConfirmation -eq "Y") {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($null -eq $GitHubRepositoryLabels) {
Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
  throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green

# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv

# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
  throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green

# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consitent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPost = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPost" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPost -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw"The label $($_.name) exists in $RepositoryName but is not in the CSV file." }
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the AVM Module Issue template file in the BRM repository (here).
Note
Through this approach, the AVM core team will allow raising a bug or feature request for a module, only after the module gets merged to the BRM repository.
The module name entry MUST be added to the dropdown list with id module-name-dropdown as an option, in alphabetical order.
Important
Module owners MUST ensure that the module name is added in alphabetical order, to simplify selecting the right module name when raising an AVM module issue.
Example - AVM Module Issue template module name entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
- type: dropdown
  id: module-name-dropdown
  attributes:
    label: Module Name
    description: Which existing AVM module is this issue related to?
    options:
      ...
      - "avm/res/network/virtual-network"
      ...
Telemetry
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry, as detailed further in Telemetry.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Bicep
The ARM deployment name used for the telemetry MUST follow the pattern below and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module's, possibly shortened, name including the resource provider and the resource type, without:
The prefixes: avm-res-
The prefixes: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
The telemetry enablement MUST be on/enabled by default; however, a module consumer MUST be able to disable it by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
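A hedged sketch of this pass-through from a pattern module to a referenced resource module (the module reference and other parameters are illustrative):

```bicep
module keyVault 'br/public:avm/res/key-vault/vault:>version<' = {
  name: 'keyVaultDeployment'
  params: {
    name: keyVaultName
    // forward the pattern module's own telemetry switch to the referenced AVM module
    enableTelemetry: enableTelemetry
  }
}
```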
To comply with specifications outlined in SFR3 & SFR4 you MUST incorporate the following code snippet into your modules. Place this code sample in the “top level” main.bicep file; it is not necessary to include it in any nested Bicep files (child modules).
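The authoritative snippet is maintained in the Bicep contribution guide; the following is only a hedged sketch of its general shape (the API version, the module-name placeholder and the version-token handling are assumptions):

```bicep
@description('Optional. Enable/Disable usage telemetry for module.')
param enableTelemetry bool = true

#disable-next-line no-deployments-resources
resource avmTelemetry 'Microsoft.Resources/deployments@2024-03-01' = if (enableTelemetry) {
  // '-..--..-' is assumed to be a token substituted with the module version at publish time
  name: '46d3xbcp.res.>moduleName<.${replace('-..--..-', '.', '-')}.${substring(uniqueString(deployment().name, location), 0, 4)}'
  properties: {
    mode: 'Incremental'
    template: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      resources: []
      outputs: {
        telemetry: {
          type: 'String'
          value: 'For more information, see https://aka.ms/avm/TelemetryInfo'
        }
      }
    }
  }
}
```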
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed unless they are supported by the respective PG and publicly documented.
However, they MAY be exposed at the module owner's discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer's requirements.
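For the Storage Account case, a minimal sketch of such a default could look like the following (the parameter name and placeholders are assumptions):

```bicep
@description('Optional. The Storage Account SKU. Defaults to the highest available data redundancy.')
param skuName string = 'Standard_RAGZRS'

resource storageAccount 'Microsoft.Storage/storageAccounts@(...)' = {
  name: name
  location: location
  kind: 'StorageV2'
  sku: {
    name: skuName
  }
}
```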
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource; therefore, the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
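A hedged fragment of how such a default name could be composed from the primary resource's name while remaining overridable (names illustrative):

```bicep
// default the Private Endpoint's name to the CAF prefix plus the primary resource's name,
// unless the consumer provides an explicit name
name: privateEndpoint.?name ?? 'pep-${name}'
```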
Example: avm/ptn/compute/app-tier-vmss or avm/ptn/avd-lza/management-plane or avm/ptn/3-tier/web-app
Segments:
ptn defines this as a pattern module
<hyphenated grouping/category name> is a hierarchical grouping of pattern modules by category, with each word separated by dashes, such as:
project name, e.g., avd-lza,
primary resource provider, e.g., compute or network, or
architecture, e.g., 3-tier
<hyphenated pattern module name> is a term describing the module's function, with each word separated by dashes, e.g., app-tier-vmss = Application Tier VMSS; management-plane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
Terraform Pattern Module Naming
Naming convention:
avm-ptn-<pattern module name> (Module name for registry)
terraform-<provider>-avm-ptn-<pattern module name> (GitHub repository name to meet registry naming requirements)
Example: avm-ptn-apptiervmss or avm-ptn-avd-lza-managementplane
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
ptn defines this as a pattern module
<pattern module name> is a term describing the module's function, e.g., apptiervmss = Application Tier VMSS; avd-lza-managementplane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
ID: PMNFR2 - Category: Composition - Use Resource Modules to Build a Pattern Module
A Pattern Module SHOULD be built from AVM Resource Modules to establish a standardized code base and improve maintainability. If a valid reason exists, a pattern module MAY contain native resources ("vanilla" code) where it's necessary. A Pattern Module MUST NOT contain references to non-AVM modules.
Valid reasons for not using a Resource Module for a resource required by a Pattern Module include but are not limited to:
When using a Resource Module would result in hitting scaling limitations and/or would reduce the capabilities of the Pattern Module due to the limitations of Azure Resource Manager.
Developing a Pattern Module under time constraint, without having all required Resource Modules readily available.
Note
In the latter case, the Pattern Module SHOULD be updated to use the Resource Module when the required Resource Module becomes available, to avoid accumulating technical debt. Ideally, all required Resource Modules SHOULD be developed first, and then leveraged by the Pattern Module.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules.
However, they MUST be referenced only by a public registry reference to a pinned version e.g. br/public:avm/[res|ptn|utl]/<publishedModuleName>:>version<. They MUST NOT use local parent path references to a module e.g. ../../xxx/yyy.bicep.
The only exception to this rule are child modules as documented in BCPFR6.
Modules MUST NOT contain references to non-AVM modules.
ID: BCPFR2 - Category: Composition - Role Assignments Role Definition Mapping
Module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to specify a RBAC Role Definition by its name rather than its ID; this SHOULD be self-contained within the module itself.
However, they MUST use only the official RBAC Role Definition name within the variable and nothing else.
To meet the requirements of BCPFR2, BCPNFR5 and BCPNFR6 you MUST use the below code sample in your AVM Modules to achieve this.
@description('''Required. You can provide either the display name (note not all roles are supported, check module documentation) of the role definition, or its fully qualified ID in the following format: `/providers/Microsoft.Authorization/roleDefinitions/c2f4ef07-c644-48eb-af81-4b1b4947fb11`.''')
param roleDefinitionIdOrName string
var builtInRbacRoleNames = {
  Owner: '/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
  Contributor: '/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
  Reader: '/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7'
  'Role Based Access Control Administrator (Preview)': '/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168'
  'User Access Administrator': '/providers/Microsoft.Authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9'
  // Other RBAC Role Definitions Names & IDs can be added here as needed for your module
}

var roleDefinitionIdMappedResult = (contains(builtInRbacRoleNames, roleDefinitionIdOrName)
  ? builtInRbacRoleNames[roleDefinitionIdOrName]
  : roleDefinitionIdOrName)

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // Other properties removed for ease of reading
  properties: {
    roleDefinitionId: roleDefinitionIdMappedResult
    // Other properties removed for ease of reading
  }
}
Parent templates MUST reference all their direct child-templates to allow for an end-to-end deployment experience. For example, the SQL server template must reference its child database module and encapsulate it in a loop to allow for the deployment of multiple databases.
@description('Optional. The databases to create in the server')
param databases databaseType[]?
resource server 'Microsoft.Sql/servers@(...)' = { (...) }
module server_databases 'database/main.bicep' = [for (database, index) in (databases ?? []): {
name: '${uniqueString(deployment().name, location)}-Sql-DB-${index}'
params: {
serverName: server.name
(...)
}
}]
User-defined types (UDTs) MUST always end with the suffix (...)Type to make them obvious to users. In addition it is recommended to extend the suffix to (...)OutputType if a UDT is exclusively used for outputs.
type subnet = { ... }           // Wrong
type subnetType = { ... }       // Correct
type subnetOutputType = { ... } // Correct, if used only for outputs
Since User-defined types (UDTs) MUST always be singular as per BCPNFR18, their naming should reflect this and also be singular.
ID: BCPNFR5 - Category: Composition - Role Assignments Role Definition Mapping Limits
As per BCPFR2, module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to define a RBAC Role Definition by their name rather than their ID.
Module owners SHOULD NOT map every RBAC Role Definition within this variable as it can cause the module to bloat in size and cause consumption issues later when stitched together with other modules due to the 4MB ARM Template size limit.
Therefore module owners SHOULD only map the most applicable and common RBAC Role Definition names for their module and SHOULD NOT exceed 15 RBAC Role Definitions in the variable.
Important
Remember if the RBAC Role Definition name is not included in the variable this does not mean it cannot be declared, used and assigned to an identity via an RBAC Role Assignment as part of a module, as any RBAC Role Definition can be specified via its ID without being in the variable.
The version value is in the form of MAJOR.MINOR. The PATCH version will be incremented by the CI automatically when publishing the module to the Public Bicep Registry once the corresponding pull request is merged. Therefore, contributions that would only require an update of the patch version, can keep the version.json file intact.
For example, the version value should be:
0.1 for new modules, so that they can be released as v0.1.0.
1.0 once the module owner signs off that the module is stable enough for its first Major release of v1.0.0.
0.x for all feature updates between the first release v0.1.0 and the first Major release of v1.0.0.
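For illustration, a version.json for a new module could look like the following (the $schema value shown is an assumption and may differ):

```json
{
  "$schema": "https://aka.ms/bicep-registry-module-version-file-schema#",
  "version": "0.1"
}
```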
Inputs / Outputs
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
Example: for the property workspaceId of the Diagnostic Settings resource, the parameter name in Bicep should be workspaceResourceId and the variable name in Terraform should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
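A hedged Bicep illustration (the description text is an assumption):

```bicep
@description('Optional. The resource ID of the Log Analytics workspace to send diagnostic logs to.')
param workspaceResourceId string?
```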
ID: BCPFR5 - Category: Inputs - Availability Zones Implementation
To implement requirement SFR5, the following convention SHOULD apply:
Availability Zones
@description('Optional. The Availability Zones to place the resources in.')
@allowed([
1
2
3
])
param zones int[] = [
1
2
3
]
resource myResource (...) {
(...)
properties: {
(...)
zones: map(zones, zone => string(zone))
}
}
@description('Required. The Availability Zone to place the resource in. If set to 0, then Availability Zone is not set.')
@allowed([
0
1
2
3
])
param zone int
resource myResource (...) {
(...)
properties: {
(...)
zones: zone != 0 ? [ string(zone) ] : null
}
}
ID: BCPNFR1 - Inputs - User-defined types - General
To simplify the consumption experience for module consumers when interacting with complex data type input parameters, mainly objects and arrays, the Bicep feature of User-Defined Types MUST be used and declared.
Tip
User-Defined Types are GA in Bicep as of version v0.21.1, please ensure you have this version installed as a minimum.
User-Defined Types allow intellisense support in supported IDEs (e.g. Visual Studio Code) for complex input parameters using arrays and objects.
CARML Migration Exemption
While the transition of CARML modules into AVM is complete, retrofitting User-Defined Types for all modules will take a considerable amount of time.
Therefore, the addition of User-Defined Types is currently NOT mandated/enforced. However, past their initial release, all modules MUST implement User-Defined Types prior to the release of their next version.
Similar to BCPNFR21, input parameters MUST implement decorators such as description & secure (if sensitive).
Further, input parameters SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
param threshold int?

@description('Required. The SKU of your resource.')
@allowed([
  'Basic'
  'Premium'
  'Standard'
])
param sku string
User-defined types (UDTs) MUST always be singular and non-nullable. The configuration of either should instead be done directly at the parameter or output that uses the type.
For example, instead of
param subnets subnetsType
type subnetsType = { ... }[]?
the type should be defined like
param subnets subnetType[]?
type subnetType = { ... }
The primary reason for this requirement is clarity. If not defined directly at the parameter or output, a user would always be required to check the type to understand how e.g., a parameter is expected.
User-defined types (UDTs) MUST always end with the suffix (...)Type to make them obvious to users. In addition it is recommended to extend the suffix to (...)OutputType if a UDT is exclusively used for outputs.
type subnet = { ... }           // Wrong
type subnetType = { ... }       // Correct
type subnetOutputType = { ... } // Correct, if used only for outputs
Since User-defined types (UDTs) MUST always be singular as per BCPNFR18, their naming should reflect this and also be singular.
User-defined types (UDTs) SHOULD always be exported via the @export() annotation in every template they’re implemented in.
@export()
type subnetType = { ... }
Doing so has the benefit that other (e.g., parent) modules can import them and as such reduce code duplication. Also, if the module itself is published, users of the Public Bicep Registry can import the types independently of the module itself. One example where this can be useful is a pattern module that may re-use the same interface when referencing a module from the registry.
Similar to BCPNFR9, User-defined types (UDTs) MUST implement decorators such as description & secure (if sensitive). This is true for every property of the UDT, as well as the UDT itself.
Further, User-defined types SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('My type''s description.')
type myType = {
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
threshold: int?
@description('Required. The SKU of your resource.')
sku: ('Basic' | 'Premium' | 'Standard')
}
Modules will have lots of parameters that will differ in their requirement type (required, optional, etc.). To help consumers understand what each parameter’s requirement type is, module owners MUST add the requirement type to the beginning of each parameter’s description. Below are the requirement types with a definition and example for the description decorator:
Required: The parameter value must be provided. The parameter does not have a default value and hence the module expects and requires an input.
Conditional: The parameter value can be optional or required based on a condition, mostly based on the value provided to other parameters. Should contain a sentence starting with 'Required if (…).' to explain the condition.
Generated: The parameter value is generated within the module and should not be specified as input in most cases. A common example of this is the utcNow() function that is only supported as the input for a parameter value, and not inside a variable.
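Hedged examples of such description decorators (the parameter names and description texts are illustrative):

```bicep
@description('Required. The name of the resource to create.')
param name string

@description('Conditional. The name of the parent resource. Required if the module is used to deploy a child resource.')
param parentName string?

@description('Optional. Location for all resources.')
param location string = resourceGroup().location

@description('Generated. Do not provide a value. Used to calculate a unique deployment name.')
param baseTime string = utcNow('u')
```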
Modules MUST implement end-to-end (deployment) testing that create actual resources to validate that module deployments work. In Bicep tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete without user inputs successfully, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests:
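For instance, a Bicep test dependency might be declared along these lines (a minimal sketch; names are illustrative, and the Terraform equivalent follows the same idea):

```bicep
@description('Optional. The location to deploy resources to.')
param location string = resourceGroup().location

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'dep-law-test'
  location: location
}

@description('The resource ID of the created Log Analytics Workspace.')
output logAnalyticsWorkspaceResourceId string = logAnalyticsWorkspace.id
```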
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit Tests test specific module functionality, without deploying resources. Used on more complex modules. In Bicep and Terraform these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rules standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST test that child and extension resources, and those Bicep or Terraform interface resources that are supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
ID: BCPNFR10 - Category: Testing - Test Bicep File Naming
Module owners MUST name their test .bicep files in the /tests/e2e/<defaults/waf-aligned/max/etc.> directories: main.test.bicep as the test framework (CI) relies upon this name.
ID: BCPNFR13 - Category: Testing - Test file metadata
By default, the ReadMe-generating utility will create usage examples headers based on each e2e folder’s name. Module owners MAY provide a custom name & description by specifying the metadata blocks name & description in their main.test.bicep test files.
For example:
metadata name = 'Using Customer-Managed-Keys with System-Assigned identity'
metadata description = 'This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.'
would lead to a header in the module’s readme.md file along the lines of
### Example 1: _Using Customer-Managed-Keys with System-Assigned identity_
This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.
For each test case in the e2e folder, you can optionally add post-deployment Pester tests that are executed once the corresponding deployment completed and before the removal logic kicks in.
To leverage the feature you MUST:
Use Pester as a test framework in each test file
Name the file with the suffix "*.tests.ps1"
Place each test file the e2e test’s folder or any subfolder (e.g., e2e/max/myTest.tests.ps1 or e2e/max/tests/myTest.tests.ps1)
Implement an input parameter TestInputData in the following way:
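As a minimal sketch, assuming a simple hashtable input (the exact attributes are defined by the CI framework, so check the Bicep contribution guide for the authoritative version), the parameter block could look like:

```powershell
param (
    # Injected by the CI framework: deployment outputs and the test folder path.
    [Parameter(Mandatory = $false)]
    [hashtable] $TestInputData = @{}
)
```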
Through this parameter you can make use of every output the main.test.bicep file returns, as well as the path to the test template file in case you want to extract data from it directly.
For example, with an output such as output resourceId string = testDeployment[1].outputs.resourceId defined in the main.test.bicep file, the $TestInputData would look like:
$TestInputData = @{
  DeploymentOutputs    = @{
    resourceId = @{
      Type  = "String"
      Value = "/subscriptions/***/resourceGroups/dep-***-keyvault.vaults-kvvpe-rg/providers/Microsoft.KeyVault/vaults/***kvvpe001"
    }
  }
  ModuleTestFolderPath = "/home/runner/work/bicep-registry-modules/bicep-registry-modules/avm/res/key-vault/vault/tests/e2e/private-endpoint"
}
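For illustration only, a post-deployment Pester test consuming this data might look like the following sketch (the test name and assertion are hypothetical, not taken from an existing module):

```powershell
param (
    # Injected by the CI framework at runtime.
    [Parameter(Mandatory = $false)]
    [hashtable] $TestInputData = @{}
)

Describe 'Post-deployment checks' {
    It 'returns a resource ID for the deployed resource' {
        # Uses the deployment outputs of main.test.bicep passed in via $TestInputData.
        $TestInputData.DeploymentOutputs.resourceId.Value | Should -Not -BeNullOrEmpty
    }
}
```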
README documentation MUST be automatically/programmatically generated. MUST include the sections as defined in the language specific requirements BCPNFR2, TFNFR2.
The above formats are currently automatically taken & generated from the tests/e2e tests. It is enough to run the Set-ModuleReadMe or Set-AVMModule functions (from the utilities folder) to update the usage examples in the readme(s).
Note
Bicep Parameter Files (.bicepparam) are currently being reviewed and considered by the AVM team for their usability and features, and will likely be added in the future.
It is planned that these examples are automatically added to the module readme’s parameter descriptions when running either the Set-ModuleReadMe or Set-AVMModule scripts (available in the utilities folder).
Release / Publishing
The content below is listed based on the following tags
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example, all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM core team is confident the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently, as it is still not an official Major/Stable release.
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
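As an illustration of these pre-1.0.0 rules, a hypothetical release history could look like:

```text
0.1.0  initial release
0.1.1  non-breaking bug fix         (Patch bump)
0.2.0  new feature                  (Minor bump)
0.3.0  breaking change              (Minor bump, allowed before 1.0.0)
1.0.0  first stable / Major release
```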
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each languages team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
Code Style
The content below is listed based on the following tags
To improve the usability of primitive module properties declared as strings, you SHOULD declare them using a type which better represents them, and apply any required casting in the module on behalf of the user.
For reference, please refer to the following examples:
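As a purely illustrative Bicep sketch of the idea (the parameter and property names below are hypothetical, not taken from a specific module), an API property that expects a stringified number can be exposed as a proper int and cast inside the module:

```bicep
@description('Optional. The number of days to retain soft-deleted items.')
@minValue(7)
@maxValue(90)
param softDeleteRetentionInDays int = 90

resource myResource (...) {
  // Other properties removed for brevity.
  properties: {
    // The RP expects a string here, so the module performs the cast on behalf of the user.
    softDeleteRetentionInDays: string(softDeleteRetentionInDays)
  }
}
```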
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST only assign GitHub repository permissions to GitHub teams.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter, hence, they won’t gain any extra permissions on AVM repositories, therefore, they need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Bicep modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <hyphenated module name>-module-owners-bicep | AVM Bicep Module Owners - <module name> | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners- team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
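Putting the three segments listed above together, the entry would look like this:

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```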
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <module name>-module-owners-tf | AVM Terraform Module Owners - <module name> | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, re-test, and then, if the issue is not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM module owners having to maintain multiple major release versions.
```shell
# Linux / MacOS
# For Windows replace $PWD with the local path of your repository
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
#Invoke-WebRequest -Uri "https://azure.github.io/Azure-Verified-Modules/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
```powershell
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]

<#
.SYNOPSIS
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.

.DESCRIPTION
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.

By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.

The AVM labels to be created are documented here: TBC

.NOTES
Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.

.COMPONENT
You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.

.LINK
TBC

.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.

.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.

.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.

.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.

.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.

.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.

.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv.

.Parameter NoUserPrompts
If set to $true, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the default value, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.

This is useful for running the script in automation workflows.

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false

.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.

Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
    [Parameter(Mandatory = $true)]
    [string]$RepositoryName,

    [Parameter(Mandatory = $false)]
    [bool]$RemoveExistingLabels = $true,

    [Parameter(Mandatory = $false)]
    [bool]$UpdateAndAddLabelsOnly = $true,

    [Parameter(Mandatory = $false)]
    [bool]$CreateCsvLabelExports = $true,

    [Parameter(Mandatory = $false)]
    [string]$OutputDirectory = (Get-Location),

    [Parameter(Mandatory = $false)]
    [int]$GitHubCliLimit = 999,

    [Parameter(Mandatory = $false)]
    [string]$LabelsToApplyCsvUri = "https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv",

    [Parameter(Mandatory = $false)]
    [bool]$NoUserPrompts = $false
)

# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
    throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green

# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
    Write-Host $GitHubCliAuthenticated -ForegroundColor Red
    throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green

# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
    throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}

# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
    Write-Host $GitHubRepository -ForegroundColor Red
    throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green

# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies, if set to a valid directory path and the directory exists or can be created
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
    Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
    $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
    if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
        $csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
        Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
        $GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
    }
}

# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and the user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
    $GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
    if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
        $RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
        if ($RemoveExistingLabelsConfirmation -eq "Y") {
            Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
            $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
                Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
                gh label delete -R $RepositoryName $_.name --yes
            }
        }
    }
    if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
        Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
        $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
            Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
            gh label delete -R $RepositoryName $_.name --yes
        }
    }
}
if ($null -eq $GitHubRepositoryLabels) {
    Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}

# Check LabelsToApplyCsvUri is valid and contains CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
    throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green

# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv

# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
    throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green

# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
    if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
        Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consistent..." -ForegroundColor Magenta
        gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
    }
    else {
        Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
        gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
    }
}

# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies, if set to a valid directory path and the directory exists or can be created
if ($CreateCsvLabelExports -eq $true) {
    Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
    $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
    if ($null -ne $GitHubRepositoryLabels) {
        $csvFileNamePathPost = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
        Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPost" -ForegroundColor Yellow
        $GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPost -NoTypeInformation
    }
}

# If -RemoveExistingLabels is set to $true and the user confirmed the removal, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
    Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
    $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
    $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
        if ($avmLabelsCsv.Name -notcontains $_.name) {
            throw "The label $($_.name) exists in $RepositoryName but is not in the CSV file."
        }
    }
    Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}

Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
```
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the AVM Module Issue template file in the BRM repository (here).
Note
Through this approach, the AVM core team will allow raising a bug or feature request for a module, only after the module gets merged to the BRM repository.
The module name entry MUST be added to the dropdown list with id module-name-dropdown as an option, in alphabetical order.
Important
Module owners MUST ensure that the module name is added in alphabetical order, to simplify selecting the right module name when raising an AVM module issue.
Example - AVM Module Issue template module name entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
- type: dropdown
  id: module-name-dropdown
  attributes:
    label: Module Name
    description: Which existing AVM module is this issue related to?
    options:
      ...
      - "avm/res/network/virtual-network"
      ...
Telemetry
The content below is listed based on the following tags
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry as detailed in Telemetry further.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when:
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Bicep
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module’s, possibly shortened, name including the resource provider and the resource type, without:
The prefix: avm-res-
The prefix: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
The telemetry enablement MUST be on/enabled by default, however this MUST be able to be disabled by a module consumer by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
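For example, a pattern module referencing an AVM resource module would pass its own parameter straight through. A hedged Bicep sketch (the module path, version, and parameter values are placeholders, not a real reference):

```bicep
@description('Optional. Enable/Disable usage telemetry for module.')
param enableTelemetry bool = true

module keyVault 'br/public:avm/res/key-vault/vault:<version>' = {
  name: 'keyVaultDeployment'
  params: {
    name: 'my-key-vault'
    // Pass the consumer's telemetry choice through to the referenced AVM module.
    enableTelemetry: enableTelemetry
  }
}
```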
To comply with specifications outlined in SFR3 & SFR4 you MUST incorporate the following code snippet into your modules. Place this code sample in the “top level” main.bicep file; it is not necessary to include it in any nested Bicep files (child modules).
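The authoritative snippet lives in the Bicep contribution guide; as a hedged sketch, it is a small, conditional Microsoft.Resources/deployments resource whose name follows the pattern described below under Telemetry (the module-name and version segments shown here are examples only, and the enableTelemetry parameter is assumed to be declared as per SNFR4):

```bicep
#disable-next-line no-deployments-resources
resource avmTelemetry 'Microsoft.Resources/deployments@2023-07-01' = if (enableTelemetry) {
  // 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>, truncated to the 64-character limit.
  name: take('46d3xbcp.res.compute-virtualmachine.1-2-3.${substring(uniqueString(deployment().name, location), 0, 4)}', 64)
  properties: {
    mode: 'Incremental'
    template: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      resources: []
    }
  }
}
```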
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features, SHOULD NOT be promoted and exposed, unless they are supported by the respective PG, and it’s documented publicly.
However, they MAY be exposed at the module owner's discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to, as per the consumer's requirements.
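A hedged Bicep sketch of the simple case (the parameter names, allowed values, and API version are illustrative, not taken from the actual Storage Account module):

```bicep
@description('Required. The name of the Storage Account.')
param name string

@description('Optional. The location to deploy to.')
param location string = resourceGroup().location

@description('Optional. The Storage Account SKU. Defaults to the highest available data redundancy.')
@allowed([
  'Standard_LRS'
  'Standard_ZRS'
  'Standard_GRS'
  'Standard_RAGZRS'
])
param skuName string = 'Standard_RAGZRS'

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: name
  location: location
  kind: 'StorageV2'
  sku: {
    // Highest redundancy by default, but overridable by the consumer.
    name: skuName
  }
}
```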
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST not have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
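For illustration, a hedged Bicep sketch of how such a default could be wired up inside a module (parameter names simplified, not taken from a specific module):

```bicep
@description('Required. The name of the primary resource.')
param name string

@description('Optional. The name of the Private Endpoint. Defaults to the CAF abbreviation plus the primary resource name, but can be overridden.')
param privateEndpointName string = 'pep-${name}'
```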
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
Resource modules support the following optional features/extension resources, as specified, if supported by the primary resource. The top-level variable/parameter names MUST be:
| Optional Features/Extension Resources | Bicep Parameter Name | Terraform Variable Name | MUST/SHOULD |
| --- | --- | --- | --- |
| Diagnostic Settings | diagnosticSettings | diagnostic_settings | MUST |
| Role Assignments | roleAssignments | role_assignments | MUST |
| Resource Locks | lock | lock | MUST |
| Tags | tags | tags | MUST |
| Managed Identities (System / User Assigned) | managedIdentities | managed_identities | MUST |
| Private Endpoints | privateEndpoints | private_endpoints | MUST |
| Customer Managed Keys | customerManagedKey | customer_managed_key | MUST |
| Azure Monitor Alerts | alerts | alerts | SHOULD |
Resource modules MUST NOT deploy required/dependent resources for the optional features/extension resources specified above. For example, for Diagnostic Settings the resource module MUST NOT deploy the Log Analytics Workspace, this is expected to be already in existence from the perspective of the resource module deployed via another method/module etc.
Note
Please note that the implementation of Customer Managed Keys from an ARM API perspective is different across various RPs that implement Customer Managed Keys in their service. For that reason you may see differences between modules on how Customer Managed Keys are handled and implemented, but functionality will be as expected.
Module owners MAY choose to utilize cross repo dependencies for these “add-on” resources, or MAY choose to implement the code directly in their own repo/module. As long as the implementation and outputs meet the specification's requirements, this is acceptable.
Tip
Make sure to check out the language specific specifications for more info on this:
Resource modules MUST implement a common interface, e.g. the input’s data structures and properties within them (objects/arrays/dictionaries/maps), for the optional features/extension resources:
When a given version of an Azure resource used in a resource module reaches its end-of-life (EOL) and is no longer supported by Microsoft, the module owner SHOULD ensure that:
The module is aligned with these changes and only includes supported versions of the resource. This is typically achieved through the allowed values in the parameter that specifies the resource SKU or type.
The following notice is shown under the Notes section of the module’s readme.md. (If any related public announcement is available, it can also be linked to from the Notes section.):
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from the related parameters.”
AND the related parameter’s description:
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from this parameter.”
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the correct singular names for all resource types to enable checks to utilize this list to ensure repos are named correctly. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
This will be updated quarterly, or ad-hoc as new RPs/ Resources are created and highlighted via a check failure.
Resource modules MUST follow the below naming conventions (all lower case):
Bicep Resource Module Naming
Naming convention: avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type> (module name for registry)
Example: avm/res/compute/virtual-machine or avm/res/managed-identity/user-assigned-identity
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider's name after the Microsoft part, with each word that starts with a capital letter separated by dashes, e.g., Microsoft.Compute = compute, Microsoft.ManagedIdentity = managed-identity.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word that starts with a capital letter separated by dashes, e.g., Microsoft.Compute/virtualMachines = virtual-machine, BUT Microsoft.Network/trafficmanagerprofiles = trafficmanagerprofile - since trafficmanagerprofiles is all lower case as per the ARM API definition.
Terraform Resource Module Naming
Naming convention:
avm-res-<resource provider>-<ARM resource type> (module name for registry)
terraform-<provider>-avm-res-<resource provider>-<ARM resource type> (GitHub repository name to meet registry naming requirements)
Example: avm-res-compute-virtualmachine or avm-res-managedidentity-userassignedidentity
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
res defines this is a resource module
<resource provider> is the resource provider's name after the Microsoft part, e.g., Microsoft.Compute = compute.
<ARM resource type> is the singular version of the word after the resource provider, e.g., Microsoft.Compute/virtualMachines = virtualmachine
ID: RMNFR3 - Category: Composition - RP Collaboration
Module owners (Microsoft FTEs) SHOULD reach out to the respective Resource Provider teams to build a partnership and collaboration on the modules creation, existence and long term maintenance.
Module owners MAY cross-references other modules to build either Resource or Pattern modules.
However, they MUST be referenced only by a public registry reference to a pinned version, e.g. br/public:avm/[res|ptn|utl]/<publishedModuleName>:<version>. They MUST NOT use local parent path references to a module, e.g. ../../xxx/yyy.bicep.
The only exception to this rule are child modules as documented in BCPFR6.
Modules MUST NOT contain references to non-AVM modules.
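A hedged Bicep sketch of a compliant cross-reference (the module path and version are placeholders to substitute with a real published module and pinned version):

```bicep
// Correct: public registry reference pinned to a published version.
module keyVault 'br/public:avm/res/key-vault/vault:<version>' = {
  name: 'keyVaultDeployment'
  params: {
    name: 'my-key-vault'
  }
}

// Not allowed: local parent path reference to another module.
// module keyVault '../../key-vault/vault/main.bicep' = { ... }
```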
ID: BCPFR2 - Category: Composition - Role Assignments Role Definition Mapping
Module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to define a RBAC Role Definition by its name rather than its ID; this should be self-contained within the module itself.
However, they MUST use only the official RBAC Role Definition name within the variable and nothing else.
To meet the requirements of BCPFR2, BCPNFR5 and BCPNFR6 you MUST use the below code sample in your AVM Modules to achieve this.
@description('''Required. You can provide either the display name (note not all roles are supported, check module documentation) of the role definition, or its fully qualified ID in the following format: `/providers/Microsoft.Authorization/roleDefinitions/c2f4ef07-c644-48eb-af81-4b1b4947fb11`.''')
param roleDefinitionIdOrName string

var builtInRbacRoleNames = {
  Owner: '/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
  Contributor: '/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
  Reader: '/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7'
  'Role Based Access Control Administrator (Preview)': '/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168'
  'User Access Administrator': '/providers/Microsoft.Authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9'
  // Other RBAC Role Definition Names & IDs can be added here as needed for your module
}

var roleDefinitionIdMappedResult = (contains(builtInRbacRoleNames, roleDefinitionIdOrName) ? builtInRbacRoleNames[roleDefinitionIdOrName] : roleDefinitionIdOrName)

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // Other properties removed for ease of reading
  properties: {
    roleDefinitionId: roleDefinitionIdMappedResult
    // Other properties removed for ease of reading
  }
}
Parent templates MUST reference all their direct child-templates to allow for an end-to-end deployment experience. For example, the SQL server template must reference its child database module and encapsulate it in a loop to allow for the deployment of multiple databases.
@description('Optional. The databases to create in the server')
param databases databaseType[]?

resource server 'Microsoft.Sql/servers@(...)' = { (...) }

module server_databases 'database/main.bicep' = [for (database, index) in (databases ?? []): {
  name: '${uniqueString(deployment().name, location)}-Sql-DB-${index}'
  params: {
    serverName: server.name
    (...)
  }
}]
User-defined types (UDTs) MUST always end with the suffix (...)Type to make them obvious to users. In addition it is recommended to extend the suffix to (...)OutputType if a UDT is exclusively used for outputs.
type subnet = { ... }           // Wrong
type subnetType = { ... }       // Correct
type subnetOutputType = { ... } // Correct, if used only for outputs
Since User-defined types (UDTs) MUST always be singular as per BCPNFR18, their naming should reflect this and also be singular.
ID: BCPNFR5 - Category: Composition - Role Assignments Role Definition Mapping Limits
As per BCPFR2, module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to define a RBAC Role Definition by their name rather than their ID.
Module owners SHOULD NOT map every RBAC Role Definition within this variable as it can cause the module to bloat in size and cause consumption issues later when stitched together with other modules due to the 4MB ARM Template size limit.
Therefore module owners SHOULD only map the most applicable and common RBAC Role Definition names for their module and SHOULD NOT exceed 15 RBAC Role Definitions in the variable.
Important
Remember if the RBAC Role Definition name is not included in the variable this does not mean it cannot be declared, used and assigned to an identity via an RBAC Role Assignment as part of a module, as any RBAC Role Definition can be specified via its ID without being in the variable.
The version value is in the form of MAJOR.MINOR. The PATCH version will be incremented by the CI automatically when publishing the module to the Public Bicep Registry once the corresponding pull request is merged. Therefore, contributions that would only require an update of the patch version, can keep the version.json file intact.
For example, the version value should be:
0.1 for new modules, so that they can be released as v0.1.0.
1.0 once the module owner signs off that the module is stable enough for its first Major release of v1.0.0.
0.x for all feature updates between the first release v0.1.0 and the first Major release of v1.0.0.
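A hedged sketch of the resulting version.json for a brand-new module (the file in the module template may contain additional properties, such as a $schema reference):

```json
{
  "version": "0.1"
}
```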
Child resource modules MUST be stored in a subfolder of their parent resource module and named after the child resource’s singular name (ref), so that the path to the child resource folder is consistent with the hierarchy of its resource type. For example, Microsoft.Sql/servers may have dedicated child resources of type Microsoft.Sql/servers/databases. Hence, the SQL server database child module is stored in a database subfolder of the server parent folder.
sql
└─ server [module]
   └─ database [child-module/resource]
In this folder, we recommend placing the child resource template alongside a ReadMe & compiled JSON (to be generated via the default Set-AVMModule utility) and optionally further nesting additional folders for its child resources.
There are several reasons to structure a module in this way. For example:
It allows a separation of concerns where each module can focus on its own properties and logic, while delegating most of a child-resource’s logic to its separate child module
It’s consistent with the provider namespace structure and makes modules easier to understand not only because they’re more aligned with set structure, but also are aligned with one another
As each module is its own ‘deployment’, it reduces limitations around nested loops
Once this capability is enabled, module owners will be able to publish these child modules as separate modules to the public registry, allowing consumers to make use of them directly.
Note
In full transparency: The drawbacks of these additional deployments are an extended deployment period & a contribution to the 800 deployments limit. However, for AVM resource modules it was agreed that the advantages listed above outweigh these limitations.
Inputs / Outputs
The content below is listed based on the following tags
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
Example for the property workspaceId for the Diagnostic Settings resource. In Bicep its parameter name should be workspaceResourceId and the variable name in Terraform should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
Parameters/variables that pertain to the primary resource MUST NOT use the resource type in the name.
e.g., use sku, vs. virtualMachineSku/virtualmachine_sku
Another example is where RPs contain some of their name within a property; in that case, leave the property unchanged. E.g., Key Vault has a property called keySize; it is fine to leave it as is and not remove the key part from the property/parameter name.
A resource module MUST use the following standard inputs:
name (no default)
location (if supported by the resource and not a global resource; default to the Resource Group location if the resource supports Resource Groups, otherwise no default)
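A hedged Bicep sketch of these standard inputs (the descriptions are illustrative):

```bicep
@description('Required. The name of the resource to create.')
param name string

@description('Optional. The location to deploy the resource to.')
param location string = resourceGroup().location
```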
ID: BCPFR5 - Category: Inputs - Availability Zones Implementation
To implement requirement SFR5, the following convention SHOULD apply:
Availability Zones
@description('Optional. The Availability Zones to place the resources in.')
@allowed([
1
2
3
])
param zones int[] = [
1
2
3
]
resource myResource (...) {
(...)
properties: {
(...)
zones: map(zones, zone => string(zone))
}
}
@description('Required. The Availability Zone to place the resource in. If set to 0, then Availability Zone is not set.')
@allowed([
0
1
2
3
])
param zone int
resource myResource (...) {
(...)
properties: {
(...)
zones: zone != 0 ? [ string(zone) ] : null
}
}
ID: BCPNFR1 - Inputs - User-defined types - General
To simplify the consumption experience for module consumers when interacting with complex data type input parameters, mainly objects and arrays, the Bicep feature of User-Defined Types MUST be used and declared.
Tip
User-Defined Types are GA in Bicep as of version v0.21.1, please ensure you have this version installed as a minimum.
User-Defined Types allow intellisense support in supported IDEs (e.g. Visual Studio Code) for complex input parameters using arrays and objects.
CARML Migration Exemption
While the transition of CARML modules into AVM is complete, retrofitting User-Defined Types for all modules will take a considerable amount of time.
Therefore, the addition of User-Defined Types is currently NOT mandated/enforced. However, past their initial release, all modules MUST implement User-Defined Types prior to the release of their next version.
Similar to BCPNFR21, input parameters MUST implement decorators such as description & secure (if sensitive).
Further, input parameters SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
param threshold int?
@description('Required. The SKU of your resource.')
@allowed([
  'Basic'
  'Premium'
  'Standard'
])
param sku string
User-defined types (UDTs) MUST always be singular and non-nullable. The configuration of either should instead be done directly at the parameter or output that uses the type.
For example, instead of
param subnets subnetsType
type subnetsType = { ... }[]?
the type should be defined like
param subnets subnetType[]?
type subnetType = { ... }
The primary reason for this requirement is clarity. If not defined directly at the parameter or output, a user would always be required to check the type to understand how, e.g., a parameter is expected to be provided.
User-defined types (UDTs) MUST always end with the suffix (...)Type to make them obvious to users. In addition it is recommended to extend the suffix to (...)OutputType if a UDT is exclusively used for outputs.
type subnet = { ... }           // Wrong
type subnetType = { ... }       // Correct
type subnetOutputType = { ... } // Correct, if used only for outputs
Since User-defined types (UDTs) MUST always be singular as per BCPNFR18, their naming should reflect this and also be singular.
User-defined types (UDTs) SHOULD always be exported via the @export() annotation in every template they’re implemented in.
@export()
type subnetType = { ... }
Doing so has the benefit that other (e.g., parent) modules can import them and as such reduce code duplication. Also, if the module itself is published, users of the Public Bicep Registry can import the types independently of the module itself. One example where this can be useful is a pattern module that may re-use the same interface when referencing a module from the registry.
Similar to BCPNFR9, User-defined types (UDTs) MUST implement decorators such as description & secure (if sensitive). This is true for every property of the UDT, as well as the UDT itself.
Further, User-defined types SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('My type''s description.')
type myType = {
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
threshold: int?
@description('Required. The SKU of your resource.')
sku: ('Basic' | 'Premium' | 'Standard')
}
Modules will have lots of parameters that will differ in their requirement type (required, optional, etc.). To help consumers understand what each parameter’s requirement type is, module owners MUST add the requirement type to the beginning of each parameter’s description. Below are the requirement types with a definition and example for the description decorator:
| Parameter Requirement Type | Definition | Example Description Decorator |
| --- | --- | --- |
| Required | The parameter value must be provided. The parameter does not have a default value and hence the module expects and requires an input. | |
| Conditional | The parameter value can be optional or required based on a condition, mostly based on the value provided to other parameters. Should contain a sentence starting with ‘Required if (…).’ to explain the condition. | |
| Generated | The parameter value is generated within the module and should not be specified as input in most cases. A common example of this is the utcNow() function that is only supported as the input for a parameter value, and not inside a variable. | |
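Combining this with the decorator guidance above, a hedged Bicep sketch of the prefix convention (the parameter names are hypothetical):

```bicep
@description('Required. The name of the resource to create.')
param name string

@description('Conditional. The resource ID of the Key Vault holding the Customer-Managed Key. Required if a Customer-Managed Key is used.')
param keyVaultResourceId string?

@description('Generated. Do not provide a value. Used to calculate token expiry within the module.')
param baseTime string = utcNow('u')
```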
When implementing any of the shared or Bicep-specific AVM interface variants you MUST import their User-defined type (UDT) via the published AVM-Common-Types module.
When doing so, each type MUST be imported separately, right above the parameter or output that uses it.
import { roleAssignmentType } from 'br/public:avm/utl/types/avm-common-types:*.*.*'
@description('Optional. Array of role assignments to create.')
param roleAssignments roleAssignmentType[]?

import { diagnosticSettingFullType } from 'br/public:avm/utl/types/avm-common-types:*.*.*'
@description('Optional. The diagnostic settings of the service.')
param diagnosticSettings diagnosticSettingFullType[]?
Importing them individually as opposed to one common block has several benefits such as
Individual versioning of types
If you must update the version for one type, you’re not exposed to unexpected changes to other types
Important
The import (...) block MUST NOT be added in between a parameter's definition and its metadata. Doing so breaks the metadata's binding to the parameter in question.
Finally, you should check for version updates regularly to ensure the resource module stays consistent with the specs. If the used AVM-Common-Types runs stale, the CI may eventually fail the module’s static tests.
Testing
Modules MUST implement end-to-end (deployment) testing that creates actual resources to validate that module deployments work. In Bicep, tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete successfully without user input, for automation purposes.
Each test MUST also destroy/clean up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests
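As an illustration, a Log Analytics Workspace dependency could be declared natively in a Bicep test dependencies file (e.g., a dependencies.bicep referenced from main.test.bicep; file, parameter and output names as well as the API version are illustrative):

```bicep
@description('Required. The name of the Log Analytics Workspace to create.')
param logAnalyticsWorkspaceName string

@description('Optional. The location to deploy to.')
param location string = resourceGroup().location

// Simple/native declaration of the test dependency - no AVM module reference needed.
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: logAnalyticsWorkspaceName
  location: location
}

@description('The resource ID of the created Log Analytics Workspace.')
output logAnalyticsWorkspaceResourceId string = logAnalyticsWorkspace.id
```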
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit tests validate specific module functionality without deploying resources. They are used on more complex modules. In Bicep and Terraform, these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rule standards, but the AVM core team will try to close these gaps and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST test that child and extension resources, and those Bicep or Terraform interface resources that are supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
ID: BCPNFR10 - Category: Testing - Test Bicep File Naming
Module owners MUST name their test .bicep files in the /tests/e2e/<defaults/waf-aligned/max/etc.> directories: main.test.bicep as the test framework (CI) relies upon this name.
ID: BCPNFR13 - Category: Testing - Test file metadata
By default, the ReadMe-generating utility will create usage examples headers based on each e2e folder’s name. Module owners MAY provide a custom name & description by specifying the metadata blocks name & description in their main.test.bicep test files.
For example:
```bicep
metadata name = 'Using Customer-Managed-Keys with System-Assigned identity'
metadata description = 'This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.'
```
would lead to a header in the module’s readme.md file along the lines of
### Example 1: _Using Customer-Managed-Keys with System-Assigned identity_
This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.
For each test case in the e2e folder, you can optionally add post-deployment Pester tests that are executed once the corresponding deployment completed and before the removal logic kicks in.
To leverage the feature you MUST:
Use Pester as a test framework in each test file
Name the file with the suffix "*.tests.ps1"
Place each test file in the e2e test’s folder or any subfolder (e.g., e2e/max/myTest.tests.ps1 or e2e/max/tests/myTest.tests.ps1)
Implement an input parameter TestInputData in the following way:
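A minimal sketch of such a parameter block is shown below (the exact typing may vary per module; treat this as illustrative):

```powershell
param (
    [Parameter(Mandatory = $false)]
    [hashtable] $TestInputData = @{}
)
```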
Through this parameter you can make use of every output the main.test.bicep file returns, as well as the path to the test template file in case you want to extract data from it directly.
For example, with an output such as output resourceId string = testDeployment[1].outputs.resourceId defined in the main.test.bicep file, the $TestInputData would look like:
```powershell
$TestInputData = @{
    DeploymentOutputs    = @{
        resourceId = @{
            Type  = "String"
            Value = "/subscriptions/***/resourceGroups/dep-***-keyvault.vaults-kvvpe-rg/providers/Microsoft.KeyVault/vaults/***kvvpe001"
        }
    }
    ModuleTestFolderPath = "/home/runner/work/bicep-registry-modules/bicep-registry-modules/avm/res/key-vault/vault/tests/e2e/private-endpoint"
}
```
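For illustration, a post-deployment test could then consume these values as follows (the test names and assertions are hypothetical):

```powershell
param (
    [Parameter(Mandatory = $false)]
    [hashtable] $TestInputData = @{}
)

Describe 'Post-deployment validation' {
    It 'Returns a resource ID from the deployment' {
        # The deployment outputs are surfaced via the TestInputData parameter described above.
        $TestInputData.DeploymentOutputs.resourceId.Value | Should -Not -BeNullOrEmpty
    }
}
```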
ID: BCPRMNFR1 - Category: Testing - Expected Test Directories
Module owners MUST create the defaults, waf-aligned folders within their /tests/e2e/ directory in their resource module source code and SHOULD create a max folder also. Module owners CAN create additional folders as required. Each folder will be used as described for various test cases.
Defaults tests (MUST)
The defaults folder contains a test instance that deploys the module with the minimum set of required parameters.
This includes input parameters of type Required plus input parameters of type Conditional marked as required for WAF compliance.
This instance relies heavily on the default values for other input parameters. Parameters of type Optional SHOULD NOT be used.
WAF aligned tests (MUST)
The waf-aligned folder contains a test instance that deploys the module in alignment with the best-practices of the Azure Well-Architected Framework.
This includes input parameters of type Required, parameters of type Conditional marked as required for WAF compliance, and parameters of type Optional useful for WAF compliance.
Parameters and dependencies which are not needed for WAF compliance, SHOULD NOT be included.
Max tests (SHOULD)
The max folder contains a test instance that deploys the module using a large parameter set, enabling most of the modules’ features.
The purpose of this instance is primarily parameter validation and not necessarily to serve as a real example scenario. Ideally, all features, extension resources and child resources should be enabled in this test, unless not possible due to conflicts, e.g., in case parameters are mutually exclusive.
Note
Please note that this test is not mandatory to have, but recommended for bulk parameter validation. It can be skipped in case the module parameter validation is covered already by additional, more scenario-specific tests.
Additional tests (CAN)
Additional folders CAN be created by module owners as required.
For example, to validate parameters not covered by the max test due to conflicts, or to provide a real example scenario for a specific use case.
If a module can deploy varying styles of the same resource, e.g., VMs can be Linux or Windows, each style should be tested as both defaults and waf-aligned. These names should be used as suffixes in the directory name to denote the style, e.g., for a VM we would expect to see:
/tests/e2e/defaults.linux/main.test.bicep
/tests/e2e/waf-aligned.linux/main.test.bicep
/tests/e2e/defaults.windows/main.test.bicep
/tests/e2e/waf-aligned.windows/main.test.bicep
Documentation
README documentation MUST be automatically/programmatically generated. It MUST include the sections defined in the language-specific requirements BCPNFR2 and TFNFR2.
The above formats are currently automatically taken & generated from the tests/e2e tests. It is enough to run the Set-ModuleReadMe or Set-AVMModule functions (from the utilities folder) to update the usage examples in the readme(s).
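For instance, regenerating a module's readme locally may look similar to the following (the file path and parameter name are assumptions; check the utilities folder for the authoritative usage):

```powershell
# Illustrative only - the script location and parameter names are assumptions based on the utilities folder.
. ./utilities/tools/Set-AVMModule.ps1
Set-AVMModule -ModuleFolderPath './avm/res/key-vault/vault'
```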
Note
Bicep Parameter Files (.bicepparam) are currently being reviewed and considered by the AVM team for usability and features, and will likely be added in the future.
It is planned that these examples are automatically added to the module readme’s parameter descriptions when running either the Set-ModuleReadMe or Set-AVMModule scripts (available in the utilities folder).
Release / Publishing
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only until the AVM Core Team are confident the AVM specifications are mature enough and appropriate CI test coverage is in place, plus the module owner is happy the module has been “road tested” and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it’s still not an official Major/Stable release.
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., by deprecating inputs rather than removing them. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
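For illustration (parameter names are hypothetical), a Bicep module could keep a deprecated parameter functional while introducing its replacement, and remove the shim in a later major version:

```bicep
@description('Optional. Note: This parameter is deprecated, please use `sku` instead.')
param skuName string = ''

@description('Optional. The SKU of the resource.')
param sku string = 'Standard'

// Prefer the deprecated parameter if a consumer still sets it, so existing code keeps working.
var effectiveSku = !empty(skuName) ? skuName : sku
```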
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each languages team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
Code Style
To improve the usability of primitive module properties declared as strings, you SHOULD declare them using a type which better represents them, and apply any required casting in the module on behalf of the user.
For reference, please refer to the following examples:
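As the referenced examples are language specific, here is a minimal, illustrative Bicep sketch of the idea (names are hypothetical): expose the value with a representative type and cast it inside the module where the underlying property expects a string:

```bicep
@description('Optional. The daily ingestion quota, in GB.')
@minValue(1)
param dailyQuotaGb int = 1

// The underlying resource property expects a string, so the module casts on behalf of the user.
var dailyQuotaGbAsString = string(dailyQuotaGb)
```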
The version value is in the form of MAJOR.MINOR. The PATCH version will be incremented by the CI automatically when publishing the module to the Public Bicep Registry once the corresponding pull request is merged. Therefore, contributions that would only require an update of the patch version can keep the version.json file intact.
For example, the version value should be:
0.1 for new modules, so that they can be released as v0.1.0.
1.0 once the module owner signs off that the module is stable enough for its first Major release of v1.0.0.
0.x for all feature updates between the first release v0.1.0 and the first Major release of v1.0.0.
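Putting this together, a new module's version.json may look like the following reduced sketch (the real file may also include a $schema property):

```json
{
  "version": "0.1"
}
```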
Any updates to existing or new specifications for Terraform must be submitted as a draft for review by Azure Terraform PG/Engineering (@Azure/terraform-avm) and the AVM core team (@Azure/avm-core-team).
Important
Provider Versatility: Users have the autonomy to choose between AzureRM, AzAPI, or a combination of both, tailored to the specific complexity of module requirements.
This chapter details the interfaces/schemas for the AVM Resource Modules features/extension resources as referenced in RMFR4 and RMFR5.
Diagnostic Settings
Important
Allowed values for logs and metric categories or category groups MUST NOT be specified to keep the module implementation evergreen for any new categories or category groups added by RPs, without module owners having to update a list of allowed values and cut a new release of their module.
```hcl
variable "diagnostic_settings" {
  type = map(object({
    name                                     = optional(string, null)
    log_categories                           = optional(set(string), [])
    log_groups                               = optional(set(string), ["allLogs"])
    metric_categories                        = optional(set(string), ["AllMetrics"])
    log_analytics_destination_type           = optional(string, "Dedicated")
    workspace_resource_id                    = optional(string, null)
    storage_account_resource_id              = optional(string, null)
    event_hub_authorization_rule_resource_id = optional(string, null)
    event_hub_name                           = optional(string, null)
    marketplace_partner_resource_id          = optional(string, null)
  }))
  default  = {}
  nullable = false

  validation {
    condition     = alltrue([for _, v in var.diagnostic_settings : contains(["Dedicated", "AzureDiagnostics"], v.log_analytics_destination_type)])
    error_message = "Log analytics destination type must be one of: 'Dedicated', 'AzureDiagnostics'."
  }
  validation {
    condition = alltrue(
      [
        for _, v in var.diagnostic_settings :
        v.workspace_resource_id != null || v.storage_account_resource_id != null || v.event_hub_authorization_rule_resource_id != null || v.marketplace_partner_resource_id != null
      ]
    )
    error_message = "At least one of `workspace_resource_id`, `storage_account_resource_id`, `marketplace_partner_resource_id`, or `event_hub_authorization_rule_resource_id`, must be set."
  }

  description = <<DESCRIPTION
A map of diagnostic settings to create on the Key Vault. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.

- `name` - (Optional) The name of the diagnostic setting. One will be generated if not set, however this will not be unique if you want to create multiple diagnostic setting resources.
- `log_categories` - (Optional) A set of log categories to send to the log analytics workspace. Defaults to `[]`.
- `log_groups` - (Optional) A set of log groups to send to the log analytics workspace. Defaults to `["allLogs"]`.
- `metric_categories` - (Optional) A set of metric categories to send to the log analytics workspace. Defaults to `["AllMetrics"]`.
- `log_analytics_destination_type` - (Optional) The destination type for the diagnostic setting. Possible values are `Dedicated` and `AzureDiagnostics`. Defaults to `Dedicated`.
- `workspace_resource_id` - (Optional) The resource ID of the log analytics workspace to send logs and metrics to.
- `storage_account_resource_id` - (Optional) The resource ID of the storage account to send logs and metrics to.
- `event_hub_authorization_rule_resource_id` - (Optional) The resource ID of the event hub authorization rule to send logs and metrics to.
- `event_hub_name` - (Optional) The name of the event hub. If none is specified, the default event hub will be selected.
- `marketplace_partner_resource_id` - (Optional) The full ARM resource ID of the Marketplace resource to which you would like to send Diagnostic Logs.
DESCRIPTION
}

# Sample resource
resource "azurerm_monitor_diagnostic_setting" "this" {
  for_each = var.diagnostic_settings

  name                           = each.value.name != null ? each.value.name : "diag-${var.name}"
  target_resource_id             = azurerm_<MY_RESOURCE>.this.id
  storage_account_id             = each.value.storage_account_resource_id
  eventhub_authorization_rule_id = each.value.event_hub_authorization_rule_resource_id
  eventhub_name                  = each.value.event_hub_name
  partner_solution_id            = each.value.marketplace_partner_resource_id
  log_analytics_workspace_id     = each.value.workspace_resource_id
  log_analytics_destination_type = each.value.log_analytics_destination_type

  dynamic "enabled_log" {
    for_each = each.value.log_categories
    content {
      category = enabled_log.value
    }
  }

  dynamic "enabled_log" {
    for_each = each.value.log_groups
    content {
      category_group = enabled_log.value
    }
  }

  dynamic "metric" {
    for_each = each.value.metric_categories
    content {
      category = metric.value
    }
  }
}
```
In the provided example for Diagnostic Settings, both logs and metrics are enabled for the associated resource. However, it is IMPORTANT to note that certain resources may not support both diagnostic setting types/categories. In such cases, the resource configuration MUST be modified accordingly to ensure proper functionality and compliance with system requirements.
Role Assignments
```hcl
variable "role_assignments" {
  type = map(object({
    role_definition_id_or_name             = string
    principal_id                           = string
    description                            = optional(string, null)
    skip_service_principal_aad_check       = optional(bool, false)
    condition                              = optional(string, null)
    condition_version                      = optional(string, null)
    delegated_managed_identity_resource_id = optional(string, null)
    principal_type                         = optional(string, null)
  }))
  default  = {}
  nullable = false

  description = <<DESCRIPTION
A map of role assignments to create on the <RESOURCE>. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.

- `role_definition_id_or_name` - The ID or name of the role definition to assign to the principal.
- `principal_id` - The ID of the principal to assign the role to.
- `description` - (Optional) The description of the role assignment.
- `skip_service_principal_aad_check` - (Optional) If set to true, skips the Azure Active Directory check for the service principal in the tenant. Defaults to false.
- `condition` - (Optional) The condition which will be used to scope the role assignment.
- `condition_version` - (Optional) The version of the condition syntax. Leave as `null` if you are not using a condition, if you are then valid values are '2.0'.
- `delegated_managed_identity_resource_id` - (Optional) The delegated Azure Resource Id which contains a Managed Identity. Changing this forces a new resource to be created. This field is only used in cross-tenant scenario.
- `principal_type` - (Optional) The type of the `principal_id`. Possible values are `User`, `Group` and `ServicePrincipal`. It is necessary to explicitly set this attribute when creating role assignments if the principal creating the assignment is constrained by ABAC rules that filters on the PrincipalType attribute.

> Note: only set `skip_service_principal_aad_check` to true if you are assigning a role to a service principal.
DESCRIPTION
}

locals {
  role_definition_resource_substring = "providers/Microsoft.Authorization/roleDefinitions"
}

# Example resource declaration
resource "azurerm_role_assignment" "this" {
  for_each = var.role_assignments

  scope                                  = azurerm_MY_RESOURCE.this.id
  role_definition_id                     = strcontains(lower(each.value.role_definition_id_or_name), lower(local.role_definition_resource_substring)) ? each.value.role_definition_id_or_name : null
  role_definition_name                   = strcontains(lower(each.value.role_definition_id_or_name), lower(local.role_definition_resource_substring)) ? null : each.value.role_definition_id_or_name
  principal_id                           = each.value.principal_id
  condition                              = each.value.condition
  condition_version                      = each.value.condition_version
  skip_service_principal_aad_check       = each.value.skip_service_principal_aad_check
  delegated_managed_identity_resource_id = each.value.delegated_managed_identity_resource_id
  principal_type                         = each.value.principal_type
}
```
Details on child, extension and cross-referenced resources:
Modules MUST support Role Assignments on child, extension and cross-referenced resources as well as the primary resource via parameters/variables
Resource Locks
```hcl
variable "lock" {
  type = object({
    kind = string
    name = optional(string, null)
  })
  default     = null
  description = <<DESCRIPTION
Controls the Resource Lock configuration for this resource. The following properties can be specified:

- `kind` - (Required) The type of lock. Possible values are `\"CanNotDelete\"` and `\"ReadOnly\"`.
- `name` - (Optional) The name of the lock. If not specified, a name will be generated based on the `kind` value. Changing this forces the creation of a new resource.
DESCRIPTION

  validation {
    condition     = var.lock != null ? contains(["CanNotDelete", "ReadOnly"], var.lock.kind) : true
    error_message = "Lock kind must be either `\"CanNotDelete\"` or `\"ReadOnly\"`."
  }
}

# Example resource implementation
resource "azurerm_management_lock" "this" {
  count      = var.lock != null ? 1 : 0
  lock_level = var.lock.kind
  name       = coalesce(var.lock.name, "lock-${var.lock.kind}")
  scope      = azurerm_MY_RESOURCE.this.id
  notes      = var.lock.kind == "CanNotDelete" ? "Cannot delete the resource or its child resources." : "Cannot delete or modify the resource or its child resources."
}

# Example input for the lock variable:
lock = {
  name = "lock-{resourcename}" # optional
  kind = "CanNotDelete"
}
```
Details on child and extension resources:
Locks SHOULD be able to be set for child resources of the primary resource in resource modules
Details on cross-referenced resources:
Locks MUST be automatically applied to cross-referenced resources if the primary resource has a lock applied.
This MUST also be able to be turned off for each of the cross-referenced resources by the module consumer via a parameter/variable if they desire
An example of this is a Key Vault module that has Private Endpoints enabled. If a lock is applied to the Key Vault via the lock parameter/variable, then the lock should also be applied to the Private Endpoint automatically, unless the privateEndpointLock/private_endpoint_lock (example name) parameter/variable is set to None
Important
In Terraform, locks become part of the resource graph and suitable depends_on values should be set. Note that, during a destroy operation, Terraform will remove the locks before removing the resource itself, reducing the usefulness of the lock somewhat. Also note, due to eventual consistency in Azure, use of locks can cause destroy operations to fail as the lock may not have been fully removed by the time the destroy operation is executed.
Tags
```hcl
variable "tags" {
  type        = map(string)
  default     = null
  description = "(Optional) Tags of the resource."
}
```
Details on child, extension and cross-referenced resources:
Tags MUST be automatically applied to child, extension and cross-referenced resources, if tags are applied to the primary resource.
By default, all tags set for the primary resource will automatically be passed down to child, extension and cross-referenced resources.
This MUST be able to be overridden by the module consumer so they can specify alternate tags for child, extension and cross-referenced resources, if they desire via a parameter/variable
If overridden by the module consumer, no merge/union of tags will take place from the primary resource and only the tags specified for the child, extension and cross-referenced resources will be applied
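A minimal Terraform sketch of this behaviour follows (the override variable name is hypothetical; `var.tags` is the primary resource's tags variable shown above). The child or extension resource receives the consumer-supplied override when present, otherwise the primary resource's tags, with no merge/union of the two:

```hcl
# Hypothetical override variable for a child/extension resource (name is illustrative).
variable "child_resource_tags" {
  type        = map(string)
  default     = null
  description = "(Optional) Tags for the child resource. When set, these replace `var.tags` for that resource - no merge/union takes place."
}

locals {
  # The child/extension resource uses the override when supplied, otherwise the primary resource's tags.
  effective_child_resource_tags = var.child_resource_tags != null ? var.child_resource_tags : var.tags
}
```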
Managed Identities
```hcl
variable "managed_identities" {
  type = object({
    system_assigned            = optional(bool, false)
    user_assigned_resource_ids = optional(set(string), [])
  })
  default  = {}
  nullable = false

  description = <<DESCRIPTION
Controls the Managed Identity configuration on this resource. The following properties can be specified:

- `system_assigned` - (Optional) Specifies if the System Assigned Managed Identity should be enabled.
- `user_assigned_resource_ids` - (Optional) Specifies a list of User Assigned Managed Identity resource IDs to be assigned to this resource.
DESCRIPTION
}

# Helper locals to make the dynamic block more readable.
# There are three attributes here to cater for resources that
# support both user and system MIs, only system MIs, and only user MIs.
locals {
  managed_identities = {
    system_assigned_user_assigned = (var.managed_identities.system_assigned || length(var.managed_identities.user_assigned_resource_ids) > 0) ? {
      this = {
        type                       = var.managed_identities.system_assigned && length(var.managed_identities.user_assigned_resource_ids) > 0 ? "SystemAssigned, UserAssigned" : length(var.managed_identities.user_assigned_resource_ids) > 0 ? "UserAssigned" : "SystemAssigned"
        user_assigned_resource_ids = var.managed_identities.user_assigned_resource_ids
      }
    } : {}
    system_assigned = var.managed_identities.system_assigned ? {
      this = {
        type = "SystemAssigned"
      }
    } : {}
    user_assigned = length(var.managed_identities.user_assigned_resource_ids) > 0 ? {
      this = {
        type                       = "UserAssigned"
        user_assigned_resource_ids = var.managed_identities.user_assigned_resource_ids
      }
    } : {}
  }
}

## Resources supporting both SystemAssigned and UserAssigned
dynamic "identity" {
  for_each = local.managed_identities.system_assigned_user_assigned
  content {
    type         = identity.value.type
    identity_ids = identity.value.user_assigned_resource_ids
  }
}

## Resources that only support SystemAssigned
dynamic "identity" {
  for_each = local.managed_identities.system_assigned
  content {
    type = identity.value.type
  }
}

## Resources that only support UserAssigned
dynamic "identity" {
  for_each = local.managed_identities.user_assigned
  content {
    type         = identity.value.type
    identity_ids = identity.value.user_assigned_resource_ids
  }
}
```
Reason for differences in User Assigned data type in languages:
We do not foresee the Managed Identity Resource Provider team ever adding additional properties within the empty object ({}) value required on the input of a User Assigned Managed Identity.
In Bicep we have therefore removed the need for this to be declared and converted it to a simple array of Resource IDs
However, in Terraform we have left it as an object/map, as this simplifies for_each and other loop mechanisms and provides more consistency in plan, apply and destroy operations
Especially when adding, removing or changing the order of the User Assigned Managed Identities as they are declared
Private Endpoints
```hcl
# In this example we only support one service, e.g. Key Vault.
# If your service has multiple private endpoint services, then expose the service name.
# This variable is used to determine if the private_dns_zone_group block should be included,
# or if it is to be managed externally, e.g. using Azure Policy.
# https://github.com/Azure/terraform-azurerm-avm-res-keyvault-vault/issues/32
# Alternatively you can use AzAPI, which does not have this issue.
variable "private_endpoints_manage_dns_zone_group" {
  type        = bool
  default     = true
  nullable    = false
  description = "Whether to manage private DNS zone groups with this module. If set to false, you must manage private DNS zone groups externally, e.g. using Azure Policy."
}
variable "private_endpoints" {
  type = map(object({
    name = optional(string, null)
    role_assignments = optional(map(object({
      role_definition_id_or_name             = string
      principal_id                           = string
      description                            = optional(string, null)
      skip_service_principal_aad_check       = optional(bool, false)
      condition                              = optional(string, null)
      condition_version                      = optional(string, null)
      delegated_managed_identity_resource_id = optional(string, null)
      principal_type                         = optional(string, null)
    })), {})
    lock = optional(object({
      kind = string
      name = optional(string, null)
    }), null)
    tags               = optional(map(string), null)
    subnet_resource_id = string
    subresource_name   = string # NOTE: `subresource_name` can be excluded if the resource does not support multiple sub resource types (e.g. storage account supports blob, queue, etc.)
    private_dns_zone_group_name              = optional(string, "default")
    private_dns_zone_resource_ids            = optional(set(string), [])
    application_security_group_associations = optional(map(string), {})
    private_service_connection_name          = optional(string, null)
    network_interface_name                   = optional(string, null)
    location                                 = optional(string, null)
    resource_group_name                      = optional(string, null)
    ip_configurations = optional(map(object({
      name               = string
      private_ip_address = string
    })), {})
  }))
  default  = {}
  nullable = false

  description = <<DESCRIPTION
A map of private endpoints to create on the Key Vault. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.

- `name` - (Optional) The name of the private endpoint. One will be generated if not set.
- `role_assignments` - (Optional) A map of role assignments to create on the private endpoint. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time. See `var.role_assignments` for more information.
  - `role_definition_id_or_name` - The ID or name of the role definition to assign to the principal.
  - `principal_id` - The ID of the principal to assign the role to.
  - `description` - (Optional) The description of the role assignment.
  - `skip_service_principal_aad_check` - (Optional) If set to true, skips the Azure Active Directory check for the service principal in the tenant. Defaults to false.
  - `condition` - (Optional) The condition which will be used to scope the role assignment.
  - `condition_version` - (Optional) The version of the condition syntax. Leave as `null` if you are not using a condition, if you are then valid values are '2.0'.
  - `delegated_managed_identity_resource_id` - (Optional) The delegated Azure Resource Id which contains a Managed Identity. Changing this forces a new resource to be created. This field is only used in cross-tenant scenario.
  - `principal_type` - (Optional) The type of the `principal_id`. Possible values are `User`, `Group` and `ServicePrincipal`. It is necessary to explicitly set this attribute when creating role assignments if the principal creating the assignment is constrained by ABAC rules that filters on the PrincipalType attribute.
- `lock` - (Optional) The lock level to apply to the private endpoint. Default is `None`. Possible values are `None`, `CanNotDelete`, and `ReadOnly`.
  - `kind` - (Required) The type of lock. Possible values are `\"CanNotDelete\"` and `\"ReadOnly\"`.
  - `name` - (Optional) The name of the lock. If not specified, a name will be generated based on the `kind` value. Changing this forces the creation of a new resource.
- `tags` - (Optional) A mapping of tags to assign to the private endpoint.
- `subnet_resource_id` - The resource ID of the subnet to deploy the private endpoint in.
- `subresource_name` - The name of the sub resource for the private endpoint.
- `private_dns_zone_group_name` - (Optional) The name of the private DNS zone group. One will be generated if not set.
- `private_dns_zone_resource_ids` - (Optional) A set of resource IDs of private DNS zones to associate with the private endpoint. If not set, no zone groups will be created and the private endpoint will not be associated with any private DNS zones. DNS records must be managed external to this module.
- `application_security_group_associations` - (Optional) A map of resource IDs of application security groups to associate with the private endpoint. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.
- `private_service_connection_name` - (Optional) The name of the private service connection. One will be generated if not set.
- `network_interface_name` - (Optional) The name of the network interface. One will be generated if not set.
- `location` - (Optional) The Azure location where the resources will be deployed. Defaults to the location of the resource group.
- `resource_group_name` - (Optional) The resource group where the resources will be deployed. Defaults to the resource group of the Key Vault.
- `ip_configurations` - (Optional) A map of IP configurations to create on the private endpoint. If not specified the platform will create one. The map key is deliberately arbitrary to avoid issues where map keys may be unknown at plan time.
  - `name` - The name of the IP configuration.
  - `private_ip_address` - The private IP address of the IP configuration.
DESCRIPTION
}

# The PE resource when we are managing the private_dns_zone_group block:
resource "azurerm_private_endpoint" "this" {
  for_each = { for k, v in var.private_endpoints : k => v if var.private_endpoints_manage_dns_zone_group }

  name                          = each.value.name != null ? each.value.name : "pep-${var.name}"
  location                      = each.value.location != null ? each.value.location : var.location
  resource_group_name           = each.value.resource_group_name != null ? each.value.resource_group_name : var.resource_group_name
  subnet_id                     = each.value.subnet_resource_id
  custom_network_interface_name = each.value.network_interface_name
  tags                          = each.value.tags

  private_service_connection {
    name                           = each.value.private_service_connection_name != null ? each.value.private_service_connection_name : "pse-${var.name}"
    private_connection_resource_id = azurerm_key_vault.this.id
    is_manual_connection           = false
    subresource_names              = ["MYSERVICE"] # map to each.value.subresource_name if there are multiple services.
  }

  dynamic "private_dns_zone_group" {
    for_each = length(each.value.private_dns_zone_resource_ids) > 0 ? ["this"] : []
    content {
      name                 = each.value.private_dns_zone_group_name
      private_dns_zone_ids = each.value.private_dns_zone_resource_ids
    }
  }

  dynamic "ip_configuration" {
    for_each = each.value.ip_configurations
    content {
      name               = ip_configuration.value.name
      subresource_name   = "MYSERVICE" # map to each.value.subresource_name if there are multiple services.
      member_name        = "MYSERVICE" # map to each.value.subresource_name if there are multiple services.
      private_ip_address = ip_configuration.value.private_ip_address
    }
  }
}

# The PE resource when we are **not** managing the private_dns_zone_group block:
resource "azurerm_private_endpoint" "this_unmanaged_dns_zone_groups" {
  for_each = { for k, v in var.private_endpoints : k => v if !var.private_endpoints_manage_dns_zone_group }

  # ... repeat the configuration above, **omitting the private_dns_zone_group block**,
  # then add the following lifecycle block to ignore changes to the private_dns_zone_group block.
  lifecycle {
    ignore_changes = [private_dns_zone_group]
  }
}

# Private endpoint application security group associations.
# We merge the nested maps from private endpoints and application security group associations into a single map.
locals {
  private_endpoint_application_security_group_associations = { for assoc in flatten([
    for pe_k, pe_v in var.private_endpoints : [
      for asg_k, asg_v in pe_v.application_security_group_associations : {
        asg_key         = asg_k
        pe_key          = pe_k
        asg_resource_id = asg_v
      }
    ]
  ]) : "${assoc.pe_key}-${assoc.asg_key}" => assoc }
}

resource "azurerm_private_endpoint_application_security_group_association" "this" {
  for_each                      = local.private_endpoint_application_security_group_associations
  private_endpoint_id           = azurerm_private_endpoint.this[each.value.pe_key].id
  application_security_group_id = each.value.asg_resource_id
}

# You need an additional resource when not managing private_dns_zone_group with this module.
# In your output you need to select the correct resource based on the value of var.private_endpoints_manage_dns_zone_group:
output "private_endpoints" {
  value       = var.private_endpoints_manage_dns_zone_group ? azurerm_private_endpoint.this : azurerm_private_endpoint.this_unmanaged_dns_zone_groups
  description = <<DESCRIPTION
A map of the private endpoints created.
DESCRIPTION
}
```
The properties defined in the schema above are the minimum amount of properties expected to be exposed for Private Endpoints in AVM Resource Modules.
A module owner MAY choose to expose additional properties of the Private Endpoint resource.
However, module owners considering this SHOULD contact the AVM core team first to consult on how the property should be exposed to avoid future breaking changes to the schema that may be enforced upon them.
Module owners MAY choose to define a list of allowed values for the ‘service’ (a.k.a. groupIds) property.
However, they should do so with caution: should a new service appear for their resource module, a new release will need to be cut to add this new service to the allowed values.
Not specifying allowed values, by contrast, allows flexibility from day 0 without the need for any changes or releases to be made.
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST assign GitHub repository permissions only to GitHub teams.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter; hence, they won’t gain any extra permissions on AVM repositories and therefore need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Bicep modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| ---------------- | ----------- | ----------- | --------------------------- | -------------- |
| `<hyphenated module name>-module-owners-bicep` | AVM Bicep Module Owners - `<module name>` | Write | Assignment to the `avm-technical-reviewers-bicep` parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners- team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
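Putting the segments above together, such an entry would look along these lines:

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```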
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| ---------------- | ----------- | ----------- | --------------------------- | -------------- |
| `<module name>-module-owners-tf` | AVM Terraform Module Owners - `<module name>` | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, re-test, and then, if not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM Module owners from having to maintain multiple major release versions.
```shell
# Linux / MacOs
# For Windows replace $PWD with the local path of your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
  #Invoke-WebRequest -Uri "https://azure.github.io/Azure-Verified-Modules/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
  $gh_version = "2.44.1"
  Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
  apt-get update && apt-get install -y git
  tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
  ls -lsa
  mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
  rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
  gh --version
  ls -lsa
  gh auth login
  $OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
  gh auth status
  ./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
```powershell
#Requires -PSEdition Core

<#
.SYNOPSIS
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.
.DESCRIPTION
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.
By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.
The AVM labels to be created are documented here: TBC
.NOTES
Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.
.COMPONENT
You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.
.LINK
TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows.
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo, output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo, do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo, do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
  throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green

# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubCliAuthenticated -ForegroundColor Red
  throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green

# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
  throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}

# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
  Write-Host $GitHubRepository -ForegroundColor Red
  throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory, or where -OutputDirectory specifies, if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
  Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
  $GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color

  if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
    $csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
    Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
    $GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
  }
}

# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
  $GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
  if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
    $RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
    if ($RemoveExistingLabelsConfirmation -eq "Y") {
      Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
      $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
        Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
        gh label delete -R $RepositoryName $_.name --yes
      }
    }
  }
  if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
    Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
    $GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
      Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
      gh label delete -R $RepositoryName $_.name --yes
    }
  }
}
if ($null -eq $GitHubRepositoryLabels) {
  Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains a CSV content Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match"^https?://"if ($false -eq $LabelsToApplyCsvUriValid) {
throw"The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI." }
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green
# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv
# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green
# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consitent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels check that only the avm labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw"The label $($_.name) exists in $RepositoryName but is not in the CSV file." }
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
Module owners MUST set a branch protection policy on their GitHub Repositories for AVM modules against their default branch, typically main, to do the following:
Require a Pull Request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevent force pushes
Do not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
The above settings MUST also be enforced for administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Telemetry
The content below is listed based on the following tags
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry, as detailed further in the Telemetry section.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Bicep
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module's, possibly shortened, name including the resource provider and the resource type, without:
The prefixes: avm-res-
The prefixes: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
The telemetry enablement MUST be on/enabled by default; however, a module consumer MUST be able to disable it by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
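For Terraform modules, a minimal sketch of this wiring might look like the following; the child module source and version are placeholders, not real references:

```terraform
variable "enable_telemetry" {
  type        = bool
  default     = true # telemetry is on by default; consumers can opt out
  description = "Controls whether or not telemetry is enabled for the module and any referenced AVM modules."
}

module "example_dependency" {
  source  = "Azure/avm-res-example-example/azurerm" # placeholder module reference
  version = "0.1.0"                                 # placeholder version

  # The telemetry setting is passed through to every referenced AVM module.
  enable_telemetry = var.enable_telemetry
}
```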
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview.
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed, unless they are supported by the respective PG, and it's documented publicly.
However, they MAY be exposed at the module owner's discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
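As an illustration only (the variable name and feature are hypothetical), a Terraform variable for a preview feature could carry the required notice like this:

```terraform
variable "preview_feature_setting" {
  type        = string
  default     = null
  description = "THIS IS A VARIABLE USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION. Optional setting for the preview capability of this resource."
}
```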
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
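One possible Terraform shape for these options is sketched below; the variable names and types are illustrative, not the prescribed AVM interface:

```terraform
# Zone-redundant resources: span all zones by default.
variable "zones" {
  type        = list(string)
  default     = ["1", "2", "3"]
  description = "Availability zones the resource should span. Defaults to all three zones."
}

# Zonal resources: the consumer must pick a zone deliberately, so no default is set.
variable "zone" {
  type        = string
  description = "The availability zone to pin the resource to. No default on purpose."
}
```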
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to, as per the consumer's requirements.
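A sketch of the Storage Account case in Terraform, assuming the azurerm provider and illustrative variable names for the other required inputs:

```terraform
variable "account_replication_type" {
  type        = string
  default     = "RAGZRS" # highest data redundancy by default, overridable by the consumer
  description = "The replication type of the Storage Account."
}

resource "azurerm_storage_account" "this" {
  name                     = var.name                # primary resource name, provided by the consumer
  resource_group_name      = var.resource_group_name # illustrative input
  location                 = var.location            # illustrative input
  account_tier             = "Standard"
  account_replication_type = var.account_replication_type
}
```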
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
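In Terraform, one way to sketch this is to derive the default from the primary resource name (var.name below) while still allowing an override; the variable and local names are illustrative:

```terraform
variable "private_endpoint_name" {
  type        = string
  default     = null
  description = "Override for the Private Endpoint name. Defaults to pep-<primary-resource-name>."
}

locals {
  # `pep-` is the CAF abbreviation for Private Endpoints; var.name is the (mandatory) primary resource name.
  private_endpoint_name = coalesce(var.private_endpoint_name, "pep-${var.name}")
}
```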
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
Example: avm/ptn/compute/app-tier-vmss or avm/ptn/avd-lza/management-plane or avm/ptn/3-tier/web-app
Segments:
ptn defines this as a pattern module
<hyphenated grouping/category name> is a hierarchical grouping of pattern modules by category, with each word separated by dashes, such as:
project name, e.g., avd-lza,
primary resource provider, e.g., compute or network, or
architecture, e.g., 3-tier
<hyphenated pattern module name> is a term describing the module's function, with each word separated by dashes, e.g., app-tier-vmss = Application Tier VMSS; management-plane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
Terraform Pattern Module Naming
Naming convention:
avm-ptn-<pattern module name> (Module name for registry)
terraform-<provider>-avm-ptn-<pattern module name> (GitHub repository name to meet registry naming requirements)
Example: avm-ptn-apptiervmss or avm-ptn-avd-lza-managementplane
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
ptn defines this as a pattern module
<pattern module name> is a term describing the module's function, e.g., apptiervmss = Application Tier VMSS; avd-lza-managementplane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
ID: PMNFR2 - Category: Composition - Use Resource Modules to Build a Pattern Module
A Pattern Module SHOULD be built from AVM Resources Modules to establish a standardized code base and improve maintainability. If a valid reason exists, a pattern module MAY contain native resources (“vanilla” code) where it’s necessary. A Pattern Module MUST NOT contain references to non-AVM modules.
Valid reasons for not using a Resource Module for a resource required by a Pattern Module include but are not limited to:
When using a Resource Module would result in hitting scaling limitations and/or would reduce the capabilities of the Pattern Module due to the limitations of Azure Resource Manager.
Developing a Pattern Module under time constraint, without having all required Resource Modules readily available.
Note
In the latter case, the Pattern Module SHOULD be updated to use the Resource Module when the required Resource Module becomes available, to avoid accumulating technical debt. Ideally, all required Resource Modules SHOULD be developed first, and then leveraged by the Pattern Module.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g.,
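For illustration, such a pinned registry reference might look like the following; the module name and version shown here are placeholders, not a prescribed dependency:

```terraform
module "virtual_network" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm" # registry reference, never a local path
  version = "0.1.0"                                        # pinned to a specific published version

  # ... module inputs ...
}
```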
Module owners MUST use map(xxx) or set(xxx) as a resource's for_each collection; the map's keys or the set's elements MUST be static literals.
Good example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
There are 3 types of assignment statements in a resource or data block: argument, meta-argument and nested block. An argument assignment statement is a parameter name followed by =.
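For illustration, a minimal sketch showing all three kinds of statements (the resource type and values are placeholders):

```terraform
resource "azurerm_resource_group" "example" {
  count = 1 # meta-argument assignment

  name     = "rg-example" # argument assignment: a parameter name followed by =
  location = "westeurope" # argument assignment

  timeouts { # nested block: a block name followed by braces, no = sign
    create = "30m"
  }
}
```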
Sometimes we need to ensure that the resources created are compliant with some rules at a minimum extent, for example a subnet has to be connected to at least one network_security_group. The user SHOULD either pass in a security_group_id and ask us to connect it to an existing security group, or ask us to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and uses its id as a variable of the module, the expression which determines the value of count will contain an attribute from another resource, and the value of this attribute is "known after apply" at plan stage. Terraform core will not be able to produce an exact plan of deployment during the "plan" stage.
For this kind of parameters, wrapping with object type is RECOMMENDED:
variable"security_group" {
type:object({
id = string })
default = null}
The advantage of doing so is encapsulating the value which is "known after apply" in an object, and the object itself can easily be checked for null. Since the id of a resource cannot be null, this approach avoids the situation we faced in the first example, like the following:
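A minimal sketch of this pattern, assuming the subnet/NSG scenario above (the subnet resource name is illustrative):

```terraform
resource "azurerm_subnet_network_security_group_association" "this" {
  # `var.security_group` is the object-wrapped variable shown above. Checking the
  # object for null keeps `count` known at plan time, even though
  # `var.security_group.id` itself may be "known after apply".
  count = var.security_group == null ? 0 : 1

  subnet_id                 = azurerm_subnet.this.id # illustrative subnet resource
  network_security_group_id = var.security_group.id
}
```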
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled. Avoid double negatives like !xxx_disabled.
Please use xxx_enabled instead of xxx_disabled as the name of a variable.
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of the description is the module's users.
For a newly created variable (e.g., a variable for switching a dynamic block on and off), its description SHOULD precisely describe the input parameter's purpose and the expected data type. The description SHOULD NOT contain any information for module developers; this kind of information can only exist in code comments.
For an object type variable, the description can be composed in HEREDOC format:
variable"kubernetes_cluster_key_management_service" {
type:object({
key_vault_key_id = stringkey_vault_network_access = optional(string)
})
default = nulldescription = <<-EOT - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT}
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable's type is object and it contains one or more fields that would be assigned to a sensitive argument, then this whole variable SHOULD be declared as sensitive = true; otherwise, you SHOULD extract the sensitive field into a separate variable block with sensitive = true.
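A minimal sketch of the extracted-variable approach (the variable name is illustrative):

```terraform
variable "administrator_password" {
  type        = string
  default     = null
  sensitive   = true # this value would be assigned to a sensitive argument
  description = "The local administrator password for the virtual machine."
}
```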
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
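A sketch of the collection case (the variable name and object shape are illustrative):

```terraform
variable "subnets" {
  type = map(object({
    address_prefixes = list(string)
  }))
  default  = {}
  nullable = false # a caller passing `null` would otherwise break `for_each` loops over this value
}
```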
Sometimes we find that the name of a variable is no longer suitable, or a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move this variable to an independent deprecated_variables.tf file, then redefine the new parameter in variable.tf and make sure it's compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of its description, and the replacement's name SHOULD be declared at the same time. E.g.,
variable"enable_network_security_group" {
type = stringdefault = nulldescription = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."}
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
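A minimal sketch of a terraform.tf that follows these constraints; the version numbers and providers are illustrative assumptions, not prescribed values:

```terraform
terraform {
  # Constrain both the minimum version and the maximum major version of the Terraform CLI.
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    # Providers sorted alphabetically; source in namespace/name format,
    # version constrained to a minimum and a maximum major version.
    azapi = {
      source  = "Azure/azapi"
      version = "~> 1.13"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.71"
    }
  }
}
```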
By rule, a provider MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaring fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configurations of the provider instance SHOULD be passed in by the module users.
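A sketch of how configuration_aliases could be declared and consumed; the alias, version and resource names are illustrative:

```terraform
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 3.71"
      configuration_aliases = [azurerm.secondary] # second instance supplied by the caller
    }
  }
}

# The module only references the aliased instance; it never configures it.
resource "azurerm_resource_group" "secondary_region" {
  provider = azurerm.secondary
  name     = "rg-secondary-example"
  location = "northeurope"
}
```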
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent deprecated_outputs.tf file, then a new output MUST be defined in output.tf and made compatible everywhere else in the module.
A cleanup of deprecated_outputs.tf and other compatibility-related logic SHOULD be performed during a major version upgrade.
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file we can declare multiple locals blocks, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some lightweight tests aimed at the expressions.
From Terraform AzureRM 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests because the test subscription has some policies applied that add extra resources during the run, which can cause failures when destroying resource groups.
Since we cannot guarantee that Azure Policy remediation tasks won't be applied to our testing environment in the future, prevent_deletion_if_contains_resources SHOULD be explicitly set to false for a robust testing environment.
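A minimal sketch of the provider configuration for test fixtures, assuming the AzureRM 3.x features syntax:

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow `terraform destroy` to remove resource groups even if Azure Policy
      # remediation has added unmanaged resources to them during the test run.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```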
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when they're trying to add a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy-paste the parts they need with essential refactoring.
Inputs / Outputs
The content below is listed based on the following tags
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
For example, for the workspaceId property of the Diagnostic Settings resource, the Bicep parameter name should be workspaceResourceId and the Terraform variable name should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
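A sketch of such a variable in Terraform (the description text and default are illustrative):

```terraform
variable "workspace_resource_id" {
  type        = string
  default     = null
  description = "The full Azure resource ID of the Log Analytics Workspace to send diagnostic data to, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}."
}
```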
Authors SHOULD NOT output entire resource objects as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since Terraform 0.13 introduced count, for_each and depends_on for modules, module development has been significantly simplified. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module's operation. Boolean feature toggles are acceptable, however.
Testing
The content below is listed based on the following tags
Modules MUST implement end-to-end (deployment) testing that creates actual resources to validate that module deployments work. In Bicep, tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete successfully without user input, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests:
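On the Terraform side, a minimal sketch of such a native test dependency might look like the following (names, location and SKU are illustrative):

```terraform
resource "azurerm_resource_group" "test" {
  name     = "rg-avm-e2e-test"
  location = "westeurope"
}

resource "azurerm_log_analytics_workspace" "test" {
  name                = "law-avm-e2e-test"
  resource_group_name = azurerm_resource_group.test.name
  location            = azurerm_resource_group.test.location
  sku                 = "PerGB2018"
  retention_in_days   = 30
}
```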
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit tests validate specific module functionality without deploying resources. They are used on more complex modules. In Bicep and Terraform these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rule standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST test that child and extension resources, and those Bicep or Terraform interface resources supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
README documentation MUST be automatically/programmatically generated. It MUST include the sections as defined in the language specific requirements BCPNFR2 and TFNFR2.
Where descriptions for variables and outputs span multiple lines, the description MAY provide variable input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable"my_complex_input" {
type = map(object({
param1 = stringparam2 = optional(number, null)
}))
description = <<DESCRIPTION A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION
}
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example, all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM Core Team is confident the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy that the module has been "road tested" and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently as it's still not an official Major/Stable release.
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language's team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
Terraform Resource Module Specifications
Contribution / Support
The content below is listed based on the following tags
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST only assign GitHub repository permissions to GitHub teams.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts that aren’t Microsoft employees) can’t be members of the teams described in this chapter, hence, they won’t gain any extra permissions on AVM repositories, therefore, they need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Bicep modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <hyphenated module name>-module-owners-bicep | AVM Bicep Module Owners - <module name> | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners- team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
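Putting the three segments together, the entry for this module would look like the following line (reconstructed from the segments above):

```
/avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
```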
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <module name>-module-owners-tf | AVM Terraform Module Owners - <module name> | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step by the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module, test, and then, if not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM Module owners having to maintain multiple major release versions.
```shell
# Linux / MacOs
# For Windows replace $PWD with your local path or your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
#Invoke-WebRequest -Uri "https://azure.github.io/Azure-Verified-Modules/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
<#
.SYNOPSIS
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.
.DESCRIPTION
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels in a GitHub repository.
By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.
The AVM labels to be created are documented here: TBC
.NOTES
Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.
.COMPONENT
You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.
.LINK
TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green
# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubCliAuthenticated -ForegroundColor Red
throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green
# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}
# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubRepository -ForegroundColor Red
throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
$GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
$RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
if ($RemoveExistingLabelsConfirmation -eq "Y") {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($null -eq $GitHubRepositoryLabels) {
Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains a CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green
# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv
# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green
# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consitent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels check that only the avm labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw"The label $($_.name) exists in $RepositoryName but is not in the CSV file." }
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
Module owners MUST set a branch protection policy on their GitHub Repositories for AVM modules against their default branch, typically main, to do the following:
Requires a Pull Request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevents force pushes
Not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
Above settings MUST also be enforced to administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Telemetry
The content below is listed based on the following tags
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry as detailed in Telemetry further.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoftβs privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Bicep
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module’s, possibly shortened, name including the resource provider and the resource type, without:
The prefix: avm-res-
The prefix: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
The telemetry enablement MUST be on/enabled by default; however, a module consumer MUST be able to disable it by setting the below parameter/variable value to false (see the sketch after the note below):
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
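For Terraform, a minimal sketch of such a toggle and its pass-through might look as follows. This is illustrative only; the referenced module source and version are placeholders, not a real module reference.
variable "enable_telemetry" {
  type        = bool
  default     = true
  nullable    = false
  description = "Optional. Controls whether telemetry is enabled for this module and any referenced AVM modules."
}

# Pass the value through to every referenced AVM module that implements the toggle.
module "example_avm_dependency" {
  source           = "Azure/avm-res-example-example/azurerm" # hypothetical module reference
  version          = "0.1.0"                                 # hypothetical version pin
  enable_telemetry = var.enable_telemetry
}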
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview.
Otherwise the latest “non-preview” version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed unless they are supported by the respective PG and this is documented publicly.
However, they MAY be exposed at the module owner’s discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS, whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer’s requirements.
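As a hedged Terraform sketch of the Storage Account case (variable names such as name, resource_group_name and location are assumed module inputs, and only redundancy-related arguments are shown):
variable "account_replication_type" {
  type        = string
  default     = "RAGZRS" # highest available data redundancy by default; consumers can override
  description = "Optional. The replication type of the storage account."
}

resource "azurerm_storage_account" "this" {
  name                     = var.name
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = var.account_replication_type
}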
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource and therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
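A minimal Terraform sketch of such an overridable default name, assuming the module exposes a name input for the primary resource and an optional private_endpoint_name override (names are illustrative):
variable "private_endpoint_name" {
  type        = string
  default     = null
  description = "Optional. The name of the Private Endpoint. Defaults to pep-<primary-resource-name>."
}

locals {
  # CAF abbreviation prefix by default, overridable by the consumer.
  private_endpoint_name = coalesce(var.private_endpoint_name, "pep-${var.name}")
}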
Resource modules support the following optional features/extension resources, as specified, if supported by the primary resource. The top-level variable/parameter names MUST be:
| Optional Features/Extension Resources | Bicep Parameter Name | Terraform Variable Name | MUST/SHOULD |
| --- | --- | --- | --- |
| Diagnostic Settings | diagnosticSettings | diagnostic_settings | MUST |
| Role Assignments | roleAssignments | role_assignments | MUST |
| Resource Locks | lock | lock | MUST |
| Tags | tags | tags | MUST |
| Managed Identities (System / User Assigned) | managedIdentities | managed_identities | MUST |
| Private Endpoints | privateEndpoints | private_endpoints | MUST |
| Customer Managed Keys | customerManagedKey | customer_managed_key | MUST |
| Azure Monitor Alerts | alerts | alerts | SHOULD |
Resource modules MUST NOT deploy required/dependent resources for the optional features/extension resources specified above. For example, for Diagnostic Settings the resource module MUST NOT deploy the Log Analytics Workspace; from the perspective of the resource module, it is expected to already exist, having been deployed via another method/module etc.
Note
Please note that the implementation of Customer Managed Keys from an ARM API perspective is different across various RPs that implement Customer Managed Keys in their service. For that reason you may see differences between modules on how Customer Managed Keys are handled and implemented, but functionality will be as expected.
Module owners MAY choose to utilize cross-repo dependencies for these “add-on” resources, or MAY choose to implement the code directly in their own repo/module. As long as the implementation and outputs are as per the specification’s requirements, this is acceptable.
Tip
Make sure to check out the language specific specifications for more info on this:
Resource modules MUST implement a common interface, e.g. the inputs’ data structures and the properties within them (objects/arrays/dictionaries/maps), for the optional features/extension resources:
When a given version of an Azure resource used in a resource module reaches its end-of-life (EOL) and is no longer supported by Microsoft, the module owner SHOULD ensure that:
The module is aligned with these changes and only includes supported versions of the resource. This is typically achieved through the allowed values in the parameter that specifies the resource SKU or type.
The following notice is shown under the Notes section of the module’s readme.md. (If any related public announcement is available, it can also be linked to from the Notes section.):
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from the related parameters.”
AND the related parameter’s description:
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from this parameter.”
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the correct singular names for all resource types to enable checks to utilize this list to ensure repos are named correctly. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
This will be updated quarterly, or ad-hoc as new RPs/Resources are created and highlighted via a check failure.
Resource modules MUST follow the below naming conventions (all lower case):
Bicep Resource Module Naming
Naming convention: avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type> (module name for registry)
Example: avm/res/compute/virtual-machine or avm/res/managed-identity/user-assigned-identity
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider's name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute = compute, Microsoft.ManagedIdentity = managed-identity.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute/virtualMachines = virtual-machine, BUT Microsoft.Network/trafficmanagerprofiles = trafficmanagerprofile - since trafficmanagerprofiles is all lower case as per the ARM API definition.
Terraform Resource Module Naming
Naming convention:
avm-res-<resource provider>-<ARM resource type> (module name for registry)
terraform-<provider>-avm-res-<resource provider>-<ARM resource type> (GitHub repository name to meet registry naming requirements)
Example: avm-res-compute-virtualmachine or avm-res-managedidentity-userassignedidentity
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
res defines this is a resource module
<resource provider> is the resource provider's name after the Microsoft part, e.g., Microsoft.Compute = compute.
<ARM resource type> is the singular version of the word after the resource provider, e.g., Microsoft.Compute/virtualMachines = virtualmachine
ID: RMNFR3 - Category: Composition - RP Collaboration
Module owners (Microsoft FTEs) SHOULD reach out to the respective Resource Provider teams to build a partnership and collaboration on the modules creation, existence and long term maintenance.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version e.g.,
Module owners MUST use map(xxx) or set(xxx) as a resource's for_each collection; the map's keys or set's elements MUST be static literals.
Good example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
There are 3 types of assignment statements in a resource or data block: argument, meta-argument and nested block. The argument assignment statement is a parameter followed by =:
Sometimes we need to ensure that the resources created are compliant with some rules to at least a minimum extent, for example a subnet has to be connected to at least one network_security_group. The user SHOULD either pass in a security_group_id to connect to an existing security group, or ask the module to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and uses its id as a variable of the module, the expression which determines the value of count will contain an attribute from another resource, and the value of that attribute is “known after apply” at plan stage. Terraform core will then not be able to produce an exact plan of deployment during the “plan” stage.
For this kind of parameters, wrapping with object type is RECOMMENDED:
variable"security_group" {
type:object({
id = string })
default = null}
The advantage of doing so is that the value which is “known after apply” is encapsulated in an object, and whether the object itself is null can be determined at plan time. Since the id of a resource cannot be null, this approach avoids the situation we face in the first example.
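For illustration, a minimal sketch of how such a wrapped variable might be consumed (the resource type and the surrounding variable names are illustrative, not part of the original example):
resource "azurerm_network_security_group" "this" {
  count               = var.security_group == null ? 1 : 0
  name                = "${var.name}-nsg"
  resource_group_name = var.resource_group_name
  location            = var.location
}

locals {
  # The null check works at plan time even when `id` itself is only known after apply.
  network_security_group_id = var.security_group == null ? azurerm_network_security_group.this[0].id : var.security_group.id
}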
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled. Avoid double negatives like !xxx_disabled.
Please use xxx_enabled instead of xxx_disabled as the name of a variable.
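For example, a sketch of a positively named feature switch (the variable name and description are illustrative):
variable "network_security_group_enabled" {
  type        = bool
  default     = true
  nullable    = false
  description = "Optional. Whether to create and associate a network security group."
}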
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of the description is the module's users.
For a newly created variable (e.g., a variable for switching a dynamic block on or off), its description SHOULD precisely describe the input parameter's purpose and the expected data type. The description SHOULD NOT contain any information intended for module developers; that kind of information can only exist in code comments.
For object type variables, the description can be composed in HEREDOC format:
variable"kubernetes_cluster_key_management_service" {
type:object({
key_vault_key_id = stringkey_vault_network_access = optional(string)
})
default = nulldescription = <<-EOT - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT}
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable's type is object and it contains one or more fields that would be assigned to a sensitive argument, then the whole variable SHOULD be declared as sensitive = true; otherwise, you SHOULD extract the sensitive fields into separate variable blocks with sensitive = true.
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However, for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
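A minimal sketch of a collection variable declared this way (the shape of the object is illustrative):
variable "subnets" {
  type = map(object({
    address_prefixes = list(string)
  }))
  default     = {}
  nullable    = false # a caller passing `null` gets the default instead, keeping for_each loops safe
  description = "Optional. A map of subnets to create, keyed by a static name."
}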
Sometimes we find that the name of a variable is no longer suitable, or a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the variable to an independent deprecated_variables.tf file, then redefine the new parameter in variables.tf and make sure it's compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of its description, and the replacement's name SHOULD be declared at the same time. E.g.,
variable"enable_network_security_group" {
type = stringdefault = nulldescription = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."}
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
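A sketch of a compliant terraform block, assuming the module targets Terraform 1.x (the exact versions are illustrative):
terraform {
  # Pins a minimum and a maximum major version of the Terraform CLI.
  required_version = "~> 1.6"
  # Equivalent alternative format:
  # required_version = ">= 1.6.0, < 2.0.0"
}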
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking change, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
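A sketch of a required_providers block following these rules; the providers listed and their version constraints are illustrative:
terraform {
  required_version = "~> 1.6"

  required_providers {
    # Sorted alphabetically; each entry pins a minimum version and a maximum major version.
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.71"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5.0, < 4.0.0"
    }
  }
}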
As a rule, a provider MUST NOT be declared in module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaration of fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configurations of the provider instance SHOULD be passed in by the module users.
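A sketch of the exception case, assuming the module needs a second azurerm provider instance for a secondary location (alias, resource and variable names are illustrative; the actual provider configuration is supplied by the module user):
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 3.71"
      configuration_aliases = [azurerm.secondary]
    }
  }
}

# The alias is only used to select which provider instance a resource uses;
# the provider configuration itself is passed in by the module user.
resource "azurerm_resource_group" "secondary" {
  provider = azurerm.secondary
  name     = "${var.name}-secondary"
  location = var.secondary_location
}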
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent deprecated_outputs.tf file, then a new output MUST be redefined in output.tf and made compatible everywhere else in the module.
A cleanup of deprecated_outputs.tf and other compatibility-related logic SHOULD be performed during a major version upgrade.
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file, multiple locals blocks MAY be declared, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some light-weight tests aimed at the expressions.
From Terraform AzureRM 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests because the test subscription has some policies applied that add extra resources during the run, which can cause failures when destroying resource groups.
Since we cannot guarantee that our testing environment won't have Azure Policy remediation tasks applied in the future, prevent_deletion_if_contains_resources SHOULD be explicitly set to false for a robust testing environment.
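A sketch of how a test/example root module might configure the provider for this (this belongs in the test or example, not in the module itself, which must not declare provider configurations):
provider "azurerm" {
  features {
    resource_group {
      # Allow test resource groups to be destroyed even if policy remediation added extra resources.
      prevent_deletion_if_contains_resources = false
    }
  }
}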
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when they're trying to add a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy-paste the parts they need with essential refactoring.
Inputs / Outputs
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
For example, for the workspaceId property of the Diagnostic Settings resource: in Bicep its parameter name should be workspaceResourceId and in Terraform the variable name should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
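A minimal Terraform sketch of such a variable (the description text and the optional default are illustrative):
variable "workspace_resource_id" {
  type        = string
  default     = null
  description = "Optional. The full resource ID of the Log Analytics Workspace to send diagnostic logs to."
}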
Parameters/variables that pertain to the primary resource MUST NOT use the resource type in the name.
e.g., use sku, vs. virtualMachineSku/virtualmachine_sku
Where an RP contains some of its name within a property, leave the property unchanged. E.g. Key Vault has a property called keySize; it is fine to leave it as is and not remove the key part from the property/parameter name.
A resource module MUST use the following standard inputs:
name (no default)
location (if supported by the resource and not a global resource, then use Resource Group location, if resource supports Resource Groups, otherwise no default)
Authors SHOULD NOT output entire resource objects as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again, only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since Terraform 0.13 introduced count, for_each and depends_on for modules, module development is significantly simplified. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module's operation. Boolean feature toggles are acceptable, however.
Testing
Modules MUST implement end-to-end (deployment) testing that create actual resources to validate that module deployments work. In Bicep tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete successfully without user input, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests
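For instance, a hedged Terraform sketch of such a native Log Analytics Workspace test dependency (all names, locations and values are illustrative, not taken from the official examples):
resource "azurerm_resource_group" "this" {
  name     = "rg-avm-test-deps"
  location = "eastus"
}

resource "azurerm_log_analytics_workspace" "this" {
  name                = "law-avm-test"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  sku                 = "PerGB2018"
  retention_in_days   = 30
}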
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit tests validate specific module functionality without deploying resources and are used on more complex modules. In Bicep and Terraform these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rule standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
Module owners MUST test that child and extension resources, and those Bicep or Terraform interface resources that are supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
README documentation MUST be automatically/programmatically generated and MUST include the sections defined in the language-specific requirements BCPNFR2 and TFNFR2.
Where descriptions for variables and outputs span multiple lines, the description MAY provide input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable"my_complex_input" {
type = map(object({
param1 = stringparam2 = optional(number, null)
}))
description = <<DESCRIPTION A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION
}
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example, all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM core team is confident the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy that the module has been “road tested” and is now stable enough for its first Major release, version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently, as it's still not an official Major/Stable release.
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language's team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
Module owners MAY cross-reference other modules to build either Resource or Pattern modules.
However, they MUST be referenced only by a public registry reference to a pinned version, e.g. br/public:avm/[res|ptn|utl]/<publishedModuleName>:<version>. They MUST NOT use local parent path references to a module, e.g. ../../xxx/yyy.bicep.
The only exception to this rule are child modules as documented in BCPFR6.
Modules MUST NOT contain references to non-AVM modules.
ID: BCPFR2 - Category: Composition - Role Assignments Role Definition Mapping
Module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to define an RBAC Role Definition by its name rather than its ID; this should be self-contained within the modules themselves.
However, they MUST use only the official RBAC Role Definition name within the variable and nothing else.
To meet the requirements of BCPFR2, BCPNFR5 and BCPNFR6 you MUST use the below code sample in your AVM Modules to achieve this.
@description('''Required. You can provide either the display name (note not all roles are supported, check module documentation) of the role definition, or its fully qualified ID in the following format: `/providers/Microsoft.Authorization/roleDefinitions/c2f4ef07-c644-48eb-af81-4b1b4947fb11`.''')
param roleDefinitionIdOrName string
var builtInRbacRoleNames = {
  Owner: '/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
  Contributor: '/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
  Reader: '/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7'
  'Role Based Access Control Administrator (Preview)': '/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168'
  'User Access Administrator': '/providers/Microsoft.Authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9'
  // Other RBAC Role Definition Names & IDs can be added here as needed for your module
}
var roleDefinitionIdMappedResult = (contains(builtInRbacRoleNames, roleDefinitionIdOrName) ? builtInRbacRoleNames[roleDefinitionIdOrName] : roleDefinitionIdOrName)
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  // Other properties removed for ease of reading
  properties: {
    roleDefinitionId: roleDefinitionIdMappedResult
    // Other properties removed for ease of reading
  }
}
To comply with specifications outlined in SFR3 & SFR4 you MUST incorporate the following code snippet into your modules. Place this code sample in the “top level” main.bicep file; it is not necessary to include it in any nested Bicep files (child modules).
ID: BCPFR5 - Category: Inputs - Availability Zones Implementation
To implement requirement SFR5, the following convention SHOULD apply:
Availability Zones
@description('Optional. The Availability Zones to place the resources in.')
@allowed([
1
2
3
])
param zones int[] = [
1
2
3
]
resource myResource (...) {
(...)
properties: {
(...)
zones: map(zones, zone => string(zone))
}
}
@description('Required. The Availability Zone to place the resource in. If set to 0, then Availability Zone is not set.')
@allowed([
0
1
2
3
])
param zone int
resource myResource (...) {
(...)
properties: {
(...)
    zones: zone != 0 ? [ string(zone) ] : null
  }
}
Parent templates MUST reference all their direct child-templates to allow for an end-to-end deployment experience. For example, the SQL server template must reference its child database module and encapsulate it in a loop to allow for the deployment of multiple databases.
@description('Optional. The databases to create in the server')
param databases databaseType[]?
resource server 'Microsoft.Sql/servers@(...)' = { (...) }
module server_databases 'database/main.bicep' = [for (database, index) in (databases ?? []): {
  name: '${uniqueString(deployment().name, location)}-Sql-DB-${index}'
  params: {
serverName: server.name
(...)
}
}]
ID: BCPNFR1 - Inputs - User-defined types - General
To simplify the consumption experience for module consumers when interacting with complex data type input parameters, mainly objects and arrays, the Bicep feature of User-Defined Types MUST be used and declared.
Tip
User-Defined Types are GA in Bicep as of version v0.21.1, please ensure you have this version installed as a minimum.
User-Defined Types allow intellisense support in supported IDEs (e.g. Visual Studio Code) for complex input parameters using arrays and objects.
CARML Migration Exemption
While the transition of CARML modules into AVM is complete, retrofitting User-Defined Types for all modules will take a considerable amount of time.
Therefore, the addition of User-Defined Types is currently NOT mandated/enforced. However, past their initial release, all modules MUST implement User-Defined Types prior to the release of their next version.
ID: BCPNFR10 - Category: Testing - Test Bicep File Naming
Module owners MUST name their test .bicep files in the /tests/e2e/<defaults/waf-aligned/max/etc.> directories: main.test.bicep as the test framework (CI) relies upon this name.
ID: BCPNFR13 - Category: Testing - Test file metadata
By default, the ReadMe-generating utility will create usage examples headers based on each e2e folder’s name. Module owners MAY provide a custom name & description by specifying the metadata blocks name & description in their main.test.bicep test files.
For example:
metadata name = 'Using Customer-Managed-Keys with System-Assigned identity'
metadata description = 'This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.'
would lead to a header in the module’s readme.md file along the lines of
### Example 1: _Using Customer-Managed-Keys with System-Assigned identity_
This instance deploys the module using Customer-Managed-Keys using a System-Assigned Identity. This required the service to be deployed twice, once as a pre-requisite to create the System-Assigned Identity, and once to use it for accessing the Customer-Managed-Key secret.
The version value is in the form of MAJOR.MINOR. The PATCH version will be incremented by the CI automatically when publishing the module to the Public Bicep Registry once the corresponding pull request is merged. Therefore, contributions that would only require an update of the patch version, can keep the version.json file intact.
For example, the version value should be:
0.1 for new modules, so that they can be released as v0.1.0.
1.0 once the module owner signs off that the module is stable enough for its first Major release of v1.0.0.
0.x for all feature updates between the first release v0.1.0 and the first Major release of v1.0.0.
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the AVM Module Issue template file in the BRM repository (here).
Note
Through this approach, the AVM core team will allow raising a bug or feature request for a module, only after the module gets merged to the BRM repository.
The module name entry MUST be added to the dropdown list with id module-name-dropdown as an option, in alphabetical order.
Important
Module owners MUST ensure that the module name is added in alphabetical order, to simplify selecting the right module name when raising an AVM module issue.
Example - AVM Module Issue template module name entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
- type: dropdown
  id: module-name-dropdown
  attributes:
    label: Module Name
    description: Which existing AVM module is this issue related to?
    options:
      ...
      - "avm/res/network/virtual-network"
      ...
For each test case in the e2e folder, you can optionally add post-deployment Pester tests that are executed once the corresponding deployment has completed and before the removal logic kicks in.
To leverage the feature you MUST:
Use Pester as a test framework in each test file
Name the file with the suffix "*.tests.ps1"
Place each test file in the e2e test’s folder or any subfolder (e.g., e2e/max/myTest.tests.ps1 or e2e/max/tests/myTest.tests.ps1)
Implement an input parameter TestInputData in the following way:
Through this parameter you can make use of every output the main.test.bicep file returns, as well as the path to the test template file in case you want to extract data from it directly.
For example, with an output such as output resourceId string = testDeployment[1].outputs.resourceId defined in the main.test.bicep file, the $TestInputData would look like:
$TestInputData = @{
  DeploymentOutputs    = @{
    resourceId = @{
      Type  = "String"
      Value = "/subscriptions/***/resourceGroups/dep-***-keyvault.vaults-kvvpe-rg/providers/Microsoft.KeyVault/vaults/***kvvpe001"
    }
  }
  ModuleTestFolderPath = "/home/runner/work/bicep-registry-modules/bicep-registry-modules/avm/res/key-vault/vault/tests/e2e/private-endpoint"
}
To improve the usability of primitive module properties declared as strings, you SHOULD declare them using a type which better represents them, and apply any required casting in the module on behalf of the user.
For reference, please refer to the following examples:
User-defined types (UDTs) MUST always be singular and non-nullable. The configuration of either should instead be done directly at the parameter or output that uses the type.
For example, instead of
param subnets subnetsType
type subnetsType = { ... }[]?
the type should be defined like
param subnets subnetType[]?
type subnetType = { ... }
The primary reason for this requirement is clarity. If not defined directly at the parameter or output, a user would always be required to check the type to understand how, e.g., a parameter is expected to be provided.
User-defined types (UDTs) MUST always end with the suffix (...)Type to make them obvious to users. In addition it is recommended to extend the suffix to (...)OutputType if a UDT is exclusively used for outputs.
type subnet = { ... }           // Wrong
type subnetType = { ... }       // Correct
type subnetOutputType = { ... } // Correct, if used only for outputs
Since User-defined types (UDTs) MUST always be singular as per BCPNFR18, their naming should reflect this and also be singular.
User-defined types (UDTs) SHOULD always be exported via the @export() annotation in every template they’re implemented in.
@export()
type subnetType = { ... }
Doing so has the benefit that other (e.g., parent) modules can import them and as such reduce code duplication. Also, if the module itself is published, users of the Public Bicep Registry can import the types independently of the module itself. One example where this can be useful is a pattern module that may re-use the same interface when referencing a module from the registry.
Similar to BCPNFR9, User-defined types (UDTs) MUST implement decorators such as description & secure (if sensitive). This is true for every property of the UDT, as well as the UDT itself.
Further, User-defined types SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('My type''s description.')
type myType = {
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
threshold: int?
@description('Required. The SKU of your resource.')
sku: ('Basic' | 'Premium' | 'Standard')
}
The above formats are currently taken & generated automatically from the tests/e2e tests. It is enough to run the Set-ModuleReadMe or Set-AVMModule functions (from the utilities folder) to update the usage examples in the readme(s).
Note
Bicep Parameter Files (.bicepparam) are currently being reviewed and considered by the AVM team for usability and features, and will likely be added in the future.
It is planned that these examples are automatically added to the module readme’s parameter descriptions when running either the Set-ModuleReadMe or Set-AVMModule scripts (available in the utilities folder).
BCPNFR5 - Role Assignments Role Definition Mapping Limits
ID: BCPNFR5 - Category: Composition - Role Assignments Role Definition Mapping Limits
As per BCPFR2, module owners MAY define common RBAC Role Definition names and IDs within a variable to allow consumers to define a RBAC Role Definition by their name rather than their ID.
Module owners SHOULD NOT map every RBAC Role Definition within this variable as it can cause the module to bloat in size and cause consumption issues later when stitched together with other modules due to the 4MB ARM Template size limit.
Therefore module owners SHOULD only map the most applicable and common RBAC Role Definition names for their module and SHOULD NOT exceed 15 RBAC Role Definitions in the variable.
Important
Remember if the RBAC Role Definition name is not included in the variable this does not mean it cannot be declared, used and assigned to an identity via an RBAC Role Assignment as part of a module, as any RBAC Role Definition can be specified via its ID without being in the variable.
Modules will have lots of parameters that will differ in their requirement type (required, optional, etc.). To help consumers understand what each parameter’s requirement type is, module owners MUST add the requirement type to the beginning of each parameter’s description. Below are the requirement types with a definition and example for the description decorator:
| Parameter Requirement Type | Definition | Example Description Decorator |
| --- | --- | --- |
| Required | The parameter value must be provided. The parameter does not have a default value and hence the module expects and requires an input. | @description('Required. <description here...>') |
| Conditional | The parameter value can be optional or required based on a condition, mostly based on the value provided to other parameters. Should contain a sentence starting with ‘Required if (…).’ to explain the condition. | @description('Conditional. <description here...> Required if (…).') |
| Generated | The parameter value is generated within the module and should not be specified as input in most cases. A common example of this is the utcNow() function that is only supported as the input for a parameter value, and not inside a variable. | @description('Generated. <description here...>') |
Similar to BCPNFR21, input parameters MUST implement decorators such as description & secure (if sensitive).
Further, input parameters SHOULD implement decorators like allowed, minValue, maxValue, minLength & maxLength (and others if available) as they have a big positive impact on the module’s usability.
@description('Optional. The threshold of your resource.')
@minValue(1)
@maxValue(10)
param threshold int?

@description('Required. The SKU of your resource.')
@allowed([
  'Basic'
  'Premium'
  'Standard'
])
param sku string
ID: BCPRMNFR1 - Category: Testing - Expected Test Directories
Module owners MUST create the defaults, waf-aligned folders within their /tests/e2e/ directory in their resource module source code and SHOULD create a max folder also. Module owners CAN create additional folders as required. Each folder will be used as described for various test cases.
Defaults tests (MUST)
The defaults folder contains a test instance that deploys the module with the minimum set of required parameters.
This includes input parameters of type Required plus input parameters of type Conditional marked as required for WAF compliance.
This instance has a heavy reliance on the default values for other input parameters. Parameters of type Optional SHOULD NOT be used.
WAF aligned tests (MUST)
The waf-aligned folder contains a test instance that deploys the module in alignment with the best-practices of the Azure Well-Architected Framework.
This includes input parameters of type Required, parameters of type Conditional marked as required for WAF compliance, and parameters of type Optional useful for WAF compliance.
Parameters and dependencies which are not needed for WAF compliance, SHOULD NOT be included.
Max tests (SHOULD)
The max folder contains a test instance that deploys the module using a large parameter set, enabling most of the modules’ features.
The purpose of this instance is primarily parameter validation and not necessarily to serve as a real example scenario. Ideally, all features, extension resources and child resources should be enabled in this test, unless not possible due to conflicts, e.g., in case parameters are mutually exclusive.
Note
Please note that this test is not mandatory to have, but recommended for bulk parameter validation. It can be skipped in case the module parameter validation is covered already by additional, more scenario-specific tests.
Additional tests (CAN)
Additional folders CAN be created by module owners as required.
For example, to validate parameters not covered by the max test due to conflicts, or to provide a real example scenario for a specific use case.
If a module can deploy varying styles of the same resource, e.g., VMs can be Linux or Windows, each style should be tested as both defaults and waf-aligned. These names should be used as suffixes in the directory name to denote the style, e.g., for a VM we would expect to see:
When implementing any of the shared or Bicep-specific AVM interface variants you MUST import their User-defined type (UDT) via the published AVM-Common-Types module.
When doing so, each type MUST be imported separately, right above the parameter or output that uses it.
import { roleAssignmentType } from 'br/public:avm/utl/types/avm-common-types:*.*.*'

@description('Optional. Array of role assignments to create.')
param roleAssignments roleAssignmentType[]?

import { diagnosticSettingFullType } from 'br/public:avm/utl/types/avm-common-types:*.*.*'

@description('Optional. The diagnostic settings of the service.')
param diagnosticSettings diagnosticSettingFullType[]?
Importing them individually, as opposed to one common block, has several benefits, such as:
Individual versioning of types
If you must update the version for one type, you’re not exposed to unexpected changes to other types
Important
The import (...) block MUST NOT be added in between a parameter's definition and its metadata. Doing so breaks the metadata's binding to the parameter in question.
Finally, you should check for version updates regularly to ensure the resource module stays consistent with the specs. If the AVM-Common-Types version used becomes stale, the CI may eventually fail the module's static tests.
Child resource modules MUST be stored in a subfolder of their parent resource module and named after the child resource’s singular name (ref), so that the path to the child resource folder is consistent with the hierarchy of its resource type. For example, Microsoft.Sql/servers may have dedicated child resources of type Microsoft.Sql/servers/databases. Hence, the SQL server database child module is stored in a database subfolder of the server parent folder.
sql
└─ server [module]
   └─ database [child-module/resource]
In this folder, we recommend placing the child resource template alongside a ReadMe & compiled JSON (to be generated via the default Set-AVMModule utility) and optionally further nesting additional folders for its child resources.
There are several reasons to structure a module in this way. For example:
It allows a separation of concerns where each module can focus on its own properties and logic, while delegating most of a child-resource’s logic to its separate child module
It’s consistent with the provider namespace structure and makes modules easier to understand not only because they’re more aligned with set structure, but also are aligned with one another
As each module is its own ‘deployment’, it reduces limitations around nested loops
Once the feature is enabled, it will allow module owners to publish these child modules as separate modules to the public registry, allowing consumers to make use of them directly.
Note
In full transparency: the drawbacks of these additional deployments are an extended deployment period & a contribution to the 800-deployments limit. However, for AVM resource modules it was agreed that the advantages listed above outweigh these limitations.
Module Classifications
Module Classification Definitions
AVM defines two module classifications, Resource Modules and Pattern Modules, that can be created, published, and consumed; these are defined further below:
Resource Module
Definition: Deploys a primary resource with WAF high priority/impact best practice configurations set by default, e.g., availability zones, firewall, enforced Entra ID authentication and other shared interfaces, e.g., RBAC, Locks, Private Endpoints etc. (if supported). See What does AVM mean by “WAF Aligned”?
Resource Modules MAY include related resources, e.g. a VM contains a disk & NIC. The focus should be on customer experience: a customer would expect that a VM module includes all required resources to provision a VM.
Furthermore, Resource Modules MUST NOT deploy external dependencies for the primary resource. E.g. a VM needs a vNet and Subnet to be deployed into, but the vNet will not be created by the VM Resource Module.
Finally, a resource can be anything, such as Microsoft Defender for Cloud Pricing Plans; these are still resources in ARM and can therefore be created as a Resource Module.
Who is it for? People who want to craft bespoke architectures that default to WAF best practices, where appropriate, for each resource, and people who want to create pattern modules.
Pattern Module
Definition: Deploys multiple resources, usually using Resource Modules. They can be any size but should help accelerate a common task/deployment/architecture.
Good candidates for pattern modules are those architectures that exist in the Azure Architecture Center, or other official documentation.
Note: Pattern modules can contain other pattern modules; however, pattern modules MUST NOT contain references to non-AVM modules.
Who is it for? People who want to easily deploy patterns (architectures) using WAF best practices.
Utility Module (draft, see below)
Definition: Implements a function or routine that can be flexibly reused in resource or pattern modules - e.g., a function that retrieves the endpoint of an API or portal of a given environment.
It MUST NOT deploy any Azure resources other than deployment scripts.
Who is it for? People who want to leverage commonly used functions/routines/helpers in their modules, instead of re-implementing them locally.
PREVIEW
The concept of Utility Modules will be introduced gradually, through some initial examples. The definition above is subject to change as additional details are worked out.
The required automated tests and other workflow elements will be derived from the Pattern Modules’ automation/CI environment as the concept matures.
Utility modules will follow the below naming convention:
Bicep: avm/utl/<hyphenated grouping/category name>/<hyphenated utility module name>. Modules will be kept under the avm/utl folder in the BRM repository.
Terraform: avm-utl-<utility-module-name>. Repositories will be named after the utility module (e.g., terraform-azurerm-avm-utl-<my utility module>).
All related documentation (functional and non-functional requirements, etc.) will also be published along the way.
Module Lifecycle
Note
This page is still a work in progress
Request/Propose a New AVM Resource or Pattern Module
Please review the Process Overview for guidance on proposing a new AVM module.
Orphaned AVM Modules
It is critical to the consumer’s experience that modules continue to be maintained. In the case where a module owner cannot continue in their role or does not respond to issues within the timescales defined on the Module Support page, the following process will apply:
The module owner is responsible for finding a replacement owner and providing a handover.
If no replacement can be found, or the module owner leaves Microsoft without giving warning to the AVM core team, the AVM core team will provide essential maintenance (critical bug and security fixes), as per the Module Support page.
The AVM core team will continue to try to re-assign the module ownership.
While a module is in an orphaned state, only security and bug fixes MUST be made; no new feature development will be worked on until a new owner is found who can then lead this effort for the module.
An issue will be created on the central AVM repo (Azure/Azure-Verified-Modules) to track the finding of a new owner for the module.
Notification of a Module Becoming Orphaned
Important
When a module becomes orphaned, the AVM core team will communicate this through an information notice to be placed as follows.
In case of a Bicep module, the information notice will be placed in an ORPHANED.md file and in the header of the module’s README.md - both residing in the module’s root.
In case of a Terraform module, the information notice will be placed in the header of the README.md file, in the module’s root.
The information notice will include the following statement:
⚠️ THIS MODULE IS CURRENTLY ORPHANED. ⚠️
- Only security and bug fixes are being handled by the AVM core team at present.
- If interested in becoming the module owner of this orphaned module (must be Microsoft FTE), please look for the related "orphaned module" GitHub issue [here](https://aka.ms/AVM/OrphanedModules)!
Also, the AVM core team will amend the issue automation to auto reply stating that the repo is orphaned and only security/bug fixes are being handled until a new module owner is found.
Bicep Pattern Module Naming
Naming convention: avm/ptn/<hyphenated grouping/category name>/<hyphenated pattern module name> (module name for registry)
Example: avm/ptn/compute/app-tier-vmss or avm/ptn/avd-lza/management-plane or avm/ptn/3-tier/web-app
Segments:
ptn defines this as a pattern module
<hyphenated grouping/category name> is a hierarchical grouping of pattern modules by category, with each word separated by dashes, such as:
project name, e.g., avd-lza,
primary resource provider, e.g., compute or network, or
architecture, e.g., 3-tier
<hyphenated pattern module name> is a term describing the module's function, with each word separated by dashes, e.g., app-tier-vmss = Application Tier VMSS; management-plane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
Terraform Pattern Module Naming
Naming convention:
avm-ptn-<pattern module name> (Module name for registry)
terraform-<provider>-avm-ptn-<pattern module name> (GitHub repository name to meet registry naming requirements)
Example: avm-ptn-apptiervmss or avm-ptn-avd-lza-managementplane
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
ptn defines this as a pattern module
<pattern module name> is a term describing the module's function, e.g., apptiervmss = Application Tier VMSS; avd-lza-managementplane = Azure Virtual Desktop Landing Zone Accelerator Management Plane
PMNFR2 - Use Resource Modules to Build a Pattern Module
ID: PMNFR2 - Category: Composition - Use Resource Modules to Build a Pattern Module
A Pattern Module SHOULD be built from AVM Resources Modules to establish a standardized code base and improve maintainability. If a valid reason exists, a pattern module MAY contain native resources (“vanilla” code) where it’s necessary. A Pattern Module MUST NOT contain references to non-AVM modules.
Valid reasons for not using a Resource Module for a resource required by a Pattern Module include but are not limited to:
When using a Resource Module would result in hitting scaling limitations and/or would reduce the capabilities of the Pattern Module due to the limitations of Azure Resource Manager.
Developing a Pattern Module under time constraint, without having all required Resource Modules readily available.
Note
In the latter case, the Pattern Module SHOULD be updated to use the Resource Module when the required Resource Module becomes available, to avoid accumulating technical debt. Ideally, all required Resource Modules SHOULD be developed first, and then leveraged by the Pattern Module.
Resource modules support the following optional features/extension resources, as specified, if supported by the primary resource. The top-level variable/parameter names MUST be:
| Optional Features/Extension Resources | Bicep Parameter Name | Terraform Variable Name | MUST/SHOULD |
| --- | --- | --- | --- |
| Diagnostic Settings | diagnosticSettings | diagnostic_settings | MUST |
| Role Assignments | roleAssignments | role_assignments | MUST |
| Resource Locks | lock | lock | MUST |
| Tags | tags | tags | MUST |
| Managed Identities (System / User Assigned) | managedIdentities | managed_identities | MUST |
| Private Endpoints | privateEndpoints | private_endpoints | MUST |
| Customer Managed Keys | customerManagedKey | customer_managed_key | MUST |
| Azure Monitor Alerts | alerts | alerts | SHOULD |
Resource modules MUST NOT deploy the required/dependent resources for the optional features/extension resources specified above. For example, for Diagnostic Settings the resource module MUST NOT deploy the Log Analytics Workspace; from the resource module's perspective this is expected to already exist, having been deployed via another method/module, etc.
Note
Please note that the implementation of Customer Managed Keys from an ARM API perspective is different across various RPs that implement Customer Managed Keys in their service. For that reason you may see differences between modules on how Customer Managed Keys are handled and implemented, but functionality will be as expected.
Module owners MAY choose to utilize cross-repo dependencies for these "add-on" resources, or MAY choose to implement the code directly in their own repo/module. As long as the implementation and outputs meet the specification's requirements, either approach is acceptable.
Tip
Make sure to check out the language-specific specifications for more info on this:
Resource modules MUST implement a common interface, e.g. the input’s data structures and properties within them (objects/arrays/dictionaries/maps), for the optional features/extension resources:
Parameters/variables that pertain to the primary resource MUST NOT use the resource type in the name.
e.g., use sku, vs. virtualMachineSku/virtualmachine_sku
Another example: where an RP includes part of its own name within a property, leave the property unchanged. E.g., Key Vault has a property called keySize; it is fine to leave it as-is and not remove the key part from the property/parameter name.
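To make the naming rules above concrete, the following is a minimal Terraform sketch of how a resource module might declare a few of the interface variables from the table above. The exact object shapes are defined in the language-specific interface specifications, so the attribute lists shown here are illustrative only.

```terraform
variable "tags" {
  type        = map(string)
  default     = null
  description = "(Optional) Tags to apply to the primary resource."
}

variable "lock" {
  # Illustrative shape only - see the shared interfaces specification for the authoritative schema.
  type = object({
    kind = string
    name = optional(string, null)
  })
  default     = null
  description = "(Optional) Resource Lock configuration for the primary resource."
}

variable "role_assignments" {
  # Illustrative shape only.
  type = map(object({
    role_definition_id_or_name = string
    principal_id               = string
  }))
  default     = {}
  description = "(Optional) Role assignments to create on the primary resource."
}
```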
When a given version of an Azure resource used in a resource module reaches its end-of-life (EOL) and is no longer supported by Microsoft, the module owner SHOULD ensure that:
The module is aligned with these changes and only includes supported versions of the resource. This is typically achieved through the allowed values in the parameter that specifies the resource SKU or type.
The following notice is shown under the Notes section of the module’s readme.md. (If any related public announcement is available, it can also be linked to from the Notes section.):
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from the related parameters.”
AND the related parameter’s description:
“Certain versions of this Azure resource reached their end of life. The latest version of this module only includes supported versions of the resource. All unsupported versions have been removed from this parameter.”
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the correct singular names for all resource types to enable checks to utilize this list to ensure repos are named correctly. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
This will be updated quarterly, or ad-hoc as new RPs/ Resources are created and highlighted via a check failure.
Resource modules MUST follow the below naming conventions (all lower case):
Bicep Resource Module Naming
Naming convention: avm/res/<hyphenated resource provider name>/<hyphenated ARM resource type> (module name for registry)
Example: avm/res/compute/virtual-machine or avm/res/managed-identity/user-assigned-identity
Segments:
res defines this is a resource module
<hyphenated resource provider name> is the resource provider's name after the Microsoft part, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute = compute, Microsoft.ManagedIdentity = managed-identity.
<hyphenated ARM resource type> is the singular version of the word after the resource provider, with each word starting with a capital letter separated by dashes, e.g., Microsoft.Compute/virtualMachines = virtual-machine, BUT Microsoft.Network/trafficmanagerprofiles = trafficmanagerprofile - since trafficmanagerprofiles is all lower case as per the ARM API definition.
Terraform Resource Module Naming
Naming convention:
avm-res-<resource provider>-<ARM resource type> (module name for registry)
terraform-<provider>-avm-res-<resource provider>-<ARM resource type> (GitHub repository name to meet registry naming requirements)
Example: avm-res-compute-virtualmachine or avm-res-managedidentity-userassignedidentity
Segments:
<provider> is the logical abstraction of various APIs used by Terraform. In most cases, this is going to be azurerm or azuread for resource modules.
res defines this is a resource module
<resource provider> is the resource provider's name after the Microsoft part, e.g., Microsoft.Compute = compute.
<ARM resource type> is the singular version of the word after the resource provider, e.g., Microsoft.Compute/virtualMachines = virtualmachine
A resource module MUST use the following standard inputs:
name (no default)
location (if supported by the resource and not a global resource, then use Resource Group location, if resource supports Resource Groups, otherwise no default)
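As an illustration only (parameter plumbing differs between languages), a Terraform resource module would typically surface these standard inputs roughly as follows; the descriptions are paraphrased, not normative text.

```terraform
variable "name" {
  type        = string
  description = "(Required) The name of the primary resource. No default is provided."
}

variable "location" {
  type        = string
  description = "(Required) The Azure region for the primary resource. Callers commonly pass the resource group's location."
}
```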
ID: RMNFR3 - Category: Composition - RP Collaboration
Module owners (Microsoft FTEs) SHOULD reach out to the respective Resource Provider teams to build a partnership and collaboration on the modules creation, existence and long term maintenance.
Modules MAY create/adopt public preview services and features at their discretion.
Preview API versions MAY be used when:
The resource/service/feature is GA but the only API version available for the GA resource/service/feature is a preview version
For example, for Diagnostic Settings (Microsoft.Insights/diagnosticSettings), the latest API version available with GA features, like Category Groups etc., is 2021-05-01-preview
Otherwise the latest "non-preview" version of the API SHOULD be used
Preview services and features SHOULD NOT be promoted and exposed, unless they are supported by the respective PG and this is documented publicly.
However, they MAY be exposed at the module owners discretion, but the following rules MUST be followed:
The description of each of the parameters/variables used for the preview service/feature MUST start with:
“THIS IS A <PARAMETER/VARIABLE> USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION”
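A hedged Terraform sketch of what this looks like in practice; the variable name and the feature it configures are hypothetical.

```terraform
variable "preview_feature_setting" {
  type        = string
  default     = null
  description = "THIS IS A VARIABLE USED FOR A PREVIEW SERVICE/FEATURE, MICROSOFT MAY NOT PROVIDE SUPPORT FOR THIS, PLEASE CHECK THE PRODUCT DOCS FOR CLARIFICATION. Configures the <hypothetical preview feature>."
}
```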
Modules SHOULD set defaults in input parameters/variables to align to high priority/impact/severity recommendations, where appropriate and applicable, in the following frameworks and resources:
They SHOULD NOT align to these recommendations when it requires an external dependency/resource to be deployed and configured and then associated to the resources in the module.
Alignment SHOULD prioritize best-practices and security over cost optimization, but MUST allow for these to be overridden by a module consumer easily, if desired.
We will maintain a set of CSV files in the AVM Central Repo (Azure/Azure-Verified-Modules) with the required TelemetryId prefixes to enable checks to utilize this list to ensure the correct IDs are used. To see the formatted content of these CSV files with additional information, please visit the AVM Module Indexes page.
These will also be provided as a comment on the module proposal, once accepted, from the AVM core team.
Modules MUST provide the capability to collect deployment/usage telemetry, as detailed further in Telemetry.
To highlight that AVM modules use telemetry, an information notice MUST be included in the footer of each module’s README.md file with the below content. (See more details on this requirement, here.)
Telemetry Information Notice
Note
The following information notice is automatically added at the bottom of the README.md file of the module when
Terraform: Executing the make docs command with the note and header ## Data Collection being placed in the module’s _footer.md beforehand
### Data Collection
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the [repository](https://aka.ms/avm/telemetry). There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at <https://go.microsoft.com/fwlink/?LinkID=824704>. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.
Bicep
The ARM deployment name used for the telemetry MUST follow the pattern and MUST be no longer than 64 characters in length: 46d3xbcp.<res/ptn>.<(short) module name>.<version>.<uniqueness>
<res/ptn> == AVM Resource or Pattern Module
<(short) module name> == The AVM Module's, possibly shortened, name including the resource provider and the resource type, without:
The prefixes: avm-res-
The prefixes: avm-ptn-
<version> == The AVM Module’s MAJOR.MINOR version (only) with . (periods) replaced with - (hyphens), to allow simpler splitting of the ARM deployment name
<uniqueness> == This section of the ARM deployment name is to be used to ensure uniqueness of the deployment name.
This is to cater for the following scenarios:
The module is deployed multiple times to the same:
Due to the 64-character length limit of Azure deployment names, the <(short) module name> segment has a length limit of 36 characters, so if the module name is longer than that, it MUST be truncated to 36 characters. If any of the semantic version’s segments are longer than 1 character, it further restricts the number of characters that can be used for naming the module.
An example deployment name for the AVM Virtual Machine Resource Module would be: 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3
An example deployment name for a shortened module name would be: 46d3xbcp.res.desktopvirtualization-appgroup.1-2-3.eum3
Tip
Terraform: Terraform uses a telemetry provider, the configuration of which is the same for every module and is included in the template repo.
General: See the language specific contribution guides for detailed guidance and sample code to use in AVM modules to achieve this requirement.
The telemetry enablement MUST be on/enabled by default, however this MUST be able to be disabled by a module consumer by setting the below parameter/variable value to false:
Bicep: enableTelemetry
Terraform: enable_telemetry
Note
Whenever a module references AVM modules that implement the telemetry parameter (e.g., a pattern module that uses AVM resource modules), the telemetry parameter value MUST be passed through to these modules. This is necessary to ensure a consumer can reliably enable & disable the telemetry feature for all used modules.
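A minimal Terraform sketch of the variable and of passing it through from a pattern module to a referenced AVM resource module; the module source and version shown are placeholders, not a specific published module.

```terraform
variable "enable_telemetry" {
  type        = bool
  default     = true
  description = "Controls whether deployment/usage telemetry is enabled for this module and any referenced AVM modules."
}

module "example_avm_resource" {
  source  = "Azure/avm-res-example-example/azurerm" # placeholder module reference
  version = "1.0.0"                                 # placeholder version

  # Pass the consumer's choice through so telemetry can be reliably enabled/disabled everywhere.
  enable_telemetry = var.enable_telemetry
}
```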
ID: SFR5 - Category: Composition - Availability Zones
Modules that deploy zone-redundant resources MUST enable the spanning across as many zones as possible by default, typically all 3.
Modules that deploy zonal resources MUST provide the ability to specify a zone for the resources to be deployed/pinned to. However, they MUST NOT default to a particular zone, e.g. 1, in an effort to make the consumer aware of the zone they are selecting to suit their architecture requirements.
For both scenarios the modules MUST expose these configuration options via configurable parameters/variables.
ID: SFR6 - Category: Composition - Data Redundancy
Modules that deploy resources or patterns that support data redundancy SHOULD enable this to the highest possible value by default, e.g. RA-GZRS. When a resource or pattern doesn’t provide the ability to specify data redundancy as a simple property, e.g. GRS etc., then the modules MUST provide the ability to enable data redundancy for the resources or pattern via parameters/variables.
For example, a Storage Account module can simply set the sku.name property to Standard_RAGZRS. Whereas a SQL DB or Cosmos DB module will need to expose more properties, via parameters/variables, to allow the specification of the regions to replicate data to as per the consumer's requirements.
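For instance, a minimal Terraform sketch of the Storage Account case described above; resource names and region are illustrative.

```terraform
resource "azurerm_resource_group" "example" {
  name     = "rg-avm-redundancy-example"
  location = "westeurope"
}

resource "azurerm_storage_account" "example" {
  name                     = "stavmredundancyexample" # must be globally unique
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "RAGZRS" # highest data redundancy enabled by default
}
```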
Only the latest released version of a module MUST be supported.
For example, if an AVM Resource Module is used in an AVM Pattern Module that was working but now is not, the first step for the AVM Pattern Module owner should be to upgrade to the latest version of the AVM Resource Module and re-test; if the issue is not fixed, troubleshoot and fix forward from that latest version of the AVM Resource Module onwards.
This avoids AVM Module owners from having to maintain multiple major release versions.
README documentation MUST be automatically/programmatically generated. MUST include the sections as defined in the language specific requirements BCPNFR2, TFNFR2.
You cannot specify the patch version for Bicep modules in the public Bicep Registry, as this is automatically incremented by 1 each time a module is published. You can only set the Major and Minor versions.
Modules MUST use semantic versioning (aka semver) for their versions and releases in accordance with: Semantic Versioning 2.0.0
For example all modules should be released using a semantic version that matches this pattern: X.Y.Z
X == Major Version
Y == Minor Version
Z == Patch Version
Module versioning before first Major version release 1.0.0
Initially, modules MUST be released as version 0.1.0 and incremented via Minor and Patch versions only, until the AVM core team is confident the AVM specifications are mature enough and appropriate CI test coverage is in place, and the module owner is happy that the module has been "road tested" and is now stable enough for its first Major release of version 1.0.0.
Note
Releasing as version 0.1.0 initially and only incrementing Minor and Patch versions allows the module owner to make breaking changes more easily and frequently, as it's still not an official Major/Stable release.
Until first Major version 1.0.0 is released, given a version number X.Y.Z:
X Major version MUST NOT be bumped.
Y Minor version MUST be bumped when introducing breaking changes (which would normally bump Major after 1.0.0 release) or feature updates (same as it will be after 1.0.0 release).
Z Patch version MUST be bumped when introducing non-breaking, backward compatible bug fixes (same as it will be after 1.0.0 release).
A module SHOULD avoid breaking changes, e.g., deprecating inputs vs. removing. If you need to implement changes that cause a breaking change, the major version should be increased.
Info
Modules that have not been released as 1.0.0 may introduce breaking changes, as explained in the previous ID SNFR17. That means that you have to introduce non-breaking and breaking changes with a minor version jump, as long as the module has not reached version 1.0.0.
There are, however, scenarios where you want to include breaking changes into a commit and not create a new major version. If you want to introduce breaking changes as part of a minor update, you can do so. In this case, it is essential to keep the change backward compatible, so that the existing code will continue to work. At a later point, another update can increase the major version and remove the code introduced for the backward compatibility.
Tip
See the language specific examples to find out how you can deal with deprecations in AVM modules.
Modules MUST implement end-to-end (deployment) testing that create actual resources to validate that module deployments work. In Bicep tests are sourced from the directories in /tests/e2e. In Terraform, these are in /examples.
Each test MUST run and complete without user inputs successfully, for automation purposes.
Each test MUST also destroy/clean-up its resources and test dependencies following a run.
Tip
To see a directory and file structure for a module, see the language specific contribution guide.
It is likely that to complete E2E tests, a number of resources will be required as dependencies to enable the tests to pass successfully. Some examples:
When testing the Diagnostic Settings interface for a Resource Module, you will need an existing Log Analytics Workspace to be able to send the logs to as a destination.
When testing the Private Endpoints interface for a Resource Module, you will need an existing Virtual Network, Subnet and Private DNS Zone to be able to complete the Private Endpoint deployment and configuration.
Module owners MUST:
Create the required resources that their module depends upon in the test file/directory
They MUST either use:
Simple/native resource declarations/definitions in their respective IaC language, OR
Another already published AVM Module that MUST be pinned to a specific published version.
They MUST NOT use any local directory path references or local copies of AVM modules in their own modules test directory.
Terraform & Bicep Log Analytics Workspace examples using simple/native declarations for use in E2E tests:
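For example, a Terraform E2E test could declare the Log Analytics Workspace dependency natively along the following lines; names and region are illustrative, and the equivalent Bicep declaration is not reproduced here.

```terraform
resource "azurerm_resource_group" "this" {
  name     = "rg-avm-e2e-dependencies"
  location = "westeurope"
}

# Test dependency used as the destination for the Diagnostic Settings interface.
resource "azurerm_log_analytics_workspace" "this" {
  name                = "log-avm-e2e-dependency"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
  sku                 = "PerGB2018"
  retention_in_days   = 30
}
```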
ID: SNFR20 - Category: Contribution/Support - GitHub Teams Only
All GitHub repositories that AVM modules are published from and hosted within MUST assign GitHub repository permissions to GitHub teams only.
Each module MUST have separate GitHub teams assigned for module owners AND module contributors respectively. These GitHub teams MUST be created in the Azure organization in GitHub.
There MUST NOT be any GitHub repository permissions assigned to individual users.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
The @Azure prefix in the last column of the tables linked above represents the “Azure” GitHub organization all AVM-related repositories exist in. DO NOT include this segment in the team’s name!
Important
Non-FTE / external contributors (subject matter experts who aren't Microsoft employees) can't be members of the teams described in this chapter; hence, they won't gain any extra permissions on AVM repositories and therefore need to work in forks.
Naming Convention
The naming convention for the GitHub teams MUST follow the below pattern:
<hyphenated module name>-module-owners-<bicep/tf> - to be assigned as the GitHub repository’s Module Owners team
<hyphenated module name>-module-contributors-<bicep/tf> - to be assigned as the GitHub repository’s Module Contributors team
Note
The naming convention for Bicep modules is slightly different than the naming convention for their respective GitHub teams.
Segments:
<hyphenated module name> == the AVM Module’s name, with each segment separated by dashes, i.e., avm-res-<resource provider>-<ARM resource type>
All officially documented module owner(s) MUST be added to the -module-owners- team. The -module-owners- team MUST NOT have any other members.
Any additional module contributors whom the module owner(s) agreed to work with MUST be added to the -module-contributors- team.
Unless explicitly requested and agreed, members of the AVM core team or any PG teams MUST NOT be added to the -module-owners- or -module-contributors- teams as permissions for them are granted through the teams described in SNFR9.
Grant Permissions - Bicep
Team memberships
Note
In case of Bicep modules, permissions to the BRM repository (the repo of the Bicep Registry) are granted via assigning the -module-owners- and -module-contributors- teams to parent teams that already have the required level access configured. While it is the module owner’s responsibility to initiate the addition of their teams to the respective parents, only the AVM core team can approve this parent-child relationship.
Module owners MUST create their -module-owners- and -module-contributors- teams and as part of the provisioning process, they MUST request the addition of these teams to their respective parent teams (see the table below for details).
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <hyphenated module name>-module-owners-bicep | AVM Bicep Module Owners - <module name> | Write | Assignment to the avm-technical-reviewers-bicep parent team. | |
Examples - GitHub teams required for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network):
avm-res-network-virtualnetwork-module-owners-bicep –> assign to the avm-technical-reviewers-bicep parent team.
avm-res-network-virtualnetwork-module-contributors-bicep –> assign to the avm-module-contributors-bicep parent team.
Tip
Direct link to create a new GitHub team and assign it to its parent: Create new team
Fill in the values as follows:
Team name: Following the naming convention described above, use the value defined in the module indexes.
Description: Follow the guidance above (see the Description column in the table above).
Parent team: Follow the guidance above (see the Permissions granted through column in the table above).
Team visibility: Visible
Team notifications: Enabled
CODEOWNERS file
As part of the “initial Pull Request” (that publishes the first version of the module), module owners MUST add an entry to the CODEOWNERS file in the BRM repository (here).
Note
Through this approach, the AVM core team will grant review permission to module owners as part of the standard PR review process.
Every CODEOWNERS entry (line) MUST include the following segments separated by a single whitespace character:
Path of the module, relative to the repo’s root, e.g.: /avm/res/network/virtual-network/
The -module-owners-team, with the @Azure/ prefix, e.g., @Azure/avm-res-network-virtualnetwork-module-owners-bicep
The GitHub team of the AVM Bicep reviewers, with the @Azure/ prefix, i.e., @Azure/avm-module-reviewers-bicep
Example - CODEOWNERS entry for the Bicep resource module of Azure Virtual Network (avm/res/network/virtual-network): /avm/res/network/virtual-network/ @Azure/avm-res-network-virtualnetwork-module-owners-bicep @Azure/avm-module-reviewers-bicep
Module owners MUST assign the -module-owners- and -module-contributors- teams the necessary permissions on their Terraform module repository per the guidance below.
| GitHub Team Name | Description | Permissions | Permissions granted through | Where to work? |
| --- | --- | --- | --- | --- |
| <module name>-module-owners-tf | AVM Terraform Module Owners - <module name> | Admin | Direct assignment to repo | Module owner can decide whether they want to work in a branch local to the repo or in a fork. |
ID: SNFR21 - Category: Publishing - Cross Language Collaboration
When the module owners of the same Resource or Pattern AVM module are not the same individual or team for all languages, each language's team SHOULD collaborate with their sibling language team for the same module to ensure consistency where possible.
ID: SNFR22 - Category: Inputs - Parameters/Variables for Resource IDs
A module parameter/variable that requires a full Azure Resource ID as an input value, e.g. /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{keyVaultName}, MUST contain ResourceId/resource_id in its parameter/variable name to assist users in knowing what value to provide at a glance of the parameter/variable name.
Example: for the property workspaceId on the Diagnostic Settings resource, the Bicep parameter name should be workspaceResourceId and the Terraform variable name should be workspace_resource_id.
workspaceId is not descriptive enough and is ambiguous as to which ID is required to be input.
```shell
# Linux / MacOS
# For Windows, replace $PWD with the local path of your repository
#
docker run -it -v $PWD:/repo -w /repo mcr.microsoft.com/powershell pwsh -Command '
#Invoke-WebRequest -Uri "https://azure.github.io/Azure-Verified-Modules/scripts/Set-AvmGitHubLabels.ps1" -OutFile "Set-AvmGitHubLabels.ps1"
$gh_version = "2.44.1"
Invoke-WebRequest -Uri "https://github.com/cli/cli/releases/download/v2.44.1/gh_2.44.1_linux_amd64.tar.gz" -OutFile "gh_$($gh_version)_linux_amd64.tar.gz"
apt-get update && apt-get install -y git
tar -xzf "gh_$($gh_version)_linux_amd64.tar.gz"
ls -lsa
mv "gh_$($gh_version)_linux_amd64/bin/gh" /usr/local/bin/
rm "gh_$($gh_version)_linux_amd64.tar.gz" && rm -rf "gh_$($gh_version)_linux_amd64"
gh --version
ls -lsa
gh auth login
$OrgProject = "Azure/terraform-azurerm-avm-res-kusto-cluster"
gh auth status
./Set-AvmGitHubLabels.ps1 -RepositoryName $OrgProject -CreateCsvLabelExports $false -NoUserPrompts $true
'
```
By default this script will only update and append labels on the repository specified. However, this can be changed by setting the parameter -UpdateAndAddLabelsOnly to $false, which will remove all the labels from the repository first and then apply the AVM labels from the CSV only.
Make sure you elevate your privilege to admin level or the labels will not be applied to your repository. Go to repos.opensource.microsoft.com/orgs/Azure/repos/ to request admin access before running the script.
Full Script:
The Set-AvmGitHubLabels.ps1 script can be downloaded from here.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
<#
.SYNOPSIS
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
.DESCRIPTION
This script can be used to create the Azure Verified Modules (AVM) standard GitHub labels to a GitHub repository.
By default, the script will remove all pre-existing labels and apply the AVM labels. However, this can be changed by using the -RemoveExistingLabels parameter and setting it to $false. The tool will also output the labels that exist in the repository before and after the script has run to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter.
The AVM labels to be created are documented here: TBC
.NOTES
Please ensure you have specified the GitHub repository correctly. The script will prompt you to confirm the repository name before proceeding.
.COMPONENT
You must have the GitHub CLI installed and be authenticated to a GitHub account with access to the repository you are applying the labels to before running this script.
.LINK
TBC
.Parameter RepositoryName
The name of the GitHub repository to apply the labels to.
.Parameter RemoveExistingLabels
If set to $true, the default value, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will not remove any pre-existing labels.
.Parameter UpdateAndAddLabelsOnly
If set to $true, the default value, the script will only update and add labels to the repository specified in -RepositoryName. If set to $false, the script will remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
.Parameter OutputDirectory
The directory to output the pre-existing and post-existing labels to in a CSV file. The default value is the current directory.
.Parameter CreateCsvLabelExports
If set to $true, the default value, the script will output the pre-existing and post-existing labels to a CSV file in the current directory, or a directory specified by the -OutputDirectory parameter. If set to $false, the script will not output the pre-existing and post-existing labels to a CSV file.
.Parameter GitHubCliLimit
The maximum number of labels to return from the GitHub CLI. The default value is 999.
.Parameter LabelsToApplyCsvUri
The URI to the CSV file containing the labels to apply to the GitHub repository. The default value is https://raw.githubusercontent.com/jtracey93/label-source/main/avm-github-labels.csv.
.Parameter NoUserPrompts
If set to $true, the default value, the script will not prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels. If set to $false, the script will prompt the user to confirm they want to remove all pre-existing labels from the repository specified in -RepositoryName before applying the AVM labels.
This is useful for running the script in automation workflows
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and remove all pre-existing labels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels"
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and output the pre-existing and post-existing labels to the directory C:\GitHubLabels and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -RemoveExistingLabels $false -CreateCsvLabelExports $false
.EXAMPLE
Create the AVM labels in the repository Org/MyGitHubRepo and do not create the pre-existing and post-existing labels CSV files and do not remove any pre-existing labels, just overwrite any labels that have the same name. Finally, use a custom CSV file hosted on the internet to create the labels from.
Set-AvmGitHubLabels.ps1 -RepositoryName "Org/MyGitHubRepo" -OutputDirectory "C:\GitHubLabels" -RemoveExistingLabels $false -CreateCsvLabelExports $false -LabelsToApplyCsvUri "https://example.com/csv/avm-github-labels.csv"
#>

#Requires -PSEdition Core

[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$RepositoryName,
[Parameter(Mandatory = $false)]
[bool]$RemoveExistingLabels = $true,
[Parameter(Mandatory = $false)]
[bool]$UpdateAndAddLabelsOnly = $true,
[Parameter(Mandatory = $false)]
[bool]$CreateCsvLabelExports = $true,
[Parameter(Mandatory = $false)]
[string]$OutputDirectory = (Get-Location),
[Parameter(Mandatory = $false)]
[int]$GitHubCliLimit = 999,
[Parameter(Mandatory = $false)]
[string]$LabelsToApplyCsvUri = "https://azure.github.io/Azure-Verified-Modules/governance/avm-standard-github-labels.csv",
[Parameter(Mandatory = $false)]
[bool]$NoUserPrompts = $false
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
throw "The GitHub CLI is not installed. Please install the GitHub CLI and try again."
}
Write-Host "The GitHub CLI is installed..." -ForegroundColor Green
# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubCliAuthenticated -ForegroundColor Red
throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI, `gh auth login`, and try again."
}
Write-Host "Authenticated to GitHub..." -ForegroundColor Green
# Check if GitHub repository name is valid
$GitHubRepositoryNameValid = $RepositoryName -match "^[a-zA-Z0-9-]+/[a-zA-Z0-9-]+$"
if ($false -eq $GitHubRepositoryNameValid) {
throw "The GitHub repository name $RepositoryName is not valid. Please check the repository name and try again. The format must be <OrgName>/<RepoName>"
}
# List GitHub repository provided and check it exists
$GitHubRepository = gh repo view $RepositoryName
if ($LASTEXITCODE -ne 0) {
Write-Host $GitHubRepository -ForegroundColor Red
throw "The GitHub repository $RepositoryName does not exist. Please check the repository name and try again."
}
Write-Host "The GitHub repository $RepositoryName exists..." -ForegroundColor Green
# PRE - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($RemoveExistingLabels -or $UpdateAndAddLabelsOnly) {
Write-Host "Getting the current GitHub repository (pre) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels -and $CreateCsvLabelExports -eq $true) {
$csvFileNamePathPre = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Pre-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (pre) labels for $RepositoryName to $csvFileNamePathPre" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPre -NoTypeInformation
}
}
# Remove all pre-existing labels if -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels
if ($null -ne $GitHubRepositoryLabels) {
$GitHubRepositoryLabelsJson = $GitHubRepositoryLabels | ConvertFrom-Json
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $false -and $UpdateAndAddLabelsOnly -eq $false) {
$RemoveExistingLabelsConfirmation = Read-Host "Are you sure you want to remove all $($GitHubRepositoryLabelsJson.Count) pre-existing labels from $($RepositoryName)? (Y/N)"
if ($RemoveExistingLabelsConfirmation -eq "Y") {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($RemoveExistingLabels -eq $true -and $NoUserPrompts -eq $true -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Removing all pre-existing labels from $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
Write-Host "Removing label $($_.name) from $RepositoryName..." -ForegroundColor DarkRed
gh label delete -R $RepositoryName $_.name --yes
}
}
}
if ($null -eq $GitHubRepositoryLabels) {
Write-Host "No pre-existing labels to remove or not selected to be removed from $RepositoryName..." -ForegroundColor Magenta
}
# Check LabelsToApplyCsvUri is valid and contains CSV content
Write-Host "Checking $LabelsToApplyCsvUri is valid..." -ForegroundColor Yellow
$LabelsToApplyCsvUriValid = $LabelsToApplyCsvUri -match "^https?://"
if ($false -eq $LabelsToApplyCsvUriValid) {
throw "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is not valid. Please check the URI and try again. The format must be a valid URI."
}
Write-Host "The LabelsToApplyCsvUri $LabelsToApplyCsvUri is valid..." -ForegroundColor Green
# Create AVM labels from the AVM labels CSV file stored on the web using the ConvertFrom-Csv cmdlet
$avmLabelsCsv = Invoke-WebRequest -Uri $LabelsToApplyCsvUri | ConvertFrom-Csv
# Check if the AVM labels CSV file contains the following columns: Name, Description, HEX
$avmLabelsCsvColumns = $avmLabelsCsv | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name
$avmLabelsCsvColumnsValid = $avmLabelsCsvColumns -contains "Name" -and $avmLabelsCsvColumns -contains "Description" -and $avmLabelsCsvColumns -contains "HEX"
if ($false -eq $avmLabelsCsvColumnsValid) {
throw "The labels CSV file does not contain the required columns: Name, Description, HEX. Please check the CSV file and try again. It contains the following columns: $avmLabelsCsvColumns"
}
Write-Host "The labels CSV file contains the required columns: Name, Description, HEX" -ForegroundColor Green
# Create the AVM labels in the GitHub repository
Write-Host "Creating/Updating the $($avmLabelsCsv.Count) AVM labels in $RepositoryName..." -ForegroundColor Yellow
$avmLabelsCsv | ForEach-Object {
if ($GitHubRepositoryLabelsJson.name -contains $_.name) {
Write-Host "The label $($_.name) already exists in $RepositoryName. Updating the label to ensure description and color are consistent..." -ForegroundColor Magenta
gh label create -R $RepositoryName "$($_.name)" -c $_.HEX -d $($_.Description) --force
}
else {
Write-Host "The label $($_.name) does not exist in $RepositoryName. Creating label $($_.name) in $RepositoryName..." -ForegroundColor Cyan
gh label create -R $RepositoryName "$($_.Name)" -c $_.HEX -d $($_.Description) --force
}
}
# POST - Get the current GitHub repository labels and export to a CSV file in the current directory or where -OutputDirectory specifies if set to a valid directory path and the directory exists or can be created if it does not exist already
if ($CreateCsvLabelExports -eq $true) {
Write-Host "Getting the current GitHub repository (post) labels for $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
if ($null -ne $GitHubRepositoryLabels) {
$csvFileNamePathPost = "$OutputDirectory\$($RepositoryName.Replace('/', '_'))-Labels-Post-$(Get-Date -Format FileDateTime).csv"
Write-Host "Exporting the current GitHub repository (post) labels for $RepositoryName to $csvFileNamePathPost" -ForegroundColor Yellow
$GitHubRepositoryLabels | ConvertFrom-Json | Export-Csv -Path $csvFileNamePathPost -NoTypeInformation
}
}
# If -RemoveExistingLabels is set to $true and user confirms they want to remove all pre-existing labels, check that only the AVM labels exist in the repository
if ($RemoveExistingLabels -eq $true -and ($RemoveExistingLabelsConfirmation -eq "Y" -or $NoUserPrompts -eq $true) -and $UpdateAndAddLabelsOnly -eq $false) {
Write-Host "Checking that only the AVM labels exist in $RepositoryName..." -ForegroundColor Yellow
$GitHubRepositoryLabels = gh label list -R $RepositoryName -L $GitHubCliLimit --json name,description,color
$GitHubRepositoryLabels | ConvertFrom-Json | ForEach-Object {
if ($avmLabelsCsv.Name -notcontains $_.name) {
throw "The label $($_.name) exists in $RepositoryName but is not in the CSV file."
}
}
Write-Host "Only the CSV labels exist in $RepositoryName..." -ForegroundColor Green
}
Write-Host "The CSV labels have been created/updated in $RepositoryName..." -ForegroundColor Green
Module owners MUST test that child and extension resources, and the Bicep or Terraform interface resources supported by their modules, are validated in E2E tests as per SNFR2 to ensure they deploy and are configured correctly.
These MAY be tested in a separate E2E test and DO NOT have to be tested in each E2E test.
Module owners MUST set the default resource name prefix for child, extension, and interface resources to the associated abbreviation for the specific resource as documented in the following CAF article Abbreviation examples for Azure resources, if specified and documented. This reduces the amount of input values a module consumer MUST provide by default when using the module.
For example, a Private Endpoint that is being deployed as part of a resource module, via the mandatory interfaces, MUST set the Private Endpoint’s default name to begin with the prefix of pep-.
Module owners MUST also provide the ability for these default names, including the prefixes, to be overridden via a parameter/variable if the consumer wishes to.
Furthermore, as per RMNFR2, Resource Modules MUST NOT have a default value specified for the name of the primary resource; therefore the name MUST be provided and specified by the module consumer.
The name provided MAY be used by the module owner to generate the rest of the default name for child, extension, and interface resources if they wish to. For example, for the Private Endpoint mentioned above, the full default name that can be overridden by the consumer, MAY be pep-<primary-resource-name>.
Tip
If the resource does not have a documented abbreviation in Abbreviation examples for Azure resources, then the module owner is free to use a sensible prefix instead.
Modules SHOULD implement unit testing to ensure logic and conditions within parameters/variables/locals are performing correctly. These tests MUST pass before a module version can be published.
Unit Tests test specific module functionality without deploying resources; they are typically used for more complex modules. In Bicep and Terraform these live in tests/unit.
Modules MUST use static analysis, e.g., linting, security scanning (PSRule, tflint, etc.). These tests MUST pass before a module version can be published.
There may be differences between languages in linting rules and standards, but the AVM core team will try to close these and bring them into alignment over time.
Modules MUST implement idempotency end-to-end (deployment) testing. E.g. deploying the module twice over the top of itself.
Modules SHOULD pass the idempotency test, as we are aware that there are some exceptions where they may fail as a false-positive or legitimate cases where a resource cannot be idempotent.
For example, Virtual Machine Image names must be unique on each resource creation/update.
A module MUST have an owner that is defined and managed by a GitHub Team in the Azure GitHub organization.
Today this is only Microsoft FTEs, but everyone is welcome to contribute. The module just MUST be owned by a Microsoft FTE (today) so we can enforce and provide the long-term support required by this initiative.
Note
The names for the GitHub teams for each approved module are already defined in the respective Module Indexes. These teams MUST be created (and used) for each module.
In AVM there will be multiple different teams involved throughout the initiative's lifecycle and ongoing long-term support. These teams are listed below alongside their definitions.
Important
Individuals can be members of multiple teams, at once, that are defined below.
The Azure Bicep & Terraform Product Groups are responsible for:
Backup/Additional support for orphaned modules to the AVM Core Team
Providing inputs and feedback on AVM
Taking on feedback and feature requests on their products, Bicep & Terraform, from AVM usage
Note
We are investigating working with all Azure Product Groups as a future investment area that they take on ownership, or contribute to, the AVM modules for their service/product.
RACI
RACI Definition
R = Responsible - Those who do the work to complete the task/responsibility.
A = Accountable - The one answerable for the correct and thorough completion of the task. There must be only one accountable person per task/responsibility. Typically has 'sign-off'.
C = Consulted - Those whose opinions are sought.
I = Informed - Those who are kept up to date on progress.
The below table defines a RACI that is proposed to be adopted by AVM and all parties referenced in the table. This will give consumers faith and trust in these modules so that they can consume and contribute to the initiative at scale.
| Action/Task/Responsibility | Module Owners | Module Contributors | AVM Core Team | Product Groups | Notes |
| --- | --- | --- | --- | --- | --- |
| Build/Construct an AVM Module | R, A | R, C | C, I | I | |
| Publish a Bicep AVM Module to the Bicep Public Registry | R, A | C, I | C, I | I | |
| Publish a Terraform AVM Module to the Terraform Registry | R, A | C, I | C, I | I | |
| Manage and maintain tooling/testing frameworks pertaining to module quality | C, I | C, I | R, A | C, I | |
| Manage/run the AVM central backlog (module proposals, orphaned modules, test enhancements, etc.) | | | | | |
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g.:
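For example, a hedged sketch of such a pinned registry reference; the module name and version are illustrative, not an endorsement of a specific release.

```terraform
module "virtual_network" {
  source  = "Azure/avm-res-network-virtualnetwork/azurerm"
  version = "0.4.0" # always pin to a specific published version, never a branch or "latest"

  # Required inputs for the referenced module are omitted for brevity.
}
```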
Authors SHOULD NOT output entire resource objects, as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than name).
E.g.,
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
Where descriptions for variables and outputs span multiple lines, the description MAY provide variable input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable"my_complex_input" {
type = map(object({
param1 = stringparam2 = optional(number, null)
}))
description = <<DESCRIPTION A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION
}
Sometimes we need to ensure that the resources created are compliant with some rules, at a minimum extent; for example, a subnet has to be connected to at least one network_security_group. The user SHOULD either pass in a security_group_id and ask us to make a connection to an existing security group, or ask us to create a new security group.
The disadvantage of this approach is that if the user creates a security group directly in the root module and uses its id as a variable of the module, the expression which determines the value of count will contain an attribute from another resource, and the value of that attribute is "known after apply" at plan stage. Terraform core will then not be able to produce an exact deployment plan during the "plan" stage.
For this kind of parameters, wrapping with object type is RECOMMENDED:
variable"security_group" {
type:object({
id = string })
default = null}
The advantage of doing so is encapsulating the value which is “known after apply” in an object, and the object itself can be easily found out if it’s null or not. Since the id of a resource cannot be null, this approach can avoid the situation we are facing in the first example, like the following:
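A minimal sketch of the pattern described above, assuming a hypothetical module that creates a network security group only when the caller did not supply an existing one; the resource names are illustrative.

```terraform
variable "security_group" {
  type = object({
    id = string
  })
  default = null
}

variable "resource_group_name" {
  type = string
}

variable "location" {
  type = string
}

# Whether `var.security_group` is null is known at plan time, even when the
# wrapped `id` is only known after apply, so `count` can be evaluated.
resource "azurerm_network_security_group" "this" {
  count               = var.security_group == null ? 1 : 0
  name                = "nsg-example"
  location            = var.location
  resource_group_name = var.resource_group_name
}
```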
ID: TFNFR14 - Category: Inputs - Not allowed variables
Since Terraform 0.13, count, for_each and depends_on were introduced for modules, which simplifies module development significantly. Module owners MUST NOT add variables like enabled or module_depends_on to control the entire module's operation. Boolean feature toggles are acceptable, however.
Variables used as feature switches SHOULD apply a positive statement: use xxx_enabled instead of xxx_disabled as the name of a variable. Avoid double negatives like !xxx_disabled.
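For example, a hypothetical feature switch following this convention:

```terraform
# Positive statement: "..._enabled", not "..._disabled".
variable "public_network_access_enabled" {
  type        = bool
  default     = true
  description = "Whether public network access is enabled for the resource."
}
```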
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of description is the module users.
For a newly created variable (e.g., a variable for switching a dynamic block on and off), its description SHOULD precisely describe the input parameter's purpose and the expected data type. The description SHOULD NOT contain any information intended for module developers; that kind of information belongs in code comments only.
For an object type variable, the description can be composed in HEREDOC format:
variable"kubernetes_cluster_key_management_service" {
type:object({
key_vault_key_id = stringkey_vault_network_access = optional(string)
})
default = nulldescription = <<-EOT - `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT}
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a variable's type is object and contains one or more fields that would be assigned to a sensitive argument, then the whole variable SHOULD be declared as sensitive = true; otherwise you SHOULD extract the sensitive fields into a separate variable block with sensitive = true.
Nullable SHOULD be set to false for collection values (e.g. sets, maps, lists) when using them in loops. However for scalar values like string and number, a null value MAY have a semantic meaning and as such these values are allowed.
Sometimes we find that the name of a variable is no longer suitable, or that a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the variable to an independent deprecated_variables.tf file, then redefine the new parameter in variable.tf and make sure it's compatible everywhere else.
A deprecated variable MUST be annotated as DEPRECATED at the beginning of its description, and at the same time the replacement's name SHOULD be declared. E.g.,
variable"enable_network_security_group" {
type = stringdefault = nulldescription = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."}
A cleanup of deprecated_variables.tf SHOULD be performed during a major version release.
The terraform.tf file MUST only contain one terraform block.
The first line of the terraform block MUST define a required_version property for the Terraform CLI.
The required_version property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.
The required_version property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The required_version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
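A minimal sketch of a terraform.tf that satisfies these constraints; the specific versions are illustrative.

```terraform
terraform {
  # Lower bound: oldest CLI version the module is tested with.
  # Upper bound: stay below the next major release until it has been tested.
  required_version = ">= 1.5.0, < 2.0.0"
}
```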
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The terraform block in terraform.tf MUST contain the required_providers block.
Each provider used directly in the module MUST be specified with the source and version properties. Providers in the required_providers block SHOULD be sorted in alphabetical order.
Do not add providers to the required_providers block that are not directly required by this module. If submodules are used then each submodule SHOULD have its own versions.tf file.
The source property MUST be in the format of namespace/name. If this is not explicitly specified, it can cause failure.
The version property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.
The version property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking change, so updates to the major version constraint for a provider MUST be tested.
The version property constraint SHOULD use the ~> #.# or the >= #.#.#, < #.#.# format.
Note: You can read more about Terraform version constraints in the documentation.
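Extending the previous sketch, a terraform block with a required_providers section along these lines; the providers listed and their version ranges are illustrative.

```terraform
terraform {
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    # Sorted alphabetically; each entry pins a minimum version and caps the major version.
    azapi = {
      source  = "Azure/azapi"
      version = ">= 1.13.0, < 2.0.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.100.0, < 4.0.0"
    }
  }
}
```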
As a rule, a provider MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of provider (e.g., manipulating resources across different locations or accounts); in that case, you MUST declare configuration_aliases in terraform.required_providers. See details in this document.
A provider block declared in the module MUST only be used to differentiate instances used in resource and data blocks. Declaring fields other than alias in a provider block is strictly forbidden, as it could leave module users unable to utilize count, for_each or depends_on. Configuration of the provider instance SHOULD be passed in by the module users. A sketch of the exception case follows.
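A hedged sketch of the exception case, declaring a provider alias via configuration_aliases so the consumer can pass in a second provider configuration; the alias name and the resource shown are illustrative.

```terraform
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = ">= 3.100.0, < 4.0.0"
      configuration_aliases = [azurerm.secondary]
    }
  }
}

# The alias is only used to differentiate provider instances on resources/data
# sources; the provider configuration itself is supplied by the module consumer.
resource "azurerm_resource_group" "secondary" {
  provider = azurerm.secondary
  name     = "rg-secondary-region"
  location = "northeurope"
}
```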
Module owners MUST set a branch protection policy on their GitHub Repositories for AVM modules against their default branch, typically main, to do the following:
Require a pull request before merging
Require approval of the most recent reviewable push
Dismiss stale pull request approvals when new commits are pushed
Require linear history
Prevent force pushes
Do not allow deletions
Require CODEOWNERS review
Do not allow bypassing the above settings
The above settings MUST also be enforced for administrators
Tip
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Sometimes we notice that the name of a certain output is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. The output MUST be moved to a dedicated deprecated_outputs.tf file; then define the new output in outputs.tf and make sure it is compatible everywhere else in the module.
A cleanup of deprecated_outputs.tf and of any other compatibility-related logic SHOULD be performed during a major version upgrade.
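A minimal sketch of the pattern, with hypothetical output and resource names:
# deprecated_outputs.tf
output "resource_id" {
  value       = azurerm_storage_account.this.id
  description = "DEPRECATED, use `storage_account_resource_id` instead; The resource ID of the storage account."
}

# outputs.tf
output "storage_account_resource_id" {
  value       = azurerm_storage_account.this.id
  description = "The resource ID of the storage account."
}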
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the locals.tf file, multiple locals blocks may be declared, but only locals blocks are allowed.
You MAY declare locals blocks next to a resource block or data block for some advanced scenarios, like making a fake module to execute some light-weight tests aimed at the expressions.
Since Terraform AzureRM provider 3.0, the default value of prevent_deletion_if_contains_resources in the provider block is true. This can lead to unstable tests because the test subscription has policies applied that add extra resources during the run, which can cause failures when destroying resource groups.
Since we cannot guarantee that Azure Policy remediation tasks won't be applied to our testing environment in the future, prevent_deletion_if_contains_resources SHOULD be explicitly set to false for a robust testing environment.
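In test or example configurations (not in the module itself, per the provider rules above), this typically looks like the following sketch:
provider "azurerm" {
  features {
    resource_group {
      # allow test resource groups to be destroyed even if policies added extra resources
      prevent_deletion_if_contains_resources = false
    }
  }
}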
newres is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating variables.tf and main.tf files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use newres when they're trying to add a new resource block, attribute, or nested block. They MAY generate the whole block along with the corresponding variable blocks in an empty folder, then copy and paste the parts they need, with essential refactoring.
Module owners MUST use map(xxx) or set(xxx) as a resource's for_each collection, and the map's keys or set's elements MUST be static literals.
Good example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource"azurerm_subnet""pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pairresource_group_name = azurerm_resource_group.example.namevirtual_network_name = azurerm_virtual_network.example.nameaddress_prefixes = ["10.0.1.0/24"]
}
There are 3 types of assignment statements in a resource or data block: arguments, meta-arguments and nested blocks. An argument assignment statement is a parameter name followed by =, as in the sketch below.
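A minimal sketch (the resource and values are illustrative) showing all three kinds of statements side by side:
resource "azurerm_resource_group" "example" {
  count = 1 // meta-argument

  name     = "rg-example"  // argument
  location = "westeurope"  // argument

  timeouts { // nested block
    create = "30m"
  }
}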
If you cannot find guidance for what you need, please let us know via GitHub Issues.
Subsections of Contributing
Bicep Contribution Guide
Important
While this page describes and summarizes important aspects of contributing to AVM, it may not reference all of the shared and language-specific requirements.
Therefore, this contribution guide MUST be used in conjunction with the Bicep specifications. ALL AVM modules (Resource and Pattern modules) MUST meet the respective requirements described in these specifications!
Summary
This section lists AVM’s Bicep-specific contribution guidance.
While this page describes and summarizes important aspects of the composition of AVM modules, it may not reference all of the shared and language-specific requirements.
Therefore, this guide MUST be used in conjunction with the Bicep specifications. ALL AVM modules (Resource and Pattern modules) MUST meet the respective requirements described in these specifications!
Important
Before jumping on implementing your contribution, please review the AVM Module specifications, in particular the Bicep specification page, to make sure your contribution complies with the AVM module’s design and principles.
Directory and File Structure
Each Bicep AVM module that lives within the Azure/bicep-registry-modules (BRM) repository in the avm directory will have the following directories and files:
tests/ - (for unit tests and additional E2E/integration if required - e.g. Pester etc.)
e2e/ - (all examples must deploy successfully - these will be used to automatically generate the examples in the README.md for the module)
modules/ - (for sub-modules only if used and NOT children of the primary resource. e.g. RBAC role assignments)
/... - (Module files that live in the root of module directory)
main.bicep (AVM Module main .bicep file and entry point/orchestration module)
main.json (auto generated and what is published to the MCR via BRM)
For a new module (res or ptn), the files can be created automatically, once the parent folder exists. This example shows how to create a res module res/compute/virtual-machine.
Modules enable you to reuse code from a Bicep file in other Bicep files. As such, for resource modules they're normally leveraged for deploying child resources (e.g., file services in a storage account), cross-referenced resources (e.g., network interface in a virtual machine) or extension resources (e.g., role assignments in a key vault). Pattern modules normally reuse resource modules, combining them together.
Make sure to review all specifications covering module properties and usage.
Tip
See examples in specifications BCPFR1 for resource modules and PMNFR2 for pattern modules.
Outputs
Make sure to review all specifications of Category: Inputs/Outputs within the Bicep specific pages.
This section is only relevant for contributions to resource modules.
To meet RMFR4 and RMFR5 AVM resource modules must leverage consistent interfaces for all the optional features/extension resources supported by the AVM module primary resource.
Please refer to the Bicep Interfaces page. If the primary resource of the AVM resource module you are developing supports any of the listed features/extension resources, please follow the corresponding provided Bicep schema to develop them.
Deprecation
Breaking changes are sometimes not avoidable. The impact should be kept as low as possible. A recommendation is to deprecate parameters for a couple of versions, instead of removing them outright. The Semantic Versioning section offers information about versioning AVM modules.
In case you need to deprecate an input parameter, this sample shows you how this can be achieved.
Note
Since all modules are versioned, nothing will change for existing deployments, as the parameter usage does not change for any existing versions.
Example-Scenario
An AVM module is modified and its parameters change in a way that would break backward compatibility:
parameters are changing to a custom type
the parameter structure is changing
backward compatibility will be maintained
Existing input parameters used to be defined as follows (reducing the examples to the minimum):
Before you begin to modify anything, it is recommended to create a new test case (e.g. deprecated), in addition to the already existing tests, to make sure that the changes are not breaking backward compatibility until you decide to finally remove the deprecated parameters (see BCPRMNFR1 - Category: Testing - Expected Test Directories for more details about the requirements).
The test should include all previously used parameters to make sure they are covered before any changes to the new parameter layout are done.
Code Changes
The new parameter structure moves the affected parameters to a different location and looks like this:
// main.bicep:
param item itemType?

type itemType = {
  name: string // the name parameter did not change
  properties: {
    osType: 'Linux' | 'Windows'? // the new place for the osType
    variant: {
      size: string? // the new place for the variant size
    }?
  }

  // keep these for backward compatibility in the new type
  @description('Optional. Note: This is a deprecated property, please use the corresponding `properties.osType` instead.')
  osType: string? // the old parameter location

  @description('Optional. Note: This is a deprecated property, please use the corresponding `properties.variant.size` instead.')
  variant: string? // the old parameter location
}
The original parameter item is of type object and does not give the user any clue of what the syntax is and what is expected to be added to it. The tests could bring light into the darkness, but this is not ideal. In order to retain backward compatibility, the previously used parameters need to be added to the new type, as they would be invalid otherwise. Now that the new type is in place, some logic needs to be implemented to make sure the module can handle the different sources of data (new and old parameters).
resource <modulename> 'Microsoft.xy/yz@2024-01-01' = {
  name: name
  properties: {
    osType: item.?properties.?osType ?? item.?osType ?? 'Linux' // add a default here, if needed
    variant: {
      size: item.?properties.?variant.?size ?? item.?variant
    }
  }
}
By choosing this order for the coalesce operator, the new format takes precedence over the old syntax. Also note that the safe-dereference operator ensures that no null reference exception will occur if the property has optional parameters.
The tests can now be changed to adopt the new parameter structure for the new version of the module; they will no longer cover the old parameter structure.
Changes to modules (resource or pattern) can be implemented in two ways.
Implement changes with backward compatibility
In this scenario, you need to make sure that the code does not break backward compatibility by:
adding new parameters
marking other parameters as deprecated
creating a test case for the old usage syntax
increasing the minor version number of the module (0.x)
Introduce breaking changes
The easier way to introduce a new major version requires fewer steps:
adding new parameters
creating a test case for the usage
increasing the major version number of the module (x.0.0)
Note
Be aware that currently no module has been released as 1.0.0 (or beyond), which lets you implement breaking changes without increasing the major version.
Bicep Contribution Flow
High-level contribution flow
---
config:
nodeSpacing: 20
rankSpacing: 20
diagramPadding: 50
padding: 5
flowchart:
wrappingWidth: 300
padding: 5
layout: elk
elk:
mergeEdges: true
nodePlacementStrategy: LINEAR_SEGMENTS
---
flowchart TD
A("1 - Fork the module source repository")
click A "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#1-fork-the-module-source-repository"
B(2 - Configure a deployment identity in Azure)
click B "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#2-configure-a-deployment-identity-in-azure"
C("3 - Configure CI environment for module tests")
click C "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#3-configure-your-ci-environment"
D("4 - Implementing your contribution<br>(Refer to Gitflow Diagram below)")
click D "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#4-implement-your-contribution"
E(5 - Workflow test completed successfully?)
click E "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#5-createupdate-and-run-tests"
F(6 - Create a pull request to the upstream repository)
click F "/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#6-create-a-pull-request-to-the-public-bicep-registry"
A --> B
B --> C
C --> D
D --> E
E -->|yes|F
E -->|no|D
GitFlow for contributors
The GitFlow process outlined here introduces a central anchor branch. This branch should be treated as if it were a protected branch. It serves to synchronize the forked repository with the original upstream repository. The use of the anchor branch is designed to give contributors the flexibility to work on several modules simultaneously.
When implementing the GitFlow process as described, it is advisable to configure the local clone with a remote for the upstream repository. This will enable the Git CLI and local IDE to merge changes directly from the upstream repository. Using GitHub Desktop, this is configured automatically when cloning the forked repository via the application.
PowerShell Helper Script To Setup Fork & CI Test Environment
Now defaults to OIDC setup
The PowerShell Helper Script has recently added support for the OIDC setup and configuration as documented in detail on this page. This is now the default for the script.
The easiest way to get yourself set back up is to delete your forked repository, including your local clone of it, and start over with the script. This will ensure you have the correct setup for the OIDC authentication method for the AVM CI.
Important
To simplify the setup of the fork, clone and configuration of the required GitHub Environments, Secrets, User-Assigned Managed Identity (UAMI), Federated Credentials and RBAC assignments in your Azure environment for the CI framework to function correctly in your fork, we have created a PowerShell script that you can use to do steps 1, 2 & 3 below.
The script performs the following steps:
Forks the Azure/bicep-registry-modules to your GitHub Account.
Clones the repo locally to your machine, based on the location you specify in the parameter: -GitHubRepositoryPathForCloneOfForkedRepository.
Prompts you and takes you directly to the place where you can enable GitHub Actions Workflows on your forked repo.
Creates an User-Assigned Managed Identity (UAMI) and federated credentials for OIDC with your forked GitHub repo and grants it the RBAC roles of User Access Administrator & Contributor at Management Group level, if specified in the -GitHubSecret_ARM_MGMTGROUP_ID parameter, and at Azure Subscription level if you provide it via the -GitHubSecret_ARM_SUBSCRIPTION_ID parameter.
Creates the required GitHub Environments & required Secrets in your forked repo as per step 3, based on the input provided in parameters and the values from resources the script creates and configures for OIDC. It also sets the workflow permissions to Read and write permissions as per step 3.3.
Pre-requisites
You must have the Azure PowerShell modules installed and be logged in with the context set to the desired tenant. You must have permissions to create an SPN and grant RBAC over the specified Subscription and Management Group, if provided.
You must have the GitHub CLI installed and need to be authenticated with the GitHub user account you wish to use to fork, clone and work with on AVM.
The New-AVMBicepBRMForkSetup.ps1 can be downloaded from here.
Once downloaded, you can run the script as shown below. Please change all the parameter values in the example to your own values (see the parameter documentation in the script itself)!
.\<PATH-TO-SCRIPT-DOWNLOAD-LOCATION>\New-AVMBicepBRMForkSetup.ps1 -GitHubRepositoryPathForCloneOfForkedRepository "<pathToCreateForkedRepoIn>" -GitHubSecret_ARM_MGMTGROUP_ID "<managementGroupId>" -GitHubSecret_ARM_SUBSCRIPTION_ID "<subscriptionId>" -GitHubSecret_ARM_TENANT_ID "<tenantId>" -GitHubSecret_TOKEN_NAMEPREFIX "<unique3to5AlphanumericStringForAVMDeploymentNames>" -UAMIRsgLocation "<Azure Region/Location of your choice such as 'uksouth'>"
For more examples, see the below script’s parameters section.
[Diagnostics.CodeAnalysis.SuppressMessageAttribute("PSAvoidUsingWriteHost", "", Justification = "Coloured output required in this script")]
#Requires -PSEdition Core
#Requires -Modules @{ ModuleName="Az.Accounts"; ModuleVersion="2.19.0" }
#Requires -Modules @{ ModuleName="Az.Resources"; ModuleVersion="6.16.2" }
<#
.SYNOPSIS
This function creates and sets up everything a contributor to the AVM Bicep project should need to get started with their contribution to an AVM Bicep Module.

.DESCRIPTION
This function creates and sets up everything a contributor to the AVM Bicep project should need to get started with their contribution to an AVM Bicep Module. This includes:
- Forking and cloning the `Azure/bicep-registry-modules` repository
- Creating a new SPN and granting it the necessary permissions for the CI tests and configuring the forked repositories secrets, as per: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#2-configure-a-deployment-identity-in-azure
- Enabling GitHub Actions on the forked repository
- Disabling all the module workflows by default, as per: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/enable-or-disable-workflows/
Effectively simplifying this process to a single command, https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/
.PARAMETER GitHubRepositoryPathForCloneOfForkedRepository
Mandatory. The path to the GitHub repository to fork and clone. Directory will be created if does not already exist. Can use either relative paths or full literal paths.
.PARAMETER GitHubSecret_ARM_MGMTGROUP_ID
Optional. The group ID of the management group to test-deploy modules in. Is needed for resources that are deployed to the management group scope. If not provided CI tests on Management Group scoped modules will not work and you will need to manually configure the RBAC role assignments for the SPN and associated repository secret later.
.PARAMETER GitHubSecret_ARM_SUBSCRIPTION_ID
Mandatory. The ID of the subscription to test-deploy modules in. Is needed for resources that are deployed to the subscription scope.
.PARAMETER GitHubSecret_ARM_TENANT_ID
Mandatory. The tenant ID of the Azure Active Directory tenant to test-deploy modules in. Is needed for resources that are deployed to the tenant scope.
.PARAMETER GitHubSecret_TOKEN_NAMEPREFIX
Mandatory. A short (3-5 character length), unique string that should be included in any deployment to Azure. Usually, AVM Bicep test cases require this value to ensure no two contributors deploy resources with the same name - which is especially important for resources that require a globally unique name (e.g., Key Vault). These characters will be used as part of each resource's name during deployment.
.PARAMETER SPNName
Optional. The name of the SPN (Service Principal) to create. If not provided, a default name of `spn-avm-bicep-brm-fork-ci-<GitHub Organization>` will be used.
.PARAMETER UAMIName
Optional. The name of the UAMI (User Assigned Managed Identity) to create. If not provided, a default name of `id-avm-bicep-brm-fork-ci-<GitHub Organization>` will be used.
.PARAMETER UAMIRsgName
Optional. The name of the Resource Group to create for the UAMI (User Assigned Managed Identity) to create. If not provided, a default name of `rsg-avm-bicep-brm-fork-ci-<GitHub Organization>-oidc` will be used.
.PARAMETER UAMIRsgLocation
Optional. The location of the Resource Group to create for the UAMI (User Assigned Managed Identity) to create. Also UAMI will be created in this location. This is required for OIDC deployments.
.PARAMETER UseOIDC
Optional. Default is `$true`. If set to `$true`, the script will use the OIDC (OpenID Connect) authentication method for the SPN instead of secrets as per https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#31-set-up-secrets. If set to `$false`, the script will use the Client Secret authentication method for the SPN and not OIDC.
.EXAMPLE
.\<PATH-TO-SCRIPT-DOWNLOAD-LOCATION>\New-AVMBicepBRMForkSetup.ps1 -GitHubRepositoryPathForCloneOfForkedRepository "D:\GitRepos\" -GitHubSecret_ARM_MGMTGROUP_ID "alz" -GitHubSecret_ARM_SUBSCRIPTION_ID "1b60f82b-d28e-4640-8cfa-e02d2ddb421a" -GitHubSecret_ARM_TENANT_ID "c3df6353-a410-40a1-b962-e91e45e14e4b" -GitHubSecret_TOKEN_NAMEPREFIX "ex123" -UAMIRsgLocation "uksouth"
Example Subscription & Management Group scoped deployments enabled via OIDC with default generated UAMI Resource Group name of `rsg-avm-bicep-brm-fork-ci-<GitHub Organization>-oidc` and UAMI name of `id-avm-bicep-brm-fork-ci-<GitHub Organization>`.
.EXAMPLE
.\<PATH-TO-SCRIPT-DOWNLOAD-LOCATION>\New-AVMBicepBRMForkSetup.ps1 -GitHubRepositoryPathForCloneOfForkedRepository "D:\GitRepos\" -GitHubSecret_ARM_MGMTGROUP_ID "alz" -GitHubSecret_ARM_SUBSCRIPTION_ID "1b60f82b-d28e-4640-8cfa-e02d2ddb421a" -GitHubSecret_ARM_TENANT_ID "c3df6353-a410-40a1-b962-e91e45e14e4b" -GitHubSecret_TOKEN_NAMEPREFIX "ex123" -UAMIRsgLocation "uksouth" -UAMIName "my-uami-name" -UAMIRsgName "my-uami-rsg-name"
Example with provided UAMI Name & UAMI Resource Group Name.
.EXAMPLE
.\<PATH-TO-SCRIPT-DOWNLOAD-LOCATION>\New-AVMBicepBRMForkSetup.ps1 -GitHubRepositoryPathForCloneOfForkedRepository "D:\GitRepos\" -GitHubSecret_ARM_SUBSCRIPTION_ID "1b60f82b-d28e-4640-8cfa-e02d2ddb421a" -GitHubSecret_ARM_TENANT_ID "c3df6353-a410-40a1-b962-e91e45e14e4b" -GitHubSecret_TOKEN_NAMEPREFIX "ex123" -UseOIDC $false
DEPRECATED - USE OIDC INSTEAD.
Example Subscription scoped deployments enabled only with default generated SPN name of `spn-avm-bicep-brm-fork-ci-<GitHub Organization>`.
.EXAMPLE
.\<PATH-TO-SCRIPT-DOWNLOAD-LOCATION>\New-AVMBicepBRMForkSetup.ps1 -GitHubRepositoryPathForCloneOfForkedRepository "D:\GitRepos\" -GitHubSecret_ARM_MGMTGROUP_ID "alz" -GitHubSecret_ARM_SUBSCRIPTION_ID "1b60f82b-d28e-4640-8cfa-e02d2ddb421a" -GitHubSecret_ARM_TENANT_ID "c3df6353-a410-40a1-b962-e91e45e14e4b" -GitHubSecret_TOKEN_NAMEPREFIX "ex123" -SPNName "my-spn-name" -UseOIDC $false
DEPRECATED - USE OIDC INSTEAD.
Example with provided SPN name.
#>
[CmdletBinding(SupportsShouldProcess = $false)]
param (
[Parameter(Mandatory = $true)]
[string] $GitHubRepositoryPathForCloneOfForkedRepository,
[Parameter(Mandatory = $false)]
[string] $GitHubSecret_ARM_MGMTGROUP_ID,
[Parameter(Mandatory = $true)]
[string] $GitHubSecret_ARM_SUBSCRIPTION_ID,
[Parameter(Mandatory = $true)]
[string] $GitHubSecret_ARM_TENANT_ID,
[Parameter(Mandatory = $true)]
[string] $GitHubSecret_TOKEN_NAMEPREFIX,
[Parameter(Mandatory = $false)]
[string] $SPNName,
[Parameter(Mandatory = $false)]
[string] $UAMIName,
[Parameter(Mandatory = $false)]
[string] $UAMIRsgName = "rsg-avm-bicep-brm-fork-ci-oidc",
[Parameter(Mandatory = $false)]
[string] $UAMIRsgLocation,
[Parameter(Mandatory = $false)]
[bool] $UseOIDC = $true
)
# Check if the GitHub CLI is installed
$GitHubCliInstalled = Get-Command gh -ErrorAction SilentlyContinue
if ($null -eq $GitHubCliInstalled) {
    throw 'The GitHub CLI is not installed. Please install the GitHub CLI and try again. Install link for GitHub CLI: https://github.com/cli/cli#installation'
}
Write-Host 'The GitHub CLI is installed...' -ForegroundColor Green

# Check if GitHub CLI is authenticated
$GitHubCliAuthenticated = gh auth status
if ($LASTEXITCODE -ne 0) {
    Write-Host $GitHubCliAuthenticated -ForegroundColor Red
    throw "Not authenticated to GitHub. Please authenticate to GitHub using the GitHub CLI command of 'gh auth login', and try again."
}
Write-Host 'Authenticated to GitHub with following details...' -ForegroundColor Cyan
Write-Host ''
gh auth status
Write-Host ''

# Ask the user to confirm if it's the correct GitHub account
do {
Write-Host "Is the above GitHub account correct to continue with the fork setup of the 'Azure/bicep-registry-modules' repository? Please enter 'y' or 'n'." -ForegroundColor Yellow
$userInput = Read-Host
$userInput = $userInput.ToLower()
switch ($userInput) {
'y' {
Write-Host '' Write-Host 'User Confirmed. Proceeding with the GitHub account listed above...' -ForegroundColor Green
Write-Host ''break }
'n' {
Write-Host ''throw"User stated incorrect GitHub account. Please switch to the correct GitHub account. You can do this in the GitHub CLI (gh) by logging out by running 'gh auth logout' and then logging back in with 'gh auth login'" }
default {
Write-Host '' Write-Host "Invalid input. Please enter 'y' or 'n'." -ForegroundColor Red
Write-Host '' }
}
} while ($userInput -ne'y'-and $userInput -ne'n')
# Fork and clone repository locallyWrite-Host "Changing to directory $GitHubRepositoryPathForCloneOfForkedRepository ..." -ForegroundColor Magenta
if (-not (Test-Path -Path $GitHubRepositoryPathForCloneOfForkedRepository)) {
Write-Host "Directory does not exist. Creating directory $GitHubRepositoryPathForCloneOfForkedRepository ..." -ForegroundColor Yellow
New-Item -Path $GitHubRepositoryPathForCloneOfForkedRepository -ItemType Directory -ErrorAction Stop
Write-Host ''}
Set-Location -Path $GitHubRepositoryPathForCloneOfForkedRepository -ErrorAction stop
$CreatedDirectoryLocation = Get-Location
Write-Host "Forking and cloning 'Azure/bicep-registry-modules' repository..." -ForegroundColor Magenta
gh repo fork 'Azure/bicep-registry-modules' --default-branch-only --clone=true
if ($LASTEXITCODE -ne0) {
throw"Failed to fork and clone the 'Azure/bicep-registry-modules' repository. Please check the error message above, resolve any issues, and try again."}
$ClonedRepoDirectoryLocation = Join-Path $CreatedDirectoryLocation 'bicep-registry-modules'Write-Host ''Write-Host "Fork of 'Azure/bicep-registry-modules' created successfully directory in $CreatedDirectoryLocation ..." -ForegroundColor Green
Write-Host ''Write-Host "Changing into cloned repository directory $ClonedRepoDirectoryLocation ..." -ForegroundColor Magenta
Set-Location $ClonedRepoDirectoryLocation -ErrorAction stop
# Check is user is logged in to Azure$UserLoggedIntoAzure = Get-AzContext -ErrorAction SilentlyContinue
if ($null -eq $UserLoggedIntoAzure) {
throw'You are not logged into Azure. Please log into Azure using the Azure PowerShell module using the command of `Connect-AzAccount` to the correct tenant and try again.'}
$UserLoggedIntoAzureJson = $UserLoggedIntoAzure | ConvertTo-Json -Depth 10 | ConvertFrom-Json
Write-Host "You are logged into Azure as '$($UserLoggedIntoAzureJson.Account.Id)' ..." -ForegroundColor Green
# Check user has access to desired subscription$UserCanAccessSubscription = Get-AzSubscription -SubscriptionId $GitHubSecret_ARM_SUBSCRIPTION_ID -ErrorAction SilentlyContinue
if ($null -eq $UserCanAccessSubscription) {
throw"You do not have access to the subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)'. Please ensure you have access to the subscription and try again."}
Write-Host "You have access to the subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)' ..." -ForegroundColor Green
Write-Host ''# Get GitHub Login/Org Name$GitHubUserRaw = gh api user
$GitHubUserConvertedToJson = $GitHubUserRaw | ConvertFrom-Json -Depth 10$GitHubOrgName = $GitHubUserConvertedToJson.login
$GitHubOrgAndRepoNameCombined = "$($GitHubOrgName)/bicep-registry-modules"# Create SPN if not using OIDCif ($UseOIDC -eq $false) {
if ($SPNName -eq'') {
Write-Host "No value provided for the SPN Name. Defaulting to 'spn-avm-bicep-brm-fork-ci-<GitHub Organization>' ..." -ForegroundColor Yellow
$SPNName = "spn-avm-bicep-brm-fork-ci-$($GitHubOrgName)" }
$newSpn = New-AzADServicePrincipal -DisplayName $SPNName -Description "Service Principal Name (SPN) for the AVM Bicep CI Tests in the $($GitHubOrgName) fork. See: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/#2-configure-a-deployment-identity-in-azure" -ErrorAction Stop
Write-Host "New SPN created with a Display Name of '$($newSpn.DisplayName)' and an Object ID of '$($newSpn.Id)'." -ForegroundColor Green
Write-Host ''# Create RBAC Role Assignments for SPN Write-Host 'Starting 120 second sleep to allow the SPN to be created and available for RBAC Role Assignments (eventual consistency) ...' -ForegroundColor Yellow
Start-Sleep -Seconds 120 Write-Host "Creating RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the Service Principal Name (SPN) '$($newSpn.DisplayName)' on the Subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)' ..." -ForegroundColor Magenta
New-AzRoleAssignment -ApplicationId $newSpn.AppId -RoleDefinitionName 'User Access Administrator' -Scope "/subscriptions/$($GitHubSecret_ARM_SUBSCRIPTION_ID)" -ErrorAction Stop
New-AzRoleAssignment -ApplicationId $newSpn.AppId -RoleDefinitionName 'Contributor' -Scope "/subscriptions/$($GitHubSecret_ARM_SUBSCRIPTION_ID)" -ErrorAction Stop
Write-Host "RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the Service Principal Name (SPN) '$($newSpn.DisplayName)' created successfully on the Subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)'." -ForegroundColor Green
Write-Host ''if ($GitHubSecret_ARM_MGMTGROUP_ID -eq'') {
Write-Host "No Management Group ID provided as input parameter to '-GitHubSecret_ARM_MGMTGROUP_ID', skipping RBAC Role Assignments upon Management Groups" -ForegroundColor Yellow
Write-Host '' }
if ($GitHubSecret_ARM_MGMTGROUP_ID -ne'') {
Write-Host "Creating RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the Service Principal Name (SPN) '$($newSpn.DisplayName)' on the Management Group with the ID of '$($GitHubSecret_ARM_MGMTGROUP_ID)' ..." -ForegroundColor Magenta
New-AzRoleAssignment -ApplicationId $newSpn.AppId -RoleDefinitionName 'User Access Administrator' -Scope "/providers/Microsoft.Management/managementGroups/$($GitHubSecret_ARM_MGMTGROUP_ID)" -ErrorAction Stop
New-AzRoleAssignment -ApplicationId $newSpn.AppId -RoleDefinitionName 'Contributor' -Scope "/providers/Microsoft.Management/managementGroups/$($GitHubSecret_ARM_MGMTGROUP_ID)" -ErrorAction Stop
Write-Host "RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the Service Principal Name (SPN) '$($newSpn.DisplayName)' created successfully on the Management Group with the ID of '$($GitHubSecret_ARM_MGMTGROUP_ID)'." -ForegroundColor Green
Write-Host '' }
}
# Create UAMI if using OIDCif ($UseOIDC) {
if ($UAMIName -eq'') {
Write-Host "No value provided for the UAMI Name. Defaulting to 'id-avm-bicep-brm-fork-ci-<GitHub Organization>' ..." -ForegroundColor Yellow
$UAMIName = "id-avm-bicep-brm-fork-ci-$($GitHubOrgName)" }
if ($UAMIRsgName -eq'') {
Write-Host "No value provided for the UAMI Resource Group Name. Defaulting to 'rsg-avm-bicep-brm-fork-ci-<GitHub Organization>-oidc' ..." -ForegroundColor Yellow
$UAMIRsgName = "rsg-avm-bicep-brm-fork-ci-$($GitHubOrgName)-oidc" }
Write-Host "Selecting the subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)' to create Resource Group & UAMI in for OIDC ..." -ForegroundColor Magenta
Select-AzSubscription -Subscription $GitHubSecret_ARM_SUBSCRIPTION_ID
Write-Host ''if ($UAMIRsgLocation -eq'') {
Write-Host "No value provided for the UAMI Location ..." -ForegroundColor Yellow
$UAMIRsgLocation = Read-Host -Prompt "Please enter the location for the UAMI and the Resource Group to be created in for OIDC deployments. e.g. 'uksouth' or 'eastus', etc..." $UAMIRsgLocation = $UAMIRsgLocation.ToLower()
$availableLocations = Get-AzLocation | Where-Object {$_.RegionType -eq'Physical'} | Select-Object -ExpandProperty Location
if ($availableLocations -notcontains $UAMIRsgLocation) {
Write-Host "Invalid location provided. Please provide a valid location from the list below ..." -ForegroundColor Yellow
Write-Host '' Write-Host "Available Locations: $($availableLocations -join ', ')" -ForegroundColor Yellow
do {
$UAMIRsgLocation = Read-Host -Prompt "Please enter the location for the UAMI and the Resource Group to be created in for OIDC deployments. e.g. 'uksouth' or 'eastus', etc..." } until (
$availableLocations -icontains $UAMIRsgLocation
)
}
}
Write-Host "Creating Resource Group for UAMI with the name of '$($UAMIRsgName)' and location of '$($UAMIRsgLocation)'..." -ForegroundColor Magenta
$newUAMIRsg = New-AzResourceGroup -Name $UAMIRsgName -Location $UAMIRsgLocation -ErrorAction Stop
Write-Host "New Resource Group created with a Name of '$($newUAMIRsg.ResourceGroupName)' and a Location of '$($newUAMIRsg.Location)'." -ForegroundColor Green
Write-Host '' Write-Host "Creating UAMI with the name of '$($UAMIName)' and location of '$($UAMIRsgLocation)' in the Resource Group with the name of '$($UAMIRsgName)..." -ForegroundColor Magenta
$newUAMI = New-AzUserAssignedIdentity -ResourceGroupName $newUAMIRsg.ResourceGroupName -Name $UAMIName -Location $newUAMIRsg.Location -ErrorAction Stop
Write-Host "New UAMI created with a Name of '$($newUAMI.Name)' and an Object ID of '$($newUAMI.PrincipalId)'." -ForegroundColor Green
Write-Host ''# Create Federated Credentials for UAMI for OIDC Write-Host "Creating Federated Credentials for the User-Assigned Managed Identity Name (UAMI) for OIDC ... '$($newUAMI.Name)' for OIDC ..." -ForegroundColor Magenta
New-AzFederatedIdentityCredentials -ResourceGroupName $newUAMIRsg.ResourceGroupName -IdentityName $newUAMI.Name -Name 'avm-gh-env-validation' -Issuer "https://token.actions.githubusercontent.com" -Subject "repo:$($GitHubOrgAndRepoNameCombined):environment:avm-validation" -ErrorAction Stop
Write-Host ''# Create RBAC Role Assignments for UAMI Write-Host 'Starting 120 second sleep to allow the UAMI to be created and available for RBAC Role Assignments (eventual consistency) ...' -ForegroundColor Yellow
Start-Sleep -Seconds 120 Write-Host "Creating RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the User-Assigned Managed Identity Name (UAMI) '$($newUAMI.Name)' on the Subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)' ..." -ForegroundColor Magenta
New-AzRoleAssignment -ObjectId $newUAMI.PrincipalId -RoleDefinitionName 'User Access Administrator' -Scope "/subscriptions/$($GitHubSecret_ARM_SUBSCRIPTION_ID)" -ErrorAction Stop
New-AzRoleAssignment -ObjectId $newUAMI.PrincipalId -RoleDefinitionName 'Contributor' -Scope "/subscriptions/$($GitHubSecret_ARM_SUBSCRIPTION_ID)" -ErrorAction Stop
Write-Host "RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the User-Assigned Managed Identity Name (UAMI) '$($newUAMI.Name)' created successfully on the Subscription with the ID of '$($GitHubSecret_ARM_SUBSCRIPTION_ID)'." -ForegroundColor Green
Write-Host ''if ($GitHubSecret_ARM_MGMTGROUP_ID -eq'') {
Write-Host "No Management Group ID provided as input parameter to '-GitHubSecret_ARM_MGMTGROUP_ID', skipping RBAC Role Assignments upon Management Groups" -ForegroundColor Yellow
Write-Host '' }
if ($GitHubSecret_ARM_MGMTGROUP_ID -ne'') {
Write-Host "Creating RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the User-Assigned Managed Identity Name (UAMI) '$($newSpn.DisplayName)' on the Management Group with the ID of '$($GitHubSecret_ARM_MGMTGROUP_ID)' ..." -ForegroundColor Magenta
New-AzRoleAssignment -ObjectId $newUAMI.PrincipalId -RoleDefinitionName 'User Access Administrator' -Scope "/providers/Microsoft.Management/managementGroups/$($GitHubSecret_ARM_MGMTGROUP_ID)" -ErrorAction Stop
New-AzRoleAssignment -ObjectId $newUAMI.PrincipalId -RoleDefinitionName 'Contributor' -Scope "/providers/Microsoft.Management/managementGroups/$($GitHubSecret_ARM_MGMTGROUP_ID)" -ErrorAction Stop
Write-Host "RBAC Role Assignments of 'Contributor' and 'User Access Administrator' for the User-Assigned Managed Identity Name (UAMI) '$($newUAMI.Name)' created successfully on the Management Group with the ID of '$($GitHubSecret_ARM_MGMTGROUP_ID)'." -ForegroundColor Green
Write-Host '' }
}
# Set GitHub Repo Secrets (non-OIDC)if ($UseOIDC -eq $false) {
Write-Host "Setting GitHub Secrets on forked repository (non-OIDC) '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Magenta
Write-Host 'Creating and formatting secret `AZURE_CREDENTIALS` with details from SPN creation process (non-OIDC) and other parameter inputs ...' -ForegroundColor Cyan
$FormattedAzureCredentialsSecret = "{ 'clientId': '$($newSpn.AppId)', 'clientSecret': '$($newSpn.PasswordCredentials.SecretText)', 'subscriptionId': '$($GitHubSecret_ARM_SUBSCRIPTION_ID)', 'tenantId': '$($GitHubSecret_ARM_TENANT_ID)' }" $FormattedAzureCredentialsSecretJsonCompressed = $FormattedAzureCredentialsSecret | ConvertFrom-Json | ConvertTo-Json -Compress
if ($GitHubSecret_ARM_MGMTGROUP_ID -ne'') {
gh secret set ARM_MGMTGROUP_ID --body $GitHubSecret_ARM_MGMTGROUP_ID -R $GitHubOrgAndRepoNameCombined
}
gh secret set ARM_SUBSCRIPTION_ID --body $GitHubSecret_ARM_SUBSCRIPTION_ID -R $GitHubOrgAndRepoNameCombined
gh secret set ARM_TENANT_ID --body $GitHubSecret_ARM_TENANT_ID -R $GitHubOrgAndRepoNameCombined
gh secret set AZURE_CREDENTIALS --body $FormattedAzureCredentialsSecretJsonCompressed -R $GitHubOrgAndRepoNameCombined
gh secret set TOKEN_NAMEPREFIX --body $GitHubSecret_TOKEN_NAMEPREFIX -R $GitHubOrgAndRepoNameCombined
Write-Host '' Write-Host "Successfully created and set GitHub Secrets (non-OIDC) on forked repository '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Green
Write-Host ''}
# Set GitHub Repo Secrets & Environment (OIDC)if ($UseOIDC) {
Write-Host "Setting GitHub Environment (avm-validation) and required Secrets on forked repository (OIDC) '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Magenta
Write-Host "Creating 'avm-validation' environment on forked repository' ..." -ForegroundColor Cyan
$GitHubEnvironment = gh api --method PUT -H "Accept: application/vnd.github+json""repos/$($GitHubOrgAndRepoNameCombined)/environments/avm-validation" $GitHubEnvironmentConvertedToJson = $GitHubEnvironment | ConvertFrom-Json -Depth 10if ($GitHubEnvironmentConvertedToJson.name -ne'avm-validation') {
throw"Failed to create 'avm-validation' environment on forked repository. Please check the error message above, resolve any issues, and try again." }
Write-Host "Successfully created 'avm-validation' environment on forked repository' ..." -ForegroundColor Green
Write-Host '' Write-Host "Creating and formatting secrets for 'avm-validation' environment with details from UAMI creation process (OIDC) and other parameter inputs ..." -ForegroundColor Cyan
gh secret set VALIDATE_CLIENT_ID --body $newUAMI.ClientId -R $GitHubOrgAndRepoNameCombined -e 'avm-validation' gh secret set VALIDATE_SUBSCRIPTION_ID --body $GitHubSecret_ARM_SUBSCRIPTION_ID -R $GitHubOrgAndRepoNameCombined -e 'avm-validation' gh secret set VALIDATE_TENANT_ID --body $GitHubSecret_ARM_TENANT_ID -R $GitHubOrgAndRepoNameCombined -e 'avm-validation' Write-Host "Creating and formatting secrets for repo with details from UAMI creation process (OIDC) and other parameter inputs ..." -ForegroundColor Cyan
if ($GitHubSecret_ARM_MGMTGROUP_ID -ne'') {
gh secret set ARM_MGMTGROUP_ID --body $GitHubSecret_ARM_MGMTGROUP_ID -R $GitHubOrgAndRepoNameCombined
}
gh secret set ARM_SUBSCRIPTION_ID --body $GitHubSecret_ARM_SUBSCRIPTION_ID -R $GitHubOrgAndRepoNameCombined
gh secret set ARM_TENANT_ID --body $GitHubSecret_ARM_TENANT_ID -R $GitHubOrgAndRepoNameCombined
gh secret set TOKEN_NAMEPREFIX --body $GitHubSecret_TOKEN_NAMEPREFIX -R $GitHubOrgAndRepoNameCombined
Write-Host '' Write-Host "Successfully created and set GitHub Secrets in 'avm-validation' environment and repo (OIDC) on forked repository '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Green
Write-Host ''}
Write-Host "Opening browser so you can enable GitHub Actions on newly forked repository '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Magenta
Write-Host "Please select click on the green button stating 'I understand my workflows, go ahead and enable them' to enable actions/workflows on your forked repository via the website that has appeared in your browser window and then return to this terminal session to continue ..." -ForegroundColor Yellow
Start-Process "https://github.com/$($GitHubOrgAndRepoNameCombined)/actions" -ErrorAction Stop
Write-Host ''$GitHubWorkflowPlatformToggleWorkflows = '.Platform - Toggle AVM workflows'$GitHubWorkflowPlatformToggleWorkflowsFileName = 'platform.toggle-avm-workflows.yml'do {
Write-Host "Did you successfully enable the GitHub Actions/Workflows on your forked repository '$($GitHubOrgAndRepoNameCombined)'? Please enter 'y' or 'n'." -ForegroundColor Yellow
$userInput = Read-Host
$userInput = $userInput.ToLower()
switch ($userInput) {
'y' {
Write-Host '' Write-Host "User Confirmed. Proceeding to trigger workflow of '$($GitHubWorkflowPlatformToggleWorkflows)' to disable all workflows as per: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/enable-or-disable-workflows/..." -ForegroundColor Green
Write-Host ''break }
'n' {
Write-Host '' Write-Host 'User stated no. Ending script here. Please review and complete any of the steps you have not completed, likely just enabling GitHub Actions/Workflows on your forked repository and then disabling all workflows as per: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/enable-or-disable-workflows/' -ForegroundColor Yellow
exit
}
default {
Write-Host '' Write-Host "Invalid input. Please enter 'y' or 'n'." -ForegroundColor Red
Write-Host '' }
}
} while ($userInput -ne'y'-and $userInput -ne'n')
Write-Host "Setting Read/Write Workflow permissions on forked repository '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Magenta
gh api --method PUT -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28""/repos/$($GitHubOrgAndRepoNameCombined)/actions/permissions/workflow"-f"default_workflow_permissions=write"Write-Host ''Write-Host "Triggering '$($GitHubWorkflowPlatformToggleWorkflows) on '$($GitHubOrgAndRepoNameCombined)' ..." -ForegroundColor Magenta
Write-Host ''gh workflow run $GitHubWorkflowPlatformToggleWorkflows -R $GitHubOrgAndRepoNameCombined
Write-Host ''Write-Host 'Starting 120 second sleep to allow the workflow run to complete ...' -ForegroundColor Yellow
Start-Sleep -Seconds 120Write-Host ''Write-Host "Workflow '$($GitHubWorkflowPlatformToggleWorkflows) on '$($GitHubOrgAndRepoNameCombined)' should have now completed, opening workflow in browser so you can check ..." -ForegroundColor Magenta
Start-Process "https://github.com/$($GitHubOrgAndRepoNameCombined)/actions/workflows/$($GitHubWorkflowPlatformToggleWorkflowsFileName)" -ErrorAction Stop
Write-Host ''Write-Host "Script execution complete. Fork of '$($GitHubOrgAndRepoNameCombined)' created and configured and cloned to '$($ClonedRepoDirectoryLocation)' as per Bicep contribution guide: https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/ you are now ready to proceed from step 4. Opening the Bicep Contribution Guide for you to review and continue..." -ForegroundColor Green
Start-Process 'https://azure.github.io/Azure-Verified-Modules/contributing/bicep/bicep-contribution-flow/'
Whenever the following sections refer to ‘your xyz’, it is an indicator that you have to change something in your own environment.
Bicep AVM Modules (Resource, Pattern and Utility modules) are located in the /avm directory of the Azure/bicep-registry-modules repository, as per SNFR19.
Module owners are expected to fork the Azure/bicep-registry-modules repository and work on a branch from within their fork, before creating a Pull Request (PR) back into the Azure/bicep-registry-modules repository’s upstream main branch.
To do so, simply navigate to the Public Bicep Registry repository, select the 'Fork' button to the top right of the UI, select where the fork should be created (i.e., the owning organization) and finally click ‘Create fork’.
1.1 Create a GitHub environment
Create the avm-validation environment in your fork.
How to: Create an environment in GitHub
Navigate to the repository’s Settings.
In the list of settings, expand Environments. You can create a new environment by selecting New environment on the top right.
In the opening view, provide avm-validation for the environment Name. Click on the Configure environment button.
Create a new or leverage an existing user-assigned managed identity with at least Contributor & User Access Administrator permissions on the Management-Group/Subscription you want to test the modules in. You might find the following links useful:
Some Azure resources may require additional roles to be assigned to the deployment identity. An example is the avm/res/aad/domain-service module, which requires the deployment identity to have the Domain Services Contributor Azure role to create the required Domain Services resources.
In those cases, for the first PR adding such modules to the public registry, we recommend that the author reach out to the AVM maintainers or, alternatively, create a CI environment GitHub issue in BRM, specifying the additional prerequisites. This ensures that the required additional roles get assigned in the upstream CI environment before the corresponding PR gets merged.
Configure a federated identity credential on a user-assigned managed identity to trust tokens issued by GitHub Actions to your GitHub repository.
In the Microsoft Entra admin center, navigate to the user-assigned managed identity you created. Under Settings in the left nav bar, select Federated credentials and then Add Credential.
In the Federated credential scenario dropdown box, select GitHub Actions deploying Azure resources
For the Organization, specify your GitHub organization name, for the Repository the value bicep-registry-modules.
For the Entity type, select Environment and specify the value avm-validation.
Add a Name for the federated credential, for example, avm-gh-env-validation.
The Issuer, Audiences, and Subject identifier fields auto-populate based on the values you entered.
Select Add to configure the federated credential.
You might find the following links & information useful:
If configuring the federated credential via API (e.g. Bicep, PowerShell etc.), you will need the following information points that are configured automatically for you via the portal experience:
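Based on the values used by the PowerShell helper script above (the audience shown is the standard value for GitHub Actions federation and is set automatically when using the portal or the default cmdlet behaviour):
Issuer: https://token.actions.githubusercontent.com
Subject: repo:<your GitHub organization>/bicep-registry-modules:environment:avm-validation (this must match the avm-validation environment created at step 1.1)
Audience: api://AzureADTokenExchange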
Option 2 [Deprecated]: Configure Service Principal + Secret
Create a new or leverage an existing Service Principal with at least Contributor & User Access Administrator permissions on the Management-Group/Subscription you want to test the modules in. You might find the following links useful:
To use the Continuous Integration environment’s workflows you should set up the following repository secrets:
Secret Name
Example
Description
ARM_MGMTGROUP_ID
11111111-1111-1111-1111-111111111111
The group ID of the management group to test-deploy modules in. Is needed for resources that are deployed to the management group scope.
ARM_SUBSCRIPTION_ID
22222222-2222-2222-2222-222222222222
The ID of the subscription to test-deploy modules in. Is needed for resources that are deployed to the subscription scope. Note: This repository secret will be deprecated in favor of the VALIDATE_SUBSCRIPTION_ID environment secret required by the OIDC authentication.
ARM_TENANT_ID
33333333-3333-3333-3333-333333333333
The tenant ID of the Azure Active Directory tenant to test-deploy modules in. Is needed for resources that are deployed to the tenant scope. Note: This repository secret will be deprecated in favor of the VALIDATE_TENANT_ID environment secret required by the OIDC authentication.
TOKEN_NAMEPREFIX
cntso
Required. A short (3-5 character length), unique string that should be included in any deployment to Azure. Usually, AVM Bicep test cases require this value to ensure no two contributors deploy resources with the same name - which is especially important for resources that require a globally unique name (e.g., Key Vault). These characters will be used as part of each resource’s name during deployment. For more information, see the [Special case: TOKEN_NAMEPREFIX] note below.
Special case: TOKEN_NAMEPREFIX
To lower the barrier to entry and allow users to easily define their own naming conventions, we introduced a default ’name prefix’ for all deployed resources.
This prefix is only used by the CI environment you validate your modules in, and doesn’t affect the naming of any resources you deploy as part of any solutions (applications/workloads) based on the modules.
Each workflow in AVM deploying resources uses a logic that automatically replaces “tokens” (i.e., placeholders) in any module test file. These tokens are, for example, included in the resources names (e.g. 'name: kvlt-${namePrefix}'). Tokens are stored as repository secrets to facilitate maintenance.
How to: Add a repository secret to GitHub
Navigate to the repository’s Settings.
In the list of settings, expand Secrets and select Actions. You can create a new repository secret by selecting New repository secret on the top right.
In the opening view, you can create a secret by providing a secret Name, a secret Value, followed by a click on the Add secret button.
3.1.2 Authentication secrets
In addition to shared repository secrets detailed above, additional GitHub secrets are required to allow the deploying identity to authenticate to Azure.
Expand and follow the option corresponding to the deployment identity setup chosen at Step 2 and use the information you gathered during that step.
Option 1 [Recommended]: Authenticate via OIDC
Create the following environment secrets in the avm-validation GitHub environment created at Step 1
Secret Name
Example
Description
VALIDATE_CLIENT_ID
44444444-4444-4444-4444-444444444444
The login credentials of the deployment principal used to log into the target Azure environment to test in. The format is described here.
VALIDATE_SUBSCRIPTION_ID
22222222-2222-2222-2222-222222222222
Same as the ARM_SUBSCRIPTION_ID repository secret set up above. The ID of the subscription to test-deploy modules in. Is needed for resources that are deployed to the subscription scope.
VALIDATE_TENANT_ID
33333333-3333-3333-3333-333333333333
Same as the ARM_TENANT_ID repository secret set up above. The tenant ID of the Azure Active Directory tenant to test-deploy modules in. Is needed for resources that are deployed to the tenant scope.
How to: Add an environment secret to GitHub
Navigate to the repository’s Settings.
In the list of settings, select Environments. Click on the previously created avm-validation environment.
In the Environment secrets Section click on the Add environment secret button.
In the opening view, you can create a secret by providing a secret Name, a secret Value, followed by a click on the Add secret button.
Option 2 [Deprecated]: Authenticate via Service Principal + Secret
Create the following repository secret:
The login credentials of the deployment principal used to log into the target Azure environment to test in. The format is described here. For more information, see the [Special case: AZURE_CREDENTIALS] note below.
Special case: AZURE_CREDENTIALS
This secret represents the service connection to Azure, and its value is a compressed JSON object that must match the following format:
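Based on the AZURE_CREDENTIALS secret constructed by the helper script above, the object has this shape (the values are placeholders for the details you collected during Step 2):
{"clientId": "<client_id>", "clientSecret": "<client_secret>", "subscriptionId": "<subscription_id>", "tenantId": "<tenant_id>"}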
Make sure you create this object as one continuous string as shown above, using the information you collected during Step 2. Failing to format the secret as above causes GitHub to consider each line of the JSON object as a separate secret string. If you're interested, you can find more information about this object here.
3.2. Enable actions
Finally, ‘GitHub Actions’ are disabled by default and hence must be enabled first.
To do so, perform the following steps:
Navigate to the Actions tab on the top of the repository page.
Next, select ‘I understand my workflows, go ahead and enable them’.
3.3. Set Read/Write Workflow permissions
To let the workflow engine publish their results into your repository, you have to enable the read / write access for the GitHub actions.
Navigate to the Settings tab on the top of your repository page.
Within the section Code and automation, click on Actions and then General
Make sure to enable Read and write permissions
Tip
Once you enabled the GitHub actions, your workflows will behave as they do in the upstream repository. This includes a scheduled trigger to continuously check that all modules are working and compliant with the latest tests. However, testing all modules can incur substantial costs with the target subscription. Therefore, we recommend disabling all workflows of modules you are not working on. To make this as easy as possible, we created a workflow that disables/enables workflows based on a selected toggle & naming pattern. For more information on how to use this workflow, please refer to the corresponding documentation.
4. Implement your contribution
To implement your contribution, we kindly ask you to first review the Bicep specifications and composition guidelines in particular to make sure your contribution complies with the repository’s design and principles.
If you’re working on a new module, we’d also ask you to create its corresponding workflow file. Each module has its own file, but only differs in very few details, such as its triggers and pipeline variables. As a result, you can either copy & update any other module workflow file (starting with 'avm.[res|ptn|utl].') or leverage the following template:
Module workflow template
# >>> UPDATE to for example "avm.res.key-vault.vault" and remove this comment
name: "avm.[res|ptn|utl].[provider-namespace].[resource-type]"

on:
  workflow_dispatch:
    inputs:
      staticValidation:
        type: boolean
        description: "Execute static validation"
        required: false
        default: true
      deploymentValidation:
        type: boolean
        description: "Execute deployment validation"
        required: false
        default: true
      removeDeployment:
        type: boolean
        description: "Remove deployed module"
        required: false
        default: true
      customLocation:
        type: string
        description: "Default location overwrite (e.g., eastus)"
        required: false
  push:
    branches:
      - main
    paths:
      - ".github/actions/templates/avm-**"
      - ".github/workflows/avm.template.module.yml"
      # >>> UPDATE to for example ".github/workflows/avm.res.key-vault.vault.yml" and remove this comment
      - ".github/workflows/avm.[res|ptn|utl].[provider-namespace].[resource-type].yml"
      # >>> UPDATE to for example "avm/res/key-vault/vault/**" and remove this comment
      - "avm/[res|ptn|utl]/[provider-namespace]/[resource-type]/**"
      - "utilities/pipelines/**"
      - "!utilities/pipelines/platform/**"
      - "!*/**/README.md"

env:
  # >>> UPDATE to for example "avm/res/key-vault/vault" and remove this comment
  modulePath: "avm/[res|ptn|utl]/[provider-namespace]/[resource-type]"
  # >>> UPDATE to for example ".github/workflows/avm.res.key-vault.vault.yml" and remove this comment
  workflowPath: ".github/workflows/avm.[res|ptn|utl].[provider-namespace].[resource-type].yml"

concurrency:
  group: ${{ github.workflow }}

jobs:
  ###########################
  #   Initialize pipeline   #
  ###########################
  job_initialize_pipeline:
    runs-on: ubuntu-latest
    name: "Initialize pipeline"
    steps:
      - name: "Checkout"
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: "Set input parameters to output variables"
        id: get-workflow-param
        uses: ./.github/actions/templates/avm-getWorkflowInput
        with:
          workflowPath: "${{ env.workflowPath}}"
      - name: "Get module test file paths"
        id: get-module-test-file-paths
        uses: ./.github/actions/templates/avm-getModuleTestFiles
        with:
          modulePath: "${{ env.modulePath }}"
    outputs:
      workflowInput: ${{ steps.get-workflow-param.outputs.workflowInput }}
      moduleTestFilePaths: ${{ steps.get-module-test-file-paths.outputs.moduleTestFilePaths }}
      psRuleModuleTestFilePaths: ${{ steps.get-module-test-file-paths.outputs.psRuleModuleTestFilePaths }}
      modulePath: "${{ env.modulePath }}"

  ##############################
  #   Call reusable workflow   #
  ##############################
  call-workflow-passing-data:
    name: "Run"
    permissions:
      id-token: write # For OIDC
      contents: write # For release tags
    needs:
      - job_initialize_pipeline
    uses: ./.github/workflows/avm.template.module.yml
    with:
      workflowInput: "${{ needs.job_initialize_pipeline.outputs.workflowInput }}"
      moduleTestFilePaths: "${{ needs.job_initialize_pipeline.outputs.moduleTestFilePaths }}"
      psRuleModuleTestFilePaths: "${{ needs.job_initialize_pipeline.outputs.psRuleModuleTestFilePaths }}"
      modulePath: "${{ needs.job_initialize_pipeline.outputs.modulePath}}"
    secrets: inherit
Tip
After any change to a module and before running tests, we highly recommend running the Set-AVMModule utility to update all module files that are auto-generated (e.g., the main.json & readme.md files).
5. Create/Update and run tests
Before opening a Pull Request to the Bicep Public Registry, ensure your module is ready for publishing, by validating that it meets all the Testing Specifications as per SNFR1, SNFR2, SNFR3, SNFR4, SNFR5, SNFR6, SNFR7.
For example, to meet SNFR2, ensure the updated module is deployable against a testing Azure subscription and compliant with the intended configuration.
Depending on the type of contribution you implemented (for example, a new resource module feature), we kindly ask you to also update the e2e tests run by the pipeline. For a new parameter, this could mean either adding its usage to an existing test file or adding an entirely new test as per BCPRMNFR1.
Once the contribution is implemented and the changes are pushed to your forked repository, we kindly ask you to validate your updates in your own cloud environment before requesting to merge them to the main repo. Test your code leveraging the forked AVM CI environment you configured before.
Tip
In case your contribution involves changes to a module, you can also optionally leverage the Validate module locally utility to validate the updated module from your local host before validating it through its pipeline.
Creating end-to-end tests
As per BCPRMNFR1, a resource module must contain a minimum set of deployment test cases, while for pattern modules there is no restriction on how each deployment test is named. In either case, you’re free to implement any additional, meaningful test that you see fit. Each test is implemented in its own test folder, containing at least a main.test.bicep and, optionally, any number of extra deployment files you may require (e.g., to deploy dependencies using a dependencies.bicep that you reference in the test template file).
To get started implementing your test in the main.test.bicep file, we recommend the following guidelines (a minimal sketch illustrating them follows at the end of this section):
As per BCPNFR13, each main.test.bicep file should implement metadata to render the test more meaningful in the documentation
The main.test.bicep file should deploy any immediate dependencies (e.g., a resource group, if required) and invoke the module’s main template while providing all parameters for a given test scenario.
Parameters
Each file should define a parameter serviceShort. This parameter should be unique to this file (i.e., no two test files should share the same value), as it is injected into all resource deployments, making them unique as well and accounting for corresponding naming requirements.
As a reference, you can create an identifier by combining a substring of the resource type and the test scenario (e.g., in case of a Linux Virtual Machine deployment: vmlin).
For the substring, we recommend taking the first character and each subsequent ‘first’ character from the resource type identifier and combining them into one string. Below you can find a few examples for reference:
db-for-postgre-sql/flexible-server with a test folder default could be: dfpsfsdef
storage/storage-account with a test folder waf-aligned could be: ssawaf
💡 If the combination of the serviceShort with the rest of a resource name becomes too long, it may be necessary to bend the above recommendations and shorten the name. This can especially happen when deploying resources such as Virtual Machines or Storage Accounts that only allow comparatively short names.
If the module deploys a resource-group-level resource, the template should further have a resourceGroupName parameter and a corresponding resource deployment. As a reference for the default name, you can use dep-<namePrefix><providerNamespace>.<resourceType>-${serviceShort}-rg.
Each file should also provide a location parameter that may default to the deployment’s default location.
It is recommended to define all major resource names in the main.test.bicep file as it makes later maintenance easier. To implement this, make sure to pass all resource names to any referenced module (including any resource deployed in the dependencies.bicep).
Further, for any test file (including the dependencies.bicep file), the usage of variables should be reduced to the absolute minimum. In other words: you should only use variables if you must use them in more than one place. The idea is to keep the test files as simple as possible.
References to dependencies should be implemented using resource references in combination with outputs. In other words: you should not hardcode any references into the module template’s deployment. Instead, use references such as nestedDependencies.outputs.managedIdentityPrincipalId.
Important
As per BCPNFR12 you must use the header module testDeployment '../.*main.bicep' = when invoking the module’s template.
The dependencies.bicep should optionally be used if any additional dependencies must be deployed into a nested scope (e.g. into a deployed Resource Group).
Note that you can reuse many of the assets implemented in other modules. For example, there are many recurring implementations for Managed Identities, Key Vaults, Virtual Network deployments, etc.
A special case to point out is the implementation of Key Vaults that require purge protection (for example, for Customer Managed Keys). As this implies that we cannot fully clean up a test deployment, it is recommended to generate a new name for this resource upon each pipeline run using the output of the utcNow() function at the time.
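A minimal sketch of this pattern is shown below; the parameter and variable names as well as the name prefix are only illustrative assumptions.

@description('Optional. A timestamp used to generate a new, unique Key Vault name on each run, since purge-protected vaults cannot be fully cleaned up.')
param baseTime string = utcNow('u')

// Illustrative name generation for this sketch; adapt the prefix to your module's conventions
var keyVaultName = 'dep-kv-${substring(uniqueString(baseTime), 0, 6)}'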
If your test case requires any value that you cannot / should not specify in the test file itself (e.g., tenant-specific object IDs or secrets), please refer to the Custom CI secrets feature.
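To illustrate the guidelines above, the following is a minimal, hypothetical main.test.bicep sketch for a fictitious resource module; all names, the resource group API version, and the module parameters are illustrative assumptions and need to be adapted to your module.

targetScope = 'subscription'

metadata name = 'Using only defaults'
metadata description = 'This instance deploys the module with the minimum set of required parameters.'

// ========== //
// Parameters //
// ========== //

@description('Optional. A short identifier for the kind of deployment. Should be kept short to avoid resource-name length limits.')
param serviceShort string = 'kvvdef'

@description('Optional. A token to inject into the name of each resource.')
param namePrefix string = '#_namePrefix_#'

@description('Optional. The name of the resource group to deploy for testing purposes.')
@maxLength(90)
param resourceGroupName string = 'dep-${namePrefix}-keyvault.vaults-${serviceShort}-rg'

@description('Optional. The location to deploy resources to.')
param location string = deployment().location

// ============ //
// Dependencies //
// ============ //

resource resourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: resourceGroupName
  location: location
}

// ============== //
// Test Execution //
// ============== //

module testDeployment '../../../main.bicep' = {
  scope: resourceGroup
  name: '${uniqueString(deployment().name, location)}-test-${serviceShort}'
  params: {
    name: '${namePrefix}${serviceShort}001'
    location: location
  }
}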
Reusable assets
There are a number of additional scripts and utilities available here that may be of use to module owners/contributors. These contain both scripts and Bicep templates that you can re-use in your test files (e.g., to deploy standardized dependencies, or to generate keys using deployment scripts).
Example: Certificate creation script
If you need a Deployment Script to set up additional non-template resources (for example certificates/files, etc.), we recommend storing it as a file in the shared utilities/e2e-template-assets/scripts folder and loading it using the template function loadTextContent() (for example: scriptContent: loadTextContent('../../../../../../utilities/e2e-template-assets/scripts/New-SSHKey.ps1')). This approach makes it easier to test & validate the logic and further allows reusing the same logic across multiple test cases.
Example: Diagnostic Settings dependencies
To test the numerous diagnostic settings targets (Log Analytics Workspace, Storage Account, Event Hub, etc.), the AVM core team has provided a dependencies.bicep file to help create all the prerequisite targets needed during test runs.
Diagnostic Settings Dependencies - Bicep File

// ========== //
// Parameters //
// ========== //

@description('Required. The name of the storage account to create.')
@maxLength(24)
param storageAccountName string

@description('Required. The name of the log analytics workspace to create.')
param logAnalyticsWorkspaceName string

@description('Required. The name of the event hub namespace to create.')
param eventHubNamespaceName string

@description('Required. The name of the event hub to create inside the event hub namespace.')
param eventHubNamespaceEventHubName string

@description('Optional. The location to deploy resources to.')
param location string = resourceGroup().location

// ============ //
// Dependencies //
// ============ //

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    allowBlobPublicAccess: false
  }
}

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-12-01-preview' = {
  name: logAnalyticsWorkspaceName
  location: location
}

resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-11-01' = {
  name: eventHubNamespaceName
  location: location

  resource eventHub 'eventhubs@2021-11-01' = {
    name: eventHubNamespaceEventHubName
  }

  resource authorizationRule 'authorizationRules@2021-06-01-preview' = {
    name: 'RootManageSharedAccessKey'
    properties: {
      rights: [
        'Listen'
        'Manage'
        'Send'
      ]
    }
  }
}

// ======= //
// Outputs //
// ======= //

@description('The resource ID of the created Storage Account.')
output storageAccountResourceId string = storageAccount.id

@description('The resource ID of the created Log Analytics Workspace.')
output logAnalyticsWorkspaceResourceId string = logAnalyticsWorkspace.id

@description('The resource ID of the created Event Hub Namespace.')
output eventHubNamespaceResourceId string = eventHubNamespace.id

@description('The resource ID of the created Event Hub Namespace Authorization Rule.')
output eventHubAuthorizationRuleId string = eventHubNamespace::authorizationRule.id

@description('The name of the created Event Hub Namespace Event Hub.')
output eventHubNamespaceEventHubName string = eventHubNamespace::eventHub.name
6. Create a Pull Request to the Public Bicep Registry
Finally, once you are satisfied with your contribution and validated it, open a PR for the module owners or core team to review. Make sure you:
Provide a meaningful title in the form of feat: <module name> to align with the Semantic PR Check.
Provide a meaningful description.
Follow instructions you find in the PR template.
If applicable (i.e., a module is created/updated), please reference the status badge of your pipeline run. This badge will show the reviewer that the code changes were successfully validated & tested in your environment. To create a badge, first select the three dots (...) at the top right of the pipeline, and then choose the Create status badge option.
In the opening pop-up, first select your branch and then click Copy status badge Markdown.
Note
If you’re the sole owner of the module, the AVM core team must review and approve the PR. To indicate that your PR needs the core team’s attention, apply the “Needs: Core Team 🧞” label on it!
Subsections of Contribution Flow
Custom CI Secrets
When working on a module, and more specifically its e2e deployment validation test cases, it may be necessary to leverage tenant-specific information such as:
(non-sensitive) tenant-specific values (e.g., the object ID of an Enterprise Application, which differs from tenant to tenant)
(sensitive) principal credentials (e.g., a custom service principal’s application ID and secret)
The challenge with the former is that the value differs between the contributor’s test tenant and the upstream AVM one. This requires the contributor to temporarily change the value to their own tenant’s value while creating & testing the contribution, and the reviewer to make sure the value is changed back before merging the PR. The challenge with the latter is more critical, as it would require the contributor to store sensitive information in source control and, as such, publish it.
To mitigate this challenge, the AVM CI provides you with the feature to store any such information in a custom Azure Key Vault and automatically pass it into your test cases in a dynamic & secure way.
Important
Since all modules must pass the tests in the AVM environment, it is important that you inform the maintainers when you add a new custom secret. The same secret must then also be set up in the upstream environment before the pull request is merged.
To keep this simple, we kindly ask you to emphasize this requirement in the description of your PR, for example by adding a text similar to:
- [ ] @avm-core-team-technical-bicep TODO: Add custom secret 'mySecret' to AVM CI
Example use case
Let’s assume you need a tenant-specific value, like the object ID of Azure’s Backup Management Service Enterprise Application, for one of your tests. As you want to avoid hardcoding it and consequently changing its value each time you contribute from your fork to the main AVM repository, you instead want it to be automatically pulled into your test cases.
To do so, you create a new parameter in your test case’s main.test.bicep file that you call, for example, backupManagementServiceEnterpriseApplicationObjectId,
assuming that it will be provided with the correct value by the AVM CI. You then reference it in your test case as you would any other Bicep parameter.
Next, you create a new secret of the same name with a prefix CI- in a previously created Azure Key Vault of your test subscription (e.g., CI-backupManagementServiceEnterpriseApplicationObjectId). Its value would be the object id the Enterprise Application has in the tenant of your test subscription.
Assuming that the CI_KEY_VAULT_NAME GitHub repository variable is also configured correctly, you can now run your test pipeline and observe how the CI automatically pulls the secret and passes it into your test cases, if they have a parameter with a matching name.
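In the test file, such a parameter could look like the following sketch; the parameter name matches the example above, and the empty, secure default follows the setup rules described in the next section.

@description('Required. The object ID of the Backup Management Service Enterprise Application in the test tenant. This value is tenant-specific and must be stored in the CI Key Vault in a secret named \'CI-backupManagementServiceEnterpriseApplicationObjectId\'.')
@secure()
param backupManagementServiceEnterpriseApplicationObjectId string = ''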
Setup
Pre-Requisites
To use this feature, there are really only three prerequisites:
Create an Azure Key Vault in your test subscription
Grant the principal you use for testing in the CI at least 'Key Vault Secrets User' permissions on that Key Vault to enable it to pull secrets from it
Configure the name of that Key Vault as a ‘Repository variable’ CI_KEY_VAULT_NAME in your Fork.
The above will enable the CI to identify your Key Vault, look for matching secrets in it, and pull their values as needed.
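If you prefer to script these prerequisites, a sketch using the Azure & GitHub CLIs could look like the following; all names and IDs are placeholders you need to replace with values from your environment.

# Create a Key Vault in your test subscription (using RBAC-based access)
az keyvault create --name <myCiKeyVault> --resource-group <myCiResourceGroup> --location <location> --enable-rbac-authorization true

# Grant the principal used by the CI permission to read secrets from it
az role assignment create --assignee <ciPrincipalObjectId> --role "Key Vault Secrets User" --scope $(az keyvault show --name <myCiKeyVault> --query id --output tsv)

# Register the Key Vault name as the 'CI_KEY_VAULT_NAME' repository variable on your fork
gh variable set CI_KEY_VAULT_NAME --body "<myCiKeyVault>" --repo <yourGitHubHandle>/bicep-registry-modules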
Configuring a secret
Building upon the prerequisites you only have to implement two actions per value to dynamically populate them during deployment validation:
Create a @secure() parameter in your test file (main.test.bicep) that you want to populate and use it as you see fit.
For example:
@description('Required. My parameter\'s description. This value is tenant-specific and must be stored in the CI Key Vault in a secret named \'CI-MySecret\'.')
@secure()
param mySecret string = ''
Important
It is mandatory to declare the parameter with @secure(), as Key Vault secrets will be pulled and passed into the deployment as SecureString values.
Also, it must have an empty default value to be compatible with the PSRule scans that require a value for all parameters.
Configure a secret of the same name, but with a CI- prefix and a corresponding value, in the Azure Key Vault you set up as per the prerequisites.
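For example, for the mySecret parameter above, this could be done as follows; the vault name and value are placeholders, and the secret name follows the parameter description shown earlier.

# Create/update the secret the CI will map to the 'mySecret' parameter
az keyvault secret set --vault-name <myCiKeyVault> --name 'CI-MySecret' --value '<tenant-specific-value>'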
How it works
Assuming you completed both the prerequisites & setup steps and triggered your module’s workflow, the CI will perform the following actions:
When approaching the deployment validation steps, the workflow will look up the CI_KEY_VAULT_NAME repository variable.
If it has a value, it will subsequently pull all available secret references (not their values!) from that Key Vault, filtered down to only the secrets that match the CI- prefix.
It will then loop through these secret references and check whether any of them match a parameter of the same name (without the CI- prefix) in the targeted main.test.bicep.
Only for a match will the workflow then pull the secret from the Key Vault and pass its value as a SecureString parameter into the template deployment.
When reviewing the log during or after a run, you can see that each matching, pulled secret is added as part of the AdditionalParameters object.
Background: Why not simply use GitHub secrets?
When reviewing the above, you may wonder why an Azure Key Vault was used as opposed to simple GitHub secrets.
While the simplicity of GitHub secrets would be preferred, it unfortunately turned out that they would not provide us with the level of flexibility we need for our purposes.
Most notably, GitHub secrets are not automatically available in referenced GitHub actions. Instead, you have to declare every secret you want to use explicitly in the workflow’s template, requiring the contributor to update both the module’s workflow template as well as test files each time a new value would be added. This characteristic is not only unfortunate for our use case, but is also a lot more likely to lead to mistakes.
Further, with the use of OIDC via Managed Identities, the hurdle to bootstrap & populate an Azure Key Vault is significantly lowered.
Enable or Disable Workflows
When forking the BRM repository, all workflows from the CI environment are also part of your fork. An earlier step explained how to set them up correctly to verify your module development.
Due to the trigger mechanism of the workflows, all of them will eventually run at some point in time, creating and deleting resources in your Azure environment. That also happens for modules you are not working on. This creates costs in your own subscription and can also create a queue of workflow runs due to a lack of free agents.
To limit those workflow runs, you can manually disable each pipeline you do not want to run. As this is a time-consuming task, there is a script in the BRM repository to disable (or enable) pipelines in a batch process, which can also be run via a workflow. You can also use RegEx to specify which pipelines should be included and which should be excluded.
Browse to Actions and select the workflow from the list.
Run the workflow platform.toggle-avm-workflows with the following settings:
Enable or disable workflows: whether to enable or disable workflows.
RegEx which workflows are included: include a specific set of workflows, using a RegEx.
RegEx which workflows are excluded: exclude a specific set of workflows, using a RegEx.
Typical use cases
Disable all but one workflow
Set Enable or disable workflows to Disable
Set RegEx which workflows are included to avm\.(?:res|ptn|utl) (this is the default setting)
Set RegEx which workflows are excluded to avm.res.compute.virtual-machine (use the name of your own workflow; this example uses the workflow for virtual machine)
Disable all but multiple workflows
Set Enable or disable workflows to Disable
Set RegEx which workflows are included to avm\.(?:res|ptn|utl) (this is the default setting)
Set RegEx which workflows are excluded to (?:avm.res.compute.virtual-machine|avm.res.compute.image|avm.res.compute.disk) (use the names of your own workflows; this example uses the workflows for virtual machine, image, and disk)
Enable all workflows
Set Enable or disable workflows to Enable
Set RegEx which workflows are included to avm\.(?:res|ptn|utl) (this is the default setting)
Set RegEx which workflows are excluded to ^$ (this is the default setting)
Limitations
Please keep in mind that the workflow run disables all workflows that match the RegEx at that point in time. If you sync your fork with the original repository and new workflows have been added, they will be synced to your repository and enabled by default. You will therefore need to run the workflow again after the sync to disable the new ones.
Important
The workflow can only be triggered in forks.
Owner Contribution Flow
This section describes the contribution flow for module owners who are responsible for creating and maintaining Bicep Modules.
Important
This contribution flow is for Module Owners only.
As a Bicep Module Owner, you need to be aware of the AVM Contribution Process Overview and the Bicep specifications (including Bicep Interfaces), as these need to be followed during pull request reviews for the modules you own. The purpose of this Owner Contribution Flow is to simplify and list the most important activities of an owner and to help you understand your responsibilities as an owner.
Note
Additional internal content for ongoing module maintenance is available for Microsoft FTEs, here.
For example: avm-res-compute-virtualmachine-module-owners-bicep, with avm-technical-reviewers-bicep added as its parent team.
For example: avm-res-compute-virtualmachine-module-contributors-bicep, with avm-module-contributors-bicep added as its parent team.
If a secondary owner is required, add the secondary owner to the avm-res-<RP>-<modulename>-module-owners-bicep team.
Only fulltime Microsoft employees can be added at this time.
Info
Once the teams have been created the AVM Core Team will review the team name and parent team membership for accuracy. A notification will automatically be sent to the AVM Core Team to inform them that their review needs to be completed.
Add teams to CODEOWNERS file as outlined in SNFR20.
Ensure your module has been tested before raising a PR. You can do this in your own environment or in another module contributor’s environment, if available. Also, once a PR is raised, a GitHub workflow pipeline must run successfully before the PR can be merged. This ensures that the module works as expected and complies with the AVM specifications.
Note
If you’re the sole owner of the module, the AVM core team must review and approve the PR. To indicate that your PR needs the core team’s attention, apply the “Needs: Core Team 🧞” label on it!
Ensure that the module(s) you own are compliant with the AVM Bicep specifications and are working as expected.
Watch Pull Request (PR) activity for your module(s) in the BRM repository (Bicep Registry Modules repository - where all Bicep AVM modules are published) and ensure that PRs are reviewed and merged in a timely manner as outlined in SNFR11.
Watch AVM module issue and AVM question/feedback activity for your module(s) in the BRM repository.
2. Module Handover Activities
Under certain circumstances, you may find yourself unable to continue as the module owner. In such cases, it is advisable to designate a new module owner. The following steps outline this transition:
Leave a comment on the original module proposal, indicating that you’d like to hand the ownership over to somebody else. Mention the person who originally helped triage the issue or the @Azure/avm-core-team-technical-bicep team. You must wait for someone from the AVM Core Team to respond first, as the module index must be updated before you can continue handing over the ownership.
Add the new owner’s GitHub account as a “maintainer” on your module’s GitHub teams.
Remove your GitHub account from your module’s GitHub teams.
If a new module owner cannot be identified, then the module will need to be “Orphaned”. Please follow the steps outlined under when-a-module-becomes-orphaned.
As a module owner, it’s important that you receive notifications when any of your AVM modules experience activity or when you or any groups you belong to are explicitly mentioned (using the @ operator). This document describes how to configure your GitHub and Email settings to ensure you receive email notifications for these types of scenarios within GitHub.
Ensure your Default Notifications Email address is set to the email address you intend to use.
(Optional) If you would like to automatically watch repositories that you are active in, ensure Automatically watch repositories is set to “On.”
(Required) If you would like to automatically subscribe to team-level notifications whenever you join a new team, ensure Automatically watch teams is set to “On.”
(Required) To receive notifications whenever a change is made to a repository or conversation that you are Watching, ensure the Notify Me setting has at least Email enabled.
(Required) To receive notifications whenever you or a group you belong to are @mentioned, ensure the Notify Me setting has at least Email enabled.
Watch a Repository
Optionally, you may consider “watching” (following most or all activities in) an entire repository. The primary repository that owners should watch is the Bicep-Registry-Modules (BRM) repository. Notifications from this repository will notify you of issues concerning your module and any direct or team @mentions. It is important that you read and react to these messages.
To watch the BRM repository, visit Bicep-Registry-Modules, click the Watch button in the top-right of the page, then select Participating and @mentions. Optionally, if you would like to be notified for all activity within the repository, you can select All Activity.
Note
Enabling All Activity will result in a lot of notifications! If you choose to go this route, you should set up filters within your email client. See Configure Email Inbox Notification Filters.
Configure Email Inbox Notification Filters
GitHub uses a unique sender email address for each type of notification it sends. This allows us to set up filters within our email client to sort our inboxes depending on the type of notification that was sent. The table below lists all of the relevant email addresses that may be useful for filtering notifications from GitHub.
Info
GitHub will use the following email addresses to Cc you if you’re subscribed to a conversation. The second Cc email address matches the notification reason.
This checklist can be used in the development of AVM Bicep Modules.
Before beginning any work on a new module, a valid Issue: New AVM Module Proposal needs to be created. Instructions for creating the module proposal are outlined in the issue template. Pay particular attention to the questions and associated links to fill out the proposal accurately. Please do not start work on your proposed module until you receive a notification that your proposal has been accepted.
Fork the bicep-registry-modules (BRM) repository. If you use an existing fork, ensure it’s up to date with the upstream BRM repository.
Ensure all workflows are disabled by default once you have forked the BRM repo, to prevent any accidental deployments into your Azure test environment resulting from an automated trigger.
Create a new branch from your forked repository to develop your module.
If you’re working on a new module you have to create its corresponding workflow file (see here).
In order to run your e2e tests in your fork, this workflow file has to be put into the main branch first, so it can be run against your feature branch (GitHub Workflows can only be run on feature branches when they are already present in the main branch).
Since all workflows are disabled by default you have to enable your module’s specific GitHub workflow to run your e2e tests.
In addition to testing your module via GitHub pipeline, you can also test it locally. The following helper script facilitates local testing.
Local Test Helper Script

# Start pwsh if not started yet
pwsh

# Set default directory
$folder = "<your directory>/bicep-registry-modules"

# Dot source functions
. $folder/utilities/tools/Set-AVMModule.ps1
. $folder/utilities/tools/Test-ModuleLocally.ps1

# Variables
$modules = @(
    # "service-fabric/cluster", # Replace with your module
    "network/private-endpoint" # Replace with your module
)

# Generate Readme
foreach ($module in $modules) {
    Write-Output "Generating ReadMe for module $module"
    Set-AVMModule -ModuleFolderPath "$folder/avm/res/$module" -Recurse

    # Set up test settings
    $testcases = "waf-aligned", "max", "defaults"
    $TestModuleLocallyInput = @{
        TemplateFilePath           = "$folder/avm/res/$module/main.bicep"
        ModuleTestFilePath         = "$folder/avm/res/$module/tests/e2e/max/main.test.bicep"
        PesterTest                 = $true
        ValidationTest             = $false
        DeploymentTest             = $false
        ValidateOrDeployParameters = @{
            Location         = '<your location>'
            SubscriptionId   = '<your subscriptionId>'
            RemoveDeployment = $true
        }
        AdditionalTokens           = @{
            namePrefix = '<your prefix>'
            TenantId   = '<your tenantId>'
        }
    }

    # Run tests
    foreach ($testcase in $testcases) {
        Write-Output "Running test case $testcase on module $module"
        $TestModuleLocallyInput.ModuleTestFilePath = "$folder/avm/res/$module/tests/e2e/$testcase/main.test.bicep"
        Test-ModuleLocally @TestModuleLocallyInput
    }
}
Create a PR and reference the status badge of your pipeline run - see here.
Note
If you’re the sole owner of the module, the AVM core team must review and approve the PR. To indicate that your PR needs the core team’s attention, apply the “Needs: Core Team 🧞” label on it!
After a pull request has been created, it is important to update the AVM module proposal issue associated with your module, with a link to the pull request you created in BRM and mention the person who helped triage your module or the @Azure/avm-core-team-technical-bicep team.
Once your BRM pull request has been approved and merged into main, update the AVM module proposal issue associated with your module with a “Merged” comment and mention the person who helped triage your module or the @Azure/avm-core-team-technical-bicep team.
Generate Bicep Module Files
As per the module design structure (BCPFR3), every module in the AVM library requires
an up-to-date ReadMe markdown (readme.md) file documenting the set of deployable resource types, input and output parameters, and a set of relevant template references from the official Azure Resource Reference documentation
an up-to-date compiled template (main.json) file
The Set-AVMModule utility aims to simplify contributing to the AVM library, as it supports
idempotently generating the AVM folder structure for a module (including any child resource)
generating the module’s ReadMe file from scratch or updating it
compiling/building the module template
To ease maintenance, you can run the utility with the -Recurse flag from the root of your module folder to update all files automatically.
To do so, it searches for any missing required folder paths / files and adds them. For several files, it will also provide some default content to get you started. The source files for this action can be found here.
For each module it processes, the utility then
compiles its Bicep template
updates the readme (recursively, if specified)
If the intended ReadMe file does not yet exist in the expected path, it is generated with a skeleton (including, e.g., a generated header name).
The script then goes through all sections defined as SectionsToRefresh (by default, all) and refreshes the sections’ content (for example, for the Parameters) based on the values in the ARM/JSON template. It detects sections by their header and always regenerates the full section.
Once all sections are refreshed, the current ReadMe file is overwritten. Note: the script can be invoked combining the WhatIf and Verbose switches to just receive a console output of the updated content.
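For example, a dry-run invocation could look like the following sketch; the module path is only illustrative.

# Preview the regenerated content without overwriting any files
Set-AVMModule -ModuleFolderPath './avm/res/key-vault/vault' -WhatIf -Verbose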
How to use it
For details on how to use the function, please refer to the script’s local documentation.
Note
The script must be loaded ('dot-sourced') before the function can be invoked.
. 'C:/dev/Set-AVMModule.ps1'
Set-AVMModule (...)
Tip
For modules that require the generation of files on multiple levels (for example, a module with child modules, such as the ‘Key Vault’ module with its ‘Secret’ child module), it is highly recommended to make use of the -Recurse parameter.
This parameter will ensure that the script not only generates the files for the provided module folder path, but also all its nested module folder paths.
Tip
While readme files are always generated from scratch, you can add custom content in specific places that the script will preserve:
The module’s description in the main.bicep file’s metadata
The description of parameters & outputs
A section with the header ## Notes
If the utility finds a section with the heading ## Notes, it temporarily saves this content when it regenerates the readme file and then re-inserts (i.e., appends) the section towards the end of the readme file. This section may contain images, which must be stored in a /src subfolder in the root directory of the module.
Both for the text & images, please make sure to only add what provides tangible value, as the content must be manually maintained and should not go stale. Further, for images, please make sure to only store them with an appropriate resolution & size to keep their impact on the repository’s size manageable.
Validate Module Locally
Use this script to test a module from your PC locally, without a CI environment. You can use it to run only the static validation (Pester tests), a deployment validation (dryRun), or an actual deployment to Azure. In the latter cases, the script also takes care of replacing placeholder tokens in the module test & template files for you.
If the switch for Pester tests (-PesterTest) is provided the script will
Invoke the module test for the provided template file path and run all tests for it.
If the switch for either the validation test (-ValidationTest) or deployment test (-DeploymentTest) is provided alongside a HashTable for the token replacement (-ValidateOrDeployParameters), the script will
Either fetch all module test files of the module’s tests folder (default) or you can specify a single module test file by leveraging the -ModuleTestFilePath parameter instead.
Create a dictionary to replace all tokens in these module test files with actual values. This dictionary consists of
the subscriptionID & managementGroupID of the provided ValidateOrDeployParameters object,
all key-value pairs of the -AdditionalTokens object,
and optionally all key-value pairs specified in the settings.yml, under the ‘local tokens settings’.
If the -ValidationTest parameter was set, it runs a deployment validation using the Test-TemplateDeployment script.
If the -DeploymentTest parameter was set, it runs a deployment using the New-TemplateDeployment script (with no retries).
As a final step, it rolls the module test files back to their original state if either the -ValidationTest or -DeploymentTest parameters were provided.
How to use it
For details on how to use the function, please refer to the script’s local documentation.
Note
The script must be loaded ('dot-sourced') before the function can be invoked.
Important: As the script emulates the testing logic of the CI environment, tokens such as #_namePrefix_# are also replaced by the script. However, in addition to the CI environment, it also reverses the token replacement to recover the files’ original state. As such, ensure that you use a namePrefix value that is unlikely to overlap with any string value in the module folder you want to test.
For example, do not use avm, as the reverse token replacement would incorrectly replace the deployment name avmTelemetry found in each module to #_namePrefix_#Telemetry.
Bicep Contribution Prerequisites
GitHub Account Link and Access
You need to have a personal GitHub account which is linked to your Microsoft corporate identity. Once the link step is complete, you must join the Azure organization.
Recommended Learning
Before you start contributing to the AVM, it is highly recommended that you complete the following Microsoft Learn paths, modules & courses:
To streamline interactions with upstream repositories, GitHub Desktop will automatically configure your local git repository to use the upstream repository as a remote.
Contribution Q&A
Tip
Check out the FAQ for more answers to common questions about the AVM initiative in general.
Proposing a module
Who can propose a new module and where can I submit a new module proposal / request?
Everyone can propose a module
To propose a new module, simply create an issue/complete the form here.
Can I just propose / create any module?
For example, can I propose one for managed disks or NICs or diagnostic settings? What about patterns?
No, you cannot propose or create just any module. You can only propose modules that are aligned with requirements documented in the module specifications section.
Below, we provide some guidance on what modules you can / cannot propose.
Resource modules: resource modules have to bring extra value to the end user (they can’t just be simple wrappers) and MUST map 1:1 to RPs (resource providers) and top-level resources. You MUST follow the module specifications and your modules SHOULD be WAF aligned.
Good examples:
Virtual machine: the VM module is highly complex and therefore, it brings extra value to the end user by providing a wide variety of features (e.g., diagnostics, RBAC, domain join, disk encryption, backup and more).
Storage account: even though this module is mainly built around one RP, it brings extra value by providing easy access to its child resources, such as file/table/queue services, as well as additional standard interfaces (e.g., diagnostics, RBAC, encryption, firewall, etc.).
Bad examples:
NIC or Public IP (PIP) module: these would be simple wrappers around the NIC/PIP resource and wouldn’t bring any extra value. NICs and PIPs SHOULD be surfaced as part of the VM module (or any other primary resources that require them).
Diagnostic settings: these are low-level “sub-resources” that are highly dependent on their “primary resource’s” RP; they are defined as “interfaces” and therefore MUST be used as part of a resource module holding a primary resource - see the Diagnostic Settings documentation for the correct implementation.
Pattern modules: In case of pattern modules, ideally you should start from architectural patterns, published in the Azure Architecture Center, and build your pattern module by leveraging resource modules that are required to implement the pattern. AVM does not provide architectural guidance on how you should design your pattern, but you MUST follow the module specifications and your modules SHOULD be WAF aligned.
Good examples:
Landing zone accelerators for N-tier web application; AKS cluster; SAP: there are numerous examples for these architectures in Azure Architecture Center that already have baked in guidance / smart defaults that are WAF Aligned, therefore these are good candidates for pattern modules. Module owners MAY leverage resource modules to implement the pattern.
Hub and spoke topology: it’s a common pattern that is used by many customers and there are great examples available through Azure Architecture Center, as well as Azure Landing Zones. Also a good candidate for a pattern module.
Bad examples:
A pair of Virtual machines: being a simple wrapper, this solution wouldn’t bring any extra value as it doesn’t provide a complete solution.
Key Vault that deploys automatically generated secrets: this is aligned with the definition of a resource module and should therefore be categorized as such.
Where do I need to go to make sure the module I’d like to propose is not already in the works?
The AVM core team maintains the list of Bicep and Terraform modules and tracks the status of each module. Based on this list, you can check if the module you’d like to build is already in the works (e.g., it’s being worked on in a feature branch but hasn’t been published yet).
To see the formatted lists with additional information, please visit the AVM Module Indexes page.
I need a new module but I cannot own/author it for various reasons, what should I do?
1. You sign up to be a module owner (and optionally, you can find additional contributors to help you).
2. You find / request someone else to be the module owner (and optionally, you can be a contributor).
3. You propose a module and wait until the AVM core team finds a module owner for you (who can then optionally leverage the help of additional contributors).
As these options are increasingly more time consuming, we recommend starting with option 1; only if you cannot own the module should you move to option 2 and then option 3.
How long will it take for someone to respond and a module to be created/updated and published?
While there are SLAs defined for providing support for existing modules, there are currently no SLAs in place for the creation of new modules. The AVM core team is a small team and is currently working on automating the module creation process to make it as easy as possible for module owners to create and publish modules on their own.
Besides providing program-level governance, the AVM core team is mainly responsible for defining the module specifications; providing tooling (such as test frameworks and pipelines), guidance and support to module owners; and facilitating the creation of new modules by maintaining the module catalog and identifying volunteers for owning the modules. However, modules will be created and maintained by a broader community of module owners.
How do I let the AVM team know I really need an AVM module to unblock me / my project / my company?
If you’re an external user, you can propose a module here and provide as much context as possible under the “Module Details” section (e.g., why do you need the module, what’s the business impact of not having it, etc.).
If you’re a Microsoft employee and have already proposed a module here, you can reach out to the AVM core team directly via Teams to provide more details internally.
The AVM core team will then triage the request and get back to you with next steps. You can accelerate the process of creating the module by volunteering to be a module owner.
Developing a module
Who develops the modules?
Every module has an owner who is responsible for module development and maintenance. One owner can own one or multiple modules. An owner can develop modules alone or lead a team that will develop a module. If you want to join a team and contribute to a specific module, please contact the module owner.
At this moment, only Microsoft FTEs can be module owners.
What do I need so I can start developing a module?
Feel free to reach out to the AVM core team in case additional help is needed.
What do I do about existing modules that do something similar to the module I am proposing to develop and release?
As part of the Module Proposal process, the AVM core team will work with you to triage your proposal. We also want to make sure that no similar existing modules from known Microsoft projects are already on their way to be migrated to AVM.
If there aren’t any, then you can proceed with developing your module from scratch once given approval to proceed by the AVM core team.
However, if there are existing modules from Microsoft projects we would invite you to help us complete the migration to AVM of this module; this may also entail working with the existing module owner/team.
For existing modules that may not be directly owned and developed by Microsoft or their employees, you should first review the license applied to the GitHub repository hosting the module and understand its terms and conditions. More information on GitHub repositories and licenses can be found in Licensing a repository. Most modules will use a license that allows you to take inspiration and copy all or parts of the module’s source code. However, to confirm, you should always check the license and any conditions you may have to meet by doing this.
What are the mandatory labels that need to be used while managing issues, pull requests and discussions on the GitHub repositories where modules are held?
Where will the module live? Do I need to create a separate repo or place it in a specific folder?
Bicep
For Bicep, both Resource and Pattern, AVM Modules will be homed in the Azure/bicep-registry-modules repository and live within an avm directory that will be located at the root of the repository.
If you are a module owner, it is expected that you will fork the Azure/bicep-registry-modules repository and work on a branch from within your fork, before then creating a Pull Request (PR) back into the Azure/bicep-registry-modules repository’s main branch. In the Bicep contribution guide, you can find the directory and file structure that will be used, along with examples.
Terraform
Each Terraform AVM module will have its own GitHub Repository in the Azure GitHub Organization. This repo will be created by the Module Owners and the AVM Core team collaboratively, including the configuration of permissions. To read more about how to start, navigate to Terraform AVM contribution guide.
I get the error ‘The repository ********** already exists on this account’ when I try to create a new repository, what should I do?
If you get this error, it means that the repository already exists in the Azure GitHub Organization. This can happen if someone has already created a repository with the same name in the past and then archived it.
To determine if this is the case, you’ll need to navigate to the Microsoft Open Source Management Portal, then search for the repository name you are trying to create. Click on the repository and you will find the owner. Reach out to the owner to ask them to transfer the repo to you or delete it. You’ll want them to delete it if it was not created from the template.
Where can I test my module during development?
During initial module development, module owners/developers need to use their own environment (Azure subscriptions) to test the module. In a later phase, during the publishing process, automated tests will be conducted using the AVM dedicated environment.
Updating and managing a module
I’m already using a module today, but it’s missing a feature; what should I do?
You should use GitHub issues to propose changes or improvements for a specific module. The issue will be routed to the module owner, who MUST respond to logged issues within 3 business days. In case the module currently doesn’t have an owner, the AVM Core Team will handle the request.
I am using a module without an owner. What will happen if I need an update?
The AVM core team will work to assign an owner for every module, but it can happen that there are modules without an owner for some time. If you would like to own such a module, feel free to ask to take ownership. At this moment, only Microsoft FTEs can be module owners.
How will the support SLAs be automatically enforced?
All issues created in a module repo will automatically be picked up and tracked by the GitHub Policy Service. This service will take the necessary steps when escalation is needed, as per the SLAs defined in the Module Support chapter.
Process Overview
This page provides an overview of the contribution process for AVM modules.
New Module Proposal & Creation
Important
Each AVM module MUST have a Module Proposal issue created and approved by the AVM core team before it can be created/migrated!
---
config:
nodeSpacing: 20
rankSpacing: 20
diagramPadding: 5
padding: 5
useWidth: 100
flowchart:
wrappingWidth: 400
padding: 5
---
flowchart TD
ModuleIdea[Consumer has an idea for a new AVM Module] -->CheckIndex(Check AVM Module Indexes)
click CheckIndex "/Azure-Verified-Modules/indexes/"
CheckIndex -->IndexExistenceCheck{Is the module<br>in the index?}
IndexExistenceCheck -->|No|A
IndexExistenceCheck -->|Yes|EndExistenceCheck(Review existing/proposed AVM module)
EndExistenceCheck -->OrphanedCheck{ Is the module<br>orphaned? }
click OrphanedCheck "/Azure-Verified-Modules/specs/shared/module-lifecycle/#orphaned-avm-modules"
OrphanedCheck -->|No|ContactOwner[Contact module owner,<br> via GitHub issues on the related <br>repo, to discuss enhancements/<br>bugs/opportunities to contribute etc.]
OrphanedCheck -->|Yes|OrphanOwnerYes(Locate the related issue <br> and comment on:<br> - A feature/enhancement suggestion <br> - Indicating you wish to become the owner)
click OrphanOwnerYes "/Azure-Verified-Modules/specs/shared/module-lifecycle/#orphaned-avm-modules"
OrphanOwnerYes -->B
A[[ Create Module Proposal ]] -->|GitHub Issue/Form Submitted| B{ AVM Core Team<br>Triage }
click A "https://aka.ms/avm/moduleproposal"
click B "/Azure-Verified-Modules/help-support/issue-triage/avm-issue-triage/#avm-core-team-triage-explained"
B -->|Module Approved for Creation| C[["Module Owner(s) Identified & assigned to GitHub issue/proposal" ]]
B -->|Module Rejected| D(Issue closed with reasoning)
C -->E[[ Module index CSV files updated by AVM Core Team]]
click E "/Azure-Verified-Modules/indexes/"
E -->E1[[Repo/Directory Created following the <br> Contribution Guide ]]
click E1 "/Azure-Verified-Modules/contributing/"
E1 -->F("Module Developed by Owner(s) & their Contributors")
F -->G[[ Module & AVM Compliance Tests ]]
click G "/Azure-Verified-Modules/spec/SNFR3"
G -->|Tests Fail|I(Modules/Tests Fixed <br> To Make Them Pass)
I -->F
G -->|Tests Pass|J[[Pre-Release v0.1.0 created]]
J -->K[[Publish to Bicep/Terraform Registry]]
K -->L(Take Feedback from v0.1.0 Consumers)
L -->M{Anything<br>to be resolved <br> before 1.0.0<br>release? }
click M "/Azure-Verified-Modules/contributing/process/#avm-preview-notice"
M -->|Yes|FixPreV1("Module feedback incorporated by Owner(s) & their Contributors")
FixPreV1 -->PreV1Tests[[Self & AVM Module Tests]]
PreV1Tests -->|Tests Fail|PreV1TestsFix(Modules/Tests Fixed To Make Them Pass)
PreV1TestsFix -->N
M -->|No|N[[Publish 1.0.0 Release]]
N -->O[[Publish to IaC Registry]]
O -->P[[ Module BAU Starts ]]
click P "/Azure-Verified-Modules/help-support/module-support/"
Provide details for module proposals
When proposing a module, please include in the description the information that is required for the triage process, as described here.
As the overall AVM framework is not GA (generally available) yet - the CI framework and test automation is not fully functional and implemented across all supported languages yet - breaking changes are expected, and additional customer feedback is yet to be gathered and incorporated. Hence, modules MUST NOT be published at version 1.0.0 or higher at this time.
All modules MUST be published as a pre-release version (e.g., 0.1.0, 0.1.1, 0.2.0, etc.) until the AVM framework becomes GA.
However, it is important to note that this DOES NOT mean that the modules cannot be consumed and utilized. They CAN be leveraged in all types of environments (dev, test, prod etc.). Consumers can treat them just like any other IaC module and raise issues or feature requests against them as they learn from the usage of the module. Consumers should also read the release notes for each version, if considering updating to a more recent version of a module to see if there are any considerations or breaking changes etc.
Module Owner Has Issue/Is Blocked/Has A Request
In the event that a module owner has an issue or is blocked due to missing AVM guidance, test environments, permission requirements, etc., they should follow the steps below:
Tip
Common issues/blockers/asks/requests are:
Subscription level features
Resource Provider Registration
Preview Services Enablement
Entra ID (formerly Azure Active Directory) configuration (SPN creation, etc.)
Please note that module-specific issues should be logged in the module’s source repository, not the AVM repository.
Terraform Contribution Guide
Important
While this page describes and summarizes important aspects of contributing to AVM, it only references some of the shared and language specific requirements.
Therefore, this contribution guide MUST be used in conjunction with the Terraform specifications. ALL AVM modules (Resource and Pattern modules) MUST meet the respective requirements described in these specifications!
Summary
This section lists AVM’s Terraform-specific contribution guidance.
While this page describes and summarizes important aspects of the composition of AVM modules, it may not reference all of the shared and language-specific requirements.
Therefore, this guide MUST be used in conjunction with the Terraform specifications. ALL AVM modules (Resource and Pattern modules) MUST meet the respective requirements described in these specifications!
Important
Before jumping on implementing your contribution, please review the AVM Module specifications, in particular the Terraform specification pages, to make sure your contribution complies with the AVM module’s design and principles.
This section is only relevant for contributions to resource modules.
To meet RMFR4 and RMFR5 AVM resource modules must leverage consistent interfaces for all the optional features/extension resources supported by the AVM module primary resource.
To meet SFR3 & SFR4, we use the modtm telemetry provider. This is a lightweight telemetry provider that sends telemetry data to Azure Application Insights via an HTTP POST front-end service.
The telemetry provider is included in the module by default and is enabled by default. You do not need to change the configuration included in the template repo.
You must make sure to have the modtm provider in your required_providers. The linter will check this for you. However, inside your terraform.tf file please make sure you have this entry:
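A minimal sketch of such an entry is shown below; the exact source address and version constraint are assumptions here, so please copy the authoritative entry from the terraform-azurerm-avm-template repository.

terraform {
  required_providers {
    # Telemetry provider used by AVM Terraform modules (source/version assumed for this sketch)
    modtm = {
      source  = "Azure/modtm"
      version = "~> 0.3"
    }
  }
}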
When creating modules, it is important to understand that the Azure Resource Manager (ARM) API is sometimes eventually consistent. This means that when you create a resource, it may not be available immediately. A good example of this is data plane role assignments. When you create such a role assignment, it may take some time for the role assignment to be available. We can use an optional time_sleep resource to wait for the role assignment to be available before creating resources that depend on it.
# In variables.tf...
variable "wait_for_rbac_before_foo_operations" {
  type = object({
    create  = optional(string, "30s")
    destroy = optional(string, "0s")
  })
  default     = {}
  description = <<DESCRIPTION
This variable controls the amount of time to wait before performing foo operations.
It only applies when `var.role_assignments` and `var.foo` are both set.
This is useful when you are creating role assignments on the bar resource and immediately creating foo resources in it.
The default is 30 seconds for create and 0 seconds for destroy.
DESCRIPTION
}

# In main.tf...
resource "time_sleep" "wait_for_rbac_before_foo_operations" {
  count = length(var.role_assignments) > 0 && length(var.foo) > 0 ? 1 : 0
  depends_on = [
    azurerm_role_assignment.this
  ]
  create_duration  = var.wait_for_rbac_before_foo_operations.create
  destroy_duration = var.wait_for_rbac_before_foo_operations.destroy

  # This ensures that the sleep is re-created when the role assignments change.
  triggers = {
    role_assignments = jsonencode(var.role_assignments)
  }
}

resource "azurerm_foo" "this" {
  for_each = var.foo
  depends_on = [
    time_sleep.wait_for_rbac_before_foo_operations
  ]
  # ...
}
Terraform Contribution Flow
High-level contribution flow
---
config:
nodeSpacing: 20
rankSpacing: 20
diagramPadding: 50
padding: 5
flowchart:
wrappingWidth: 300
padding: 5
layout: elk
elk:
mergeEdges: true
nodePlacementStrategy: LINEAR_SEGMENTS
---
flowchart TD
A(1 - Fork the module source repository)
click A "/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#1-fork-the-module-source-repository"
B(2 - Setup your Azure test environment)
click B "/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#2-prepare-your-azure-test-environment"
C(3 - Implement your contribution)
click C "/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#3-implement-your-contribution"
D{4 - Pre-commit<br>checks successful?}
click D "/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#4-run-pre-commit-checks"
E(5 - Create a pull request to the upstream repository)
click E "/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#5-create-a-pull-request-to-the-upstream-repository"
A --> B
B --> C
C --> D
D -->|yes|E
D -->|no|C
GitFlow for contributors
The GitFlow process outlined here depicts and suggests a way of working with Git and GitHub. It serves to synchronize the forked repository with the original upstream repository. It is not a strict requirement to follow this process, but it is highly recommended to do so.
When implementing the GitFlow process as described, it is advisable to configure the local clone of your forked repository with an additional remote for the upstream repository. This will allow you to easily synchronize your locally forked repository with the upstream repository. Remember, there is a difference between the forked repository on GitHub and the clone of the forked repository on your local machine.
Note
Each time in the following sections we refer to ‘your xyz’, it is an indicator that you have to change something in your own environment.
Prepare your developer environment
1. Fork the module source repository
Important
Each Terraform AVM module will have its own GitHub repository in the Azure GitHub Organization as per SNFR19.
This repository will be created by the Module owners and the AVM Core team collaboratively, including the configuration of permissions as per SNFR9
Module contributors are expected to fork the corresponding repository and work on a branch from within their fork, before then creating a Pull Request (PR) back into the source repository’s main branch.
To do so, simply navigate to your desired repository, select the 'Fork' button to the top right of the UI, select where the fork should be created (i.e., the owning organization) and finally click ‘Create fork’.
Note
If the module repository you want to contribute to is not yet available, please get in touch with the respective module owner, who can be tracked in the Terraform Resource Modules index (see the PrimaryModuleOwnerGHHandle column).
Optional: The usage of local source branches
For consistent contributors, but also Azure-org members in general, it is possible to get invited as a collaborator on the module repository, which enables you to work on branches instead of forks. To get invited, get in touch with the module owner, since it’s the module owner’s decision who gets invited as a collaborator.
2. Prepare your Azure test environment
AVM performs end-to-end (e2e) test deployments of all modules in Azure for validation. We recommend that you perform a local e2e test deployment of your module before you create a PR to the upstream repository, especially because the e2e test deployment will be triggered automatically once you create a PR to the upstream repository.
Have/create an Azure Active Directory Service Principal with at least Contributor & User Access Administrator permissions on the Management Group/Subscription you want to test the modules in. You might find the following links useful:
# Linux/MacOs
export ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv) # or set <subscription_id>
export ARM_TENANT_ID=$(az account show --query tenantId --output tsv) # or set <tenant_id>
export ARM_CLIENT_ID=<client_id>
export ARM_CLIENT_SECRET=<service_principal_password>

# Windows/Powershell
$env:ARM_SUBSCRIPTION_ID = $(az account show --query id --output tsv) # or set <subscription_id>
$env:ARM_TENANT_ID = $(az account show --query tenantId --output tsv) # or set <tenant_id>
$env:ARM_CLIENT_ID = "<client_id>"
$env:ARM_CLIENT_SECRET = "<service_principal_password>"
Change to the root of your module repository and run ./avm docscheck (Linux/macOS) / avm.bat docscheck (Windows) to verify that the container image is working as expected, or pull it first if needed. You will need this later.
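For illustration, a suitable service principal can be created and verified with the Azure CLI as sketched below (the display name is arbitrary and the scope should be adjusted to your test management group or subscription):
# Create a service principal with the Contributor role on the test subscription
SUBSCRIPTION_ID=$(az account show --query id --output tsv)
az ad sp create-for-rbac --name "avm-terraform-testing" --role "Contributor" --scopes "/subscriptions/${SUBSCRIPTION_ID}"
# Grant it the User Access Administrator role as well (use the appId from the output above)
az role assignment create --assignee "<client_id>" --role "User Access Administrator" --scope "/subscriptions/${SUBSCRIPTION_ID}"
# Optional sanity check: log in with the service principal credentials set in the ARM_* variables above
az login --service-principal --username "$ARM_CLIENT_ID" --password "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"
az account set --subscription "$ARM_SUBSCRIPTION_ID"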
3. Implement your contribution
To implement your contribution, we kindly ask you to first review the Terraform specifications and composition guidelines in particular to make sure your contribution complies with the repository’s design and principles.
Tip
To get a head start on developing your module, consider using the tooling recommended per spec TFNFR37. For example you can use the newres tool to help with creating variables.tf and main.tf if you’re developing a module using Azurerm provider.
4. Run Pre-commit Checks
Important
Make sure you have Docker installed and running on your machine.
Note
To simplify the execution of commands like pre-commit, pr-check, docscheck, fmt, test-example, etc., there is now a simplified avm script available. It is distributed to all repositories via terraform-azurerm-avm-template and combines all scripts from the avm_scripts folder in the tfmod-scaffold repository using avmmakefile.
The avm script also makes sure to pull the latest mcr.microsoft.com/azterraform:latest container image before executing any command.
4.1. Run pre-commit and pr-check
The following commands will run all pre-commit checks and the pr-check.
With the help of the avm script, the commands ./avm test-example (Linux/macOS) / avm.bat test-example (Windows) let you run this in a more simplified way. However, the test-example command is not completely ready yet and will be released soon; therefore, please use the docker command below for now.
Run e2e tests with the help of the azterraform docker container image.
Make sure to replace <client_id> and <service_principal_password> with the values of your service principal as well as <example_folder> (e.g. default) with the name of the example folder you want to run e2e tests for.
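A sketch of what such a docker invocation could look like is shown below; the container entrypoint, make target and variable names are assumptions here, so check the avm script and the avmmakefile in your repository for the authoritative command:
# Illustrative sketch only - run the e2e tests for one example folder inside the azterraform container
docker run --rm \
  -v "$(pwd)":/src \
  -w /src \
  -e ARM_SUBSCRIPTION_ID \
  -e ARM_TENANT_ID \
  -e ARM_CLIENT_ID="<client_id>" \
  -e ARM_CLIENT_SECRET="<service_principal_password>" \
  mcr.microsoft.com/azterraform:latest \
  make test-example TEST_EXAMPLE=<example_folder>  # target/variable names are assumptions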
Run e2e tests with the help of terraform init/plan/apply.
Simply run terraform init and terraform apply in the example folder you want to run e2e tests for. Make sure to set the environment variables ARM_SUBSCRIPTION_ID, ARM_TENANT_ID, ARM_CLIENT_ID and ARM_CLIENT_SECRET before you run terraform init and terraform apply or make sure you have a valid Azure CLI session and are logged in with az login.
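A minimal sketch, assuming your example lives under examples/ and is called default:
# Requires a valid Azure CLI session (az login) or the ARM_* environment variables set earlier
cd examples/default
terraform init
terraform plan
terraform apply
# Clean up the test resources when you are done
terraform destroy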
5. Create a pull request to the upstream repository
Once you are satisfied with your contribution and have validated it, submit a pull request to the upstream repository and work with the module owner to get the module reviewed by the AVM Core team, following the initial module review process for Terraform Modules described here. This is a prerequisite for publishing the module. Once the review process is complete and your PR is approved, merge it into the upstream repository; the Module owner will then publish the module to the HashiCorp Terraform Registry.
5.1 Create the Pull Request [Contributor]
These steps are performed by the contributor:
Navigate to the upstream repository and click on the Pull requests tab.
Click on the New pull request button.
Ensure the base repository is set to the upstream AVM repo.
Ensure the base branch is set to main.
Ensure your head repository and compare branch are set to your fork and the branch you are working on.
Click on the Create pull request button.
5.2 Review the Pull Request [Owner]
IMPORTANT: The module owner must first check for any malicious code or changes to workflow files. If they are found, the owner should close the PR and report the contributor.
Review the changes made by the contributor and determine whether end to end tests need to be run.
If end to end tests do not need to be run (e.g. doc changes, small changes, etc) then so long as the static analysis passes, the PR can be merged to main.
If end to end tests do need to be run, then follow the steps in 5.3.
5.3 Release Branch and Run End to End Tests [Owner]
IMPORTANT: The module owner must first check for any malicious code or changes to workflow files. If they are found, the owner should close the PR and report the contributor.
Create a release branch from main. The suggested naming convention is release/<description-of-change>.
Open the PR created by the contributor and click Edit at the top right of the PR.
Change the base branch to the release branch you just created.
Wait for the PR checks to run, validate the code looks good and then merge the PR into the release branch.
Create a new PR from the release branch to the main branch of the AVM module.
The end to end tests should trigger and you can approve the run.
Once the end to end tests have passed, merge the PR into the main branch.
If the end to end tests fail, investigate the failure. You have two options:
Work with the contributor to resolve the issue, ask them to submit a new PR from their fork branch to the release branch, then re-run the tests and merge to main. Repeat this loop as required.
If the issue is a simple fix, resolve it directly in the release branch, re-run the tests and merge to main.
Common mistakes to avoid and recommendations to follow
If you contribute to a new module, search for the TODOs (which come with the terraform-azurerm-avm-template) within the code, action them, and remove the TODO comments once complete.
terraform.lock.hcl shouldn't be in the repository, as per the .gitignore file.
_header.md needs to be updated.
support.md needs to be updated.
Exclude the terraform.tfvars file from the repository.
Subsections of Contribution Flow
Terraform Owner Contribution Flow
This section describes the contribution flow for module owners who are responsible for creating and maintaining Terraform Module repositories.
Make sure module authors/contributors have tested their module in their environment before raising a PR. The PR runs e2e checks with 1ES agents in the 1ES subscriptions. At the moment, there is no read access to the 1ES subscription. Also, if more than two subscriptions are required for testing, that's currently not supported.
Watch Pull Request (PR) and issue (questions/feedback) activity for your module(s) in your repository and ensure that PRs are reviewed and merged in a timely manner as outlined in SNFR11.
Info
Make sure module authors/contributors have tested their module in their environment before raising a PR, as once a PR is raised, an e2e GitHub workflow pipeline must run successfully before the PR can be merged. This is to ensure that the module is working as expected and is compliant with the AVM specifications.
2. GitHub repository creation and configuration
Familiarise yourself with the AVM Resource Module Naming in the module index csv’s.
Make sure the branch protection rules for the main branch are inherited from the Azure/terraform-azurerm-avm-template repository:
Require a pull request before merging
Dismiss stale pull request approvals when new commits are pushed
Require review from Code Owners
Require linear history
Do not allow bypassing the above settings
The repository environment test will be created automatically within 4 hours; it will have approvals and secrets applied to it, ready to run end to end tests. You should not create this environment manually.
If you wish to use your own tenant and subscription for end to end tests, you can override the secrets by setting ARM_TENANT_ID_OVERRIDE, ARM_SUBSCRIPTION_ID_OVERRIDE, and ARM_CLIENT_ID_OVERRIDE secrets.
If you need to supply additional secrets or variables for your end to end tests, you can add them to the test environment. They must be prefixed with TF_VAR_, otherwise they will be ignored.
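For example, an environment-scoped secret can be added with the GitHub CLI (illustrative; the secret name and repository are placeholders):
# Add a secret to the "test" environment; only values prefixed with TF_VAR_ are passed to Terraform
gh secret set TF_VAR_my_setting --env test --body "my-value" --repo Azure/<your-module-repository>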
3. GitHub Repository Labels
As per SNFR23 the repositories created by module owners MUST have and use the pre-defined GitHub labels. To apply these labels to the repository review the PowerShell script Set-AvmGitHubLabels.ps1 that is provided in SNFR23.
Add the new owner as a maintainer in your avm-res-<RP>-<modulename>-module-owners-tf team and remove any other individuals, including yourself.
If the primary owner leaves, switches roles, or abandons the repo and the corresponding team, the parent team (if assigned) doesn't have the permissions to regain access and a ticket with GitHub support needs to be created (but the team can still be removed from the repo, since the avm-core-team team has permissions on it).
5. Grept
Grept is a linting tool for repositories that enforces predefined standards and maintains codebase consistency and quality. It uses the grept configuration files from the Azure-Verified-Modules-Grept repository.
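If the avm helper script in your repository exposes grept targets, a local run might look like the sketch below; the target names are assumptions, so confirm them against the avm script and avmmakefile in your repository:
# Assumed target names - verify against your repository's avm script / avmmakefile
./avm grept        # check the repository against the shared grept configuration
./avm grept-apply  # apply the changes required by the shared grept configuration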
Once the development of the module has been completed, get the module reviewed by the AVM Core team by following the AVM Review of Terraform Modules process here, which is a prerequisite for the next step.
7. Publish the module
Once a module has been reviewed and is ready to be published, follow the below steps to publish the module to the HashiCorp Registry.
Ensure your module is ready for publishing:
Create a tag for the module version you want to publish.
Create tag: git tag -a 0.1.0 -m "0.1.0"
Push tag: git push origin 0.1.0 (a plain git push does not push tags by default)
Create a release on GitHub based on the tag you just created. Make sure to generate the release notes using the Generate release notes button.
Optional: Instead of creating the tag via the git CLI, you can also create both the tag and the release via the GitHub UI. Just go to the Releases tab and click Draft a new release. Make sure to create the tag from the main branch. (A GitHub CLI alternative is sketched after this list.)
Elevate your repository access using the Open Source Management Portal (aka.ms/opensource/portal).
Publish a module by selecting the Publish button in the top right corner, then Module.
Select the repository and accept the terms.
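A hedged GitHub CLI alternative for the tag and release steps above, creating both in one step (assumes the gh CLI is installed and authenticated):
# Create tag 0.1.0 from main, create the release and auto-generate the release notes
gh release create 0.1.0 --target main --title "0.1.0" --generate-notes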
Info
Once a new version/release of the module is created, it will automatically be published to the HashiCorp Registry with the latest published release version.
Important
When an AVM Module is published to the HashiCorp Registry, it MUST follow the below requirements:
Resource Module: terraform-<provider>-avm-res-<rp>-<ARM resource type> as per RMNFR1
Pattern Module: terraform-<provider>-avm-ptn-<patternmodulename> as per PMNFR1
Terraform Contribution Prerequisites
GitHub Account Link and Access
To contribute to this project, you need to have a GitHub account which is linked to your Microsoft corporate identity account and be a member of the Azure organization.
Tooling
Required Tooling
Tip
We recommend using Linux or macOS for your development environment. You can use Windows Subsystem for Linux (WSL) if you are using Windows.
To contribute to this project the following tooling is required:
Inside Visual Studio Code, add "editor.bracketPairColorization.enabled": true to your settings.json to enable bracket pair colorization.
Review of Terraform Modules
The AVM module review is a critical step before an AVM Terraform module gets published to the Terraform Registry and made publicly available for customers, partners and wider community to consume and contribute to. It serves as a quality assurance step to ensure that the AVM Terraform module complies with the Terraform specifications of AVM. The below process outlines the steps that both the module owner and module reviewer need to follow.
The module owner completes the development of the module in their branch or fork.
The module owner submits a pull request (PR) titled AVM-Review-PR and ensures that all checks are passing on that PR as that is a pre-requisite to request a review.
The module owner assigns the avm-core-team-technical-terraform GitHub team as reviewer on the PR.
The module owner leaves the following comment as-is on their module proposal in the AVM - Module Triage project (search for the module proposal by name there).
AVM Terraform Module Review Request
I have completed my initial development of the module and I would like to request a review of my module before publishing it to the Terraform Registry. The latest code is in a PR titled [AVM-Review-PR](REPLACE WITH URL TO YOUR PR) on the module repo and all checks on that PR are passing.
The AVM team moves the module proposal from “In Development” to “In Review” in the AVM - Module Triage project.
The AVM team will assign a module reviewer who will open a blank issue on the module titled "AVM-Review" and populate it with the below markdown. This template already marks as compliant the specs which are covered by the checks that run on the PR. There are some specs which don't need to be checked at the time of publishing the module; therefore, they are marked as NA.
AVM Terraform Module Review Issue
Dear module owner,
As per the module ownership requirements and responsibilities at the time of [assignment](REPLACE WITH THE LINK TO THE AVM MODULE PROPOSAL), the AVM Team is opening this issue, requesting you to validate your module against the below AVM specifications and confirm its compliance.
Please don’t close this issue and merge your AVM-Review-PR until advised to do so. This review is a prerequisite for publishing your module’s v0.1.0 in the Terraform Registry. The AVM team is happy to assist with any questions you might have.
Requested Actions
Complete the below task list by ticking off the tasks.
Complete the below table by updating the Compliant column with Yes, No or NA as possible values.
Please use the comments columns to provide additional details especially if the Compliant column is updated to No or NA.
### Tasks
- [ ] Address comments on AVM-Review-PR if any
- [ ] Ensure that all checks on AVM-Review-PR are passing
- [ ] Make sure you have run [grept](https://azure.github.io/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/owner-contribution-flow/#5-grept) and [pre-commit and pr-check](https://azure.github.io/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/#41-run-pre-commit-and-pr-check).
- [ ] Tick this to acknowledge specs with comment "Module Owner to action this spec post-publish as appropriate" in the table below.
- [ ] Please update the _header.md file as it contains instructions which - once actioned - need to be replaced with Module Name and Description.
The module reviewer can update the Compliant column for the specs in lines 42 to 47 to NA, in case the module being reviewed isn't a pattern module.
The module reviewer reviews the code in the PR and leaves comments to request any necessary updates.
The module reviewer assigns the AVM-Review issue to the module owner and links the AVM-Review issue to the AVM-Review-PR, so that once the module reviewer approves the PR and the module owner merges the AVM-Review-PR, the AVM-Review issue is automatically closed. The module reviewer responds to the module owner's comment on the Module Proposal in the AVM repo with the following:
Thank you for requesting a review of your module. The AVM module review process has been initiated, please perform the **Requested Actions** on the AVM-Review issue on the module repo.
The module owner updates the check list and the table in the AVM-Review issue and notifies the module reviewer in a comment.
The module reviewer performs the final review and ensures that all checks in the checklist are complete and the specifications table has been updated with no requirements having compliance as ‘No’.
The module reviewer approves the AVM-Review-PR and leaves the following comment on the AVM-Review issue.
Thank you for contributing this module and completing the review process per AVM specs. The AVM-Review-PR has been approved and once you merge it, that will close this AVM-Review issue. You may proceed with [publishing](/Azure-Verified-Modules/contributing/terraform/terraform-contribution-flow/owner-contribution-flow/#7-publish-the-module) this module to the HashiCorp Terraform Registry with an initial pre-release version of v0.1.0. Please keep future versions also pre-release, i.e. < 1.0.0, until AVM becomes generally available (GA), of which the AVM team will notify you.

**Requested Action**: Once published, please update your [module proposal](REPLACE WITH THE LINK TO THE MODULE PROPOSAL) with the following comment: "The initial review of this module is complete, and the module has been published to the registry. Requesting AVM team to close this module proposal and mark the module available in the module index.
Terraform Registry Link: <REPLACE WITH THE LINK OF THE MODULE IN TERRAFORM REGISTRY>
GitHub Repo Link: <REPLACE WITH THE LINK OF THE MODULE IN GITHUB>"
Once the module owner performs the requested action in the previous step, the module reviewer updates the module proposal by performing the following steps:
Assign label Status: Module Available :green_circle: to the module proposal.
Update the module index Excel and CSV files by creating a PR to update the module index, linking the module proposal as an issue that gets closed once the PR is merged; this moves the module proposal from "In Review" to "Done" in the AVM - Module Triage project.
Website Contribution Guide
Looking to contribute to the AVM website? Well, you have made it to the right place/page.
Follow the below instructions, especially the pre-requisites, to get started contributing to the library.
Context/Background
Before jumping into the pre-requisites and specific section contribution guidance, please familiarize yourself with this context/background on how this library is built to help you contribute going forward.
This site is built using Hugo, a static site generator. Its source code is stored in the AVM GitHub repo (also linked in the header of this site) and it is hosted on GitHub Pages, via the repo.
The reason for the combination of Hugo & GitHub Pages is to allow us to present an easy-to-navigate and easy-to-consume library, rather than using a native GitHub repo, which is not easy to consume when there are lots of pages and folders. Also, Hugo generates the site in such a way that it is friendly for mobile consumers.
But I don’t have any skills in Hugo?
That's okay and you really don't need them. Hugo just needs you to be able to author markdown (.md) files and it does the rest when it generates the site.
Pre-Requisites
Read and follow the below sections to leave you in a “ready state” to contribute to AVM.
A “ready state” means you have a forked copy of the Azure/Azure-Verified-Modules repo cloned to your local machine and open in VS Code.
Run and Access a Local Copy of AVM Website During Development
When in VS Code you should be able to open a terminal and run the below commands to access a copy of the AVM website from a local web server, provided by Hugo, using the following address http://localhost:1313/Azure-Verified-Modules/:
cd docs
hugo server -D # you can add "--poll 700ms" if file changes are not detected
Software/Applications
To contribute to this website, you will need the following installed:
Tip
You can use winget to install all the pre-requisites easily for you. See the below section
Steps to do before contributing anything (after pre-requisites)
Run the following commands in your terminal of choice from the directory where your fork of the repo is located:
git checkout main
git pull
git fetch -p
git fetch -p upstream
git pull upstream main
git push
Doing this will ensure you have the latest changes from the upstream repo, and you are ready to now create a new branch from main by running the below commands:
git checkout main
git checkout -b <YOUR-DESIRED-BRANCH-NAME-HERE>
Top Tips
Sometimes the local version of the website may show some inconsistencies that don’t reflect the content you have created
If this happens, simply kill the Hugo local web server by pressing CTRL + C and then restart the Hugo web server by running hugo server -D from the docs/ directory.
Help & Support
Summary
This section provides information about AVM’s support.
This page provides guidance for members of the AVM Core Team on how to triage module proposals and generic issues filed in the AVM repository, as well as how to manage these GitHub issues throughout their lifecycle.
During the AVM Core Team Triage step, the following will be checked, completed and actioned by the AVM Core Team during their triage calls (which are currently twice per week).
Note
Every module needs a module proposal to be created in the AVM repository.
Tip
During the triage process, the AVM Core Team should also check the status of following queries:
Add the Status: In Triage label to indicate you're in the process of triaging the issue.
Check module proposal issue/form:
Check the Bicep or Terraform module indexes for the proposed module to make sure it is not already available or being worked on.
Ensure the module’s details are correct as per specifications - naming, classification (resource/pattern) etc.
Check if the module is added to the “Proposed” column on the AVM - Modules Triage GitHub project board.
Check if the requestor is a Microsoft FTE.
If there’s any additional clarification needed, contact the requestor through comments (using their GH handle) or internal channels - for Microsoft FTEs only! You can look them up by their name or using the Microsoft internal “1ES Open Source Assistant Browser Extension”. Make sure you capture any decisions regarding the module in the comments section.
Make adjustments to the module’s name/classification as needed.
Change the name of the issue to reflect the module’s name, i.e.,
After the "[Module Proposal]:" prefix, change the issue's name to the module's approved name between backticks, i.e., ` and `, e.g., avm/res/sql/managed-instance for a Bicep module, or avm-res-compute-virtualmachine for a Terraform module.
Example:
“[Module Proposal]: avm/res/sql/managed-instance”
“[Module Proposal]: avm-res-sql-managedinstance”
Check if the GitHub Policy Service Bot has correctly applied the module language label: Language: Bicep or Language: Terraform.
As part of the triage of pattern modules, the following points need to be considered/clarified with the module requestor:
Shouldn’t this be a resource module? What makes it a pattern - e.g., does it deploy multiple resources?
What is it for? What problem does it fix or provide a solution for?
What is/isn’t part of it? Which resource and/or pattern modules are planned to be leveraged in it? Provide a list of resources that would be part of the planned module.
Where is it coming from/what’s backing it - e.g., Azure Architecture Center (AAC), community request, customer example. Provide an architectural diagram and related documentation if possible - or a pointer to these if they are publicly available.
Don't let the module's scope grow too big; split it up into multiple smaller modules that are more maintainable - e.g., hub & spoke networking should be split into a generic hub networking pattern and multiple workload-specific spoke networking patterns.
The module’s name should be as descriptive as possible.
Scenario 1: Requestor doesn’t want to / can’t be module owner
Note
If requestor is interested in becoming a module owner, but is not a Microsoft FTE, the AVM core team will try to find a Microsoft FTE to be the module owner whom the requestor can collaborate with.
If the requestor indicated they didn't want to or can't become a module owner (or is not a Microsoft FTE), make sure the Needs: Module Owner label is assigned to the issue. Note: the GitHub Policy Service Bot should automatically do this, based on how the issue author responded to the related question.
Move the issue to the “Looking for owners” column on the AVM - Modules Triage GitHub project board.
Find module owners - if the requestor didn’t volunteer in the module proposal OR the requestor does not want or cannot be owner of the module:
Try to find an owner from the AVM communities or await a module owner to comment and propose themselves on the proposal issue.
When a new potential owner is identified, continue with the steps described as follows.
Scenario 2: Requestor wants to and can become module owner
If the requestor indicated they want to become the module owner, the GitHub Policy Service Bot will add the Status: Owners Identified label and will assign the issue to the requestor.
You MUST still confirm that the requestor is a Microsoft FTE and that they understand the implications of becoming the owner! If any of these conditions aren't met, remove the Status: Owners Identified label and unassign the issue from the requestor.
Clarify the roles and responsibilities of the module owner:
Clarify they understand and accept what “module ownership” means by replying in a comment to the requestor/proposed owner:
Standard AVM Core Team Reply to Proposed Module Owners
<!-- markdownlint-disable -->
Hi @avm_module_owner,
Thanks for requesting/proposing to be an AVM module owner!
We just want to confirm **you agree to the below pages** that define what module ownership means:
- [Team Definitions & RACI](https://azure.github.io/Azure-Verified-Modules/specs/shared/team-definitions)
- [Module Specifications](https://azure.github.io/Azure-Verified-Modules/specs/module-specs)
- [Module Support](https://azure.github.io/Azure-Verified-Modules/help-support/module-support)
Any questions or clarifications needed, let us know!
If you agree, please just **reply to this issue with the exact sentence below** (as this helps with our automation):
"I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER"
Thanks,
The AVM Core Team
#RR
<!-- markdownlint-restore -->
Once the identified module owner has confirmed they understand and accept their roles and responsibilities as an AVM module owner:
Make sure the issue is assigned to the confirmed module owner.
Move the issue into the “In development” column on the AVM - Modules Triage GitHub Project board.
Make sure the Status: Owners Identified label is added to the issue.
If applied earlier, remove the Needs: Module Owner label from the issue.
Remove the Needs: Triage and Status: In Triage labels to indicate you're done with triaging the issue.
Use the following text to approve module development
Final Confirmation for Proposed Module Owners
<!-- markdownlint-disable -->
Hi @avm_module_owner,
Thanks for confirming that you wish to own this AVM module and understand the related requirements and responsibilities!
Before starting development, please ensure ALL the following requirements are met.
**Please use the following values explicitly as provided in the [module index](https://azure.github.io/Azure-Verified-Modules/indexes/) page**:
- For your module:
- `ModuleName` - for naming your module
- `TelemetryIdPrefix` - for your module's [telemetry](https://azure.github.io/Azure-Verified-Modules/spec/SFR3)
- For your module's repository:
- Repo name and folder path are defined in `RepoURL`
- Create GitHub teams for module owners and contributors and grant them permissions as outlined [here](https://azure.github.io/Azure-Verified-Modules/spec/SNFR20).
- Grant permissions for the AVM core team and PG teams on your GitHub repo as described [here](https://azure.github.io/Azure-Verified-Modules/spec/SNFR9).
Check if this module exists in the other IaC language. If so, collaborate with the other owner for consistency.
You can now start the development of this module! Happy coding!
**Please respond to this comment and request a review from the AVM core team once your module is ready to be published! Please include a link pointing to your PR, once available.** Any further questions or clarifications needed, let us know!
Thanks,
The AVM Core Team
<!-- markdownlint-restore -->
Important
Although, it’s not directly part of the module proposal triage process, to begin development, module owners and contributors might need additional help from the AVM core team, such as:
Update any Azure RBAC permissions for test tenants/subscription, if needed.
In case of Bicep modules only:
Look for the module owners confirmation on the related [Module Proposal] issue that they have created the required -module-owners- and -module-contributors- GitHub teams.
Ensure the -module-owners- and -module-contributors- GitHub teams have been assigned to their respective parent teams as outlined here.
The Module Proposal issue MUST remain open until the module is fully developed, tested and published to the relevant registry.
Do NOT close the issue before the successful publication is confirmed!
Once the module is fully developed, tested and published to the relevant registry, and the Module Proposal issue was closed, it MUST remain closed.
Orphaned modules
When a module becomes orphaned
If a module meets the criteria described in the "Orphaned AVM Modules" chapter, the module is considered orphaned and the below steps must be performed.
An issue is considered to be an orphaned module if
The original Module Proposal issue related to the module in question MUST remain closed and intact.
Instead, a new Orphaned Module issue must be opened that MUST remain open until the ownership is fully confirmed!
Once the Orphaned Module issue was closed, it MUST remain closed. If the module will subsequently become orphaned again, a new Orphaned Module issue must be opened.
Place an information notice as per the below guidelines:
In case of a Bicep module:
Place the information notice - with the text below - in an ORPHANED.md file, in the module’s root.
Run the utilities/tools/Set-AVMModule.ps1 utility with the module path as an input. This re-generates the module's README.md file, so that the README.md file will also contain the same notice in its header.
Make sure the content of the ORPHANED.md file is displayed in the README.md in its header (right after the title).
In case of a Terraform module, place the information notice - with the text below - in the README.md file, in the module’s root.
Once the information notice is placed, submit a Pull Request.
Include the following text in the information notice:
Orphaned module notice for module README file
⚠️ THIS MODULE IS CURRENTLY ORPHANED. ⚠️
- Only security and bug fixes are being handled by the AVM core team at present.
- If interested in becoming the module owner of this orphaned module (must be Microsoft FTE), please look for the related "orphaned module" GitHub issue [here](https://aka.ms/AVM/OrphanedModules)!
Try to find a new owner using the AVM communities or await a new module owner to comment and propose themselves on the issue.
When a new potential owner is identified, clarify the roles and responsibilities of the module owner:
Clarify they understand and accept what “module ownership” means by replying in a comment to the requestor/proposed owner:
Standard AVM Core Team Reply to New Owners of an Orphaned Module
<!-- markdownlint-disable -->
Hi @avm_module_owner,
Thanks for requesting/proposing to be an AVM module owner!
We just want to confirm **you agree to the below pages** that define what module ownership means:
- [Team Definitions & RACI](https://azure.github.io/Azure-Verified-Modules/specs/shared/team-definitions)
- [Module Specifications](https://azure.github.io/Azure-Verified-Modules/specs/module-specs)
- [Module Support](https://azure.github.io/Azure-Verified-Modules/help-support/module-support)
Any questions or clarifications needed, let us know!
If you agree, please just **reply to this issue with the exact sentence below** (as this helps with our automation):
"I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER"
Thanks,
The AVM Core Team
#RR
<!-- markdownlint-restore -->
Once the new module owner candidate has confirmed they understand and accept their roles and responsibilities as an AVM module owner
Assign the issue to the confirmed module owner.
Remove the Status: Module Orphaned and the Needs: Module Owner labels from the issue.
Add the Status: Module Available and Status: Owners Identified labels to the issue.
Move the issue into the “Done” column on the AVM - Modules Triage GitHub Project board.
Get the new owner(s) and any new contributor(s) added to the related -module-owners- or -module-contributors- teams. See SNFR20 for more details.
Remove the information notice (i.e., the file that states that ⚠️ THIS MODULE IS CURRENTLY ORPHANED. ⚠️, etc.):
In case of a Bicep module:
Delete the ORPHANED.md file from the module’s root.
Run the utilities/tools/Set-AVMModule.ps1 utility with the module path as an input. This re-generates the module's README.md file, so that it will no longer contain the orphaned module notice in its header.
Double check that the previous step was successful and that the README.md file no longer has the information notice in its header (right after the title).
In case of a Terraform module, remove the information notice from the README.md file in the module’s root.
Once the information notice is removed, submit a Pull Request.
Use the following text to confirm the new ownership of an orphaned module:
Final Confirmation for New Owners of an Orphaned Module
<!-- markdownlint-disable -->
Hi @avm_module_owner,
Thanks for confirming that you wish to own this AVM module and understand the related requirements and responsibilities!
We just want to ask you to double check a few important things.
**Please use the following values explicitly as provided in the [module index](https://azure.github.io/Azure-Verified-Modules/indexes/) page**:
- You must be the owner of the GitHub teams as outlined [here](https://azure.github.io/Azure-Verified-Modules/spec/SNFR20).
- Please check that your name has been updated in the module index page (this should happen shortly after you confirmed ownership).
You can now take ownership of this module and start improving it as needed! Happy coding!
Any further questions or clarifications needed, let us know!
Thanks,
The AVM Core Team
<!-- markdownlint-restore -->
Close the Orphaned Module issue.
General feedback/question, documentation update and other standard issues
An issue is a "General Question/Feedback" if it was opened through the "General Question/Feedback" issue template, and has the labels of Type: Question/Feedback and Needs: Triage applied to it.
An issue is an "AVM Documentation Update" if it was opened through the "AVM Documentation Update" issue template, and has the labels of Type: Documentation and Needs: Triage applied to it.
An issue is considered to be a "standard issue" or "blank issue" if it was opened without using an issue template, and hence it does NOT have any labels assigned, OR only has the Needs: Triage label assigned.
When triaging the issue, consider adding one of the following labels as fits:
Type: Documentation
Type: Feature Request
Type: Bug
Type: Security Bug
To see the full list of available labels, please refer to the GitHub Repo Labels section.
Note
If an intended module proposal was mistakenly opened as a "General Question/Feedback" or other standard issue, and hence it doesn't have the Type: New Module Proposal label associated to it, a new issue MUST be created using the "New AVM Module Proposal" issue template. The mistakenly created "General Question/Feedback" or other standard issue MUST be closed.
BRM Issue Triage
Overview
This page provides guidance for Bicep module owners on how to triage AVM module issues and AVM question/feedback items filed in the BRM repository (Bicep Registry Modules repository - where all Bicep AVM modules are published), as well as how to manage these GitHub issues throughout their lifecycle.
As such, the following issues are to be filed in the BRM repository:
[AVM Module Issue]: Issues specifically related to an existing AVM module, such as feature requests, bug and security bug reports.
[AVM Question/Feedback]: Generic feedback and questions, related to an existing AVM module, the overall framework, or its automation (CI environment).
Do NOT file the following types of issues in the BRM repository, as they MUST be tracked in the AVM repo:
[Orphaned Module]: Indicate that a module is orphaned (has no owner).
[Question/Feedback]: Generic questions/requests related to the AVM site or documentation.
Note
Every module needs a module proposal to be created in the AVM repository.
Module Owner Responsibilities
During the triage process, module owners are expected to check, complete and follow up on the items described in the sections below.
Module owners MUST meet the SLAs defined on the Module Support page! While there’s automation in place to support meeting these SLAs, module owners MUST check for new issues on a regular basis.
Important
The BRM repository includes other, non-AVM modules and related GitHub issues. As a module owner, make sure you’re only triaging, managing or otherwise working on issues that are related to AVM modules!
Tip
To look for items that need triaging, click on the following link to use this saved query: Needs: Triage.
To look for items that need attention, click on the following link to use this saved query: Needs: Attention.
Module issues can only be opened for existing AVM modules. Module issues MUST NOT be used to file a module proposal.
If the issue was opened as a misplaced module proposal, mention the @Azure/AVM-core-team-technical-bicep team in the comment section and ask them to move the issue to the AVM repository.
Triaging a Module Issue
Check the Module issue:
Make sure the issue has the Type: AVM label applied to it.
Use the AVM module indexes to identify the module owner(s) and make sure they are assigned/mentioned/informed.
If the module is orphaned (has no owner), make sure there’s an orphaned module issue in the AVM repository.
Make sure the module’s details are captured correctly in the description - i.e., name, classification (resource/pattern), language (Bicep/Terraform), etc.
Make sure the issue is categorized using one of the following type labels:
Type: Feature Request
Type: Bug
Type: Security Bug
Apply relevant labels for module classification (resource/pattern): Class: Resource Module or Class: Pattern Module
Communicate next steps to the requestor (issue author).
Remove the Needs: Triage label.
When more detailed plans are available, communicate expected timeline for the update/fix to the requestor (issue author).
Only close the issue, once the next version of the module was fully developed, tested and published.
Triaging a Module PR
If the PR is submitted by the module owner and the module is owned by a single person, the AVM core team must review and approve the PR (as the module owner can't approve their own PR).
To indicate that the PR needs the core team's attention, apply the Needs: Core Team label.
If the PR is submitted by a contributor (other than the module owner), or the module is owned by at least 2 people, one of the module owners should review and approve the PR.
Apply relevant labels
Make sure the PR is categorized using one of the following type labels:
Type: Feature Request
Type: Bug
Type: Security Bug
For module classification (resource/pattern): Class: Resource Module or Class: Pattern Module
If the module is orphaned (has no owner), make sure the related Orphaned module issue (in the AVM repository) is associated to the PR in a comment, so the new owner can easily identify all related issues and PRs when taking ownership.
Remove the Needs: Triage label.
Give your PR a meaningful title
Prefix: Start with one of the allowed keywords - fix: or feat: is the most common for module related changes.
Description: Add a few words, describing the nature of the change.
Module name: Add the module’s full name between backticks ( ` ) to make it pop.
General Question/Feedback and other standard issues
An issue is considered to be an “AVM Question/Feedback” if
An issue is considered to be a "standard issue" or "blank issue" if it was opened without using an issue template, and hence it does NOT have any labels assigned, OR only has the Needs: Triage label assigned.
Triaging a General Question/Feedback and other standard issues
When triaging the issue, consider adding one of the following labels as fits:
Type: Documentation
Type: Feature Request
Type: Bug
Type: Security Bug
To see the full list of available labels, please refer to the GitHub Repo Labels section.
Add any (additional) labels that apply.
Communicate next steps to the requestor (issue author).
Remove the Needs: Triage label.
When more detailed plans are available, communicate expected timeline for the update/fix to the requestor (issue author).
Once the question/feedback/topic is fully addressed, close the issue.
Note
If an intended module proposal was mistakenly opened as an "AVM Question/Feedback" or other standard issue, a new issue MUST be created in the AVM repo using the "New AVM Module Proposal" issue template. The mistakenly created "AVM Question/Feedback" or other standard issue MUST be closed.
Issue Triage Automation
This page details the automation that is in place to help with the triage of issues and PRs raised against the AVM modules.
Schedule based automation
This section details all automation rules that are based on a schedule.
Note
When calculating the number of business days in the issue/triage automation, the built-in logic considers Monday-Friday as business days. The logic doesn’t consider any holidays.
To avoid this rule being (re)triggered, the Needs: Triage label must be removed as part of the triage process (when the issue is first responded to).
To avoid this rule being (re)triggered, the Needs: Triage label must be removed as part of the triage process (when the issue is first responded to).
Add a reply, mentioning the Azure/terraform-avm team.
Add the Needs: Immediate Attention label.
ITA04
If an issue/PR has been labelled with Needs: Author Feedback and hasn't had a response in 4 days, label with Status: No Recent Activity and add a comment.
Schedule:
Triggered every 3 hours.
Trigger criteria:
Is an open issue/PR.
Had no activity in the last 4 days.
Has the Needs: Author Feedback label added.
Does not have the Status: No Recent Activity label added.
Action(s):
Add the Status: No Recent Activity label.
Add a reply.
Tip
To prevent further actions to take effect, one of the following conditions must be met:
The author must respond in a comment within 3 days of the automatic comment left on the issue.
The Status: No Recent Activity label must be removed.
If applicable, the Status: Long Term or the Needs: Module Owner label must be added.
ITA05
Warning
This rule is currently disabled in the AVM and BRM repositories.
If an issue/PR has been labelled with Status: No Recent Activity and hasn't had any update in 3 days from that point, automatically close it and comment, unless the issue/PR has a Status: Long Term label - in which case, do not close it.
Schedule:
Triggered every 3 hours.
Trigger criteria:
Is an open issue.
Had no activity in the last 3 days.
Has the Needs: Author Feedback and the Status: No Recent Activity labels added.
Does not have the Needs: Module Owner or Status: Long Term labels added.
Action(s):
Add a reply.
Close the issue.
Tip
In case the issue needs to be reopened (e.g., the author responds after the issue was closed), the Status: No Recent Activity label must be removed.
ITA24
Remind module owner(s) to start or continue working on this module if there was no activity on the Module Proposal issue for more than 3 weeks. Add the Needs: Attention label.
Schedule:
Triggered every 3 hours.
Trigger criteria:
Is an open issue.
Had no activity in the last 21 days.
Has the Type: New Module Proposal and the Status: Owners Identified labels added.
Does not have the Status: Long Term label added.
Action(s):
Add a reply.
Add the Needs: Attention label.
Tip
To silence this notification, provide an update every 3 weeks on the Module Proposal issue, or add the Status: Long Term label.
Event based automation
This chapter details all automation rules that are based on an event.
ITA06
When a new issue or PR of any type is created, add the Needs: Triage label.
Trigger criteria:
An issue or PR is opened.
Action(s):
Add the Needs: Triage label.
Add a reply to explain the action(s).
ITA08BCP
If AVM or "Azure Verified Modules" is mentioned in an uncategorized issue (i.e., one not using any template), apply the label of Type: AVM on the issue.
Trigger criteria:
An issue, issue comment, PR, or PR comment is opened, created or edited and the body or comment contains the strings of “AVM” or “Azure Verified Modules”.
Action(s):
Add the Type: AVM label.
ITA09
When #RR is used in an issue, add the label of Needs: Author Feedback.
Trigger criteria:
An issue comment or PR comment contains the string of “#RR”.
Action(s):
Add the Needs: Author Feedback label.
ITA10
When #wontfix is used in an issue, mark it by using the label of Status: Won't Fix and close the issue.
Trigger criteria:
An issue comment or PR comment contains the string of "#wontfix".
Action(s):
Add the Status: Won't Fix label.
Close the issue.
ITA11
When the author replies, remove the Needs: Author Feedback label and label with Needs: Attention.
Trigger criteria:
Any action on an issue comment or PR comment except closing.
Has the Needs: Author Feedback label added.
The activity was initiated by the issue/PR author.
Action(s):
Remove the Needs: Author Feedback label.
Remove the Status: No Recent Activity label.
Add the Needs: Attention label.
ITA12
Clean up e-mail replies to GitHub Issues for readability.
Trigger criteria:
Any action on an issue comment.
Action(s):
Clean email reply. This is useful when someone directly responds to an email notification from GitHub, and the email signature is included in the comment.
ITA13
If the language is set to Bicep in the Module proposal, add the Language: Bicep label on the issue.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Bicep or Terraform?
Bicep
Action(s):
Add the Language: Bicep label.
ITA14
If the language is set to Terraform in the Module proposal, add the Language: Terraform label on the issue.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Bicep or Terraform?
Terraform
Action(s):
Add the Language: Terraform label.
ITA15
Remove the Needs: Triage label from a PR if it already has a "Type: XYZ" label added and is assigned to someone at the time of creating it.
Trigger criteria:
A PR is opened with any of the following labels added and is assigned to someone:
Type: Bug
Type: Documentation
Type: Duplicate
Type: Feature Request
Type: Hygiene
Type: New Module Proposal
Type: Question/Feedback
Type: Security Bug
Action(s):
Remove the Needs: Triage label.
ITA16
Add the Status: Owners Identified label when someone is assigned to a Module Proposal.
Trigger criteria:
Any action on an issue except closing.
Has the Type: New Module Proposal label added.
The issue is assigned to someone.
Action(s):
Add the Status: Owners Identified label.
ITA17
If the issue author says they want to be the module owner, assign the issue to the author and respond to them.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Do you want to be the owner of this module?
Yes
Action(s):
Assign the issue to the author.
Add the below reply and explain the action(s).
@${issueAuthor}, thanks for volunteering to be a module owner!
**Please don't start the development just yet!** The AVM core team will review this module proposal and respond to you first. Thank you!
ITA18
Send an automatic response to the issue author if they don't want to be the module owner and don't have any candidate in mind. Add the Needs: Module Owner label.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Do you want to be the owner of this module?
No
### Module Owner's GitHub Username (handle)
_No response_
Action(s):
Add the Needs: Module Owner label.
Add the below reply and explain the action(s).
@${issueAuthor}, thanks for submitting this module proposal!
The AVM core team will review it and will try to find a module owner.
ITA19
Send an automatic response to the issue author if they don't want to be the module owner but have a candidate in mind. Add the Status: Owners Identified label.
Trigger criteria:
An issue is opened with its body matching the below pattern…
### Do you want to be the owner of this module?
No
@${issueAuthor}, thanks for submitting this module proposal with a module owner in mind!
**Please don't start the development just yet!** The AVM core team will review this module proposal and respond to you and/or the module owner first. Thank you!
ITA20
If the issue type is feature request, add the Type: Feature Request label on the issue.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Issue Type?
Feature Request
Action(s):
Add the Type: Feature Request label.
ITA21
If the issue type is bug, add the Type: Bug label on the issue.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Issue Type?
Bug
Action(s):
Add the Type: Bug label.
ITA22
If the issue type is security bug, add the Type: Security Bug label on the issue.
Trigger criteria:
An issue is opened with its body matching the below pattern.
### Issue Type?
Security Bug
Action(s):
Add the Type: Security Bug label.
ITA23
Remove the Status: In PR label from an issue when it's closed.
Trigger criteria:
An issue is closed.
Action(s):
Remove the Status: In PR label.
ITA25
Inform module owners that they need to add the Needs: Core Team label to their PR if they're the sole owner of their module.
Trigger criteria:
A PR is opened.
Action(s):
Inform module owners that they need to add the Needs: Core Team label to their PR if they're the sole owner of their module.
Where to apply these rules?
The below table details which repositories the above rules are applied to.
This page provides guidance for Terraform Module owners on how to triage AVM module issues and AVM question/feedback items filed in their Terraform Module Repo(s), as well as how to manage these GitHub issues throughout their lifecycle.
The following issues can be filed in a Terraform repository:
AVM Module Issue: Issues specifically related to an existing AVM module, such as feature requests, bug and security bug reports.
AVM Question/Feedback: Generic feedback and questions, related to existing AVM module, the overall framework, or its automation (CI environment).
Do NOT file the following types of issues in a Terraform repository, as they MUST be tracked in the AVM repo:
[Orphaned Module]: Indicate that a module is orphaned (has no owner).
[Question/Feedback]: Generic questions/requests related to the AVM site or documentation.
Note
Every module needs a module proposal to be created in the AVM repository.
Module Owner Responsibilities
During the triage process, module owners are expected to check, complete and follow up on the items described in the sections below.
Module owners MUST meet the SLAs defined on the Module Support page! While there’s automation in place to support meeting these SLAs, module owners MUST check for new issues on a regular basis.
Tip
To look for items that need triaging, look for issues labeled with Needs: Triage.
To look for items that need attention, look for issues labeled with Needs: Attention.
Module Issue
An issue is considered to be an “AVM module issue” if
it was opened through the AVM Module Issue template in the Terraform repository,
it has the label of Needs: Triage applied to it, and
Module issues can only be opened for existing AVM modules. Module issues MUST NOT be used to file a module proposal.
If the issue was opened as a misplaced module proposal, mention the @Azure/AVM-core-team-technical-terraform team in the comment section and ask them to move the issue to the AVM repository.
Triaging a Module Issue
Check the Module issue:
Use the AVM module indexes to identify the module owner(s) and make sure they are assigned/mentioned/informed.
If the module is orphaned (has no owner), make sure there’s an orphaned module issue in the AVM repository.
Make sure the module’s details are captured correctly in the description - i.e., name, classification (resource/pattern), language (Bicep/Terraform), etc.
Make sure the issue is categorized using one of the following type labels:
Type: Feature Request
Type: Bug
Type: Security Bug
Apply relevant labels for module classification (resource/pattern): Class: Resource Module or Class: Pattern Module
Communicate next steps to the requestor (issue author).
Remove the Needs: Triage label.
When more detailed plans are available, communicate expected timeline for the update/fix to the requestor (issue author).
Only close the issue, once the next version of the module was fully developed, tested and published.
General Question/Feedback and other standard issues
An issue is considered to be an “AVM Question/Feedback” if
it was opened through the AVM Question/Feedback template in your Terraform repository,
it has the labels of Needs: Triage and Type: Question/Feedback applied to it, and
Triaging a General Question/Feedback and other standard issues
When triaging the issue, consider adding one of the following labels as fits:
Type: Documentation
Type: Feature Request
Type: Bug
Type: Security Bug
To see the full list of available labels, please refer to the GitHub Repo Labels section.
Add any (additional) labels that apply.
Communicate next steps to the requestor (issue author).
Remove the Needs: Triage label.
When more detailed plans are available, communicate expected timeline for the update/fix to the requestor (issue author).
Once the question/feedback/topic is fully addressed, close the issue.
Note
If an intended module proposal was mistakenly opened as an "AVM Question/Feedback" or other standard issue, a new issue MUST be created in the AVM repo using the "New AVM Module Proposal" issue template. The mistakenly created "AVM Question/Feedback" or other standard issue MUST be closed.
Known Issues
Unfortunately, there will be times when issues are outside of the AVM core team's and module owners'/contributors' control, and an issue may have to be lived with for a longer-than-ideal duration - for example, because of the way the Azure platform or a resource behaves, or because of an IaC language issue.
This page will detail any of the known issues that consumers may come across when using AVM modules and provide links to learn more about them and where to get involved in discussions on these known issues with the rest of the community.
Important
Issues related to an AVM module must be raised on the repo they are hosted on, not the AVM Central (Azure/Azure-Verified-Modules) repo!
However, if you think a known issue is missing from this page, please create an issue on the AVM Central (Azure/Azure-Verified-Modules) repo.
If you accidentally raise an issue in the wrong place, we will transfer it to its correct home.
Bicep
Bicep what-if compatibility with modules
Bicep/ARM What-If has a known issue today where it short-circuits whenever a runtime function is used in a nested template. And due to the way Bicep modules work, all module declarations in a Bicep file end up as a resulting nested template deployment in the underlying generated ARM template, thereby invoking this known issue.
The ARM/Bicep Product Group has recently announced on the issue that they are making progress in this space and are aiming to provide a closer ETA in the near future; see the comment here.
While this isn't an AVM issue, we understand that consumers of AVM Bicep modules may want to use what-if and are running into this known issue. Please keep adding your support to the issue mentioned above (Azure/arm-template-whatif #157), as the Product Group is actively engaging in the discussion there.
Terraform
Currently, there are no known issues for AVM Terraform modules.
Module Support
As mentioned on the Overview page, we understand that long-term support from Microsoft in an initiative like AVM is critical to its adoption by consumers and, therefore, to the success of AVM. We have therefore aligned on and provide the below support statement/process for AVM modules.
Important
Issues with an AVM module should be raised on the repo they are hosted on, not the AVM Central (Azure/Azure-Verified-Modules) repo!
It’s not a problem if you raise an issue in the wrong place; we will transfer it to its correct home.
Azure Verified Modules are supported by the AVM teams, as defined here, using GitHub issues in the following order of precedence:
Module owners/contributors
If there is no response within 3 business days, then the AVM core team will step in by:
First attempting to contact the module owners/contributors to prompt them to act.
If there is no response within a further 24 hours (on business days), the AVM core team will take ownership, triage, and provide a response within a further 2 business days.
In the event of a security issue remaining unaddressed after 5 business days, the issue will be escalated to the product group (Bicep/Terraform) to assist the AVM core team and, if required, provide additional support towards resolution.
Note
Please note that the durations stated above refer to a reasonable and useful response towards resolving the issue raised, where possible, and not to a fix being delivered within these durations; although, where possible, this will of course happen.
All of this will be automated via the use of the Resource Management feature of the Microsoft GitHub Policy Service and GitHub Actions, where possible and appropriate.
If the AVM core team or Product Groups have to step in because the module owners/contributors are not responding, the AVM module will become “orphaned”; see Module Lifecycle for more info.
Important
You can also raise a ticket with Microsoft CSS (Microsoft Customer Services & Support) and your ticket will be triaged by them for any platform issues and if deemed not the platform but a module issue, it will be redirected to the AVM team and the module owner(s).
Telemetry
Microsoft uses the approach detailed in this section to identify the deployments of the AVM Modules. Microsoft collects this information to provide the best experiences with their products and to operate their business. Telemetry data is captured through the built-in mechanisms of the Azure platform; therefore, it never leaves the platform, providing only Microsoft with access. Deployments are identified through a specific GUID (Globally Unique ID), indicating that the code originated from AVM. The data is collected and governed by Microsoft’s privacy policies, located at the Trust Center.
Telemetry collected as described here does not provide Microsoft with insights into the resources deployed, their configuration or any customer data stored in or processed by Azure resources deployed by using code from AVM. Microsoft does not track the usage/consumption of individual resources using telemetry described here.
Note
While telemetry gathered as described here is only accessible by Microsoft, Bicep customers have access to the exact same deployment information in the Azure portal, under the Deployments section of the corresponding scope (Resource Group, Subscription, etc.). Terraform customers can view the information sent in the main.telemetry.tf file.
As detailed in SFR3, each AVM module contains an avmTelemetry deployment, which creates a deployment such as 46d3xbcp.res.compute-virtualmachine.1-2-3.eum3 (for Bicep) or 46d3xgtf.res.compute-virtualmachine.1-2-3.eum3 (for Terraform).
Opting Out
While the telemetry described in this section is optional, the implementation follows an opt-out logic. As with most commercial software solutions, this project also requires continuously demonstrating evidence of usage; hence, the AVM core team recommends leaving the telemetry setting on its default, enabled configuration.
This resource enables the AVM core team to query the number of deployments of a given module from Azure - and as such, get insights into its adoption.
To opt out you can set the parameters/variables listed below to false in the AVM module:
Bicep: enableTelemetry
Terraform: enable_telemetry
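For illustration, here is a minimal sketch of opting out from a consuming Bicep template; the module path, version tag, and parameter values are placeholders, so check the target module’s README.md for its actual required parameters. The Terraform equivalent is setting enable_telemetry = false on the module block.

```bicep
// Hypothetical consuming template - module path, version and parameter values are placeholders.
module exampleVault 'br/public:avm/res/key-vault/vault:<version>' = {
  name: 'exampleVaultDeployment'
  params: {
    name: 'kv-example-001' // placeholder resource name
    enableTelemetry: false // opt out of AVM telemetry (defaults to true)
  }
}
```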
Telemetry vs Customer Usage Attribution
Though similar in principle, this approach is not to be confused with, and does not conflict with, the usage of CUA IDs that are used to track Azure customer usage attribution of Azure Marketplace solutions (partner solutions). The GUID-based telemetry approach described here can coexist and be used side by side with CUA IDs. If you have any partner or customer scenarios that require the addition of CUA IDs, you can customize the AVM modules by adding the required CUA ID deployment while keeping the built-in telemetry solution.
Tip
If you’re a partner and want to build a solution that tracks customer usage attribution (using a CUA ID), we recommend implementing it at the consuming template’s level (i.e., the multi-module solution, such as the workload/application) and applying the required naming format 'pid-' (without the suffix).
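As a hypothetical sketch only (the GUID and resource name are placeholders, and your tracking resource may be shaped differently), such a customer usage attribution deployment is commonly an empty nested deployment added at the consuming template’s level, alongside the AVM module references:

```bicep
// Hypothetical partner tracking resource - replace the placeholder GUID with your own CUA ID.
resource partnerAttribution 'Microsoft.Resources/deployments@2022-09-01' = {
  name: 'pid-00000000-0000-0000-0000-000000000000' // i.e., 'pid-' followed by your CUA GUID
  properties: {
    mode: 'Incremental'
    template: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      resources: [] // intentionally empty - the deployment only exists to carry the pid- name
    }
  }
}
```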
Resources
This page references additional resources available for Azure Verified Modules (AVM).
Note
Additional internal content is available for Microsoft FTEs only, here.
Get ready to mark your calendars and ignite your enthusiasm, because our community calls are here to add a dash of excitement to your quarterly routine!
Embracing Global Connections: Our community calls occur on a quarterly basis, ensuring everyone gets a chance to participate regardless of their time zone. We switch between time zones each quarter, ensuring inclusivity and accessibility for all members worldwide!
Engaging Content, Engaging Conversations: Dive into the first half of our calls, where the AVM team takes the stage to share thrilling updates, insightful content, and inspiring stories. Get ready to be dazzled by the latest happenings and developments in our community!
Your Voice Matters: We open the floor to attendees, inviting you to bring your burning questions, captivating stories, ingenious suggestions, or simply your cheerful presence. It’s a space where every voice is valued and every story is celebrated!
Can’t Make It? No Problem! We understand that life gets busy, but FOMO (Fear of Missing Out) shouldn’t hold you back! If you can’t join us live, fret not! Every meeting is recorded and later published on our YouTube channel, ensuring you never miss a moment of the action!
Let’s make each community call a vibrant celebration of connection, collaboration, and camaraderie!
Upcoming Events
6th February 2025
Note
This occurrence is optimized for EMEA/APAC time zones.
Got an unanswered question? Create a GitHub Issue so we can get it answered and added here for everyone’s benefit.
Note
Microsoft FTEs only: check out the internal FAQ for additional information.
Tip
Check out the Contribution Q&A for more answers to common questions about the contribution process.
Timeline, history, plans
When will we have a library that is in a “usable” state? Not complete, but covering the most important resources?
Bicep: AVM evolved all modules of CARML (Common Azure Resource Module Library) for its Bicep resource module collection (see here). To initially populate AVM with Bicep resource modules, all existing CARML modules have been migrated to AVM. Resource modules can now be directly leveraged to support the IaC needs of a wide variety of Azure workloads. Pattern modules can also be developed building on these resource modules.
Terraform: In the case of Terraform, there were significantly fewer modules available in TFVM (Terraform Verified Modules Library) compared to CARML; hence, most Terraform modules have been and are being built as people volunteer to be module owners. We’ve been prioritizing the development of the Terraform modules based on our learnings from former initiatives, as well as customer demand - i.e., which modules are the most frequently deployed.
What happened to existing initiatives like CARML and TFVM?
The AVM team worked/works closely with the teams behind these initiatives (CARML and TFVM).
All previously existing assets from these two libraries have been incorporated into AVM as resource or pattern modules.
All previously existing (non-AVM) modules that were published in the Public Bicep Registry (stored in the /modules folder of the BRM repository) have either been retired or transformed into an AVM module - while some are still being worked on.
CARML to AVM Evolution
CARML can be considered AVM’s predecessor. It was started by Industry Solutions Delivery (ISD) and the Customer Success Unit (CSU) and has been contributed to by many across Microsoft and has also had external contributions.
A lot of CARML’s principles and architecture decisions have formed the basis for AVM. Following a small number of changes to make them AVM compliant, all CARML modules have been transitioned to AVM as resource or pattern modules.
In summary, CARML evolved to and has been rebranded as the Bicep version of AVM. A notice has been placed on the CARML repo redirecting users and contributors to the AVM central repository.
Terraform Timeline and Approach
As the AVM core team is not directly responsible for the development of the modules (that’s the responsibility of the module owners), there’s no specific timeline available for the publication of Terraform modules.
However, the AVM core team is focused on the following activities to facilitate and optimize the development process:
Leveraging customer demand, telemetry and learnings from former initiatives to prioritize the development of Terraform modules.
Providing automated tools and processes (CI environment and automated tests).
Accelerating the build-out of the Terraform module owners’ community.
Recruiting new volunteers to build and maintain Terraform modules.
Will existing Landing Zone Accelerators (Platform & Application) be migrated to become AVM pattern modules and/or built from AVM resource modules?
Not in the short/immediate term. Existing Landing Zone Accelerators (Platform & Application) will not be forced to convert their existing code bases, if available in either language, to AVM or to use AVM.
However, over time if new features or functionality are required by Landing Zone Accelerators, that team SHOULD consider migrating/refactoring that part of their code base to be constructed with the relevant AVM module if available. For example, the Bicep version of the “Sub Vending” solution is migrating to AVM shortly.
If the relevant AVM module isn’t available to use to assist the Landing Zone Accelerator, then a new AVM module proposal should be made, and if desired, the Landing Zone Accelerator team may decide to own this proposed module π
Does/will AVM cover Microsoft 365, Azure DevOps, GitHub, etc.?
While the principles and practices of AVM are largely applicable to other clouds and services, such as Microsoft 365 and Azure DevOps, the AVM program (today) only covers Azure cloud resources and architectures.
However, if you think this program, or a similar one, should exist to cover these other Microsoft Cloud offerings, please add a 👍 or leave a comment on this GitHub Issue #71 in the AVM repository.
Will AVM also become a part of azd cli?
Yes, the AVM team is partnering with the AZD team and they are already using Bicep AVM modules from the public registry.
What is the difference between the Bicep Registry and AVM? (How) Do they come together?
The Public Bicep Registry (backed by the BRM repository) is Microsoft’s official Bicep registry for 1st-party-supported Bicep modules. It has existed for a while now and has received a good number of contributions.
As various teams inside Microsoft have come together to establish a “One Microsoft” IaC approach and library, we started the AVM initiative to bridge the gaps by defining specifications for both Bicep and Terraform modules.
In the BRM repo today, “vanilla modules” (non-AVM modules) can be found in the /modules folder, while AVM modules are located in the /avm folder. Both are being published to the same endpoint, the Public Bicep Registry. AVM Bicep modules are published in a dedicated namespace, using the avm/res & avm/ptn prefixes to make them distinguishable from the Public Registry’s “vanilla modules”.
Note
Going forward, AVM will become the single Microsoft standard for Bicep modules, published to the Public Bicep Registry, via the BRM repository.
In the upcoming period, existing “vanilla” modules will be retired or migrated to AVM, and new modules will be developed according to the AVM specifications.
How is AVM different from Bicep private registries and TemplateSpecs? Is AVM related to, or separate from Azure Radius?
AVM - with its modules published in the Public Bicep Registry (backed by the BRM repository) - represents the only standard from Microsoft for Bicep modules in the Public Registry.
Bicep private registries and TemplateSpecs are different ways of inner-sourcing, sharing, and internally leveraging Bicep modules within an organization. We’re planning to provide guidance for these scenarios in the future.
AVM has nothing to do with Radius (yet), but the AVM core team is constantly looking for additional synergies inside Microsoft.
At a high level, “WAF Aligned” means that, where possible and appropriate, AVM modules will align to recommendations and will default input parameters/variables to values that align with high impact/priority/severity recommendations in the following frameworks and resources:
For security recommendations, we will also utilize the following frameworks and resources, again only for high impact/priority/severity recommendations:
Will all AVM modules be 100% “WAF Aligned” out of the box and good to go?
Not quite, but they’ll certainly be on the right path. By default, modules will only have to set defaults for input parameters/variables to values that align to high impact/priority recommendations, as detailed above.
To understand this further, note that some of the “WAF Aligned” recommendations from the sources above involve more than just setting a string or boolean value to something particular to meet the recommendation; some require additional resources to be created and linked together to help satisfy the recommendation.
In these scenarios the AVM modules will not enforce the additional resources to be deployed and configured, but will provide sufficient flexibility via their input parameters/variables to be able to support the configuration, if so desired by the module consumer.
Tip
This is why we only enforce AVM module alignment to high impact/priority recommendations, as the majority of recommendations that are not high impact/priority will require additional resources to be used together to be compliant, as the examples below show.
Some examples
| Recommendation | Will it be set by default in AVM modules? |
| --- | --- |
| TLS version should always be set to the latest/highest version (TLS 1.3) | Yes, as a string value |
| Key Vault should use RBAC instead of access policies for authorization | Yes, as a string/boolean value |
| Container registries should use private link | No, as it requires additional Private Endpoint and DNS configuration, as well as, potentially, additional costs |
| API Management services should use a virtual network | No, as it requires additional Virtual Network and Subnet configuration, as well as, potentially, additional costs |
Important
While every Well-Architected Framework pillar’s recommendations should equally be considered by the module owners/contributors, within AVM we are taking an approach to prioritize reliability and security over cost optimization. This provides consumers of the AVM modules, by default, more resilient and secure resources and patterns.
However, please note that these defaulted values can be altered via input parameters/variables in each of the modules so that you can meet your specific requirements, as the sketch below illustrates.
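As a purely illustrative sketch (the parameter name, allowed values, and default below are hypothetical and not taken from a specific module), a WAF-aligned default typically surfaces as an input parameter pre-set to the more secure value, which consumers can still override:

```bicep
// Hypothetical module parameter - secure by default, but overridable by the consumer.
@description('Minimum TLS version accepted by the resource.')
@allowed([
  '1.2'
  '1.3'
])
param minimumTlsVersion string = '1.3' // WAF-aligned default; pass '1.2' only if you have a compatibility requirement
```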
What is a “Primary Resource” in the context of AVM?
The definition of a Primary Resource is detailed in the glossary.
How does AVM align and assist with the Secure Future Initiative (SFI)?
AVM modules are continuously being improved with the security and reliability recommendations of the Well-Architected Framework (for more details, see what AVM means by “WAF-aligned”). The AVM team is continuously reviewing SFI recommendations and, if required, rolling out updates to the AVM initiative to remain SFI compliant, as well as assisting module owners to ensure their modules help their consumers align to SFI where appropriate.
Contribution, module ownership
Can I be an AVM module owner if I’m not a Microsoft FTE?
Every module MUST have an owner who is responsible for module development and maintenance. One owner can own one or multiple modules. An owner can develop modules alone or lead a team that will develop a module.
Today, only Microsoft FTEs can be module owners. This is to ensure we can enforce and provide the long-term support required by this initiative.
How can I contribute to AVM without being a module owner?
Yes, you can contribute to a module without being its owner, but you’ll still need a module owner whom you can collaborate with. For context, see the answer to this question.
Tip
If you’re a Microsoft FTE, you should consider volunteering to be a module owner. You can propose a new module, or look for orphaned modules and volunteer to be the owner for any of them.
If you’re not a Microsoft FTE or don’t want to be a module owner, you can still contribute to AVM. You have multiple options:
You can propose a new module and provide as much context as possible under the “Module Details” section (e.g., why do you need the module, what’s the business impact of not having it, etc.). The AVM core team will try to find a Microsoft FTE to be the module owner whom you can collaborate with.
You can contact the current owner of any existing module and offer to contribute to it. You can find the current owners of all AVM modules in the module indexes.
You can look for orphaned modules and use the comment section to indicate that you’d be interested in contributing to this module, once a new owner is found.
Are there different ways to contribute to AVM?
Yes, there are multiple ways to contribute to AVM!
You can contribute to modules:
Become an owner (preferred):
Propose and develop a new module (Bicep or Terraform) or pick up a module someone else proposed.
Become the owner of an orphaned module (mainly Bicep) - look for “orphaned module” issues here or see the “Orphaned” swimlane here
Become an administrative owner and work with other contributors or co-owners on developing and maintaining modules.
Volunteer as a co-owner or module contributor to an existing module, and work alongside other contributors and the (administrative) module owner.
You can submit a PR with a small proposed change without officially becoming a module owner or contributor.
Or you can contribute to the AVM website/documentation, by following this guidance.
Note
New modules can’t be created and published without having a module owner assigned.
Where can I find modules I can contribute to?
You can find modules missing owners in the following places:
To indicate your interest in owning or contributing to a module, just leave a comment on the respective issue.
Note
If any of these queries don’t return any results, it means that no module in the selected category is looking for an owner or contributor at the moment.
I want to become the owner of XYZ modules, where can I indicate this, and what are the expected actions from me?
If one exists, you can comment on the Module Proposal issue of the module that you are interested in, and the AVM core team will triage it and provide information about next steps.
Can I submit a PR with new features to an existing module? If so, is this a good way to contribute too?
Of course! As all modules are open source, anyone can submit a PR to an existing module. But we’d suggest opening an issue first to discuss the suggested changes with the module owner before investing time in the code.
Are there any videos on how to get started with contribution? E.g., how to set up a local environment for development, how to write a unit test etc.?
No videos on the technical details of contribution are available (yet), but detailed written guidance can be found for both Bicep and Terraform here:
Is AVM a Microsoft official service/product/library or is this classified as an OSS backed by Microsoft?
AVM is an officially supported OSS project from Microsoft, across all organizations.
AVM is owned, developed & supported by Microsoft, you may raise a GitHub issue on this repository or the module’s repository directly to get support or log feature requests.
You can also log a support ticket and these will be redirected to the AVM team and the module owner(s).
If you raise a ticket with Microsoft CSS and they cannot resolve it (and/or it’s not related to a Microsoft service/platform/API, etc.), they will pass the ticket to the module owner(s) to resolve.
Module owners are tasked with two types of maintenance:
Proactive: keeping track of how the modules’ underlying technology evolves, and keeping modules up to date with the latest features and API versions.
Reactive: sometimes, mistakes are made that result in bugs, and/or consumers may ask for features faster than module owners can proactively implement them. Consumers can request feature updates and bug fixes for existing modules here.
Can an AVM module be used in production before it is marked as “GA” / v1.0?
As the overall AVM framework is not GA (generally available) yet - the CI framework and test automation are not fully functional and implemented across all supported languages yet - breaking changes are expected, and additional customer feedback is yet to be gathered and incorporated. Hence, modules must not be published at version 1.0.0 or higher at this time. All modules must be published as a pre-release version (e.g., 0.1.0, 0.1.1, 0.2.0, etc.) until the AVM framework becomes GA.
However, it is important to note that this does not mean that the modules cannot be consumed and utilized. They can be leveraged in all types of environments (dev, test, prod etc.). Consumers can treat them just like any other IaC module and raise issues or feature requests against them as they learn from the usage of the module. Consumers should also read the release notes for each version, if considering updating to a more recent version of a module to see if there are any considerations or breaking changes etc.
Technical questions
Should pattern modules leverage resource modules? What if (some of) the required resource modules are not available?
The initial focus of development and migration from CARML/TFVM was solely on resource modules. Now that the most important resource modules are published, pattern modules can leverage them as and where needed. This, however, doesn’t mean that the development of pattern modules is blocked in any way if a resource module is not available, since they may use native resources (“vanilla code”). If you’re about to develop a pattern module and need a resource module that doesn’t exist today, please consider building the resource module first, so that others can leverage it for their pattern modules as well.
Does AVM have the same limitations as ARM (e.g., 4 MB template size and 255 parameters)?
Yes, as AVM is just a collection of official Bicep/Terraform modules, it is still subject to the same Bicep/Terraform language and Azure platform limitations.
Does/will AVM support Managed Identities and Microsoft Entra objects automation?
Managed Identities - yes, they are supported in all resources today. Entra objects - these may come as new modules if/when the Graph provider is released, which is still in private preview.
How does AVM ensure code quality?
AVM utilizes a number of validation pipelines for both Bicep and Terraform. These pipelines are run on every PR and ensure that the code is compliant with the AVM specifications and that the module is working as expected.
For example, in the case of Bicep, as part of the PR process, we ask contributors to provide a workflow status badge as proof of successful validation using our testing pipelines.
The validation includes 2 main stages run in sequence:
Static validation: to ensure that the module complies with AVM specifications.
Deployment validation: to ensure all test examples are working from a deployment perspective.
These same validations are also run in the BRM repository after merge. The new version of the contributed module is published to the Public Bicep Registry only if all validations are successful.
What’s the guidance on transitioning to new module versions?
AVM is no different from any other solution that uses semantic versioning.
Customers should consider updating to a newer version of a module if:
They need a new feature the new version has introduced.
It fixes a bug they were having.
They’d like to use the latest and greatest version.
To do this, they just change the version in their module declaration for either Terraform or Bicep and then run it through their pipelines to roll it out.
The high-level steps are (a minimal Bicep sketch follows the list):
Check module documentation for any version-incompatibility notes.
Increase the version (point to the selected published version of the module).
Do a what-if (Bicep) or terraform plan (Terraform) & review the changes proposed.
If all good, proceed to deployment/apply.
If not, make required changes to make the plan/what-if as expected.
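For example, assuming you consume a module from the Public Bicep Registry, the version bump is just a change to the tag in the module reference; the module path and version tags below are placeholders:

```bicep
// Before (placeholder version): module storageAccount 'br/public:avm/res/storage/storage-account:0.1.0' = { ... }
// After: point at the newer published version, then review with what-if (Bicep) or terraform plan (Terraform) before deploying.
module storageAccount 'br/public:avm/res/storage/storage-account:0.2.0' = {
  name: 'storageAccountDeployment'
  params: {
    name: 'stexample001' // existing parameters stay as-is, unless the new version's release notes say otherwise
  }
}
```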
Using AVM
How can I use Bicep modules through the Public Registry?
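As a minimal sketch (the module path, version, and parameters are placeholders - see the Bicep usage guidance and the target module’s README.md for the authoritative details), AVM Bicep modules are referenced from the Public Registry via the br/public alias, using the avm/res or avm/ptn prefix and a pinned version:

```bicep
// General reference form - replace <module-path> and <version> with a published module and version,
// e.g. a resource module under avm/res/... or a pattern module under avm/ptn/...
module myModule 'br/public:avm/res/<module-path>:<version>' = {
  name: 'myModuleDeployment'
  params: {
    // see the module's README.md for its required and optional parameters
  }
}
```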
Do I need to allow a specific URL to access the Public Registry?
In a regulated environment, network traffic might be limited, especially when using private build agents. The AVM Bicep templates are served from the Microsoft Container Registry. To access this container registry, the URL https://mcr.microsoft.com must be accessible from the network. So, if your network settings or firewall rules prevent access to this URL, you would need to allow it to ensure proper functioning.
Aren’t AVM resource modules too complex for people less skilled in IaC technologies?
TLDR: Resource modules have complexity inside, so they can be flexibly used from the outside.
Resource modules are written in a flexible way; therefore, you don’t need to modify them from project to project or use case to use case. They aim to cover most of the functionality that a given resource type can provide, in a way that lets you interact with any module just by using the required parameters - i.e., you don’t have to know how the template of a particular module works internally; just take a look at the README.md file of the given module to learn how to leverage it.
Resource modules are multi-purpose; therefore, they contain a lot of dynamic expressions (functions, variables, etc.), so there’s no need to maintain multiple instances for different use cases. They can be deployed in different configurations just by changing the input parameters. They should be perceived by the user as black boxes, where they don’t have to worry about the internal complexity of the code, as they only interact with them by their parameters.
Can I call a Bicep child module directly? E.g., can I update or add a secret in an existing Key Vault, or a route in an existing route table?
As per the way the Public Registry is implemented today, it is not possible to publish child modules separately from their parents. As such, you cannot reference, e.g., an avm/res/key-vault/vault/key module directly from the registry; you can only deploy it through its parent, avm/res/key-vault/vault - unless you actually grab the module folder locally.
However, we kept the door open to make this possible in the future if there is a demand for it.
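As a hedged sketch of the pattern described above (the secrets parameter and its property names are assumptions based on the key vault resource module’s documented interface and may differ between versions, so verify against the module’s README.md), child resources such as secrets are expressed through the parent module’s parameters rather than referenced as separate registry modules:

```bicep
// Hedged sketch - child resources are configured via the parent module's parameters,
// since child modules (e.g. avm/res/key-vault/vault/secret) are not published separately.
module keyVault 'br/public:avm/res/key-vault/vault:<version>' = {
  name: 'keyVaultDeployment'
  params: {
    name: 'kv-example-001'
    secrets: [
      {
        name: 'example-secret' // assumed property names - verify against the module's README.md
        value: 'example-value' // in practice, reference a secure parameter instead of a literal
      }
    ]
  }
}
```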
Glossary
Terms, Abbreviations, and Acronyms
This page holds a table of all the terms, abbreviations, and acronyms that are used across this site.
“We are a family of individuals united by a single, shared mission. It’s our ability to work together that makes our dreams believable and, ultimately, achievable. We will build on the ideas of others and collaborate across boundaries to bring the best of Microsoft to our customers as one. We are proud to be part of team Microsoft.” See Microsoft cultural attributes