Terraform Specific Specification
Legacy content
The content on this website has been deprecated and will be removed in the future.
Please refer to the new documentation under the Terraform Specifications chapter for the most up-to-date information.
This page contains the Terraform specific requirements for AVM modules (Resource and Pattern modules) that ALL Terraform AVM modules MUST meet. These requirements are in addition to the Shared Specification requirements that ALL AVM modules MUST meet.
Provider Versatility: Users have the autonomy to choose between AzureRM, AzAPI, or a combination of both, tailored to the specific complexity of module requirements.
The following table summarizes the category identification codes used in this specification:
| Scope | Functional requirements | Non-functional requirements |
| --- | --- | --- |
| Shared requirements (resource & pattern modules) | TFFR | TFNFR |
| Resource module level requirements | N/A | N/A |
| Pattern module level requirements | N/A | N/A |
Listed below are both functional and non-functional requirements for Terraform AVM modules (Resource and Pattern).
Module owners MAY cross-reference other modules to build either Resource or Pattern modules. However, they MUST be referenced only by a HashiCorp Terraform registry reference to a pinned version, e.g.,
module "other-module" {
source = "Azure/xxx/azurerm"
version = "1.2.3"
}
They MUST NOT use a git reference to a module, e.g.:
module "other-module" {
source = "git::https://xxx.yyy/xxx.git"
}
module "other-module" {
source = "github.com/xxx/yyy"
}
Modules MUST NOT contain references to non-AVM modules.
See Module Sources for more information.
Authors SHOULD NOT output entire resource objects, as these may contain sensitive outputs and the schema can change with API or provider versions. Instead, authors SHOULD output the computed attributes of the resource as discrete outputs. This kind of pattern protects against provider schema changes and is known as an anti-corruption layer.
Remember, you SHOULD NOT output values that are already inputs (other than `name`).
E.g.,
```terraform
# Resource output, computed attribute.
output "foo" {
  description = "MyResource foo attribute"
  value       = azurerm_resource_myresource.foo
}

# Resource output for resources that are deployed using `for_each`. Again only computed attributes.
output "childresource_foos" {
  description = "MyResource children's foo attributes"
  value = {
    for key, value in azurerm_resource_mychildresource : key => value.foo
  }
}

# Output of a sensitive attribute
output "bar" {
  description = "MyResource bar attribute"
  value       = azurerm_resource_myresource.bar
  sensitive   = true
}
```
From Terraform AzureRM provider 3.0, the default value of `prevent_deletion_if_contains_resources` in the `provider` block is `true`. This leads to unstable tests because the test subscription has some policies applied that add extra resources during the run, which can cause failures when destroying resource groups.

Since we cannot guarantee that Azure Policy remediation tasks won't be applied to our testing environment in the future, for a robust testing environment, `prevent_deletion_if_contains_resources` SHOULD be explicitly set to `false`.
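In the test fixtures' provider configuration, a minimal sketch of this setting looks like the following (the `resource_group` block is part of the AzureRM provider's `features` configuration):

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow test resource groups to be destroyed even if policies
      # have added extra resources during the run.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```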
Where descriptions for variables and outputs span multiple lines, the description MAY provide variable input examples for each variable using the HEREDOC format and embedded markdown.
Example:
variable "my_complex_input" {
type = map(object({
param1 = string
param2 = optional(number, null)
}))
description = <<DESCRIPTION
A complex input variable that is a map of objects.
Each object has two attributes:
- `param1`: A required string parameter.
- `param2`: (Optional) An optional number parameter.
Example Input:
```terraform
my_complex_input = {
"object1" = {
param1 = "value1"
param2 = 2
}
"object2" = {
param1 = "value2"
}
}
```
DESCRIPTION
}
Terraform module documentation MUST be automatically generated via Terraform Docs.

A file called `.terraform-docs.yml` MUST be present in the root of the module and have the following content:
```yaml
---
### To generate the output file to partially incorporate in the README.md,
### Execute this command in the Terraform module's code folder:
# terraform-docs -c .terraform-docs.yml .
formatter: "markdown document" # this is required
version: "0.16.0"
header-from: "_header.md"
footer-from: "_footer.md"
recursive:
  enabled: false
  path: modules
sections:
  hide: []
  show: []
content: |-
  {{ .Header }}
  <!-- markdownlint-disable MD033 -->
  {{ .Requirements }}
  {{ .Providers }}
  {{ .Resources }}
  <!-- markdownlint-disable MD013 -->
  {{ .Inputs }}
  {{ .Outputs }}
  {{ .Modules }}
  {{ .Footer }}
output:
  file: README.md
  mode: replace
  template: |-
    <!-- BEGIN_TF_DOCS -->
    {{ .Content }}
    <!-- END_TF_DOCS -->
output-values:
  enabled: false
  from: ""
sort:
  enabled: true
  by: required
settings:
  anchor: true
  color: true
  default: true
  description: false
  escape: true
  hide-empty: false
  html: true
  indent: 2
  lockfile: true
  read-comments: true
  required: true
  sensitive: true
  type: true
```
Module owners MUST set a branch protection policy on their GitHub repositories for AVM modules against their default branch, typically `main`, to do the following:

- Require a pull request before merging
- Require approval of the most recent reviewable push
- Dismiss stale pull request approvals when new commits are pushed
- Require linear history
- Prevent force pushes
- Do not allow deletions
- Require CODEOWNERS review
- Do not allow bypassing the above settings
- The above settings MUST also be enforced for administrators
If you use the template repository as mentioned in the contribution guide, the above will automatically be set.
Module owners MUST use lower snake_casing for naming the following:
- Locals
- Variables
- Outputs
- Resources (symbolic names)
- Modules (symbolic names)
For example: `snake_casing_example` (every word in lowercase, with each word separated by an underscore `_`).
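As an illustration, the hypothetical names below all use lower snake_casing (the values and resource arguments are placeholders for the example):

```terraform
locals {
  storage_account_name = "examplestorage" # local in lower snake_case
}

variable "resource_group_name" { # variable in lower snake_case
  type = string
}

resource "azurerm_storage_account" "this_account" { # symbolic name in lower snake_case
  account_replication_type = "LRS"
  account_tier             = "Standard"
  location                 = "eastus"
  name                     = local.storage_account_name
  resource_group_name      = var.resource_group_name
}

output "storage_account_id" { # output in lower snake_case
  value = azurerm_storage_account.this_account.id
}
```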
Module owners MUST use the below tooling for unit/linting/static/security analysis tests. These are also used in the AVM Compliance Tests.
- Terraform (`terraform <validate/fmt/test>`)
- terrafmt
- Checkov
- tflint (with the azurerm ruleset)
- Go
  - Some tests are provided as part of the AVM Compliance Tests, but you are free to also use Go for your own tests.
When resources are defined in the same file, the resources being depended on SHOULD come first, followed by the resources that depend on them. Resources that have dependencies SHOULD be defined close to each other.
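A minimal sketch, assuming a hypothetical resource group and virtual network, with the depended-on resource defined first and the dependent resource right after it:

```terraform
# The resource being depended on comes first ...
resource "azurerm_resource_group" "example" {
  location = "eastus"
  name     = "example-rg"
}

# ... immediately followed by the resource that depends on it.
resource "azurerm_virtual_network" "example" {
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  name                = "example-vnet"
  resource_group_name = azurerm_resource_group.example.name
}
```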
We can use `count` and `for_each` to deploy multiple resources, but improper use of `count` can lead to an anti-pattern.

You can use `count` to create resources under certain conditions, for example:
resource "azurerm_network_security_group" "this" {
count = local.create_new_security_group ? 1 : 0
name = coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
resource_group_name = var.resource_group_name
location = local.location
tags = var.new_network_security_group_tags
}
Module owners MUST use `map(xxx)` or `set(xxx)` as a resource's `for_each` collection, and the map's keys or set's elements MUST be static literals.
Good example:
resource "azurerm_subnet" "pair" {
for_each = var.subnet_map // `map(string)`, when user call this module, it could be: `{ "subnet0": "subnet0" }`, or `{ "subnet0": azurerm_subnet.subnet0.name }`
name = "${each.value}"-pair
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
}
Bad example:
resource "azurerm_subnet" "pair" {
for_each = var.subnet_name_set // `set(string)`, when user use `toset([azurerm_subnet.subnet0.name])`, it would cause an error.
name = "${each.value}"-pair
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.1.0/24"]
}
There are 3 types of assignment statements in a `resource` or `data` block: argument, meta-argument and nested block. An argument assignment statement is a parameter followed by `=`:

```terraform
location = azurerm_resource_group.example.location
```

or:

```terraform
tags = {
  environment = "Production"
}
```
A nested block is an assignment statement of a parameter followed by a `{}` block:

```terraform
subnet {
  name           = "subnet1"
  address_prefix = "10.0.1.0/24"
}
```
Meta-arguments are assignment statements that can be declared in all `resource` or `data` blocks. They are:

- `count`
- `depends_on`
- `for_each`
- `lifecycle`
- `provider`
The order of declarations within `resource` or `data` blocks is as follows.

All the meta-arguments SHOULD be declared at the top of `resource` or `data` blocks in the following order:

- `provider`
- `count`
- `for_each`

Then followed by:

- required arguments
- optional arguments
- required nested blocks
- optional nested blocks

All ranked in alphabetical order.

These meta-arguments SHOULD be declared at the bottom of a `resource` block in the following order:

- `depends_on`
- `lifecycle`

The parameters of the `lifecycle` block SHOULD appear in the following order:

- `create_before_destroy`
- `ignore_changes`
- `prevent_destroy`

Parameters under `depends_on` and `ignore_changes` are ranked in alphabetical order.

Meta-arguments, arguments and nested blocks are separated by blank lines.
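A sketch of this ordering on a hypothetical subnet resource (the variable names and values are placeholders, and an `azurerm_virtual_network.example` resource is assumed to exist elsewhere):

```terraform
resource "azurerm_subnet" "example" {
  count = var.create_subnet ? 1 : 0

  # required arguments, ranked alphabetically
  address_prefixes     = ["10.0.1.0/24"]
  name                 = var.subnet_name
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.example.name

  # meta-arguments declared at the bottom
  depends_on = [azurerm_virtual_network.example]

  lifecycle {
    ignore_changes = [
      service_endpoints,
    ]
  }
}
```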
`dynamic` nested blocks are ranked by the name that comes after `dynamic`, for example:

```terraform
dynamic "linux_profile" {
  for_each = var.admin_username == null ? [] : ["linux_profile"]

  content {
    admin_username = var.admin_username

    ssh_key {
      key_data = replace(coalesce(var.public_ssh_key, tls_private_key.ssh[0].public_key_openssh), "\n", "")
    }
  }
}
```

This `dynamic` block will be ranked as a block named `linux_profile`.
Code within a nested block will also be ranked following the rules above.
PS: You can use the avmfix tool to reformat your code automatically.
The meta-arguments below SHOULD be declared at the top of a `module` block in the following order:

- `source`
- `version`
- `count`
- `for_each`

Blank lines will be used to separate them.

After them come the required arguments, then the optional arguments, all ranked in alphabetical order.

The meta-arguments below SHOULD be declared at the bottom of a `module` block in the following order:

- `depends_on`
- `providers`

Arguments and meta-arguments SHOULD be separated by blank lines.
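A sketch of a hypothetical module call following this ordering (the module source, variable names, and the `alternate` provider alias are placeholders):

```terraform
module "example" {
  source  = "Azure/xxx/azurerm"
  version = "1.2.3"

  count = var.deploy_example ? 1 : 0

  # required arguments, then optional arguments, ranked alphabetically
  location            = var.location
  resource_group_name = var.resource_group_name

  depends_on = [azurerm_resource_group.example]

  providers = {
    azurerm = azurerm.alternate
  }
}
```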
The `ignore_changes` attribute MUST NOT be enclosed in double quotes.

Good example:

```terraform
lifecycle {
  ignore_changes = [
    tags,
  ]
}
```

Bad example:

```terraform
lifecycle {
  ignore_changes = [
    "tags",
  ]
}
```
Sometimes we need to ensure that the resources created are compliant with some rules at a minimum extent, for example, a `subnet` has to be connected to at least one `network_security_group`. The user SHOULD either pass in a `security_group_id` and ask us to make a connection to an existing `security_group`, or ask us to create a new security group.
Intuitively, we will define it like this:
variable "security_group_id" {
type = string
}
resource "azurerm_network_security_group" "this" {
count = var.security_group_id == null ? 1 : 0
name = coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
resource_group_name = var.resource_group_name
location = local.location
tags = var.new_network_security_group_tags
}
The disadvantage of this approach is that if the user creates a security group directly in the root module and passes its `id` as a `variable` to the module, the expression that determines the value of `count` will contain an attribute from another resource, and the value of that attribute is "known after apply" at plan stage. Terraform core will not be able to produce an exact plan of the deployment during the "plan" stage.
You can’t do this:
resource "azurerm_network_security_group" "foo" {
name = "example-nsg"
resource_group_name = "example-rg"
location = "eastus"
}
module "bar" {
source = "xxxx"
...
security_group_id = azurerm_network_security_group.foo.id
}
For this kind of parameter, wrapping it with an `object` type is RECOMMENDED:
variable "security_group" {
type = object({
id = string
})
default = null
}
The advantage of doing so is that the value which is "known after apply" is encapsulated in an object, and it is easy to determine whether the `object` itself is `null` or not. Since the `id` of a `resource` cannot be `null`, this approach avoids the situation we faced in the first example, as in the following:
resource "azurerm_network_security_group" "foo" {
name = "example-nsg"
resource_group_name = "example-rg"
location = "eastus"
}
module "bar" {
source = "xxxx"
...
security_group = {
id = azurerm_network_security_group.foo.id
}
}
This technique SHOULD be used under this use case only.
An example from the community:
resource "azurerm_kubernetes_cluster" "main" {
...
dynamic "identity" {
for_each = var.client_id == "" || var.client_secret == "" ? [1] : []
content {
type = var.identity_type
user_assigned_identity_id = var.user_assigned_identity_id
}
}
...
}
Please refer to the coding style in the example. Nested blocks under conditions MUST be declared as:

```terraform
for_each = <condition> ? [<some_item>] : []
```
The following example shows how `"${var.subnet_name}-nsg"` SHOULD be used as the fallback when `var.new_network_security_group_name` is `null` or `""`.

Good examples:

```terraform
coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
```

```terraform
try(coalesce(var.new_network_security_group.name, "${var.subnet_name}-nsg"), "${var.subnet_name}-nsg")
```

Bad example:

```terraform
var.new_network_security_group_name == null ? "${var.subnet_name}-nsg" : var.new_network_security_group_name
```
Since Terraform 0.13, `count`, `for_each` and `depends_on` have been available for modules, which simplifies module development significantly. Module owners MUST NOT add variables like `enabled` or `module_depends_on` to control the entire module's operation. Boolean feature toggles are acceptable, however.
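A minimal sketch: instead of an `enabled` variable inside the module, the caller controls creation with `count` (the module source and variable name are placeholders):

```terraform
variable "deploy_example" {
  type     = bool
  default  = false
  nullable = false
}

module "example" {
  source  = "Azure/xxx/azurerm"
  version = "1.2.3"

  count = var.deploy_example ? 1 : 0
}
```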
Input variables SHOULD follow this order:
- All required fields, in alphabetical order
- All optional fields, in alphabetical order
A `variable` without a `default` value is a required field; otherwise, it is an optional one.
The naming of a `variable` SHOULD follow HashiCorp's naming rule.
A `variable` used as a feature switch SHOULD apply a positive statement: use `xxx_enabled` instead of `xxx_disabled` as the name of the `variable`. Avoid double negatives like `!xxx_disabled`.
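For example, a hypothetical feature switch named with a positive statement:

```terraform
variable "public_network_access_enabled" {
  type        = bool
  default     = true
  description = "Whether public network access is enabled. Defaults to `true`."
}
```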
The target audience of `description` is the module users.

For a newly created `variable` (e.g. a `variable` for switching a `dynamic` block on and off), its `description` SHOULD precisely describe the input parameter's purpose and the expected data type. The `description` SHOULD NOT contain any information intended for module developers; this kind of information can only exist in code comments.
For an `object` type `variable`, the `description` can be composed in HEREDOC format:
variable "kubernetes_cluster_key_management_service" {
type = object({
key_vault_key_id = string
key_vault_network_access = optional(string)
})
default = null
description = <<-EOT
- `key_vault_key_id` - (Required) Identifier of Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
- `key_vault_network_access` - (Optional) Network access of the key vault Network access of key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
EOT
}
`type` MUST be defined for every `variable`. The `type` SHOULD be as precise as possible; `any` MAY only be used with adequate justification.

- Use `bool` instead of `string` or `number` for `true`/`false` values
- Use `string` for text
- Use a concrete `object` instead of `map(any)`
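A sketch with hypothetical variables illustrating precise types:

```terraform
# `bool` rather than a "true"/"false" string
variable "purge_protection_enabled" {
  type    = bool
  default = false
}

# a concrete `object` rather than `map(any)`
variable "network_acls" {
  type = object({
    bypass         = string
    default_action = string
    ip_rules       = optional(list(string), [])
  })
  default = null
}
```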
If a `variable`'s `type` is `object` and it contains one or more fields that would be assigned to a `sensitive` argument, then the whole `variable` SHOULD be declared as `sensitive = true`; otherwise, you SHOULD extract the sensitive fields into separate variable blocks with `sensitive = true`.
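A sketch of a hypothetical credential input: because one field would be assigned to a sensitive argument, the whole `variable` is marked sensitive:

```terraform
variable "administrator_credential" {
  type = object({
    username = string
    password = string
  })
  sensitive   = true # the object contains `password`, which would feed a sensitive argument
  description = "The administrator username and password."
}
```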
`nullable` SHOULD be set to `false` for collection values (e.g. sets, maps, lists) when using them in loops. However, for scalar values like `string` and `number`, a `null` value MAY have a semantic meaning, and as such these values are allowed.

`nullable = true` MUST be avoided.

`sensitive = false` MUST be avoided.

A default value MUST NOT be set for a sensitive input - e.g., a default password.
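A sketch combining these rules on hypothetical inputs: a non-nullable collection used in a loop, and a sensitive input with no default value:

```terraform
variable "subnet_ids" {
  type        = set(string)
  default     = []
  nullable    = false # safe to iterate over without a null check
  description = "A set of subnet resource IDs."
}

variable "administrator_password" {
  type        = string
  sensitive   = true # no default value is set for this sensitive input
  description = "The administrator password."
}
```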
Sometimes we find that the name of a `variable` is no longer suitable, or that a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the `variable` to an independent `deprecated_variables.tf` file, then redefine the new parameter in `variable.tf` and make sure it's compatible everywhere else.

A deprecated `variable` MUST be annotated as `DEPRECATED` at the beginning of its `description`; at the same time, the replacement's name SHOULD be declared. E.g.,
variable "enable_network_security_group" {
type = string
default = null
description = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."
}
A cleanup of `deprecated_variables.tf` SHOULD be performed during a major version release.
The `terraform.tf` file MUST only contain one `terraform` block.

The first line of the `terraform` block MUST define a `required_version` property for the Terraform CLI.

The `required_version` property MUST include a constraint on the minimum version of the Terraform CLI. Previous releases of the Terraform CLI can have unexpected behavior.

The `required_version` property MUST include a constraint on the maximum major version of the Terraform CLI. Major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.

The `required_version` property constraint SHOULD use the `~> #.#` or the `>= #.#.#, < #.#.#` format.

Note: You can read more about Terraform version constraints in the documentation.
Example `terraform.tf` file:

```terraform
terraform {
  required_version = "~> 1.6"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.11"
    }
  }
}
```
The `terraform` block in `terraform.tf` MUST contain the `required_providers` block.

Each provider used directly in the module MUST be specified with the `source` and `version` properties. Providers in the `required_providers` block SHOULD be sorted in alphabetical order.

Do not add providers to the `required_providers` block that are not directly required by this module. If submodules are used, then each submodule SHOULD have its own `versions.tf` file.

The `source` property MUST be in the format of `namespace/name`. If this is not explicitly specified, it can cause failure.

The `version` property MUST include a constraint on the minimum version of the provider. Older provider versions may not work as expected.

The `version` property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.

The `version` property constraint SHOULD use the `~> #.#` or the `>= #.#.#, < #.#.#` format.

Note: You can read more about Terraform version constraints in the documentation.
Good examples:
```terraform
terraform {
  required_version = "~> 1.6"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}
```

```terraform
terraform {
  required_version = ">= 1.6.6, < 2.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11.1, < 4.0.0"
    }
  }
}
```

```terraform
terraform {
  required_version = ">= 1.6, < 2.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11, < 4.0"
    }
  }
}
```
Acceptable example (but not recommended):
```terraform
terraform {
  required_version = "1.6"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.11"
    }
  }
}
```
Bad example:
```terraform
terraform {
  required_version = ">= 1.6"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11"
    }
  }
}
```
By the rules, a `provider` block MUST NOT be declared in the module code. The only exception is when the module indeed needs different instances of the same kind of `provider` (e.g. manipulating resources across different `location`s or accounts), in which case you MUST declare `configuration_aliases` in `terraform.required_providers`. See details in this document.

A `provider` block declared in the module MUST only be used to differentiate instances used in `resource` and `data` blocks. Declaration of fields other than `alias` in a `provider` block is strictly forbidden, as it could leave module users unable to utilize `count`, `for_each` or `depends_on`. Configurations of the `provider` instance SHOULD be passed in by the module users.
Good examples:
In verified module:
```terraform
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 3.0"
      configuration_aliases = [azurerm.alternate]
    }
  }
}
```
In the root module where we call this verified module:
provider "azurerm" {
features {}
}
provider "azurerm" {
alias = "alternate"
features {}
}
module "foo" {
source = "xxx"
providers = {
azurerm = azurerm
azurerm.alternate = azurerm.alternate
}
}
Bad example:
In verified module:
provider "azurerm" {
# Configuration options
features {}
}
Sometimes we notice that the name of a certain `output` is no longer appropriate. However, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent `deprecated_outputs.tf` file; then a new output is defined in `output.tf` and made compatible everywhere else in the module.

A cleanup of `deprecated_outputs.tf` and other compatibility-related logic SHOULD be performed during a major version upgrade.
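A sketch, assuming a hypothetical network security group output that was renamed (the file names follow the convention above):

```terraform
# deprecated_outputs.tf - kept for forward compatibility within the major version
output "nsg_id" {
  description = "DEPRECATED, use `network_security_group_id` instead."
  value       = azurerm_network_security_group.this.id
}

# output.tf - the replacement output
output "network_security_group_id" {
  description = "The resource ID of the network security group."
  value       = azurerm_network_security_group.this.id
}
```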
In the `locals.tf` file, we can declare multiple `locals` blocks, but only `locals` blocks are allowed.

You MAY declare `locals` blocks next to a `resource` block or `data` block in some advanced scenarios, like making a fake module to execute some lightweight tests aimed at the expressions.
Precise local types SHOULD be used.
Good example:

```terraform
{
  name = "John"
  age  = 52
}
```

Bad example:

```terraform
{
  name = "John"
  age  = "52" # age should be a number
}
```
A toggle variable MUST be used to allow users to avoid the creation of a new `resource` block by default if it is added in a minor or patch version.

E.g., our previous release was `v1.2.1` and the next release would be `v1.3.0`; now we'd like to submit a pull request which contains such a new `resource`:
resource "azurerm_route_table" "this" {
location = local.location
name = coalesce(var.new_route_table_name, "${var.subnet_name}-rt")
resource_group_name = var.resource_group_name
}
A user who has just upgraded the module's version would be surprised to see a new resource created in a newly generated plan file.

A better approach is adding a feature toggle that is turned off by default:
variable "create_route_table" {
type = bool
default = false
nullable = false
}
resource "azurerm_route_table" "this" {
count = var.create_route_table ? 1 : 0
location = local.location
name = coalesce(var.new_route_table_name, "${var.subnet_name}-rt")
resource_group_name = var.resource_group_name
}
Potential breaking (surprise) changes introduced by a `resource` block:

- Adding a new `resource` without `count` or `for_each` for conditional creation, or creating it by default
- Adding a new argument assignment with a value other than the default value provided by the provider's schema
- Adding a new nested block without making it `dynamic` or omitting it by default
- Renaming a `resource` block without one or more corresponding `moved` blocks
- Changing a `resource`'s `count` to `for_each`, or vice versa
The Terraform `moved` block could be your cure.
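For example, a minimal sketch of a `moved` block covering a hypothetical rename of a route table resource:

```terraform
moved {
  from = azurerm_route_table.this
  to   = azurerm_route_table.route_table
}
```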
Potential breaking changes introduced by `variable` and `output` blocks:

- Deleting (renaming) a `variable`
- Changing the `type` in a `variable` block
- Changing the `default` value in a `variable` block
- Changing a `variable`'s `nullable` to `false`
- Changing a `variable`'s `sensitive` from `false` to `true`
- Adding a new `variable` without a `default`
- Deleting an `output`
- Changing an `output`'s `value`
- Changing an `output`'s `sensitive` value

These changes do not necessarily trigger breaking changes, but they are very likely to, so they MUST be reviewed with caution.
`newres` is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating `variables.tf` and `main.tf` files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use `newres` when they are trying to add a new `resource` block, attribute, or nested block. They MAY generate the whole block along with the corresponding `variable` blocks in an empty folder, then copy and paste the parts they need with essential refactoring.