ID: TFNFR6 - Category: Code Style - Resource & Data Order
When defining resources in the same file, resources that others depend on SHOULD come first, followed by the resources that depend on them.
Resources with dependencies between them SHOULD be defined close to each other.
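A minimal sketch of this ordering (resource names and addresses are illustrative):

```terraform
# The virtual network is depended on, so it comes first.
resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "eastus"
  resource_group_name = "example-rg"
}

# The subnet depends on the virtual network, so it comes after it,
# and close to it.
resource "azurerm_subnet" "example" {
  name                 = "example-subnet"
  resource_group_name  = "example-rg"
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}
```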
ID: TFNFR7 - Category: Code Style - count & for_each Use
We can use `count` and `for_each` to deploy multiple resources, but improper use of `count` can lead to an anti-pattern. You can use `count` to create a kind of resource conditionally, for example:
resource "azurerm_network_security_group" "this" {
  count = local.create_new_security_group ? 1 : 0

  name                = coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
  resource_group_name = var.resource_group_name
  location            = local.location
  tags                = var.new_network_security_group_tags
}
The module's owners MUST use `map(xxx)` or `set(xxx)` as the resource's `for_each` collection, and the map's keys or set's elements MUST be static literals.
Good example:
resource "azurerm_subnet" "pair" {
  for_each = var.subnet_map // `map(string)`; when the user calls this module it could be `{ "subnet0": "subnet0" }` or `{ "subnet0": azurerm_subnet.subnet0.name }`

  name                 = "${each.value}-pair"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}
Bad example:
resource "azurerm_subnet" "pair" {
  for_each = var.subnet_name_set // `set(string)`; when the user passes `toset([azurerm_subnet.subnet0.name])` it would cause an error, because the element is not known at plan time

  name                 = "${each.value}-pair"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}
ID: TFNFR8 - Category: Code Style - Resource & Data Block Orders
There are 3 types of assignment statements in a `resource` or `data` block: arguments, meta-arguments and nested blocks. An argument assignment statement is a parameter name followed by `=`:
location = azurerm_resource_group.example.location
or:
tags = {
  environment = "Production"
}
A nested block is an assignment statement of a parameter followed by a `{}` block:
subnet {
  name           = "subnet1"
  address_prefix = "10.0.1.0/24"
}
Meta-arguments are assignment statements that can be declared in all `resource` or `data` blocks. They are:

- `count`
- `depends_on`
- `for_each`
- `lifecycle`
- `provider`
The order of declarations within `resource` or `data` blocks is as follows.

All meta-arguments SHOULD be declared at the top of the `resource` or `data` block, in the following order:

- `provider`
- `count`
- `for_each`

Then come:

- required arguments
- optional arguments
- required nested blocks
- optional nested blocks

Each group is ranked in alphabetical order.
These meta-arguments SHOULD be declared at the bottom of a `resource` block, in the following order:

- `depends_on`
- `lifecycle`

The parameters of a `lifecycle` block SHOULD appear in the following order:

- `create_before_destroy`
- `ignore_changes`
- `prevent_destroy`

Elements in the `depends_on` and `ignore_changes` lists are ranked in alphabetical order.
Meta-arguments, arguments and nested blocks are separated by blank lines.
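Putting these ordering rules together, a `resource` block might be laid out as in this sketch (resource and variable names are illustrative):

```terraform
resource "azurerm_subnet" "example" {
  # Meta-arguments first (provider, count, for_each).
  count = var.subnet_enabled ? 1 : 0

  # Required arguments, in alphabetical order.
  address_prefixes     = ["10.0.1.0/24"]
  name                 = "example-subnet"
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.virtual_network_name

  # Optional arguments, in alphabetical order.
  service_endpoints = ["Microsoft.Storage"]

  # depends_on and lifecycle at the bottom, in that order.
  depends_on = [azurerm_virtual_network.example]

  lifecycle {
    create_before_destroy = false
    ignore_changes        = [service_endpoints]
    prevent_destroy       = false
  }
}
```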
`dynamic` nested blocks are ranked by the name that follows `dynamic`, for example:
dynamic "linux_profile" {
  for_each = var.admin_username == null ? [] : ["linux_profile"]

  content {
    admin_username = var.admin_username

    ssh_key {
      key_data = replace(coalesce(var.public_ssh_key, tls_private_key.ssh[0].public_key_openssh), "\n", "")
    }
  }
}
This `dynamic` block will be ranked as a block named `linux_profile`.
Code within a nested block will also be ranked following the rules above.
PS: You can use the `avmfix` tool to reformat your code automatically.
ID: TFNFR9 - Category: Code Style - Module Block Order
The meta-arguments below SHOULD be declared at the top of a `module` block, in the following order:

- `source`
- `version`
- `count`
- `for_each`

Blank lines are used to separate them. After them come required arguments, then optional arguments, each group ranked in alphabetical order.
The meta-arguments below SHOULD be declared at the bottom of a `module` block, in the following order:

- `depends_on`
- `providers`
Arguments and meta-arguments SHOULD be separated by blank lines.
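Applied to a `module` block, the ordering might look like this sketch (the source address and inputs are illustrative):

```terraform
module "subnet" {
  source  = "registry.example.com/org/subnet/azurerm" # illustrative source
  version = "~> 1.0"

  count = var.subnet_enabled ? 1 : 0

  # Required arguments first, then optional ones, each in alphabetical order.
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.virtual_network_name

  depends_on = [azurerm_resource_group.example]

  providers = {
    azurerm = azurerm.alternate
  }
}
```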
ID: TFNFR10 - Category: Code Style - No Double Quotes in ignore_changes
Attributes listed in `ignore_changes` MUST NOT be enclosed in double quotes.
Good example:
lifecycle {
  ignore_changes = [
    tags,
  ]
}
Bad example:
lifecycle {
  ignore_changes = [
    "tags",
  ]
}
ID: TFNFR11 - Category: Code Style - Null Comparison Toggle
Sometimes we need to ensure that the resources created are at least minimally compliant with certain rules, for example a `subnet` has to be connected to at least one `network_security_group`. The user SHOULD either pass in a `security_group_id` and ask us to connect it to an existing `security_group`, or ask us to create a new security group.
Intuitively, we would define it like this:
variable "security_group_id" {
  type = string
}
resource "azurerm_network_security_group" "this" {
  count = var.security_group_id == null ? 1 : 0

  name                = coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
  resource_group_name = var.resource_group_name
  location            = local.location
  tags                = var.new_network_security_group_tags
}
The disadvantage of this approach is that if the user creates a security group directly in the root module and passes its `id` in as a `variable` of this module, the expression that determines the value of `count` will contain an attribute from another resource, and the value of that attribute is "known after apply" at plan stage. Terraform core will then be unable to produce an exact deployment plan during the "plan" stage.
You can't do this:
resource "azurerm_network_security_group" "foo" {
  name                = "example-nsg"
  resource_group_name = "example-rg"
  location            = "eastus"
}

module "bar" {
  source = "xxxx"
  ...
  security_group_id = azurerm_network_security_group.foo.id
}
For this kind of parameter, wrapping the value with an `object` type is RECOMMENDED:

variable "security_group" {
  type = object({
    id = string
  })
  default = null
}
The advantage of doing so is that the "known after apply" value is encapsulated in an object, and whether the object itself is `null` can easily be determined. Since the `id` of a resource cannot be `null`, this approach avoids the situation we faced in the first example, like the following:
resource "azurerm_network_security_group" "foo" {
  name                = "example-nsg"
  resource_group_name = "example-rg"
  location            = "eastus"
}

module "bar" {
  source = "xxxx"
  ...
  security_group = {
    id = azurerm_network_security_group.foo.id
  }
}
This technique SHOULD be used under this use case only.
ID: TFNFR12 - Category: Code Style - Dynamic for Optional Nested Objects
An example from the community:
resource "azurerm_kubernetes_cluster" "main" {
  ...
  dynamic "identity" {
    for_each = var.client_id == "" || var.client_secret == "" ? [1] : []

    content {
      type                      = var.identity_type
      user_assigned_identity_id = var.user_assigned_identity_id
    }
  }
  ...
}
Please refer to the coding style in the example. Nested blocks under a condition MUST be declared as:
for_each = <condition> ? [<some_item>] : []
ID: TFNFR13 - Category: Code Style - Default Values with coalesce/try
The following example shows how `"${var.subnet_name}-nsg"` SHOULD be used as the fallback value when `var.new_network_security_group_name` is `null` or `""`.
Good examples:
coalesce(var.new_network_security_group_name, "${var.subnet_name}-nsg")
try(coalesce(var.new_network_security_group.name, "${var.subnet_name}-nsg"), "${var.subnet_name}-nsg")
Bad examples:
var.new_network_security_group_name == null ? "${var.subnet_name}-nsg" : var.new_network_security_group_name
ID: TFNFR16 - Category: Code Style - Variable Naming Rules
The naming of a `variable` SHOULD follow HashiCorp's naming rules.
A `variable` used as a feature switch SHOULD use a positive statement: use `xxx_enabled` instead of `xxx_disabled`, and avoid double negatives like `!xxx_disabled`.
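For instance, a positively phrased feature switch might look like this sketch:

```terraform
variable "route_table_enabled" {
  type        = bool
  default     = false
  nullable    = false
  description = "Whether to create a route table for the subnet."
}
```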
ID: TFNFR17 - Category: Code Style - Variables with Descriptions
The target audience of `description` is the module users.
For a newly created `variable` (e.g. a `variable` for switching a `dynamic` block on and off), its `description` SHOULD precisely describe the input parameter's purpose and the expected data type. The `description` SHOULD NOT contain any information meant for module developers; that kind of information belongs in code comments only.
For an `object` type `variable`, the `description` can be composed in HEREDOC format:
variable "kubernetes_cluster_key_management_service" {
  type = object({
    key_vault_key_id         = string
    key_vault_network_access = optional(string)
  })
  default     = null
  description = <<-EOT
    - `key_vault_key_id` - (Required) Identifier of the Azure Key Vault key. See [key identifier format](https://learn.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name) for more details. When the Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When `enabled` is `false`, leave the field empty.
    - `key_vault_network_access` - (Optional) Network access of the key vault. The possible values are `Public` and `Private`. `Public` means the key vault allows public access from all networks. `Private` means the key vault disables public access and enables private link. Defaults to `Public`.
  EOT
}
ID: TFNFR18 - Category: Code Style - Variables with Types
`type` MUST be defined for every `variable`. `type` SHOULD be as precise as possible; `any` MAY only be used with adequate justification.
- Use `bool` instead of `string` or `number` for true/false values
- Use `string` for text
- Use a concrete `object` type instead of `map(any)`
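A sketch of precise typing, using `bool` for a switch and a concrete `object` type instead of `map(any)` (names are illustrative):

```terraform
variable "public_network_access_enabled" {
  type    = bool # not a `string` holding "true"/"false"
  default = false
}

variable "network_rules" {
  type = object({ # a concrete object type instead of `map(any)`
    default_action = string
    ip_rules       = optional(list(string), [])
  })
  default = null
}
```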
ID: TFNFR19 - Category: Code Style - Sensitive Data Variables
If a `variable`'s `type` is `object` and it contains one or more fields that would be assigned to a `sensitive` argument, then this whole `variable` SHOULD be declared as `sensitive = true`; otherwise you SHOULD extract the sensitive fields into separate `variable` blocks with `sensitive = true`.
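Both options might look like this sketch (names are illustrative):

```terraform
# Option 1: the whole object is sensitive because `password` would be
# assigned to a sensitive argument.
variable "administrator" {
  type = object({
    username = string
    password = string
  })
  sensitive = true
}

# Option 2: extract the sensitive field into its own variable.
variable "administrator_username" {
  type = string
}

variable "administrator_password" {
  type      = string
  sensitive = true
}
```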
ID: TFNFR20 - Category: Code Style - Non-Nullable Defaults for collection values
`nullable` SHOULD be set to `false` for collection values (e.g. sets, maps, lists) when they are used in loops. However, for scalar values like `string` and `number`, a `null` value MAY have a semantic meaning, so nullable scalar values are allowed.
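A sketch contrasting the two cases (names are illustrative):

```terraform
# A collection used in loops: `nullable = false` means `for_each`
# and `for` expressions never have to handle `null`.
variable "subnet_ids" {
  type     = list(string)
  default  = []
  nullable = false
}

# A scalar where `null` carries meaning: here it means
# "do not send diagnostics anywhere".
variable "log_analytics_workspace_id" {
  type    = string
  default = null
}
```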
ID: TFNFR21 - Category: Code Style - Discourage Nullability by Default
`nullable = true` MUST be avoided.
ID: TFNFR22 - Category: Code Style - Avoid sensitive = false
`sensitive = false` MUST be avoided.
ID: TFNFR23 - Category: Code Style - Sensitive Default Value Conditions
A default value MUST NOT be set for a sensitive input - e.g., a default password.
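For example (the variable name is illustrative):

```terraform
# Bad: a sensitive input MUST NOT ship with a default value.
# variable "admin_password" {
#   type      = string
#   sensitive = true
#   default   = "ChangeMe123!"
# }

# Good: the caller must supply the secret explicitly.
variable "admin_password" {
  type      = string
  sensitive = true
}
```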
ID: TFNFR24 - Category: Code Style - Handling Deprecated Variables
Sometimes we find that the name of a `variable` is no longer suitable, or that a change SHOULD be made to its data type. We want to ensure forward compatibility within a major version, so direct changes are strictly forbidden. The right way to do this is to move the `variable` to an independent `deprecated_variables.tf` file, then redefine the new parameter in `variables.tf` and make sure it's compatible everywhere else.
A deprecated `variable` MUST be annotated as `DEPRECATED` at the beginning of its `description`, and the replacement's name SHOULD be declared at the same time. E.g.:
variable "enable_network_security_group" {
  type        = string
  default     = null
  description = "DEPRECATED, use `network_security_group_enabled` instead; Whether to generate a network security group and assign it to the subnet. Changing this forces a new resource to be created."
}
A cleanup of `deprecated_variables.tf` SHOULD be performed during a major version release.
ID: TFNFR25 - Category: Code Style - Verified Modules Requirements
The `terraform.tf` file MUST only contain one `terraform` block.
The first line of the `terraform` block MUST define a `required_version` property for the Terraform CLI.
The `required_version` property MUST include a constraint on the minimum version of the Terraform CLI, as previous releases of the Terraform CLI can have unexpected behavior.
The `required_version` property MUST include a constraint on the maximum major version of the Terraform CLI, as major version releases of the Terraform CLI can introduce breaking changes and MUST be tested.
The `required_version` property constraint SHOULD use the `~> #.#` or the `>= #.#.#, < #.#.#` format.
Note: You can read more about Terraform version constraints in the documentation.
Example `terraform.tf` file:
terraform {
  required_version = "~> 1.6"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.11"
    }
  }
}
ID: TFNFR26 - Category: Code Style - Providers in required_providers
The `terraform` block in `terraform.tf` MUST contain the `required_providers` block.
Each provider used directly in the module MUST be specified with the `source` and `version` properties. Providers in the `required_providers` block SHOULD be sorted in alphabetical order.
Do not add providers to the `required_providers` block that are not directly required by this module. If submodules are used, then each submodule SHOULD have its own `versions.tf` file.
The `source` property MUST be in the `namespace/name` format. If this is not explicitly specified, it can cause failure.
The `version` property MUST include a constraint on the minimum version of the provider, as older provider versions may not work as expected.
The `version` property MUST include a constraint on the maximum major version. A provider major version release may introduce breaking changes, so updates to the major version constraint for a provider MUST be tested.
The `version` property constraint SHOULD use the `~> #.#` or the `>= #.#.#, < #.#.#` format.
Note: You can read more about Terraform version constraints in the documentation.
Good examples:
terraform {
  required_version = "~> 1.6"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

terraform {
  required_version = ">= 1.6.6, < 2.0.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11.1, < 4.0.0"
    }
  }
}

terraform {
  required_version = ">= 1.6, < 2.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11, < 4.0"
    }
  }
}
Acceptable example (but not recommended):
terraform {
  required_version = "1.6"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.11"
    }
  }
}
Bad example:
terraform {
  required_version = ">= 1.6"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11"
    }
  }
}
ID: TFNFR27 - Category: Code Style - Provider Declarations in Modules
As a rule, `provider` blocks MUST NOT be declared in module code. The only exception is when the module genuinely needs different instances of the same kind of `provider` (e.g. manipulating resources across different `location`s or accounts); in that case you MUST declare `configuration_aliases` in `terraform.required_providers`. See details in this document.
A `provider` block declared in the module MUST only be used to differentiate instances used in `resource` and `data` blocks. Declaring fields other than `alias` in a `provider` block is strictly forbidden; it could leave module users unable to use `count`, `for_each` or `depends_on`. Configuration of the `provider` instances SHOULD be passed in by the module users.
Good examples:
In verified module:
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = "~> 3.0"
      configuration_aliases = [azurerm.alternate]
    }
  }
}
In the root module where we call this verified module:
provider "azurerm" {
  features {}
}

provider "azurerm" {
  alias = "alternate"

  features {}
}

module "foo" {
  source = "xxx"

  providers = {
    azurerm           = azurerm
    azurerm.alternate = azurerm.alternate
  }
}
Bad example:
In verified module:
provider "azurerm" {
  # Configuration options
  features {}
}
ID: TFNFR30 - Category: Code Style - Handling Deprecated Outputs
Sometimes we notice that the name of a certain `output` is no longer appropriate; however, since we have to ensure forward compatibility within the same major version, its name MUST NOT be changed directly. It MUST be moved to an independent `deprecated_outputs.tf` file, then a new output MUST be defined in `outputs.tf` and made compatible everywhere else in the module.
A cleanup of `deprecated_outputs.tf` and other compatibility-related logic SHOULD be performed during a major version upgrade.
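A sketch of the split (output names and the referenced resource are illustrative):

```terraform
# deprecated_outputs.tf — kept for compatibility within the current major version.
output "resource" {
  description = "DEPRECATED, use `resource_id` instead."
  value       = azurerm_network_security_group.this.id
}

# outputs.tf — the replacement output.
output "resource_id" {
  description = "The ID of the network security group."
  value       = azurerm_network_security_group.this.id
}
```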
ID: TFNFR31 - Category: Code Style - locals.tf for Locals Only
In the `locals.tf` file we may declare multiple `locals` blocks, but only `locals` blocks are allowed there.
You MAY declare `locals` blocks next to a `resource` block or `data` block in some advanced scenarios, such as building a fake module to run light-weight tests aimed at the expressions.
ID: TFNFR33 - Category: Code Style - Precise Local Types
Precise local types SHOULD be used.
Good example:
{
  name = "John"
  age  = 52
}
Bad example:
{
  name = "John"
  age  = "52" # age should be a number
}
ID: TFNFR34 - Category: Code Style - Using Feature Toggles
A toggle variable MUST be used to let users avoid the creation of a new `resource` block by default when it is added in a minor or patch version.
E.g., our previous release was `v1.2.1` and the next release would be `v1.3.0`; now we'd like to submit a pull request which contains such a new `resource`:
resource "azurerm_route_table" "this" {
  location            = local.location
  name                = coalesce(var.new_route_table_name, "${var.subnet_name}-rt")
  resource_group_name = var.resource_group_name
}
A user who has just upgraded the module's version would be surprised to see a new resource to be created in a newly generated plan file. A better approach is adding a feature toggle that is turned off by default:
variable "create_route_table" {
  type     = bool
  default  = false
  nullable = false
}

resource "azurerm_route_table" "this" {
  count = var.create_route_table ? 1 : 0

  location            = local.location
  name                = coalesce(var.new_route_table_name, "${var.subnet_name}-rt")
  resource_group_name = var.resource_group_name
}
ID: TFNFR35 - Category: Code Style - Reviewing Potential Breaking Changes
Potential breaking (surprise) changes introduced by `resource` blocks:

- Adding a new `resource` without using `count` or `for_each` for conditional creation, or creating it by default
- Adding a new argument assignment with a value other than the default value provided by the provider's schema
- Adding a new nested block without making it `dynamic` or omitting it by default
- Renaming a `resource` block without one or more corresponding `moved` blocks
- Changing a `resource`'s `count` to `for_each`, or vice versa

A Terraform `moved` block could be your cure.
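For example, renaming a `resource` block can be made non-breaking with a `moved` block (names are illustrative):

```terraform
# The block was previously named `azurerm_route_table.rt`.
resource "azurerm_route_table" "this" {
  location            = local.location
  name                = "${var.subnet_name}-rt"
  resource_group_name = var.resource_group_name
}

# Tells Terraform to move the existing state entry instead of
# destroying and recreating the resource.
moved {
  from = azurerm_route_table.rt
  to   = azurerm_route_table.this
}
```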
Potential breaking changes introduced by `variable` and `output` blocks:

- Deleting (renaming) a `variable`
- Changing the `type` in a `variable` block
- Changing the `default` value in a `variable` block
- Changing a `variable`'s `nullable` to `false`
- Changing a `variable`'s `sensitive` from `false` to `true`
- Adding a new `variable` without a `default`
- Deleting an `output`
- Changing an `output`'s `value`
- Changing an `output`'s `sensitive` value

These changes do not necessarily cause breakage, but they are very likely to, so they MUST be reviewed with caution.
ID: TFNFR36 - Category: Code Style - Setting prevent_deletion_if_contains_resources
Since Terraform AzureRM provider 3.0, the default value of `prevent_deletion_if_contains_resources` in the `provider` block is `true`. This leads to unstable tests, because the test subscription has some policies applied that add extra resources during the run, which can cause failures while destroying resource groups.
Since we cannot guarantee that our testing environment won't have Azure Policy remediation tasks applied in the future, for a robust testing environment `prevent_deletion_if_contains_resources` SHOULD be explicitly set to `false`.
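In test fixtures (not in the module itself — see TFNFR27), this can be set via the provider's `features` block:

```terraform
provider "azurerm" {
  features {
    resource_group {
      # Allow test resource groups to be destroyed even when policy
      # remediation has injected extra resources into them.
      prevent_deletion_if_contains_resources = false
    }
  }
}
```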
`newres` is a command-line tool that generates Terraform configuration files for a specified resource type. It automates the process of creating `variables.tf` and `main.tf` files, making it easier to get started with Terraform and reducing the time spent on manual configuration.
Module owners MAY use `newres` when they're adding a new `resource` block, attribute, or nested block. They MAY generate the whole block along with the corresponding `variable` blocks in an empty folder, then copy-paste the parts they need with essential refactoring.