Terraform Secrets: What You Need to Know

While managing your infrastructure with Terraform, the state files that track your infrastructure configuration are stored unencrypted by default. State files often contain sensitive information, such as database usernames and passwords, and may also include SSH keys for accessing instances. Let's say your Terraform state file is stored in an S3 bucket used as a remote backend, and that bucket hasn't been properly secured with access controls for the separate roles accessing the configuration. Terraform state files typically do not store AWS access keys explicitly included in your configuration, but they can contain other sensitive details, such as database passwords, if those are defined elsewhere in the configuration. If someone gains inappropriate access to this bucket due to improper permissions, such as a policy allowing public access or an AdministratorAccess policy attached to an entity that only needs minimal S3 permissions, they could misuse it to modify your infrastructure or obtain database credentials.
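To reduce that risk, the state bucket itself can be locked down. A minimal sketch using the aws_s3_bucket_public_access_block resource (the bucket name is an assumption for illustration):

```hcl
resource "aws_s3_bucket_public_access_block" "state_bucket" {
  bucket                  = "my-terraform-state"  # assumed bucket name
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Combined with a restrictive bucket policy, this prevents the state file from ever being readable through public ACLs or policies.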

Risks of Improper Secret Management

Effective management of sensitive information is crucial for maintaining security. Failing to secure secrets can leave credentials hardcoded in state files, exposing databases and leaving your system vulnerable to cyberattacks.

To address these vulnerabilities, it's important to understand the risks of improper secret management. Let's get a better understanding of these risks and their impacts.

Improper Access Control

Access control to your code and resources is essential for securing your infrastructure. Without proper restrictions, sensitive information can be exposed, creating vulnerabilities that can be misused. For example, if credentials such as database usernames and passwords are stored unencrypted in Terraform state files, and these files are accidentally uploaded to a remote backend with public access, they can be easily accessed and misused. Let's say we create a Terraform configuration with the credentials of our aws_db_instance resource hardcoded.

provider "aws" {
  access_key = <aws_access_key>
  secret_key = <aws_secret_key>
  region     = "us-west-2"
}

resource "aws_db_instance" "mydb" {
  allocated_storage    = 10
  db_name              = "app_db"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  username             = "admin_user"
  password             = "SecurePass123"
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}

The username and password values are hardcoded for demonstration only.

After applying the Terraform configurations, locate the section related to the aws_db_instance resource. In your Terraform state file (terraform.tfstate), you will find the hardcoded username and password stored in the attributes section of the aws_db_instance.mydb resource.

"resources": [
  {
    "type": "aws_db_instance",
    "name": "mydb",
    "instances": [
      {
        "attributes": {
          "monitoring_role_arn": "",
          "multi_az": false,
          "nchar_character_set_name": "",
          "network_type": "IPV4",
          "option_group_name": "default:mysql-5-7",
          "parameter_group_name": "default.mysql5.7",
          "password": "SecurePass123",
          "performance_insights_enabled": false,
          "performance_insights_kms_key_id": "",
          "performance_insights_retention_period": 0,
          "port": 3306,
          "publicly_accessible": false,
          "replica_mode": "",
          "replicas": [],
          "replicate_source_db": "",
          "resource_id": "db-N70TPPEB2672EUDC4TCSH7XK14",
          "restore_to_point_in_time": [],
          "s3_import": []
        }
      }
    ]
  }
]

Risks of Exposed Security Group IDs

Terraform state files can add security risks if they are not managed properly. These files store metadata about your infrastructure, including details such as database endpoints and login credentials of database instances.

Beyond the risk of database deletion using leaked credentials, improper handling of these details could result in misconfigured security groups, allowing external traffic that would typically be restricted.

For example, we deploy an application on AWS that interacts with a MySQL database running on RDS. We set up security groups so that only designated EC2 instances in a private subnet can access the database. The EC2 instances' security group allows outgoing traffic to the RDS database on port 3306, while the database's security group permits inbound traffic only from the EC2 instances' IP addresses.

{
  "resources": [
    {
      "mode": "managed",
      "type": "aws_security_group",
      "name": "web_sg",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "schema_version": 1,
          "attributes": {
            "arn": "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0a123b456cdef7890",
            "description": "Security group for web servers",
            "egress": [
              {
                "cidr_blocks": ["0.0.0.0/0"],
                "description": null,
                "from_port": 0,
                "ipv6_cidr_blocks": [],
                "prefix_list_ids": [],
                "protocol": "-1",
                "security_groups": [],
                "self": false,
                "to_port": 0
              }
            ],
            "id": "sg-0a123b456cdef7890",
            "ingress": [
              {
                "cidr_blocks": ["0.0.0.0/0"],
                "description": "Allow HTTP",
                "from_port": 80,
                "ipv6_cidr_blocks": [],
                "prefix_list_ids": [],
                "protocol": "tcp",
                "security_groups": [],
                "self": false,
                "to_port": 80
              }
            ],
            "name": "web-sg",
            "owner_id": "123456789012",
            "vpc_id": "vpc-1a2b3c4d"
          },
          "sensitive_attributes": [],
          "private": "eyJzZW5zaXRpdmUiOiJkYXRhIn0=",
          "dependencies": []
        }
      ]
    }
  ]
}

In the AWS Security Group resource aws_security_group, the field id represents the unique identifier of the security group, such as sg-0a123b456cdef7890. Additionally, the arn field provides the Amazon Resource Name for the security group, for example, "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0a123b456cdef7890". These fields uniquely identify and describe the security group within your AWS environment.
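In HCL, the security group pairing described above can be sketched like this (resource names, the VPC variable, and the database subnet CIDR are assumptions for illustration):

```hcl
resource "aws_security_group" "app_sg" {
  name   = "app-sg"
  vpc_id = var.vpc_id

  egress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [var.db_subnet_cidr]  # outbound only to the database subnet
  }
}

resource "aws_security_group" "db_sg" {
  name   = "db-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_sg.id]  # inbound only from app instances
  }
}
```

Referencing the application security group directly in the ingress rule, rather than an IP range, keeps the database reachable only from instances that carry that group.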

Compliance Violations

Maintaining strict control over access credentials and infrastructure configurations is crucial for meeting regulatory standards for your organization. For example, the General Data Protection Regulation, or GDPR, requires organizations to protect personal data, such as names, addresses, and financial details, stored in databases or accessed through API keys. Similarly, in healthcare, the Health Insurance Portability and Accountability Act, or HIPAA, enforces strict guidelines for protecting patient data accessed through cloud services like storage platforms, compute instances, or databases.

If these credentials are exposed through misconfigurations or unauthorized access to storage systems, sensitive information can be misused or stolen. This inappropriate access leads to violations of these regulations. For example, a single HIPAA violation can result in fines of up to $25,000 per incident, with the amount varying based on the severity of the violation.

Improper secret management can have serious impacts beyond fines. These practices not only damage the trustworthiness of your organization but also create additional hurdles in maintaining compliance with industry standards and data protection laws.

How Overlooked Secret Management Impacts Infrastructure Security

Managing secrets properly is key to keeping your applications secure and reliable. Overlooking it can create small vulnerabilities that become entry points for attacks against the entire application's security and reliability.

Secrets in Configuration or State Files

There are several types of secrets that can be unintentionally exposed through Terraform state files, including database passwords, API tokens, and SSH keys. For instance, when creating a Google Cloud SQL instance, its connection details and generated passwords may be stored in the state file by default. Similarly, SSH keys assigned to compute instances can also end up in the state files while applying those configurations. If these files are stored in version control systems like GitHub or GitLab without proper .gitignore rules or in misconfigured storage backends such as AWS S3 buckets without encryption or restrictive policies, they become accessible to anyone with the appropriate permission or public access.
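A typical .gitignore for a Terraform repository keeps state and local variable files out of version control; the patterns below follow the commonly used community template:

```
*.tfstate
*.tfstate.*
*.tfvars
.terraform/
```

This does not protect state stored in a remote backend, but it prevents the most common leak: committing terraform.tfstate alongside the configuration.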

Configuration Sharing Risks

Sharing Terraform configuration files that contain hardcoded credentials, like API tokens, can lead to unintended exposure of sensitive data. For example, during debugging or testing, a developer might temporarily hardcode a database password into a configuration file. If this file is later committed to a shared repository integrated with external systems like CI/CD pipelines (e.g., Jenkins or GitHub Actions) or mirrored to backup repositories, these credentials can end up accessible to people and systems that were never meant to see them.

How to Manage Secrets in Terraform

Managing secrets effectively in Terraform is not just about following a checklist but about protecting your infrastructure. Adopting best practices and leveraging the right tools is essential. Here's how you can take a structured approach to keep your sensitive information safe while maintaining a smooth workflow.

Environment Variables

Environment variables are a starting point for storing secrets. Instead of embedding secrets in your .tf files, configure Terraform to pull them from environment variables. This approach reduces the risk of exposing sensitive data by keeping it separate from your codebase, much like sharing information privately without recording it in public records.

provider "aws" {
  access_key = var.aws_access_key
  region     = "us-west-2"
}

variable "aws_access_key" {}

This code configures the AWS provider with an access key read from a variable and sets the region to us-west-2 for provisioning resources in that specific AWS region.
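Terraform automatically maps any environment variable named TF_VAR_&lt;name&gt; onto the input variable of the same name, so the key never has to appear in a .tf or .tfvars file. The key value below is a placeholder:

```shell
# Terraform reads TF_VAR_aws_access_key into var.aws_access_key,
# keeping the secret out of the committed configuration.
export TF_VAR_aws_access_key="AKIAEXAMPLEKEY123"

# terraform plan   # would now pick up var.aws_access_key from the environment
echo "$TF_VAR_aws_access_key"
```

Because the value lives only in the shell session, it never enters version control, though it will still appear in the state file if the resource records it.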

HashiCorp Vault

Using environment variables is a common practice for managing secrets, but secrets can still appear in Terraform state files. HashiCorp Vault helps by securely fetching secrets at runtime during Terraform execution, which avoids hardcoding them in configuration files. Vault also supports secret rotation, automatically replacing credentials before they can be misused.

provider "vault" {
  address = "http://127.0.0.1:8200"
}

data "vault_generic_secret" "custom_secret_data" {
  path = "custom-secrets/app_api_key"
}

output "retrieved_secret" {
  value     = data.vault_generic_secret.custom_secret_data.data
  sensitive = true
}

The VAULT_ADDR environment variable is set to http://127.0.0.1:8200, which tells the Vault CLI to connect to a Vault server running locally on port 8200. The vault login command is used to authenticate to the Vault server with a token (for example, hvs.ICuB1nAR5foiLAAPMNkcq8kB; root tokens grant full access to Vault operations and do not expire, so avoid them outside local development). Future Vault commands in the terminal session automatically use this token.
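In a terminal, that setup might look like this; the address and token are local-development placeholders:

```shell
# Point the Vault CLI at the local dev server.
export VAULT_ADDR="http://127.0.0.1:8200"

# Authenticate with a token; never hardcode real tokens in scripts.
# vault login hvs.ICuB1nAR5foiLAAPMNkcq8kB
echo "$VAULT_ADDR"
```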

AWS Secrets Manager

For cloud-native environments, tools like AWS Secrets Manager or Azure Key Vault integrate with Terraform to manage secrets. Instead of hardcoding values, Terraform retrieves these secrets securely when configurations are applied. For example, database credentials or API keys can be fetched directly from the secrets manager rather than written into configuration files. These services also provide built-in access control and auditing, making them a strong choice for scalable infrastructures. Here's a sample configuration for getting a better understanding of how AWS Secrets Manager works.

provider "aws" {
  region = "us-west-2"
}

data "aws_secretsmanager_secret_version" "database_credentials" {
  secret_id = "database-credentials-id"
}

Here, a secret with the ID database-credentials-id was created in AWS Secrets Manager and appears when listing the secrets.
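Once retrieved, the secret can feed other resources without ever being hardcoded. A minimal sketch, assuming the secret stores a JSON object with username and password keys (the key names are assumptions):

```hcl
locals {
  db_creds = jsondecode(
    data.aws_secretsmanager_secret_version.database_credentials.secret_string
  )
}

resource "aws_db_instance" "mydb" {
  # ...other settings as before...
  username = local.db_creds["username"]
  password = local.db_creds["password"]
}
```

Note that values read through a data source are still recorded in the state file, so the state backend itself must remain encrypted and access-controlled.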

Sensitive Variables in Terraform

Terraform has a built-in feature to ensure secrets donā€™t appear in logs or outputs. By using the sensitive = true setting, you can hide sensitive data like passwords and API keys, adding an extra layer of protection. This helps keep your secrets secure even during execution.

provider "aws" {
  access_key = var.aws_access_key
  region     = "us-west-2"
}

output "access_key" {
  value     = var.aws_access_key
  sensitive = true
}
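Input variables themselves can also be marked sensitive, so their values are redacted from plan and apply output; the variable name here is an example:

```hcl
variable "db_password" {
  type      = string
  sensitive = true  # Terraform redacts this value in CLI output
}
```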

Now, we have explored the importance of secure secret management in Terraform and covered best practices for keeping sensitive data out of state files. From environment variables and tools like HashiCorp Vault and AWS Secrets Manager to Terraform's sensitive variables, these techniques collectively help keep sensitive information protected.

Using GCP KMS for Key Encryption

Using the sensitive = true flag in Terraform ensures that sensitive data like database passwords or AWS credentials remains hidden in the terminal output during the plan and apply operations. While this prevents secrets from being exposed in logs, it doesnā€™t address the fact that they are still stored within the Terraform state file. To address this, we adopt a secure approach by integrating secrets management solutions like AWS Key Management Service or Google Cloud Secret Manager.
Let's suppose you are using GCP to store sensitive information, such as database credentials. To ensure that data is securely stored and encrypted, you can use Google Cloud Secret Manager for storage and Key Management Service (KMS) for encryption.

In GCP, a Key Ring serves as a container to organize cryptographic keys, while the keys handle encryption and decryption.

The process begins by creating a Key Ring and an encryption key using KMS. Once the encryption key is set up, you can securely store your sensitive information in Google Cloud Secret Manager. Here, "db" refers to a securely stored key-value pair created in the Secret Manager console, such as a secret named db containing the database username and password.

If your credentials are stored in a file, such as db-creds.yml in this example, you can encrypt it using the encryption key with the gcloud kms encrypt command, generating an encrypted file, db-creds.yml.encrypted. This approach ensures your sensitive data remains encrypted, securely stored, and accessible only to authorized users or services.

This produces an encrypted file named db-creds.yml.encrypted.
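The encryption step might look like the following; the location, key ring, and key names are assumptions for illustration:

```shell
gcloud kms encrypt \
  --location=global \
  --keyring=secrets-keyring \
  --key=db-creds-key \
  --plaintext-file=db-creds.yml \
  --ciphertext-file=db-creds.yml.encrypted
```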

After encrypting sensitive credentials in a file using Google KMS, the next step is to integrate Terraform to securely utilize these encrypted values in your infrastructure.

Sensitive information, such as database credentials, should be stored securely using tools like Google Cloud KMS for encryption. Decryption should only occur when necessary, such as during Terraform configuration file execution.
Implementing these practices ensures your cloud resources remain intact and security risks are minimized.
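One way to decrypt at execution time is the google provider's google_kms_secret data source; the key path and file name below are assumptions matching the earlier example:

```hcl
data "google_kms_secret" "db_creds" {
  crypto_key = "projects/my-project/locations/global/keyRings/secrets-keyring/cryptoKeys/db-creds-key"
  ciphertext = filebase64("db-creds.yml.encrypted")
}

# data.google_kms_secret.db_creds.plaintext now holds the decrypted
# credentials and is treated as sensitive by Terraform.
```

Keep in mind that the decrypted plaintext is still written to the state file, so the state backend must itself be encrypted and access-controlled.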

Now handling secrets securely in Terraform is one part of the solution, but keeping them safe across the entire cloud environment is another challenge. Even if secrets are stored in HashiCorp Vault, AWS Secrets Manager, or Google Cloud Secret Manager, misconfigurations can still expose them. An S3 bucket with Terraform state files might be open to the public, or a security group could allow unrestricted access to a database containing sensitive credentials. Without constant monitoring, these issues can go unnoticed until they cause serious problems.

Managing Secrets Across Cloud Environments with Firefly

Firefly helps by continuously scanning cloud environments for misconfigurations and policy violations that could put secrets at risk. It does not store or manage secrets directly but makes sure that the resources handling them follow security best practices.

If an S3 bucket storing Terraform state files is not encrypted or is publicly accessible, Firefly detects it. If a database security group allows connections from any IP address, making it vulnerable to attacks, Firefly flags it. If IAM policies grant more permissions than necessary, increasing the risk of unauthorized access, Firefly points it out. And if an environment variable containing secrets is exposed on a misconfigured instance, Firefly surfaces that as well.

Instead of manually reviewing cloud resources, Firefly monitors them continuously, catching issues before they become threats. It provides clear remediation steps so teams can fix security risks quickly.

By using Firefly, organizations can enforce security best practices and make sure that secrets remain protected. It helps maintain visibility over cloud environments, so misconfigurations donā€™t put sensitive data at risk.

FAQs

Can I use Terraform without managing secrets?

While it's technically possible to manage a codebase without proper secret management, doing so can expose your organization to serious security risks. Secret management is important for maintaining the confidentiality, integrity, and security of your systems, making it a non-negotiable practice for sustainable operations.

What tools can I use to scan my Terraform code for hardcoded secrets?

There are several tools available to scan your Terraform code for hardcoded secrets, such as TruffleHog, GitLeaks, and TFSec. These tools help identify sensitive data like database credentials or other confidential information that shouldnā€™t be exposed in your code. Additionally, you can set up pre-commit hooks to automatically check for secrets before they are committed to version control, providing an added layer of protection against accidental leaks.
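For example, a Gitleaks pre-commit hook can be wired up through a .pre-commit-config.yaml like the following; the pinned rev is an example, so use whatever release fits your setup:

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```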

Can I automate secret rotation in Terraform?

Yes, and you should. Tools like HashiCorp Vault and AWS Secrets Manager excel at automating secret rotation. Rotated secrets expire quickly, reducing the window for attackers to misuse them.

How do I secure my Terraform state when working in a team?

Encrypt your state files. Use Terraformā€™s remote backends with encryption enabled and lock state files to avoid simultaneous edits. Teamwork doesnā€™t have to mean compromise when it comes to security.
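A minimal sketch of an encrypted, locked S3 backend; the bucket and DynamoDB table names are assumptions:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # assumed bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true                   # server-side encryption at rest
    dynamodb_table = "terraform-locks"      # enables state locking
  }
}
```

With locking enabled, two teammates cannot apply against the same state at once, and encryption keeps the stored state unreadable without the backing KMS key.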

What are the consequences of not encrypting state files in Terraform?

Unencrypted state files can expose every credential they contain, such as database logins and keys, to anyone with access to the storage backend. Exposed state files also pose a risk of tampering or corruption, which can lead to loss of resource tracking, erroneous infrastructure changes, or even complete deployment failures. Proper state management practices are crucial to avoid these risks, and addressing them now can prevent security breaches and their consequences later.