Enhance infrastructure development with Amazon Q Developer and HashiCorp Terraform MCP server
Discover how Amazon Q Developer CLI and the Terraform MCP server work together to make infrastructure development faster, smarter, and more secure.
Author: Glenn Chia Jin Wee
» Introduction
Practitioners building modern infrastructure with Terraform often juggle deep documentation, provider nuances, and version complexity, all while racing to ship securely and at scale. That’s where intelligent tooling comes in.
In this blog, we explore how the Amazon Q Developer CLI integrates with the Terraform MCP server to bring AI-powered guidance directly into your development workflow, making Terraform development faster, more accurate, and easier to manage at scale.
» How Amazon Q and the Terraform MCP server work together
Amazon Q Developer CLI establishes a direct connection to HashiCorp's Terraform MCP server. This server acts as a specialized knowledge gateway, providing structured, real-time access to the complete Terraform ecosystem, including provider documentation, module specifications, and registry metadata. The result is an AI assistant that can reference and reason with HashiCorp's latest infrastructure guidance when helping with your Terraform challenges.
Key benefits for infrastructure teams:
Provider documentation in your current workflow: Instantly access documentation for any Terraform provider without leaving your workflow
Module discovery and evaluation: Find and compare modules from the Terraform Registry to solve specific infrastructure challenges
Configuration assistance: Generate and troubleshoot Terraform code with context-aware recommendations
Version migration support: Navigate breaking changes when upgrading providers or modules with specific guidance
» Enable the MCP server for Q Developer CLI
Prerequisites
Install Docker. This is needed to run the server in a container.
Amazon Q Developer CLI supports two levels of MCP configuration:
Global configuration: ~/.aws/amazonq/mcp.json, which applies to all workspaces
Workspace configuration: .amazonq/mcp.json, which is specific to the current workspace
In this blog, we will apply the global configuration. Create the ~/.aws/amazonq/mcp.json file with the following contents.
{
  "mcpServers": {
    "terraform": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "hashicorp/terraform-mcp-server"
      ]
    }
  }
}
The configuration is as follows:
Terraform MCP configuration
Start an interactive chat by typing q. This loads the MCP server defined earlier.
Initialize Q Developer CLI
» Example use case 1: Using the latest version of the provider
As of this writing, the latest major release of the Terraform AWS provider, version 6.0, documents that the aws_flow_log resource's log_group_name argument was removed, and indicates using the log_destination argument instead.
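For context, a root module opts into the v6 provider line with a version constraint. The sketch below is illustrative; adjust the constraint to match your own upgrade policy:

```hcl
# Pin the AWS provider to the 6.x release line so the new
# aws_flow_log argument pattern applies (constraint is illustrative).
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
  }
}
```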
Use the following prompt to ask Q from the CLI about the latest changes in the resource’s arguments:
What argument was deprecated in the AWS provider v6's aws_flow_log resource, and what should I use instead?
Q then proceeds to call several tools to give us an answer.
1. Initial search: Calls the search_providers tool to locate AWS Flow Log resource docs.
Q uses the search_providers tool
2. Documentation retrieval: Fetches details via get_provider_details, but standard resource documentation lacks explicit deprecation notices.
Q uses the get_provider_details tool
3. Strategic pivot: Rather than giving up, Q automatically shifts gears to search for the latest guides instead.
Q uses the search_providers tool to search for the latest guides
4. Final resolution: Retrieves the Version 6 Upgrade Guide, where it finds the exact information: log_group_name is deprecated in favor of log_destination.
Q uses the get_provider_details tool for a specific doc_id
5. Documentation of findings: Generates documentation about the deprecated arguments based on the tool findings.
Q summarizes the information discovered
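The change Q surfaced can be sketched as a before/after in Terraform. This is a minimal illustration based on the upgrade guidance above; the resource names are placeholders, and the IAM role is assumed to be defined elsewhere:

```hcl
# v5-era pattern (deprecated): named the CloudWatch log group directly.
# resource "aws_flow_log" "example" {
#   vpc_id         = aws_vpc.main.id
#   traffic_type   = "ALL"
#   iam_role_arn   = aws_iam_role.flow_log.arn
#   log_group_name = "vpc-flow-logs"
# }

# v6 pattern: point log_destination at the log group's ARN instead.
resource "aws_flow_log" "example" {
  vpc_id          = aws_vpc.main.id
  traffic_type    = "ALL"
  iam_role_arn    = aws_iam_role.flow_log.arn  # role assumed defined elsewhere
  log_destination = aws_cloudwatch_log_group.flow_log.arn
}
```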
This example shows how Q uses the tools defined by the MCP server to retrieve information about the resource, intelligently navigating HashiCorp's documentation ecosystem to find answers that would otherwise require manual searching across multiple documentation sources.
» Example use case 2: Leveraging a Terraform registry module
Following our discovery about the AWS Flow Log changes, we can ask Q Developer to write Terraform code that implements this knowledge. Let's request code that creates a flow log with the new log_destination parameter and integrates it with a VPC module:
Write Terraform code that creates a VPC using the AWS VPC module from the registry, then adds a flow log to that VPC using the correct pattern for provider v6. Include CloudWatch log group creation for the destination.
Q then proceeds to call several tools to give us an answer.
1. Search and retrieve module: Q first calls search_modules to identify relevant VPC modules in the Terraform Registry. After identifying the popular terraform-aws-modules/vpc/aws module, Q uses get_module_details to fetch its implementation details, input parameters, and output values.
Q uses the search_modules and get_module_details tools
2. Code generation: Q then generates Terraform code that implements both the VPC module and flow logs using the latest provider pattern, eliminating the need for developers to manually reconcile module documentation with provider version requirements. It calls the built-in fs_write tool to write the file to the current directory.
Q generates Terraform code
3. Supporting files: It also creates the variables.tf, outputs.tf, and README.md files, summarizing what it has done.
Q summarizes what it has done
Part of the generated code is shown below, where aws_flow_log references the outputs from the AWS VPC module, uses the correct log_destination argument, and implements the destination as a CloudWatch log group.
Code snippet generated by Q
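Since the generated snippet appears as a screenshot, here is a hedged reconstruction of what such code might look like. The module inputs, names, CIDR ranges, and retention setting are illustrative, not Q's exact output:

```hcl
# VPC built from the community terraform-aws-modules/vpc/aws module.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"  # illustrative module version

  name = "demo-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}

# CloudWatch log group that serves as the flow log destination.
resource "aws_cloudwatch_log_group" "flow_log" {
  name              = "/vpc/flow-logs/demo"
  retention_in_days = 30
}

# Minimal role allowing the VPC Flow Logs service to publish to CloudWatch.
resource "aws_iam_role" "flow_log" {
  name = "vpc-flow-log-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "vpc-flow-logs.amazonaws.com" }
    }]
  })
}

# Flow log wired to the module's VPC output, using the v6
# log_destination argument rather than the removed log_group_name.
resource "aws_flow_log" "this" {
  vpc_id          = module.vpc.vpc_id
  traffic_type    = "ALL"
  iam_role_arn    = aws_iam_role.flow_log.arn
  log_destination = aws_cloudwatch_log_group.flow_log.arn
}
```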
» Conclusion
The integration between Q Developer CLI and HashiCorp's Terraform MCP server represents a significant advancement in infrastructure development tooling. By combining AI assistance with Terraform ecosystem data, teams can accelerate development, reduce errors, and stay current with evolving best practices. Whether you're managing complex provider upgrades, exploring new modules, or implementing security improvements, this integration delivers the right information at the right time, directly within your development workflow.
The examples we've shown are just the beginning. When you configure the MCP server in your environment, you'll discover how this powerful combination can transform many aspects of your infrastructure process, from comparing vetted modules for multi-cloud implementations to analyzing configurations against security recommendations, understanding cross-provider differences, and troubleshooting complex error messages. This seamless integration creates an experience that makes infrastructure development more efficient and reliable.
Ready to get started? Try the Terraform MCP server with Q Developer in your local environment and explore how intelligent tooling can elevate your infrastructure workflows.