
Terraform AWS Provider 4.0 Refactors S3 Bucket Resource

Version 4.0 of the HashiCorp Terraform AWS provider brings usability improvements to data sources and attribute validations along with a refactored S3 bucket resource.

The HashiCorp Terraform AWS provider has grown a great deal over the last year, and now includes 897 resources and 307 data sources. While we have been hard at work extending the provider's coverage, we needed to make space for significant changes and prepare for another major release.

Version 4.0 of the Terraform AWS provider brings four major updates:

  1. Updating the Amazon S3 bucket resource, creating additional resources for S3 bucket management
  2. Implementing the full CRUD lifecycle for default resources
  3. Ensuring all plural data sources can return zero results
  4. Updating the provider configuration

These changes, along with other minor updates, are aimed at simplifying your configurations and improving the overall experience of using the Terraform AWS provider.

»Refactor aws_s3_bucket Resource

The aws_s3_bucket resource is one of the oldest, largest, and most-used resources within the AWS provider. As it grew to handle a multitude of API calls related to bucket management and operations, it became difficult for users to manage within a single configuration. Over the years, a variety of GitHub feature requests and bug reports advocated for a smaller, more focused S3 bucket resource, with bucket management capabilities refactored into individual resources (e.g. replication_configuration).

We agreed that a change was needed, so in version 4.0 of the Terraform AWS provider we are introducing new S3 bucket management resources that reduce the burden on the previously overloaded aws_s3_bucket resource. Arguments formerly configured on the aws_s3_bucket resource transition to Computed (read-only) attributes. For example, the lifecycle_rule argument of aws_s3_bucket is now marked as Computed, and a new resource, aws_s3_bucket_lifecycle_configuration, functions as its replacement: its rule argument carries the same settings that lifecycle_rule did. The aws_s3_bucket refactor also allows practitioners to use fine-grained identity and access management (IAM) permissions when configuring specific S3 bucket settings via Terraform.
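As a minimal sketch of the new pattern (the bucket name and rule contents are illustrative), a lifecycle rule that lived inline on aws_s3_bucket in v3.x moves to a standalone aws_s3_bucket_lifecycle_configuration resource in v4.0:

```hcl
# v3.x style: lifecycle rules configured inline (read-only in v4.0)
# resource "aws_s3_bucket" "example" {
#   bucket = "example-bucket"
#   lifecycle_rule {
#     id      = "expire-logs"
#     enabled = true
#     expiration { days = 90 }
#   }
# }

# v4.0 style: the bucket and its lifecycle configuration are separate resources
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}

resource "aws_s3_bucket_lifecycle_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "expire-logs"
    status = "Enabled" # replaces the old enabled = true flag

    filter {} # an empty filter applies the rule to all objects

    expiration {
      days = 90
    }
  }
}
```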

aws_s3_bucket will keep its existing arguments marked as Computed until the next major release (v5.0) of the Terraform AWS provider, at which time those arguments will be removed and their settings will be configurable only via the new resource types. As of v4.0.0, users whose configurations set attributes that are now marked as Computed may see unexpected updates or errors in their Terraform plans. To transition to the new resource types without state mismatches or data loss, practitioners will need to use terraform import to bring the new resource types into state. The AWS provider team is researching whether it is possible to provide tooling to assist with these potentially large resource moves.
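For example, importing the lifecycle configuration of an existing bucket could look roughly like this (the resource address and bucket name are illustrative; most of the new S3 resources import by bucket name):

```console
$ terraform import aws_s3_bucket_lifecycle_configuration.example example-bucket
```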

For a complete list of computed arguments and their corresponding new resources, please consult the Terraform AWS provider 4.0 upgrade guide.

Additionally, all HashiCorp Learn content that includes S3 bucket management will transition to include the new resources. Visit the Host a Static Website with S3 and Cloudflare and Build and Use a Local Module tutorials to review the updated resources.

To protect against unexpected changes, we recommend pinning to an earlier version of the provider and previewing these changes in a development or staging environment. For additional information about the changes and rationale behind them, please refer to this discussion on GitHub.
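For instance, a version constraint along these lines keeps you on the 3.x line until you are ready to upgrade deliberately (the exact constraint is up to you):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74" # stay on 3.74.x; relax to "~> 4.0" once you have tested the upgrade
    }
  }
}
```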

»Implement Full Resource Lifecycle for Default Resources

Default resources (e.g. aws_default_vpc, aws_default_subnet) previously could only be read and updated. AWS has since added API methods for creating and deleting these resources, which allows the provider to implement the full CRUD lifecycle.

You must upgrade to version 4.0 to use this create and delete functionality via Terraform, with the caveat that only one default VPC can exist per region and only one default subnet can exist per availability zone.
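As a minimal sketch (the tags and the force_destroy setting are illustrative assumptions), managing a region's default VPC now works like any other resource:

```hcl
# With v4.0, Terraform can create the default VPC if the region has none,
# and can delete it when the resource is destroyed.
resource "aws_default_vpc" "default" {
  force_destroy = true # actually delete the VPC on terraform destroy

  tags = {
    Name = "Default VPC"
  }
}
```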

»Ensure All Plural Data Sources Return Zero Results

Beginning with version 4.0, all AWS provider plural data sources that are expected to return an array of results will now return an empty list if zero results are found. This level of consistency across all data source types lets practitioners create dynamic implementations based on returned results without encountering an error in their workflows.
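For example, a plural data source lookup that matches nothing can now feed directly into count or for_each logic instead of failing the plan (the tag filter is illustrative):

```hcl
# Returns an empty list of IDs, rather than an error, when no subnets match.
data "aws_subnets" "private" {
  filter {
    name   = "tag:Tier"
    values = ["private"]
  }
}

output "private_subnet_count" {
  value = length(data.aws_subnets.private.ids)
}
```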

»Improve Provider Configuration

To keep pace with AWS API development and align with other Terraform providers maintained by the AWS provider team, the provider configuration now supports FIPS endpoints, DualStack endpoints, and unique STS regions, and accepts more than one shared credentials file.

As of this release, practitioners can, for example, have the AWS provider automatically resolve FIPS endpoints for all supported services as well as benefit from additional parameters in the provider configuration block or via environment variables.
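As a sketch of the new options (the paths and region are illustrative; equivalent environment variables such as AWS_USE_FIPS_ENDPOINT are also honored):

```hcl
provider "aws" {
  region = "us-east-1"

  # New in 4.0: resolve FIPS and DualStack endpoints automatically
  # for all services that support them.
  use_fips_endpoint      = true
  use_dualstack_endpoint = true

  # New in 4.0: more than one shared credentials file is accepted.
  shared_credentials_files = ["~/.aws/credentials", "/opt/ci/aws/credentials"]
}
```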

»Additional Resources

When upgrading to version 4.0.0 of the Terraform AWS provider, please consult the upgrade guide on the Terraform Registry, as it contains not only a list of changes but also examples. Because this release introduces breaking changes, we recommend pinning your provider version to protect against unexpected changes.

For a complete list of the changes in 4.0, please reference the AWS provider changelog.

The Terraform AWS provider team has worked hard on these changes and is thrilled to bring you these improvements. Please share any bugs or enhancement requests with us via GitHub Issues. We look forward to your feedback and want to thank you for being such a great community!
