The AWS S3 service provides a number of ways to delete a non-empty S3 bucket; some of the approaches involve “emptying” the bucket prior to deleting it. The process can also vary a bit depending on whether or not the bucket has versioning enabled.
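
For example, outside of Terraform, the AWS CLI can empty and remove a bucket in a single step. Here's a minimal sketch (assuming the AWS CLI is installed and configured, and using a placeholder bucket name):

    $ aws s3 rb s3://your-bucket-name --force

The 'rb --force' command deletes the bucket's current objects before removing the bucket itself, but it does not remove old object versions, so on a versioning-enabled bucket the removal can still fail.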

When the “aws” provider is used, the Terraform program acts as a client to the AWS service, and so it has several approaches available to it when deleting S3 buckets.

When managing your infrastructure using Terraform, one common way to get rid of an infrastructure resource (cause it to be destroyed) is to simply remove it from your Terraform configuration (either by commenting out its configuration block or by deleting it from the configuration file entirely).

Non-empty S3 buckets throw a monkeywrench into that process. The data stored as S3 objects within the bucket can be considered separate (possibly precious!) artifacts, so a little extra convincing is needed to let Terraform know that you really do want it to delete an S3 bucket resource and any data objects it contains.

If you simply get rid of the configuration block for the bucket, the terraform plan command will succeed in telling you that it would remove the bucket (as you might expect):

    $ terraform plan
    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.

    aws_s3_bucket.your_tf_s3_bucket_resource_name: Refreshing state... (ID: your-bucket-name)
    The Terraform execution plan has been generated and is shown below.
    Resources are shown in alphabetical order for quick scanning. Green resources
    will be created (or destroyed and then created if an existing resource
    exists), yellow resources are being changed in-place, and red resources
    will be destroyed. Cyan entries are data sources to be read.

    Note: You didn't specify an "-out" parameter to save this plan, so when
    "apply" is called, Terraform can't guarantee this is what will execute.

    - aws_s3_bucket.your_tf_s3_bucket_resource_name


    Plan: 0 to add, 0 to change, 1 to destroy.

However, if you were to then run terraform apply, you might be surprised by the error:

    $ terraform apply
    aws_s3_bucket.your_tf_s3_bucket_resource_name: Refreshing state... (ID: your-bucket-name)
    aws_s3_bucket.your_tf_s3_bucket_resource_name: Destroying... (ID: your-bucket-name)
    Error applying plan:

    1 error(s) occurred:

    * aws_s3_bucket.your_tf_s3_bucket_resource_name (destroy): 1 error(s) occurred:

    * aws_s3_bucket.your_tf_s3_bucket_resource_name: Error deleting S3 Bucket: BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
            status code: 409, request id: <request-id-value>, host id: <host-id-value> "your-bucket-name"

    Terraform does not automatically rollback in the face of errors.
    Instead, your Terraform state file has been partially updated with
    any resources that successfully completed. Please address the error
    above and apply again to incrementally change your infrastructure.

Despite the “delete all versions in the bucket” language of the error message, the above error will appear regardless of whether or not the bucket has versioning enabled.
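
If you wanted to satisfy that error message by hand, you would have to delete every object version and delete marker yourself before deleting the bucket. Here's a minimal sketch of that using the AWS CLI and jq (the bucket name and the loop are illustrative only, and pagination of the listing is ignored for brevity):

    $ aws s3api list-object-versions --bucket your-bucket-name --output json \
        | jq -r '((.Versions // []) + (.DeleteMarkers // []))[] | [.Key, .VersionId] | @tsv' \
        | while IFS=$'\t' read -r key version; do
              # Delete this specific object version (or delete marker).
              aws s3api delete-object --bucket your-bucket-name \
                  --key "$key" --version-id "$version"
          done

Fortunately, you don't have to do any of that: Terraform can take care of the emptying for you.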

The Solution

Terraform will happily delete all of the objects in the bucket for you, but you have to explicitly tell it to do so, and you have to know how to ask.

Let’s say you have the following as your S3 bucket configuration block in your Terraform configuration file:

    resource "aws_s3_bucket" "my_s3_bucket_resource" {

        bucket = "my-bucket-name"

        acl    = "private"

        versioning {
            enabled = true
        }

        tags {
            Name        = "whatev-name"
            Environment = "whatev-env"
        }

        lifecycle {

            # Any Terraform plan that includes a destroy of this resource will
            # result in an error message.
            #
            prevent_destroy = true
        }
    }

You’ll want to do two things to allow the above S3 bucket to be deleted:

  1. Comment out (or remove) the 'prevent_destroy' setting

  2. Add a 'force_destroy' setting

Here’s our new version:

    resource "aws_s3_bucket" "my_s3_bucket_resource" {

        bucket = "my-bucket-name"

        force_destroy = true

        acl    = "private"

        versioning {
            enabled = true
        }

        tags {
            Name        = "whatev-name"
            Environment = "whatev-env"
        }

    #     lifecycle {
    #
    #         # Any Terraform plan that includes a destroy of this resource will
    #         # result in an error message.
    #         #
    #         prevent_destroy = true
    #     }
    }

Now you can see the plan for destroying that bucket (and its dependencies) with the command:

    $ terraform plan -destroy -target=aws_s3_bucket.my_s3_bucket_resource

And then to actually have the bucket, its content, and its dependencies destroyed:

    $ terraform destroy -target=aws_s3_bucket.my_s3_bucket_resource
    [Terraform will prompt you to confirm, warning that there is no 'undo' for the action]

Here’s what it looks like in action:

    $ terraform destroy -target=aws_s3_bucket.my_s3_bucket_resource
    Do you really want to destroy?
      Terraform will delete the following infrastructure:
            aws_s3_bucket.my_s3_bucket_resource
      There is no undo. Only 'yes' will be accepted to confirm

      Enter a value: yes

    aws_s3_bucket.my_s3_bucket_resource: Refreshing state... (ID: my-bucket-name)
    aws_s3_bucket.my_s3_bucket_resource: Destroying... (ID: my-bucket-name)
    aws_s3_bucket.my_s3_bucket_resource: Destruction complete

    Destroy complete! Resources: 1 destroyed.
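
If you want to double-check that the bucket is really gone, the AWS CLI's 'head-bucket' command will report an error for a bucket that no longer exists (again, assuming configured credentials):

    $ aws s3api head-bucket --bucket my-bucket-name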

Q: What would happen if we did not comment out 'prevent_destroy'?

You may be wondering what would happen if we did not comment out the 'lifecycle' setting prevent_destroy = true while also having force_destroy = true set. The answer is that Terraform would do what you would want it to do: it would refuse to delete the S3 bucket:

    $ terraform plan -destroy -target=aws_s3_bucket.my_s3_bucket_resource
    Refreshing Terraform state in-memory prior to plan...
    The refreshed state will be used to calculate this plan, but will not be
    persisted to local or remote state storage.

    aws_s3_bucket.my_s3_bucket_resource: Refreshing state... (ID: my-bucket-name)
    Error running plan: 1 error(s) occurred:

    * aws_s3_bucket.my_s3_bucket_resource: aws_s3_bucket.my_s3_bucket_resource: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag.

Terraform version: 0.9.4 (released 2017-04-26)

The examples in this post all used terraform-0.9.4 (released 2017-04-26), but are likely to work just fine with both earlier and later versions. The earliest version expected to work (not tested!) is terraform-0.5.3 (the version that introduced the 'force_destroy' parameter for AWS S3 bucket resources).
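
If you aren't sure which version you're running, Terraform will tell you:

    $ terraform version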

Additional Reading

The rationale for Terraform not blindly deleting S3 objects was discussed in hashicorp/terraform#1977; the discussion there includes the need for some sort of “force” option.

The 'force_destroy' option was implemented in hashicorp/terraform#2007.

The 'force_destroy' option is also documented in the Terraform documentation, under the “aws” provider.