In the previous blogpost we explored the status of our AWS S3 by auditing it. We spoke to several of our key stakeholders, including the devs & the systems teams, to understand how we could fit S3 security into the context of our specific organization. While audits are an essential (& absolutely mandatory) exercise towards our goal, they are not really scalable. They help with the clean-up task, but don't ensure that more mess isn't being dumped on top. Hence, it becomes crucial to figure out ways to proactively ensure that any new S3 resource creation follows a certain baseline/benchmark. So with that prelude, let's explore this section with a similar approach to the last one.

What

Have a system in place to ensure that any new S3 resources being created follow a certain security benchmark.

Why

To ensure that new S3 resource creations are secure by default as per our contextual definition of security. This consequently gets the problem of insecure S3 sorted at the root.

How

From the results of the last section we can infer that our devs have some very specific needs around AWS S3. More often than not, a very loosely access-controlled S3 resource is not really needed. In the process of the audits, we also made certain rules around buckets, their access & their names. So to have proactive measures implemented, we would first define our controls as a set of rules/policies & then build tooling or systems to facilitate easier adoption &/or enforcement of these policies. And as last time, we would need measurable criteria to verify whether we have achieved what we wanted to, which gets us to our milestones listed below.

Milestones

  • [ ] A policy document detailing the rules that would define what is considered secure in the context of our organization
  • [ ] A system that implements/enforces this policy document
  • [ ] Number of violations of the above rules reported in the audits on newly created resources after the proactive system/s are implemented

We had already created a list of rules in the previous blogpost. In addition to those, let us say that a couple more use cases were identified over a period of time. So our extended rule/policy set now becomes:

  1. Only & only the following 3 operations would be allowed onto any newly created bucket: s3:GetObject, s3:PutObject and s3:DeleteObject
  2. There would be one IAM user for every single bucket, allowed the above 3 permissions on that bucket & that bucket alone.
  3. Cross account S3 access would not be allowed
  4. Every bucket would have a subfolder under which any objects are world/public readable
  5. All S3 resources would be created only with the provisioned system to do so
Now once again, the above is a very contextual set of rules & policies that depends on the organization. It still makes for a useful example.

The second bit is to think about a system that would technically implement the above policies. One of the ways to do so would be to use Terraform. The details of what it is & how it can be set up & used are quite decently covered in the Terraform documentation. For our use case, we would make use of the below Terraform script to ensure that any new bucket creation abides by all the rules/policies we identified above.

c0n71nu3/s3ProactiveSecure — Terraform for securing AWS S3 proactively (opinionated): https://github.com/c0n71nu3/s3ProactiveSecure
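To make this concrete, here is a minimal, illustrative sketch of what such a Terraform module might look like for a single bucket. This is not the actual contents of the repo linked above; the variable names, resource labels & the default public folder prefix are assumptions made purely for the example:

# Illustrative Terraform only — assumed names, not the referenced repo's actual code

variable "bucket_name" {
  type = string
}

variable "public_folder" {
  type    = string
  default = "public" # rule 4: the single world-readable subfolder (assumed name)
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

# Rule 2: one dedicated IAM user per bucket
resource "aws_iam_user" "bucket_user" {
  name = "${var.bucket_name}-user"
}

# Rules 1 & 2: allow only Get/Put/DeleteObject, & only on this bucket
resource "aws_iam_user_policy" "bucket_rw" {
  name = "${var.bucket_name}-rw"
  user = aws_iam_user.bucket_user.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
      Resource = "${aws_s3_bucket.this.arn}/*"
    }]
  })
}

# Rule 4: anonymous read access only under the designated public prefix.
# No other principals are granted anything, which also covers rule 3 (no cross-account access).
# Depending on account defaults, S3 Block Public Access settings may also need relaxing for this to take effect.
resource "aws_s3_bucket_policy" "public_prefix" {
  bucket = aws_s3_bucket.this.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadOnPublicFolderOnly"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.this.arn}/${var.public_folder}/*"
    }]
  })
}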

Having the Terraform script solves our requirement to a large extent.

The next question that arises, however, is: how, & by whom, would this script be run? This aspect has to be controlled. Giving it away to anyone & everyone would again wind us up in a bad state. One of the ways to do this could be to have another layer in between the Terraform script (version controlled), which actually makes the infra changes, and the users who need these S3 buckets.

Pros of this approach

  1. The Terraform script itself would be version controlled & source maintained (with all the respective checks & controls around that system). This means that the credentials to access the underlying infra do not need to be given out to any users at all. It also ensures that only the approved rules, mentioned in the script, are used for actual resource creation. Plus there are the inherent audit benefits that come packed with it being version controlled.
  2. This additional layer would be the one the user finally interfaces with. Hence, all the details of the underlying Terraform can be abstracted away, making it very simple for any developer to create an S3 bucket.

Cons of this approach

It becomes extremely crucial that this additional layer be very tightly controlled, especially in a situation where this system may become the solution for provisioning other infra-related resources as well. It needs to have its own tamper-proof, securely maintained audit trails recording which resource creation was triggered by which user.

There could be many other approaches to achieving these proactive controls, again depending on your specific context.

One such system that readily provides exactly this capability is this awesome tool from GoJek:

gojek/proctor — A Developer-Friendly Automation Orchestrator: https://github.com/gojek/proctor

When the Terraform script mentioned above is used with this tool, what it provides (from our use case's perspective) is a command to the user of the form:

proctor execute create-s3-bucket --name=myBucket --public=myCustomPublicFolder

and produces the same output as described in my GitHub repo linked above.

Once the above systems are provisioned & made available to the users, the last bit that remains is to ensure that devs (or most users in general) create any S3 resources only through the above system. This would be more of a process-driven thing again, which may include removing, say, AWS console access/capabilities from any/all users. Once again this is quite contextual, depending on how things are being managed at a given organization.
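For instance, one possible technical backstop to that process could be an explicit deny so that buckets can only come from the provisioning system's own role/credentials. This is purely a sketch: the "developers" IAM group name is an assumption for illustration, not something from our setup.

# Hypothetical hardening: deny ad-hoc S3 admin actions to the (assumed) "developers" group,
# so buckets are only ever created via the provisioned system.
resource "aws_iam_group_policy" "deny_adhoc_s3_admin" {
  name  = "deny-adhoc-s3-admin"
  group = "developers" # assumed pre-existing IAM group for devs

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Deny"
      Action = [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:PutBucketPolicy",
        "s3:PutBucketAcl"
      ]
      Resource = "*"
    }]
  })
}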

With all of this, we are assured that any new bucket creations would be as per our defined policies (& technically enforced for the most part). There may be exceptions at times, which would need to be accommodated on an on-demand basis (& perhaps eventually generalized & made part of the above/another system if needed). Our audits, from the last blog post, are already set up to run as a cron. And using that we can track whether the proactive approach we discussed above has actually led to any improvements in the creation of S3 resources.

Revisiting our milestones:

[✔︎] A policy document detailing the rules that would define what is considered secure in the context of our organization
[✔︎] A system that implements/enforces this policy document
[✔︎] Number of violations of the above rules reported in the audits on newly created resources after the proactive system/s are implemented

Revisiting our Objective 1: Secure AWS S3 plan:

[✔︎] Audit & ensure that the existing open buckets/objects fixed/accounted for
[✔︎] Ensure that any new buckets/objects being created are secure
[ ] Ensure that the security team is made aware of any insecure buckets/objects existence/creation (if at all) as quickly as possible


Credits:

  • @vjdhama for guidance around Terraform