What

Go through all the existing S3 buckets & objects in the AWS infra & check how many of them are publicly accessible, which ones & why.

Why

  • We need to get a picture of the current state of S3 in our infrastructure. This would help us assess what work needs to be done & how much of it
  • It would help us keep track of our progress
  • This essentially defines our benchmark

How

There's a possibility that this is the first time S3 is being used in our AWS infra, in which case the effects of the audits may not be immediately visible. Nevertheless, audits still make sense as the usage of S3 in our infra expands.

In the other case, where AWS S3 is already being used in the infra, this could easily become one of the most time-consuming tasks. We could do this manually by logging into the AWS console every day, or, with the power of programming/scripting (especially in Python) bestowed upon us, we could automate the audits. (I am personally not a big fan of the former approach, at all!)
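To give an idea of what such automation could look like, below is a minimal sketch using boto3 (the exact checks & names are illustrative, not the final script); it lists the buckets in the account & flags the ones whose ACL or bucket policy makes them public:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def is_bucket_public(bucket_name):
    """Rough check: flag a bucket if its ACL grants access to the AllUsers /
    AuthenticatedUsers groups, or if its bucket policy is reported as public."""
    # ACL grants to the "everyone" / "any authenticated AWS user" groups
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl.get("Grants", []):
        uri = grant.get("Grantee", {}).get("URI", "")
        if uri.endswith("AllUsers") or uri.endswith("AuthenticatedUsers"):
            return True

    # Bucket policy status (buckets without a policy raise an error, which we ignore)
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket_name)
        if status["PolicyStatus"]["IsPublic"]:
            return True
    except ClientError:
        pass

    return False

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    print(f"{name}: {'PUBLIC' if is_bucket_public(name) else 'private'}")
```

Checking individual objects works along the same lines with get_object_acl, & is worth adding once the bucket-level picture is clear.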

Milestones

  • [ ] get a list of all existing buckets/objects, their existing access permissions & possibly their owners & reasons for why these buckets/objects are public
  • [ ] get a count of buckets/objects that are publicly accessible
  • [ ] have a script ensuring that this list is regularly updated & maintained

As mentioned earlier, one way of doing the above is to go to the AWS console, look up these buckets & their permissions, & maintain a record of the same manually. However, I prefer automation wherever possible (& sensible). There are plenty of open source scripts/tools that let you do this kind of audit. A simple Google search gives enough good results, like:

  • scalefactory/s3audit - CLI tool for auditing S3 buckets
  • SecOps-Institute/AWS-S3-Buckets-Audit-Users - summarises user access to the S3 buckets in your AWS account
  • richarvey/s3-permission-checker - checks read & write permissions on S3 buckets in your account
etc. All of the above are good tools that can be used to get S3 audits in place.

The above tools help us achieve all the milestones identified above, except the part that mentions "possibly their owners & reasons for why these buckets/objects are public". That part has to be done manually, unless there's already enough tooling in the existing infra that maintains this record.
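One way of easing that manual part, if the team agrees to it, is to keep the owner & the reason as bucket tags & let the audit script pick them up. Here is a small sketch of what that could look like; the Owner & PublicReason tag keys are just an assumption for illustration, not an existing convention:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_metadata(bucket_name):
    """Read the hypothetical 'Owner' & 'PublicReason' tags, if they are maintained."""
    try:
        tag_set = s3.get_bucket_tagging(Bucket=bucket_name)["TagSet"]
    except ClientError:
        return {"owner": None, "reason": None}  # bucket has no tags at all
    tags = {tag["Key"]: tag["Value"] for tag in tag_set}
    return {"owner": tags.get("Owner"), "reason": tags.get("PublicReason")}
```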

However, once we have captured & analyzed the above data, we are in a position to determine what exactly our developers' requirements are, why they need buckets/objects with a certain access & which of the buckets/objects can/should remain with lenient access controls. This allows for more informed decisions on what may be called insecure in the context of our developers' & our org's requirements, instead of a one-size-fits-all approach. It helps us decide the strategy that best suits the custom needs of our devs while still keeping things (S3 in this case) secure.

For example, in our use case, after doing the above exercise & having extended discussions with our devops/systems team & enough devs, we settled on the below strategy for managing access to our S3:

  1. Only the following 3 operations should be allowed on any bucket: s3:GetObject, s3:PutObject and s3:DeleteObject
  2. There should be one IAM user per bucket, allowed the above 3 permissions on that bucket & that bucket alone. A naming convention was also agreed upon, ensuring that all such IAM user names end with -s3, so we can easily identify these users as & when needed
  3. All of these users must belong to the single AWS account that we use; in other words, no cross-account access is allowed

Any buckets that do not follow the above criteria would be considered insecure.
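To make rules 1 & 2 concrete, here is a rough sketch of how such a per-bucket user & its inline policy could be created with boto3; the bucket & user names are made up for illustration, & our actual provisioning may differ:

```python
import json
import boto3

iam = boto3.client("iam")

BUCKET = "example-app-assets"      # hypothetical bucket name
USER = f"{BUCKET}-s3"              # per-bucket user, following the -s3 naming convention

# Allow exactly the 3 permitted operations, on this bucket's objects only
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

iam.create_user(UserName=USER)
iam.put_user_policy(
    UserName=USER,
    PolicyName=f"{BUCKET}-access",
    PolicyDocument=json.dumps(policy),
)
```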

Now the above example is a very opinionated conclusion based on our specific requirements. This could be anything else in your case.

So, to suit our specific audit needs, we came up with a custom audit script, which can be found here:

c0n71nu3/s3Auditor

After the results of the above audit are available, the next step is to start working on the data by getting the bucket/object access fixed wherever necessary. This may again be quite a manual task (& a mammoth one in our case), depending on how the processes are defined in your org, as it may need context, permissions, execution capabilities/bandwidth etc. Once all the identified issues are fixed, we would have reached a clean slate. The audits still need to be run periodically though, both to ensure that the security team stays on top of anything that comes up after the clean up & to keep track of the progress of the clean up itself.
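To keep that periodic tracking cheap, each audit run can simply append a dated count of offending buckets to a small history file. A sketch of that, reusing the is_bucket_public() idea from earlier & a made-up file name:

```python
import csv
import datetime

def record_audit(bucket_status, path="s3_audit_history.csv"):
    """bucket_status maps bucket name -> True if the bucket is public.
    Append today's count of public buckets so progress is visible run over run."""
    public_count = sum(1 for is_public in bucket_status.values() if is_public)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), public_count])
```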

The number of buckets still left with unacceptable access gives a great deal of clarity on whether efforts are being invested in the right direction or not. Managers/leadership, please smile :)

Revisiting our milestones:

  • [✔︎] get a list of all existing buckets/objects, their existing access permissions & possibly their owners & reasons for why these buckets/objects are public
  • [✔︎] get a count of buckets/objects that are publicly accessible
  • [✔︎] have a script ensuring that this list is regularly updated & maintained

Revisiting our Objective 1: Secure AWS S3 plan:

[✔︎] Audit & ensure that the existing open buckets/objects are fixed/accounted for
[ ] Ensure that any new buckets/objects being created are secure
[ ] Ensure that the security team is made aware of any insecure buckets/objects existence/creation (if at all) as quickly as possible