MCAS – Device Identity via Certificates and Progressive Web Apps

I have a customer scenario where we needed to explore leveraging certificates to identify corporate Windows 10 machines, with the goal of preventing corporate data from being downloaded from O365 services to non-corporate assets. There are a few different ways this can be tackled, but the other routes proved to be dead ends for various technical reasons, so we landed on leveraging device certificates via MCAS to control data spillage.

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#managed-device-identification

Utilizing device identity via MCAS with certificates does mean that user traffic for the devices with certs (and without) will have to go through Conditional Access App Control (the reverse proxy) for all sessions. It took some trial and error to get this to work. I do want to point out that this method is technically not supported for O365 services, as proxying O365 traffic can impact both the user experience and the SLAs.

Scenario

  1. Hybrid Azure AD Joined machines will be allowed to access corporate resources unfettered
  2. Corporate devices without HAADJ will have a certificate deployed to them. The certificate will be used in a session control policy via MCAS to allow the device to download corporate data from O365
  3. Devices without certificates (corporate or otherwise) will be treated as untrusted. They will still be allowed to access corporate data (as allowed by other conditional access policies outside of the scope of this blog) but they will not be allowed to download data from O365

Hybrid Azure AD Join: https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-managed-domains

HAADJ is a function of AAD and AAD Connect where machines are effectively synced from your on-prem AD into AAD. If a device has been synced, that information can be leveraged within conditional access policies as a piece of criteria during authentication. At this point, you can decide to request additional controls (such as MFA), whether to allow or block access, and even potentially drop the session into the MCAS reverse proxy for additional session control. In order to cover these scenarios for my lab, I configured two conditional access policies – the first policy enables the scenario I want to allow and the second policy blocks the rest.

Policy #1 – ITOp5 – RP – Block Download without Cert

My target user is ITOp5 in my lab. This is a standard user account that has been synced from the on-prem AD via AADC.

Apps – I went lazy for this and simply selected all apps

Conditions

Only targeting Windows since this was my client’s use case

Only allowing browser – the reverse proxy only works with the browser, so we are only going to allow browser-based access.

For device state, we want to include all devices…

But exclude devices that are actually compliant (this is a 2-fer and how we catch all 3 scenarios with 2 conditional access policies)

In this case, I am only granting access, but additional controls and requirements could be implemented

Lastly, I am putting my user into Conditional Access App Control aka the reverse proxy if the above criteria are met

Policy #2 – ITOp5 – Block Non Browser on Non Compliant Devices

This policy is almost exactly the same, with two exceptions. The first exception is Client apps – in the allow policy this is set to Browser; in the block policy, it is set to everything else.

The second exception is that, rather than grant, the policy is set to block

With these CAPs, folks coming in on a Windows machine either need to be marked compliant via Intune or have their devices Hybrid Azure AD joined in order to have full access to all resources and to be able to use thick client apps. Everyone else is going to be routed into the reverse proxy. For the proxied users, if they have a certificate, they will be allowed to download data to their endpoint. If they do not have the certificate, they will be forced to leverage the cloud-based tools within O365 to collaborate and work with corporate data.

MCAS – Session Control Policy

Configuration here is straightforward. It is a Control file download (with DLP) policy. For the criteria, it is going to specifically target my test user and block if the user does not have the certificate. Given that, the criteria are targeted at devices that do not have a valid certificate

For the file criteria I could get specific and try to stop content with PII, financial data, HIPAA, etc., but I really want to stop all egress in order to prevent non-corporate devices from coming under potential eDiscovery. With that, the criteria for what I am looking for is left blank to catch all content

Lastly, the policy is set to block with a custom block message. Notifications are optional

This is it! Other than getting certificates to the endpoints, a HAADJ machine should be allowed to connect regardless of app, an unmanaged device should be routed into the reverse proxy, and data egress should be blocked.

I verified this was the case in the lab with this configuration, and I have the behavior I want.

Device Certificate

For the root cert that goes into MCAS, it just needs to be the Base64-encoded public cert (.CER) file that you import. Simply export, import, and done.
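If you are using a Microsoft CA, one quick way to produce that Base64 file is PowerShell plus certutil. This is just a sketch – the subject filter and file names are placeholders for whatever your root CA is actually called:

    # Grab the root CA cert from the machine's Trusted Root store (subject filter is a placeholder)
    $ca = Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -like "*MyLabRootCA*" } | Select-Object -First 1

    # Export as DER, then convert to Base64 (.CER) for upload into MCAS
    Export-Certificate -Cert $ca -FilePath .\rootca.cer
    certutil -encode .\rootca.cer .\rootca-base64.cer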

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#client-certificate-authenticated-devices

The device identification mechanism can request authentication from relevant devices using client certificates. You can either use existing client certificates already deployed in your organization or roll out new client certificates to managed devices. You then use the presence of those certificates to set access and session policies.

Not a ton of detail – this suggests that a device certificate on the endpoint should be sufficient. There is a little more detail further in the article that suggests an SSL certificate. From my testing, I have found that the ultimate requirement for this certificate itself is that it has Client Authentication in the Extended Key Usage. Lastly, the certificate needs to be installed into the user’s personal store – the local machine store will not work.

Walking through the process using a MS PKI to deploy the cert – this is probably not the most efficient method and is definitely not the route you would take to deploy certificates in bulk to end users. This is purely the process I used, and I wanted to document it here to show how I manually got a cert into the end user's store.

Certificate Authority Template

I duplicated the Web Server template, added Server and Client Authentication to the Extended Key Usage, and then published the template for use within the CA.

End User Certificate Request

I am doing this manually with a certificate signing request. On the endpoint, create a request.ini with the following and run the certreq command
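The exact contents will vary by environment, but a request.ini for a client-authentication cert might look something like this – the subject name and key settings here are illustrative, not prescriptive:

    ; request.ini - illustrative values only, adjust to your environment
    [NewRequest]
    Subject = "CN=ITOp5-Device01"
    KeySpec = 1
    KeyLength = 2048
    Exportable = TRUE
    ; FALSE keeps the key in the user context rather than the machine store
    MachineKeySet = FALSE
    RequestType = PKCS10

    [EnhancedKeyUsageExtension]
    ; Client Authentication
    OID = 1.3.6.1.5.5.7.3.2

Then generate the CSR:

    certreq -new request.ini csr6.req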

I took the output in the csr6.req file and pasted it into the CertSrv site requesting a certificate utilizing the custom Web Server template I created with Client and Server authentication in the EKU

I downloaded the .cer file, copied it to the target workstation, and then ran the following to import the cert directly into the user's personal store
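For reference, the import can be done with certreq, which also binds the issued cert to the private key from the pending request – the file name here is just an example:

    # Run this as the target user so the cert lands in their personal store
    certreq -accept -user .\issuedcert.cer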

At this point, we can validate that the browser is able to see the certificate by going into Settings -> Search for Certificates

I can also see the certificate if I go into the certificate manager and look in the user’s personal store
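You can also confirm it from PowerShell with something like:

    Get-ChildItem Cert:\CurrentUser\My | Format-List Subject, EnhancedKeyUsageList, NotAfter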

Now, when I browse to O365 I hit the login page and provide my username and then my password. Immediately after the password, I get the following

I select my certificate and hit OK. This allows me into O365. At this point, I can browse anywhere in the cloud service and download files since I now have a valid certificate that I presented immediately post authentication to the reverse proxy. Given how the conditional access policies are configured, this prevents all non-browser apps from accessing the services. This means that only machines that are HAADJ or Intune managed (and compliant) are allowed to use thick client applications. Machines outside of that are allowed to access O365 but are not allowed to download content unless they have the certificate. Even the machines with the certificate are forced to use the web apps for O365, as applications like Outlook, Teams, OneDrive sync, etc. are not allowed to connect directly to the service. The users with the cert would be able to download content directly to their machines and work on it offline, but they would then have to re-upload the content via the browser when it came time.

Progressive Web Apps

PWAs introduce an interesting option for enabling productivity while still flowing the traffic through the reverse proxy and controlling data egress. This is very slick and I use PWAs for sites/apps both for O365 and otherwise.

https://docs.microsoft.com/en-us/microsoft-edge/progressive-web-apps-edgehtml/get-started

That’s the dev resource for PWAs, so what does this actually mean for O365? You can pin specific sites (like Outlook and Teams) as PWAs on a machine. They are still web apps, but they get better performance, extra caching, and all the nice little tweaks that make them _almost_ as good as the thick client. Keep in mind, the _almost_ is my opinion and it does depend on the app. For example, I use a music streaming service on my home machine. The app for the streaming service is a resource hog. I pin the PWA for that service instead and it is significantly lighter on resources on my machine. I think each web app (both in and out of the MS ecosystem) warrants exploring.

Edge Chromium

Summary

Requirements for leveraging Device Certificate via MCAS to block downloads of corporate data to unmanaged devices

Device Certificate Requirement

  • Certificate has Client Authentication in the Extended Key Usage
  • Certificate is located in the user’s personal store

Conditional Access

  • Non-browser apps are blocked for the targeted users – this is required, or the session will not be routed into the reverse proxy, which negates the usage of the certificate to allow/block downloads
  • Browser access is set to be granted through Conditional Access App Control

MCAS Session Control Policy

  • Block downloads of all data to devices that are not tagged with having the certificate

Resultant Scenario

  • All users off network and/or in scope would be forced to use web-based clients for a connected experience. This means thick client apps such as Outlook, OneDrive Sync, Teams and even tools like Excel and Word will not work when connecting directly to cloud data
  • All users with a valid certificate would be able to download the data and work with tools like Word and Excel in an offline mode and then reupload the data into SharePoint or OneDrive
  • Progressive Web Apps provide a nice alternative to thick clients, though they do not allow local disk access
  • End users would still have full access and be able to use the web-based collaboration tools within the cloud without allowing egress of data to their endpoints

Happy MCAS-ing!

Hack Job, MCAS

Secure RDP – Using SSH Tunneling With Built-In Windows Features

So…

Who knew? I didn’t. This is the screen for Settings -> Apps and Features -> Optional Features for both Windows Server 2019 as well as Windows 10. This was a very pleasant surprise. With that, I am always looking for a way to connect to my home lab that is:

  1. Secure
  2. Minimizes the required HW/resource footprint

I was previously using Remote Desktop Services along with Azure App Proxy (here) to publish access securely, however, this meant that I had to have the RDS and AAD App Proxy footprints in my lab. I only have 48GB of RAM in my lab and I run out all the time – running AD along with a few additional services for demo purposes can really put the squeeze on quick. Stumbling across OpenSSH built right in seems like it might be the solution for which I have been looking. With that being said, the documentation….has room for improvement. I did finally find the answers I was looking for and was able to leverage SSH tunneling in order to RDP into my lab. I love this!

** Requires Windows 10 or Server 2019

I am going to save you some time and steer you away from the official docs. The documentation you need to get started is in an answer on Stack Overflow here:

https://stackoverflow.com/questions/16212816/setting-up-openssh-for-windows-using-public-key-authentication and was provided by this person: https://stackoverflow.com/users/31782/n0rd

High level, here are the condensed steps from his answer:

On the server:

  1. PS> Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 or through the UI (pictured above)
  2. Start the services
    1. PS> Start-Service ssh-agent
    2. PS> Start-Service sshd

On the client:

  1. Mkdir .ssh in the C:\Users\<Username>\ directory. This is where the keys are going to land. For demo purposes, I am going to use administrator so the path would be C:\Users\Administrator\.ssh
  2. CD to that directory using CMD or PS
  3. Run ssh-keygen
    1. Give a password or not for the key file. I chose to do so for the sake of security
  4. In the window, run ssh-add .\id_rsa (assuming this is the private key that was generated). You will get prompted for the password – go ahead and enter it. This makes the key securely available without being prompted for a password every time you use it to connect from this machine

The client is now configured.

Back to the server:

  1. Log into the server as the user you are wishing to connect as – for demo purposes this is the administrator account
  2. Mkdir C:\Users\<Username>\.ssh – in this case, C:\Users\Administrator\.ssh
  3. Copy and paste the contents of the .pub file from the client into a file named authorized_keys here in this directory on the server
  4. Right-click on authorized_keys and go to properties
  5. Properties -> Security -> Advanced -> Disable Inheritance
  6. There should be exactly 2 perms assigned to this file (probably has 3 right now).

    Get rid of the extra. You should have SYSTEM and your user account – in this case Administrator. Get rid of any other entries such as Administrators (group) here.

  7. Apply, Ok, Close, or whatevs to save the changes
  8. Open C:\ProgramData\ssh\sshd_config with notepad or your text editor of choice
  9. Comment out the bottom 2 lines (see the snippet after these steps for what those lines typically look like)

    NOTE: Put a # at the beginning of each line to comment it out

  10. Find this line and make sure it is not commented out:

    PubkeyAuthentication yes

  11. Save and close
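For reference, on a stock Windows OpenSSH install those bottom two lines are the administrators Match block – commented out, they should look roughly like this:

    #Match Group administrators
    #       AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys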

At this point, you can SSH in with the key file. Sweet! For my lab, in order to further secure it, I want to make it such that you HAVE to use the keyfile in order to connect.

  1. Reopen sshd_config with notepad
  2. Find this line, uncomment it and set it to no:

    PasswordAuthentication no

  3. Save and close
  4. PS> Restart-Service sshd
    • This may or may not be necessary

Now, you HAVE to use the private key (which exists only on your machine) to SSH to the server. Sweet!

The next step depends on your environment and which port you want to use. For me, I grabbed a random high-value port on my router and port forwarded it to 22 on my server. For demo purposes, I used 47474 (which I later changed). Now, on my machine I can either SSH to the Windows Server (which throws a CMD shell by default) or I can fire up tunneling to RDP into any machine in my lab.

Just doing a straight SSH:
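The command looks something like this (47474 being the forwarded port from above, MyPublicIp the placeholder public IP):

    ssh administrator@MyPublicIp -p 47474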

Now, to do RDP instead we just need to fire up the tunnel with a slightly different command

Note the -f and the -N along with the 12345:HV01:3389. MyPublicIp is the public IP I currently have for my lab, omitted for security:
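Reconstructed from those flags, the tunnel command is along these lines (-L sets up the local port forward):

    ssh -f -N -L 12345:HV01:3389 administrator@MyPublicIp -p 47474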

-f = fork into the background
-N = do not execute a remote command
12345 = the local port to which we are going to RDP
HV01 = the server name to which we want to RDP in the lab
3389 = the RDP port on the target server

When you run this command, a listener (:12345) is established on the local machine which is routed through the secure tunnel (:47474 -> :22) to :3389 on HV01
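To use the listener, just point the RDP client at it:

    mstsc /v:localhost:12345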

RDPing to my localhost on port 12345 – MSTSC prompts me for creds. I provide valid creds for the HV01 machine in my lab and:

Boom. We now have a secure tunnel that can only be accessed with the private key on my local machine which then allows me to RDP into my lab.

Hack Job, Security, Uncategorized

MCAS Lab – Auto Updating Discovery Data with Sample Data

Maybe you have a need to demo Microsoft Cloud App Security to your customers. Maybe you have a need for a lab that has constantly updated discovery data. Maybe creating a snapshot report every 30 days is good enough…maybe not. For me, I want the Discovery Dashboard to be populated with fresh data for demo purposes, and the logs from my home router just don't cut it – GBs of traffic to Netflix and Hulu and a taste of Twitter don't make for that compelling of a demo. I wanted a way to auto-update the global logs on a recurring basis in a “set it and forget it” manner.

  1. Deploy the log collector (Ubuntu FTW)
  2. Grab the Code and Config
  3. Create the Scheduled Task
  4. Forget it

#1 Deploy the log collector

https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker-ubuntu

Critical Pieces of information:

  1. Machine Name – UBTLOG01
  2. Machine IP – 192.168.50.163 (this isn’t my real IP but I’ll keep it consistent for the purpose of the doc)
  3. Log Collector Data Source – name I gave the data source in the MCAS portal
  4. Log Collector Data Source Type – Palo Alto – PA Series FW
  5. Data Source Type – FTP

MCAS Portal – Log Collectors

Now that the log collector is deployed, we can move on to the code and the scheduled task

#2 Code and Config

Download from GitHub here
Download the code and drop it into the folder from which you want the script to run and work.

I like using the CredentialManager module to register and hide credentials on my PowerShell automation machines.

PowerShell Gallery: Credential Manager 2.0
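As a rough sketch, registering the FTP credentials with that module looks something like this – the user name and password are placeholders, and the target name just needs to match the .env file below:

    Install-Module CredentialManager -Scope CurrentUser
    New-StoredCredential -Target MCAS -UserName ftpuser -Password 'P@ssw0rd!' -Persist LocalMachine

    # The script can later pull them back with:
    Get-StoredCredential -Target MCAS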

Line 44 of the PS1 code contains the path to the .env file (really just JSON) that holds all of the environmental variables necessary to run the script. Here's the format of the .env file:

{
    "LogCollectorVMName": "UBTLOG01",
    "LogCollectorHVHost": "DC03",
    "LogCollectorIP": "192.168.50.163",
    "LogCollectorDSName": "PaloFW-TSTLab",
    "CredManTarget": "MCAS",
    "LogfilePath": "E:/Jobs/MCASLogCollectorUpload"
}
LogCollectorVMName – name of the Ubuntu machine
LogCollectorHVHost – I am using Hyper-V to host the log collector machine
LogCollectorIP – IP address (LAN) for the Ubuntu machine
LogCollectorDSName – data source name assigned during creation in MCAS
CredManTarget – the target name used to retrieve the FTP credentials for pushing the log files
LogfilePath – path to all of the artifacts – since this is a JSON file, use / instead of \ for the path

  1. Install Credential Manager on your worker machine
  2. Register the ftp credentials in a target named MCAS (match your .env file)
  3. Drop the script and the .env file in the LogFilePath you assigned earlier – I am using a path of E:\Jobs\MCASLogCollectorUpload

#3 Scheduled Task

Import the provided scheduled task and tweak it to your environment (a PowerShell sketch of the import follows the steps below)

  1. Fix the user

  2. Tweak the trigger (if wanted)

  3. Set the paths for the Action

    Program Path: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

    Arguments: full path to the .PS1 file – E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.ps1

    Start in – path to all the files – E:\Jobs\MCASLogCollectorUpload
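If you prefer to do the import from PowerShell instead of the Task Scheduler UI, something like this should work – the XML file name and service account are placeholders:

    $xml = Get-Content 'E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.xml' -Raw
    Register-ScheduledTask -TaskName 'MCAS Log Upload' -Xml $xml -User 'LAB\svc-mcas' -Password 'P@ssw0rd!'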

#4 Forget it

At this point, run the scheduled task to ensure it is working. The PS1 script will even turn the VM on and off on-demand so that you can conserve the VM resources rather than having the Ubuntu machine running 24×7. If you are fast enough, you can FTP to the log collector yourself and see the file land – cd into the folder with the name of the Log Collector Data Source:

Once the file is uploaded, it will disappear from this folder on the FTP server. At that point, check in the governance log within MCAS and you should see this:

Now, your MCAS demo environment will stay fresh with recurring sample data.

Automation, Hack Job, MCAS, PowerShell, Security

Kali Linux, Hyper-V, PowerShell and VS Code

I am in the process of working towards my OSCP certification. As such, I needed a way to run a Kali Linux machine leveraging the OffSec-provided VM images on my Win10 box, and I needed tools that I am comfortable with that allow me to script easily and on demand. Since I am pretty deep in PowerShell, getting PWSH (how we launch PS on Linux) and Visual Studio Code up and running seemed logical. The instructions for installing PWSH in most blog posts aren't quite complete or are out of date. I am documenting the version of everything I am using here to make it work.

Environment

  • Windows 10 Professional 1809
  • Kali Linux VMWare version 2019.2
  • PowerShell 6.2
  • Visual Studio Code
  • Git

Step 1. Download the VM

Step 2. Convert the VM into a Hyper-V Image

Step 3. Import the VM into Hyper-V

Step 4. Update and Upgrade

Step 5. Install PWSH

Step 6. Install VS Code

Step 7. Install Git

Step 1. Download the VM

## Update – OffSec now offers Hyper-V images directly, so you can skip the conversion step

Download page here: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/

Q. Should you download 32 or 64-bit?

A. If you are going to run PWSH you need the 64-bit version as .Net Core is only supported on 64-bit Debian machines.

https://docs.microsoft.com/en-us/dotnet/core/linux-prerequisites?tabs=netcore2x#supported-linux-versions

Since I am going to land on Hyper-V, I downloaded the VMWare image.

Step 2. Convert the VM into a Hyper-V Image

There are a lot of blog posts on doing this. I followed the steps here: https://blogs.msdn.microsoft.com/timomta/2015/06/11/how-to-convert-a-vmware-vmdk-to-hyper-v-vhd/
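If you would rather skip the MVMC route from that post, qemu-img is a commonly used alternative for the VMDK-to-VHDX conversion – a minimal sketch, with example file names:

    qemu-img convert -f vmdk -O vhdx -o subformat=dynamic .\kali.vmdk .\kali.vhdx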

Step 3. Import the VM into Hyper-V

  • Select Location
  • Gen 1 VM (Image does not work with Gen 2)
  • 4096 MB of RAM
  • Connected to the Internet
  • Using the converted VMWare image
  • 4 cores

Boot it up

Step 4. Update and Upgrade

NOTE: Out of the box username and password are root and toor respectively. Recommend you change this ASAP.

  • Login
  • Open a terminal (left hand side)
  • sudo apt-get update && sudo apt-get upgrade
    • It might throw a warning or error here that a different process has a lock on some necessary files. If that is the case, wait a sec and rerun the prior command
  • Be patient
  • Follow the onscreen prompts – generally accept
    • Should non-super users be able to capture packets – yes
  • Reboot

     

Step 5. Install PWSH

 

From here, I followed the Microsoft steps in order to install PowerShell on Kali:

https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-6#kali

 

# Download & Install prerequisites
wget http://ftp.us.debian.org/debian/pool/main/i/icu/libicu57_57.1-6+deb9u2_amd64.deb
dpkg -i libicu57_57.1-6+deb9u2_amd64.deb
apt-get update && apt-get install -y curl gnupg apt-transport-https

# Add Microsoft public repository key to APT
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -

# Add Microsoft package repository to the source list
echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" | tee /etc/apt/sources.list.d/powershell.list

# Install PowerShell package
apt-get update && apt-get install -y powershell

# Start PowerShell
pwsh

 

Rather than putting stretch main into powershell.list, I put it into microsoft.list instead.

PowerShell is installed!

Step 6. Install Visual Studio Code

https://code.visualstudio.com/docs/setup/linux#_debian-and-ubuntu-based-distributions

I downloaded the .deb file from here:

https://go.microsoft.com/fwlink/?LinkID=760868

And then changed to the download directory and ran the install command:

sudo apt install ./code_1.35.1-1560350270_amd64.deb

Once VS Code finishes installing, pop open the editor and then go to Extensions and add the PowerShell extension

You are now ready to PWSH on Kali!

Step 7. Install Git

    sudo apt-get install git

Ready to rock. We now have OffSec’s Kali Linux running in Win10 Hyper-V with PowerShell, Visual Studio Code and Git installed.

Automation, OffSec, OSCP, PowerShell, Security

ARM – Get Publishers, Offers and SKUs

Just a quick PowerShell script to get all of the Publishers, Offers and SKUs for the various VM images that are available for deployment through ARM. These are the parameters that are necessary in order to deploy a VM using the Microsoft.Compute/virtualMachines provider.

This is as of 2/4/2016 with Azure PowerShell December 2015 installed.

$Location = "WestUS" #Each locale might be different. Choose the location where you intend to deploy

$lstPublishers = Get-AzureRMVMImagePublisher -Location $Location

ForEach ($pub in $lstPublishers) {
    #Get the offers
    $lstOffers = Get-AzureRMVMImageOffer -Location $Location -PublisherName $pub.PublisherName

    ForEach ($off in $lstOffers) {
        $lstSkus = Get-AzureRMVMImageSku -Location $Location -PublisherName $pub.PublisherName -Offer $off.Offer

        ForEach ($sku in $lstSkus) {
            "" + $sku.Skus + "," + $sku.Offer + "," + $sku.PublisherName | Out-File ".\myVMSkus.csv" -Encoding ascii -Append
        }
    }
}

The script is very slow, but I have found it difficult to find a resource that simply lists this information.  The information may be out there, but this script gives me a way within minutes to have a completely updated list I can use for my PowerShell scripts and ARM templates.

ARM, Azure, Infrastructure as Code