MCAS Lab – Auto Updating Discovery Data with Sample Data

Maybe you need to demo Microsoft Cloud App Security to your customers. Maybe you need a lab with constantly updated discovery data. Maybe creating a snapshot report every 30 days is good enough…maybe not. For me, I want the Discovery Dashboard populated with fresh data for demo purposes, and the logs from my home router just don’t cut it: GBs of traffic to Netflix and Hulu and a taste of Twitter don’t make for a compelling demo. I wanted a way to auto-update the global logs on a recurring basis in a “set it and forget it” manner.

  1. Deploy the log collector (Ubuntu FTW)
  2. Grab the Code and Config
  3. Create the Scheduled Task
  4. Forget it

#1 Deploy the log collector

https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker-ubuntu

Critical Pieces of information:

  1. Machine Name – UBTLOG01
  2. Machine IP – 192.168.50.163 (this isn’t my real IP but I’ll keep it consistent for the purpose of the doc)
  3. Log Collector Data Source – name I gave the data source in the MCAS portal
  4. Log Collector Data Source Type – Palo Alto – PA Series FW
  5. Data Source Type – FTP

MCAS Portal – Log Collectors
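
Once the collector is up, a quick sanity check from the Windows worker machine confirms its FTP endpoint is reachable. This isn’t part of the script – just a one-liner to run by hand, using the IP from the list above:

# Confirm the log collector is listening for FTP uploads
Test-NetConnection -ComputerName 192.168.50.163 -Port 21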

Now that the log collector is deployed, we can move on to the code and the scheduled task.

#2 Code and Config

Download the code from GitHub here and drop it into the folder you want the script to run from and work in.

I like using the CredentialManager module to register and hide credentials on my PowerShell automation machines.

PowerShell Gallery: Credential Manager 2.0
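
Registering and retrieving the FTP credential with that module is just two cmdlets. Here’s a quick sketch – the user name and password are placeholders (register whatever FTP credentials your log collector data source expects), and since stored credentials are per-user, run the registration as the same account the scheduled task will use:

# One-time setup: store the FTP credential under a target named 'MCAS'
New-StoredCredential -Target 'MCAS' -UserName 'discovery' -Password 'P@ssw0rd!' -Persist LocalMachine

# In the script: pull the credential back out when it's time to upload
$ftpCred = Get-StoredCredential -Target 'MCAS'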

Line 44 of the PS1 script holds the path to the .env file (really just JSON) that contains all of the environment variables necessary to run the script. Here’s the format of the .env file:

{
    "LogCollectorVMName": "UBTLOG01",
    "LogCollectorHVHost": "DC03",
    "LogCollectorIP": "192.168.50.163",
    "LogCollectorDSName": "PaloFW-TSTLab",
    "CredManTarget": "MCAS",
    "LogfilePath": "E:/Jobs/MCASLogCollectorUpload"
}
LogCollectorVMName – name of the Ubuntu machine
LogCollectorHVHost – the Hyper-V host running the log collector VM (I am using Hyper-V to host it)
LogCollectorIP – LAN IP address of the Ubuntu machine
LogCollectorDSName – data source name assigned during creation in MCAS
CredManTarget – Credential Manager target name used to retrieve the FTP credentials for pushing the log files
LogfilePath – path to the script and all of its working files – since this is a JSON file, use / instead of \ in the path
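
For reference, loading that .env file is just a ConvertFrom-Json call. Here’s a minimal sketch of the pattern – the .env file name and variable names are illustrative, not necessarily what the actual PS1 uses:

# Read the .env file (plain JSON) into a PowerShell object
$envFile = Join-Path $PSScriptRoot 'MCAS.env'         # placeholder name – point this at your .env file
$config  = Get-Content -Path $envFile -Raw | ConvertFrom-Json

# The keys map straight to the values described above
$vmName   = $config.LogCollectorVMName                # UBTLOG01
$hvHost   = $config.LogCollectorHVHost                # DC03
$ftpHost  = $config.LogCollectorIP                    # 192.168.50.163
$dsName   = $config.LogCollectorDSName                # PaloFW-TSTLab
$credName = $config.CredManTarget                     # MCAS
$workPath = $config.LogfilePath                       # E:/Jobs/MCASLogCollectorUpload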
  1. Install the CredentialManager module on your worker machine
  2. Register the FTP credentials in a target named MCAS (to match your .env file)
  3. Drop the script and the .env file in the LogfilePath you assigned earlier – I am using a path of E:\Jobs\MCASLogCollectorUpload

#3 Scheduled Task

Import the provided scheduled task and tweak it for your environment (or build the task from PowerShell – see the sketch after this list):

  1. Fix the user

  2. Tweak the trigger (if wanted)

  3. Set the paths for the Action

    Program Path: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

    Arguments: full path to the .PS1 file – E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.ps1

    Start in: path to all of the files – E:\Jobs\MCASLogCollectorUpload
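
If you would rather build the task from PowerShell than import the XML, something along these lines produces an equivalent task. The task name, run-as account, trigger time, and the -NoProfile/-ExecutionPolicy arguments are my assumptions rather than values from the provided task – adjust to taste:

# Build the scheduled task in PowerShell instead of importing the XML
$action  = New-ScheduledTaskAction `
    -Execute "$env:SystemRoot\system32\WindowsPowerShell\v1.0\powershell.exe" `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.ps1"' `
    -WorkingDirectory 'E:\Jobs\MCASLogCollectorUpload'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'MCAS Log Upload' -Action $action -Trigger $trigger -User 'DOMAIN\svc-mcas' -Password 'P@ssw0rd!'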

#4 Forget it

At this point, run the scheduled task to ensure it is working. The PS1 script will even turn the VM on and off on demand so that you can conserve VM resources rather than having the Ubuntu machine running 24×7 – the general shape of that start/upload/stop flow is sketched below. If you are fast enough, you can FTP to the log collector yourself and watch the file land – cd into the folder with the name of the Log Collector Data Source.
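
Conceptually, one run of the script looks something like this simplified sketch, reusing the variables from the config and credential snippets above. The real script also waits for the collector to come fully up, generates the sample log file, and handles errors:

# Simplified shape of one run: start the collector VM, FTP the sample log, shut the VM back down
Start-VM -Name $vmName -ComputerName $hvHost
Start-Sleep -Seconds 120                              # give Ubuntu and the Docker collector time to start

$logFile = Join-Path $workPath 'sample-palo.log'      # placeholder file name
$ftp = New-Object System.Net.WebClient
$ftp.Credentials = $ftpCred.GetNetworkCredential()
$ftp.UploadFile("ftp://$ftpHost/$dsName/$(Split-Path $logFile -Leaf)", $logFile)

Stop-VM -Name $vmName -ComputerName $hvHost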

Once the file is uploaded, it will disappear from that folder on the FTP server. At that point, check the governance log within MCAS and you should see an entry showing the uploaded log file was parsed successfully.

Now, your MCAS demo environment will stay fresh with recurring sample data.
