MCAS Lab – Auto Updating Discovery Data with Sample Data

Maybe you need to demo Microsoft Cloud App Security to your customers. Maybe you need a lab with constantly updated discovery data. Maybe creating a snapshot report every 30 days is good enough…maybe not. For me, I want the Discovery Dashboard populated with fresh data for demo purposes, and the logs from my home router just don't cut it: GBs of traffic to Netflix and Hulu plus a taste of Twitter don't make for a compelling demo. I wanted a way to auto-update the global logs on a recurring basis in a "set it and forget it" manner.

  1. Deploy the log collector (Ubuntu FTW)
  2. Grab the Code and Config
  3. Create the Scheduled Task
  4. Forget it

#1 Deploy the log collector

https://docs.microsoft.com/en-us/cloud-app-security/discovery-docker-ubuntu

Critical Pieces of information:

  1. Machine Name – UBTLOG01
  2. Machine IP – 192.168.50.163 (this isn’t my real IP but I’ll keep it consistent for the purpose of the doc)
  3. Log Collector Data Source – name I gave the data source in the MCAS portal
  4. Log Collector Data Source Type – Palo Alto – PA Series FW
  5. Data Source Type – FTP

(Screenshot: MCAS portal – Log collectors.)

Now that the log collector is deployed, we can move on to the code and the scheduled task.

#2 Code and Config

Download the code from GitHub here and drop it into the folder you want the script to run from and work in.

I like using the CredentialManager module to register and hide credentials on my PowerShell automation machines.

PowerShell Gallery: Credential Manager 2.0

Line 44 of the .ps1 script holds the path to the .env file (really just JSON) that contains all of the environment variables necessary to run the script. Here's the format of the .env file:

{
    "LogCollectorVMName": "UBTLOG01",
    "LogCollectorHVHost": "DC03",
    "LogCollectorIP": "192.168.50.163",
    "LogCollectorDSName": "PaloFW-TSTLab",
    "CredManTarget": "MCAS",
    "LogfilePath": "E:/Jobs/MCASLogCollectorUpload"
}
  • LogCollectorVMName – name of the Ubuntu machine
  • LogCollectorHVHost – the host running the log collector machine (I am using Hyper-V)
  • LogCollectorIP – IP address (LAN) of the Ubuntu machine
  • LogCollectorDSName – data source name assigned during creation in MCAS
  • CredManTarget – target name used to retrieve the FTP credentials for pushing the log files
  • LogfilePath – path to all of the artifacts; since this is a JSON file, use / instead of \ in the path
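
For what it's worth, the script can read this file straight into an object with ConvertFrom-Json; a quick illustration (the .env file name here is an assumption):

# Read the .env (JSON) file into a PowerShell object; property names match the file above
$config = Get-Content -Raw 'E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.env' | ConvertFrom-Json
$config.LogCollectorVMName    # returns UBTLOG01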
  1. Install the CredentialManager module on your worker machine
  2. Register the FTP credentials in a target named MCAS (match your .env file) – see the sketch below
  3. Drop the script and the .env file in the LogfilePath you assigned earlier – I am using a path of E:\Jobs\MCASLogCollectorUpload
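
A minimal sketch of steps 1 and 2, using the CredentialManager cmdlets (the username and password here are obviously placeholders):

# Install the module and store the FTP credential under the MCAS target (placeholder creds)
Install-Module CredentialManager
New-StoredCredential -Target 'MCAS' -UserName 'discovery' -Password 'P@ssw0rd!' -Persist LocalMachine

# Later, the script can retrieve the same credential with:
Get-StoredCredential -Target 'MCAS'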

#3 Scheduled Task

Import the provided scheduled task and tweak it to your environment (a scripted alternative is sketched after this list):

  1. Fix the user

  2. Tweak the trigger (if wanted)

  3. Set the paths for the Action

    Program Path: %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe

    Arguments: full path to the .PS1 file – E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.ps1

    Start in – path to all the files – E:\Jobs\MCASLogCollectorUpload
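
If you would rather script the import than click through Task Scheduler, something like this should work (the task XML file name, task name and run-as account are all placeholders):

# Import the provided task XML and set the account it runs under (names are placeholders)
Register-ScheduledTask -TaskName 'MCAS Log Upload' `
    -Xml (Get-Content 'E:\Jobs\MCASLogCollectorUpload\MCAS_Upload-Log.xml' | Out-String) `
    -User 'LAB\svc-mcas' -Password 'P@ssw0rd!'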

#4 Forget it

At this point, run the scheduled task to ensure it is working. The .ps1 script will even turn the VM on and off on demand so you can conserve resources rather than leaving the Ubuntu machine running 24×7. If you are fast enough, you can FTP to the log collector yourself and watch the file land in the folder named after the Log Collector Data Source.
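
For reference, the heart of the script boils down to something like the following – a minimal sketch under the .env and credential assumptions above, not the repo's actual code:

# Minimal sketch: read config, grab credentials, boot the collector, push the log, shut down
$config = Get-Content -Raw "$PSScriptRoot\MCAS_Upload-Log.env" | ConvertFrom-Json
$cred = Get-StoredCredential -Target $config.CredManTarget

Start-VM -Name $config.LogCollectorVMName -ComputerName $config.LogCollectorHVHost
Start-Sleep -Seconds 90    # crude wait for the collector to finish booting

$ftp = New-Object System.Net.WebClient
$ftp.Credentials = $cred.GetNetworkCredential()
# 'sample.log' is a placeholder for whatever log file you are rotating in
$ftp.UploadFile("ftp://$($config.LogCollectorIP)/$($config.LogCollectorDSName)/sample.log", (Join-Path $config.LogfilePath 'sample.log'))

Stop-VM -Name $config.LogCollectorVMName -ComputerName $config.LogCollectorHVHost -Force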

Once the file is uploaded, it will disappear from this folder on the FTP server. At that point, check the governance log within MCAS and you should see the uploaded file being processed.

Now, your MCAS demo environment will stay fresh with recurring sample data.

Automation, Hack Job, MCAS, PowerShell, Security

Kali Linux, Hyper-V, PowerShell and VS Code

I am in the process of working towards my OSCP certification. As such, I needed a way to run a Kali Linux machine using the OffSec-provided VM images on my Win10 box, and I needed tools I am comfortable with that let me script easily and on demand. Since I am pretty deep in PowerShell, getting PWSH (how we launch PowerShell on Linux) and Visual Studio Code up and running seemed logical. The PWSH installation instructions in most blog posts are incomplete or out of date, so I am documenting the version of everything I am using here to make it work.

Environment

  • Windows 10 Professional 1809
  • Kali Linux VMware version 2019.2
  • PowerShell 6.2
  • Visual Studio Code
  • Git

Step 1. Download the VM

Step 2. Convert the VM into a Hyper-V Image

Step 3. Import the VM into Hyper-V

Step 4. Update and Upgrade

Step 5. Install PWSH

Step 6. Install VS Code

Step 7. Install Git

Step 1. Download the VM

Update – OffSec now offers Hyper-V images directly, so you can skip the conversion in Step 2.

Download page here: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/

Q. Should you download 32 or 64-bit?

A. If you are going to run PWSH, you need the 64-bit version, as .NET Core is only supported on 64-bit Debian machines.

https://docs.microsoft.com/en-us/dotnet/core/linux-prerequisites?tabs=netcore2x#supported-linux-versions

Since I am going to land on Hyper-V, I downloaded the VMware image.

Step 2. Convert the VM into a Hyper-V Image

There are a lot of blog posts on doing this. I followed the steps here: https://blogs.msdn.microsoft.com/timomta/2015/06/11/how-to-convert-a-vmware-vmdk-to-hyper-v-vhd/
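
The short version: that post uses the Microsoft Virtual Machine Converter 3.0 cmdlets, which boil down to roughly the following (the install path and file names are assumptions on my part):

# Convert the Kali VMDK into a VHD using the MVMC 3.0 cmdlets (paths are placeholders)
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'D:\Kali\Kali-Linux-2019.2-vm-amd64.vmdk' `
    -DestinationLiteralPath 'D:\Kali\Kali.vhd' -VhdType DynamicHardDisk -VhdFormat Vhd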

Step 3. Import the VM into Hyper-V

  • Select Location
  • Gen 1 VM (Image does not work with Gen 2)
  • 4096 MB of RAM
  • Connected to the Internet
  • Using the converted VMware image
  • 4 cores

Boot it up
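
If you prefer to script the import, here is a rough Hyper-V PowerShell equivalent of the settings above (the VM name, VHD path and switch name are placeholders):

# Create and start a Gen 1 VM matching the settings above (names/paths are placeholders)
New-VM -Name 'KALI01' -Generation 1 -MemoryStartupBytes 4096MB `
    -VHDPath 'D:\Kali\Kali.vhd' -SwitchName 'External'
Set-VMProcessor -VMName 'KALI01' -Count 4
Start-VM -Name 'KALI01'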

Step 4. Update and Upgrade

NOTE: Out of the box, the username and password are root and toor, respectively. I recommend you change these ASAP.

  • Login
  • Open a terminal (left hand side)
  • sudo apt-get update && sudo apt-get upgrade
    • It might throw a warning or error here that a different process has a lock on some necessary files. If that is the case, wait a sec and rerun the prior command
  • Be patient
  • Follow the onscreen prompts – generally accept
    • Should non-super users be able to capture packets – yes
  • Reboot

Step 5. Install PWSH

From here, I followed the Microsoft steps to install PowerShell on Kali:

https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-6#kali

# Download & install prerequisites
wget http://ftp.us.debian.org/debian/pool/main/i/icu/libicu57_57.1-6+deb9u2_amd64.deb
dpkg -i libicu57_57.1-6+deb9u2_amd64.deb
apt-get update && apt-get install -y curl gnupg apt-transport-https

# Add the Microsoft public repository key to APT
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -

# Add the Microsoft package repository to the source list
echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" | tee /etc/apt/sources.list.d/powershell.list

# Install the PowerShell package
apt-get update && apt-get install -y powershell

# Start PowerShell
pwsh

Rather than putting the stretch main entry into powershell.list, I put it into microsoft.list instead.

PowerShell is installed!

Step 6. Install Visual Studio Code

https://code.visualstudio.com/docs/setup/linux#_debian-and-ubuntu-based-distributions

I downloaded the .deb file from here:

https://go.microsoft.com/fwlink/?LinkID=760868

And then changed to the download directory and ran the install command:

sudo apt install ./code_1.35.1-1560350270_amd64.deb

Once VS Code finishes installing, pop open the editor, head to Extensions, and add the PowerShell extension.
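
If you prefer the terminal, the VS Code CLI can do the same thing (assuming the extension ID is still ms-vscode.PowerShell):

code --install-extension ms-vscode.PowerShell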

You are now ready to PWSH on Kali!

Step 7. Install Git

    sudo apt-get install git

Ready to rock. We now have OffSec’s Kali Linux running in Win10 Hyper-V with PowerShell, Visual Studio Code and Git installed.

Automation, OffSec, OSCP, PowerShell, Security

ARM – Get Publishers, Offers and SKUs

Just a quick PowerShell script to get all of the Publishers, Offers and SKUs for the various VM images that are available for deployment through ARM.  These are the parameters necessary to deploy a VM using the Microsoft.Compute/virtualMachines provider.

This is as of 2/4/2016 with Azure PowerShell December 2015 installed.

$Location = "WestUS" # Each locale might be different. Choose the location where you intend to deploy.

$lstPublishers = Get-AzureRMVMImagePublisher -Location $Location

ForEach ($pub in $lstPublishers) {
    # Get the offers for this publisher
    $lstOffers = Get-AzureRMVMImageOffer -Location $Location -PublisherName $pub.PublisherName

    ForEach ($off in $lstOffers) {
        # Get the SKUs for this offer
        $lstSkus = Get-AzureRMVMImageSku -Location $Location -PublisherName $pub.PublisherName -Offer $off.Offer

        ForEach ($sku in $lstSkus) {
            "" + $sku.Skus + "," + $sku.Offer + "," + $sku.PublisherName | Out-File ".\myVMSkus.csv" -Encoding ascii -Append
        }
    }
}

The script is very slow, but I have found it difficult to find a resource that simply lists this information.  The information may be out there, but this script gives me a way within minutes to have a completely updated list I can use for my PowerShell scripts and ARM templates.
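
As a usage note, these three values (plus a version) are what you feed to Set-AzureRmVMSourceImage when building a VM config; a hypothetical example using one row of the output:

# Hypothetical: plugging one Publisher/Offer/Sku row from the CSV into a VM configuration
$vm = New-AzureRmVMConfig -VMName 'demo01' -VMSize 'Standard_D2'
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName 'MicrosoftWindowsServer' `
    -Offer 'WindowsServer' -Skus '2012-R2-Datacenter' -Version 'latest'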

ARM, Azure, Infrastructure as Code

ARM – Recreating VM Off Existing VHDs

Note – This blog pertains to the November 2015 release of Azure PowerShell

At some point, I apparently told Azure Resource Manager to delete the VM that runs SQL for my SCOM environment running in Azure.  While I completely disagree with the portal’s interpretation of my button clicks, my VM is missing and I need it back.  Thankfully, when you delete a VM through the portal (accidentally or otherwise…or not at all and it just magically disappears), the disks are left behind in the storage account.  In this case, this is a good thing.  At least I can recover.

The documentation around the new AzureRM cmdlets still has a gap or nine, so I wasn't able to easily dig up a script that creates a VM off an existing OS disk and attaches all of the data disks as well.  Using PS help and assuming the process was still somewhat like what we had to do under the older cmdlets, I put together the following script:

$om03 = New-AzureRmVMConfig -VMName om03 -VMSize Standard_D2
$om03 | Set-AzureRmVMOSDisk `
    -VhdUri https://labtstazom0sa.blob.core.windows.net/vhds/om03osdisk.vhd `
    -Name om03osdisk -CreateOption attach -Windows -Caching ReadWrite

$StorageAccountURI = "https://labtstazom0sa.blob.core.windows.net/vhds/"
$numDisks = 4

For ($i = 0; $i -lt $numDisks; $i++) {
    # Attach each existing data disk by URI, keeping the original names and LUNs
    $om03 | Add-AzureRMVMDataDisk -Name ("datadisk" + $i) `
        -VhdUri ($StorageAccountURI + "OM03datadisk" + $i + ".vhd") `
        -LUN $i -Caching ReadWrite `
        -CreateOption Attach -DiskSizeInGB 20
}

New-AzureRMVM -ResourceGroupName LabTSTAZRGOM `
    -Location "West US" -VM $om03 -Verbose

I needed to make sure the data disk naming and URIs matched and that I was provisioning to the right region, resource group, etc.  Executing the script resulted in the following:

(Screenshot: the error returned by New-AzureRMVM.)

I forgot to attach the network adapter.  Thankfully, the delete executed by the ARM gremlins left the interface behind, so it should be as simple as doing a reattach and then I should be good to go.  Herein lies a pretty significant issue.  It would seem that, with this release of Azure PowerShell, the cmdlet Add-AzureRmVMNetworkInterface has been removed.  With that, I was not able to easily figure out how to attach an existing network interface to a new VM config through PowerShell using the AzureRM cmdlets.  You can do this with a template, but I want to just bang this out and get it done.  This leaves me with two options:

  1. Destroy the existing NIC and create a new one, or
  2. Do something somewhat hacky by stealing the network profile from one of the other VMs, modify it and then attach that network profile to my OM03 config.

I am guessing there is probably an additional route invoking .NET methods to create a net-new profile and then assign the NIC, but I will leave that research for later.  The first option would be straightforward; however, it might leave me with some cleanup in the lab afterwards (DNS, IPs, etc.).  Rather than messing with that, I decide to try the second option and steal the profile from the OM01 VM and modify it:

$nic = Get-AzureRmNetworkInterface -Name om03nic -ResourceGroupName LabTSTAZRGOM
$netprof = (Get-AzureRMVM -VMName OM01 -ResourceGroupName LabTSTAZRGOM).NetworkProfile
$netprof.NetworkInterfaces[0].ReferenceUri = $nic.id

$om03.NetworkProfile = $netprof

I get the existing OM03NIC, get the network profile from OM01 and then stuff the ID for the existing NIC into the profile.  Once I have this, I assign the network profile to the new VM Config.  This process actually successfully navigates the getters and setters.  After running this, I re-run the New-AzureRMVM cmdlet and I get a success:

(Screenshot: New-AzureRMVM completes successfully.)

Now, if it really worked, SQL will be running and I should be able to launch SCOM.  Logging in and popping open the console:

(Screenshot: the SCOM console up and running.)

Success!

ARM, Azure, Hack Job

ARM – Visual Studio Deployment With Oct 2015 Azure PowerShell Preview

With the release of the updated Azure PowerShell 1.0 Preview (October 2015), I was curious to see how much of a change would be required for me to continue to use Visual Studio 2015 to provision dev/test environments into my Azure account.  When VS pushes an ARM template, it executes an auto-generated PowerShell script named Deploy-AzureResourceGroup.ps1.


Here are the changes I made in order to get the script to execute successfully.

Change 1 – Module Check

The old code checked for the existence of the AzureResourceManager module.  I updated the code to check for the new module named AzureRM

if (-NOT (Get-Module -ListAvailable | Where-Object {($_.Name -eq 'AzureRM') })) {
    Throw "The version of the Azure PowerShell cmdlets installed on this machine are not compatible with this script."
}

Change 2 – Import the module.  Because the script still needs access to the Azure Service Management cmdlets, both the Azure and AzureRM modules need to be imported

Import-Module AzureRM -ErrorAction SilentlyContinue
Import-Module Azure -ErrorAction SilentlyContinue

Change 3 – Get the storage account key.  The old code switches between AzureResourceManager and AzureServiceManagement depending on how the storage account was provisioned.  The switching is no longer needed since now both the ASM and ARM cmdlets can be loaded and accessed at the same time.  The code below will now retrieve the key in either case

if ($StorageAccountResourceGroupName) {
    $StorageAccountKey = (Get-AzureRMStorageAccountKey -ResourceGroupName $StorageAccountResourceGroupName -Name $StorageAccountName).Key1
}
else {
    $StorageAccountKey = (Get-AzureStorageKey -StorageAccountName $StorageAccountName).Primary
}

Change 4 – Create a connection for ARM.  VS stores connection information for the old cmdlets and for ASM.  For the new AzureRM cmdlets, a new connection needs to be established.  In order to do this, I added in the new Login-AzureRMAccount cmdlet but only call it if there does not already exist a connection to ARM

try {
    $AzureRMContext = Get-AzureRMContext
} catch {
    Login-AzureRMAccount
}

Note – If you connect to more than one subscription or need to authenticate with more than one set of credentials, this cmdlet will have to be wrapped in additional logic, along the lines of the sketch below
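
A rough, hypothetical shape for that wrapper (the subscription name is a placeholder):

# Hypothetical: pin the deployment to a specific subscription after logging in
try {
    $AzureRMContext = Get-AzureRMContext
} catch {
    Login-AzureRMAccount
}
Select-AzureRmSubscription -SubscriptionName 'Lab-Subscription'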

Change 5 – Deploy the resources.  In the prior version of the code, the deployment is done directly through the New-AzureResourceGroup cmdlet.  This cmdlet would update an existing resource group and force the replacement of resources, or deploy the RG from scratch if it did not exist.  In the new version, we need to use the Get-AzureRMResourceGroup cmdlet to see if the RG is already there.  If not, create it.  Then, we need to use the New-AzureRMResourceGroupDeployment cmdlet to actually land the resources in the RG

$ResourceGroup = Get-AzureRMResourceGroup | Where-Object {$_.ResourceGroupName -eq $ResourceGroupName}
if ($ResourceGroup -eq $null) {
    Write-Host "Provisioning Resource Group: $ResourceGroupName in Location: $ResourceGroupLocation"
    New-AzureRMResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation
} else {
    Write-Host "Resource Group: $ResourceGroupName Exists"
}

New-AzureRMResourceGroupDeployment -Name $ResourceGroupName `
    -ResourceGroupName $ResourceGroupName `
    -TemplateFile $TemplateFile `
    -TemplateParameterFile $TemplateParametersFile `
    @OptionalParameters `
    -Force -Verbose

That’s the gist of the changes.  The only difference in the actual deployment through VS is that I now get prompted for the AzureRM credentials in order to connect if a connection does not already exist.


Happy cloud deploying!

Example Script

ARM, Azure, Infrastructure as Code