Azure storage: working with containers and SAS keys

This article describes how to work with Azure storage containers and securely write data files using SAS URIs with Python.

Storage containers are a way to organize a collection of blobs in the public cloud, much like folders in a file system. You can manage user access to containers using role-based access control (RBAC), just as with other cloud resources. Another, more anonymous, way to manage access is with Shared Access Signature (SAS) keys.

Suppose you are working with a producer and want to give them a way to write files to your cloud storage container without being able to read any files. Similarly you want to let a consumer read data from the container without being able to make any changes or read from other containers. SAS keys provide a simple way to manage access to a storage container without the complexity of managing role-based access. Anyone who has a valid key can access the resource.

In this example you could give the producer a write-only key, and the consumer a key with read and list permissions, and set expiry dates for both keys for the duration of the contract. For convenience a SAS key can be provided in the form of a URI, also known as a SAS URI.

A limitation of SAS keys is that access is only as secure as your key management. If your consumer were to share their read key with a third party or store it insecurely, then anyone with access to the key could read the data. SAS keys are therefore most useful for limited-duration data exchange where there is a trusted key-management process.
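To make the URI form concrete, here's a minimal sketch that splits a made-up SAS URI into its components using the Python standard library (the account, container, and signature values are invented for illustration):

```python
from urllib import parse

# A made-up SAS URI: account "myaccount", container "mycontainer",
# with a write-only (sp=w) token valid until the se= date
sas_uri = ('https://myaccount.blob.core.windows.net/mycontainer'
           '?sv=2017-04-17&sr=c&sp=w&st=2018-01-01T00%3A00%3A00Z'
           '&se=2018-04-01T00%3A00%3A00Z&sig=FAKESIG123')

parts = parse.urlparse(sas_uri)
print(parts.netloc)             # storage account endpoint
print(parts.path)               # container
query = parse.parse_qs(parts.query)
print(query['sp'])              # permissions granted by the token
```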

Creating a storage container and SAS URIs using CLI

You can create storage containers and SAS URIs using the Azure portal or from the command line.

The Bash script below can be run from the Azure Cloud Shell. It uses the Azure CLI to create a storage account, a container, and two SAS URIs: one with read-list permissions, and one with write-only permissions. It’s also on github here.

# Script to create a storage account with SAS URIs

# command line arguments (assumed order; adjust as needed)
RGNAME=$1         # resource group name
LOCATION=$2       # e.g. westus2
SANAME=$3         # storage account name
CONTAINERNAME=$4  # container name

# set the start date to today, and expiry date 90 days in the future - change this as needed
SASSTART=`date +%Y-%m-%d`'T00:00:00Z'
EXPIRY=`date -d "+90 days" +%Y-%m-%d`'T00:00:00Z'

# create the resource group (keeps going if already exists)
az group create --name $RGNAME --location $LOCATION

# create a storage account (keeps going if already exists)
az storage account create --name $SANAME --resource-group $RGNAME

# get a storage account key (jq -r strips the surrounding quotes)
KEY=`az storage account keys list -g $RGNAME -n $SANAME | jq -r .[0].value`

# create a container using the key
az storage container create -n $CONTAINERNAME --account-name $SANAME --account-key $KEY

# create a write-only SAS token on the container and get the key
SASKEY=`az storage container generate-sas --account-name $SANAME --account-key $KEY --name $CONTAINERNAME \
--permissions w --start $SASSTART --expiry $EXPIRY`

# remove quotes
SASKEY=`echo $SASKEY | tr -d '"'`

# return the write-only SAS URI which is used in the Workplace Analytics Settings page
echo "https://$SANAME.blob.core.windows.net/$CONTAINERNAME?$SASKEY"

# create a read-list-only SAS token on the container and get the key
SASKEY=`az storage container generate-sas --account-name $SANAME --account-key $KEY --name $CONTAINERNAME \
--permissions rl --start $SASSTART --expiry $EXPIRY`

# remove quotes
SASKEY=`echo $SASKEY | tr -d '"'`

# return a read-only SAS URI which can be used by an analyst to access data
echo "https://$SANAME.blob.core.windows.net/$CONTAINERNAME?$SASKEY"

Writing data to a write-only SAS URI using Python

Assuming you’ve created SAS URIs with the required permissions and date range, here’s a Python example of using a write-only SAS URI to write data to an Azure container. It takes some text as a command line argument, and writes it to a blob in the container. This example can be found on github here.

'''test program to write an Azure block blob using a SAS URI'''
import argparse
from urllib import request, parse

def put_blob(sa_uri, container, blob_name, sas_key, blob_text):
    '''Write blob data using an HTTP PUT'''
    opener = request.build_opener(request.HTTPHandler)
    urlrequest = request.Request(
        sa_uri + container + '/' + blob_name + '?' + sas_key, data=blob_text)
    urlrequest.add_header('x-ms-blob-type', 'BlockBlob')
    urlrequest.get_method = lambda: 'PUT'
    response = opener.open(urlrequest)
    print('Status: ' + str(response.getcode()))

def main():
    '''Main routine, start by parsing the URL argument'''
    arg_parser = argparse.ArgumentParser()
    arg_parser.add_argument(
        '--uri', '-u', required=True, action='store', help='SAS URI with write perms, in quotes')
    arg_parser.add_argument(
        '--blobname', '-n', action='store', help='Name of blob file to create')
    arg_parser.add_argument(
        '--text', '-t', action='store', help='blob file text to write')

    args = arg_parser.parse_args()
    uri = args.uri

    # use a default blob name and text if not specified
    blob_name = 'deleteme.txt' if args.blobname is None else args.blobname
    blob_text = b'Blob write test message.' if args.text is None else str.encode(args.text)

    # split the URI into components
    uri_tuple = parse.urlparse(uri)
    base_uri = 'https://' + uri_tuple.netloc
    container = uri_tuple.path
    sas_key = uri_tuple.query

    # write the blob
    put_blob(base_uri, container, blob_name, sas_key, blob_text)

if __name__ == "__main__":
    main()

Deploying an Azure Data Science VM from the command line

This article describes how to deploy an Azure Data Science virtual machine from the command line using Azure CLI 2.0.

Setting up a compute environment for data science work can be challenging for several reasons:

  • Identifying the software you need, and setting up all the required open source and other tools can be time-consuming and complex.
  • Figuring out all the versions of individual tools which play nicely together is a hard problem.
  • You could be working with sensitive data that needs to stay within a customer-controlled security boundary. Your trusty desktop/laptop won’t do. Data compliance is an increasingly important and complex area to navigate.

The Azure Data Science virtual machine (DSVM) images provide a quick way to get a data science and machine learning environment on virtual machines without requiring any software installation or configuration.

There are pre-configured images available for Ubuntu, Windows and CentOS. They each come with a comprehensive set of software including Jupyter Notebook Server with R, Python, development tools & IDEs, data movement and management systems, machine learning, deep learning, big data and many other tools.

You have the option of deploying DSVMs interactively through the Azure Portal, but it can save time to simply run a script to deploy a VM in a single step. If you need a client or an operator to deploy the VM infrastructure within their Azure subscription in order to maintain a security boundary, you can provide a script to run without having to document all the interactive steps.

Because the data science VMs are Azure Marketplace images, you can’t just issue a vm create command in the CLI and point to the image. You need to include the Marketplace plan name, product, and publisher in the create command.

Here’s an example bash script which uses Azure CLI 2.0 to automate the creation of Microsoft Windows 2016 Data Science VM, with an attached data disk. The script takes the following command line arguments:

  • VM name
  • Azure resource group name
  • Azure data center location
  • VM size (small, medium or large: determines which Azure VM and data disk size the script picks; edit the script if you want different sizes)
  • user
  • password

The Azure resource group is created if it does not already exist. This script can be run directly from the Azure Portal in an Azure Cloud Shell.

# script to create a Microsoft Windows 2016 data science VM in Azure

# command line arguments
VMNAME=$1    # VM name
RGNAME=$2    # resource group name
LOCATION=$3  # e.g. westus2
CONFIG=$4    # small|medium|large
USER=$5      # e.g. wpauser
PASS=$6      # must be 12 or more characters

# Marketplace image details for the Windows 2016 data science VM
# (verify current values with: az vm image list --publisher microsoft-ads --all)
PUB=microsoft-ads
OFFER=windows-data-science-vm
SKU=windows2016
VERSION=latest

# determine config size (example sizes - edit to pick different VM/disk sizes)
case $CONFIG in
    small)  SIZE=Standard_DS2_v2;  DATASIZEGB=100 ;;
    medium) SIZE=Standard_DS3_v2;  DATASIZEGB=500 ;;
    large)  SIZE=Standard_DS14_v2; DATASIZEGB=1000 ;;
    *) echo "Config must be small, medium or large"; exit 1 ;;
esac

# create the resource group (keeps going if already exists)
az group create --name $RGNAME --location $LOCATION

# create the VM
az vm create \
    --name $VMNAME --resource-group $RGNAME --image $PUB\:$OFFER\:$SKU\:$VERSION \
    --plan-name $SKU --plan-product $OFFER --plan-publisher $PUB \
    --admin-username $USER --admin-password $PASS \
    --size $SIZE \
    --data-disk-sizes-gb $DATASIZEGB

To run this script you need an Azure subscription, and to be in an environment with CLI installed. The simplest way to do this is to create the script as an executable file in your Azure Cloud Shell. For convenience I’ve put the script on github here:

The video below shows an example of running the script in the cloud shell once you’ve created the file. To create the file (named createdsvm.sh here as an example), you could log into your cloud shell and run:

curl <raw script URL> > createdsvm.sh
chmod +x createdsvm.sh

Exploring the Microsoft Graph with Python and AI

[Image: the primary resources and relationships that make up the Microsoft Graph]

The Microsoft Graph is a programmability model and API platform to access Office 365 data in a unified and coordinated way.

This article will show you how to combine the Graph API with Azure Cognitive Services to derive insights from your email habits.

What’s with the name?
Not everyone likes the name “Graph”, but it’s so named because the Graph is a collection of resources (vertices) connected by relationships (edges). You can use the API to traverse these relationships through a single endpoint.

What’s it good for?
With unified developer access to O365 data, you can enrich applications with calendar, collaboration, and organizational data, analyze email content, and automate office workflows. Microsoft provides some Graph examples here: What can you do with Microsoft Graph.
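As a sketch of how simple the access model is: once you hold a valid access token (obtaining one is covered later in this article), a single authenticated GET reads a resource. The endpoint is real; the token value is a placeholder:

```python
from urllib import request

GRAPH_ENDPOINT = 'https://graph.microsoft.com/v1.0'
access_token = '<valid-access-token>'  # placeholder - obtained via the OAuth2 flow

def graph_request(apicall):
    '''Build an authenticated GET request for a Graph API path such as /me'''
    req = request.Request(GRAPH_ENDPOINT + apicall)
    req.add_header('Authorization', 'Bearer ' + access_token)
    req.add_header('Accept', 'application/json')
    return req

# with a real token: json.loads(request.urlopen(graph_request('/me')).read())
```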

Exploring the Graph API

You can get to know the Graph API without writing code by using the Graph Explorer. This is a good way to test that you have the right authentication, and to start learning the calls. Click on Sign in with Microsoft to see your data, or use the demo data without signing in. The explorer provides a set of sample queries to get you started.


Getting started with Graph and Python
You’ll need an Office 365 subscription to get started. Additional data is available with a work or organizational account. The simple connect example below should work with any type of subscription (e.g. O365 Home). The mail analysis example assumes you have an organizational (i.e. work or school) account, as well as an Azure subscription.

To write a Graph application, start by registering an app at the Application Registration Portal and make a note of its Application Id and password. The steps are outlined here:

Connecting to the Graph requires a valid Azure Active Directory access token, which the app sends to the Graph API endpoint in the HTTP header. To get one, the app redirects a connecting user to an authorization endpoint to log on, then exchanges the resulting code for a token at a token endpoint, before sending that token to the Graph endpoint. Simple Python web frameworks like Flask and Bottle can integrate with the Python OAuth2 library to enable this workflow.

A good explanation of the workflow can be found on the Microsoft Graph github page here: Python authentication samples for Microsoft Graph. This repo includes Python examples for both Flask and Bottle app servers.

A good way to figure out how something works is to take an example and strip it down to be as simple as possible…


Here is a bare bones derivation of the Microsoft Graph Bottle example called simplegraph. It reads the application ID and password created on the Application Registration Portal from a JSON configuration file you create called graphconfig.json.

simplegraph uses OAuth2 to log in:

def login():
    """Prompt user to authenticate."""
    auth_state = str(uuid.uuid4())
    SESSION.auth_state = auth_state

    prompt_behavior = 'none'
    #prompt_behavior = 'select_account'

    params = urllib.parse.urlencode({'response_type': 'code',
                                     'client_id': client_id,
                                     'redirect_uri': redirect_uri,
                                     'state': auth_state,
                                     'resource': resource_uri,
                                     'prompt': prompt_behavior})

    return redirect(authority_url + '/oauth2/authorize?' + params)

def authorized():
    """Handler for the application's Redirect Uri."""
    code = request.query.code
    auth_state = request.query.state
    if auth_state != SESSION.auth_state:
        raise Exception('state returned to redirect URL does not match!')
    auth_context = adal.AuthenticationContext(authority_url, api_version=None)
    token_response = auth_context.acquire_token_with_authorization_code(
        code, redirect_uri, resource_uri, client_id, client_secret)
    SESSION.headers.update({'Authorization': f"Bearer {token_response['accessToken']}",
                            'User-Agent': 'adal-sample',
                            'Accept': 'application/json',
                            'Content-Type': 'application/json',
                            'SdkVersion': 'sample-python-adal',
                            'return-client-request-id': 'true'})
    return redirect('/maincall')

and then sets up a basic API call to a standard Graph alias called /me to return information about the signed-in user:

def maincall():
    """Confirm user authentication by calling Graph and displaying data."""
    apicall = '/me'
    endpoint = resource_uri + api_version + apicall
    http_headers = {'client-request-id': str(uuid.uuid4())}
    graphdata = SESSION.get(
        endpoint, headers=http_headers, stream=False).json()
    return display_payload(graphdata, apicall)

To display the JSON output from this call as an HTML table, the display_payload() helper calls the json2html library.


The HTML output also displays a form which lets you call any Graph API function using the graphcall() route:

def graphcall():
    """Call the API specified in an HTML form."""
    apicall = request.forms.get('apicall')
    endpoint = resource_uri + api_version + apicall
    http_headers = {'client-request-id': str(uuid.uuid4())}
    graphdata = SESSION.get(
        endpoint, headers=http_headers, stream=False).json()
    return display_payload(graphdata, apicall)

The full source for this example can be found here:

Integrating Graph data with AI services

Let’s combine Graph data with Microsoft Cognitive Services to derive some meaningful insights based on text analytics. The following example gets email data from a folder over a selected timeframe, performs a text analytics sentiment analysis to determine how positive or negative each email was, and displays a summary of the data as a word cloud.


You can also search using a string to show the email sentiment for a specific topic during that time period.

The Cognitive Services API

The Azure Cognitive Services API provides a range of services to apply AI to image processing, speech, semantic search and language recognition. This example uses text analytics to extract summary statements and analyze sentiment for a body of text.

Text Analytics
You can play with the text analytics API without needing to write code by pasting some text into the form here: Here’s what it makes of Coleridge:


With a sentiment score of 99%, Xanadu is confirmed to be a pleasure-dome. How about Edgar Allan Poe?


Certified bleak.

The wordcloud text analytics example

To use the Text Analytics API you need an endpoint and access key. See How to find endpoints and access keys. There are some simple Python examples of using it here: Quickstart for Text Analytics API with Python.
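As a minimal sketch of the sentiment call (the endpoint region and key below are placeholders; substitute the values from your own Cognitive Services resource, and note the API version may differ), a request is a JSON POST with a subscription-key header:

```python
import json
from urllib import request

# placeholders - use your own endpoint and key from the Azure portal
ENDPOINT = 'https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment'
KEY = '<your-text-analytics-key>'

def sentiment_request(texts):
    '''Build a sentiment analysis POST request for a list of strings'''
    body = {'documents': [{'id': str(i), 'language': 'en', 'text': t}
                          for i, t in enumerate(texts, 1)]}
    req = request.Request(ENDPOINT, data=json.dumps(body).encode('utf-8'))
    req.add_header('Ocp-Apim-Subscription-Key', KEY)
    req.add_header('Content-Type', 'application/json')
    return req

# with a real key, each document comes back with a 0-1 sentiment score:
# scores = json.loads(request.urlopen(
#     sentiment_request(['A stately pleasure-dome'])).read())
```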

The wordcloud example below also uses the wordcloud Python library to visualize text based on word frequency.

The basic steps of the app are:

1. Authenticate to the Microsoft Graph.
2. Use a web form to select a mail folder, timeframe and optional search string.
3. Call the /me/mailFolders/{id}/messages Graph API call to get the selected emails.
4. Call the text analytics API to summarize the text from all the emails.
5. Display the sentiment.
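Step 3 boils down to building an OData query against the messages endpoint. Here's a sketch using the well-known inbox folder; the query options follow standard Graph conventions, and the exact parameters the app uses may differ:

```python
from urllib import parse

GRAPH = 'https://graph.microsoft.com/v1.0'

def mail_query_url(folder_id, start_date, end_date):
    '''Build the Graph messages URL for a mail folder over a timeframe'''
    params = {
        '$select': 'subject,bodyPreview,receivedDateTime',
        '$filter': ('receivedDateTime ge ' + start_date +
                    ' and receivedDateTime le ' + end_date),
        '$top': '100',
    }
    return (GRAPH + '/me/mailFolders/' + folder_id + '/messages?' +
            parse.urlencode(params))

# 'inbox' is one of the well-known Graph folder names
url = mail_query_url('inbox', '2018-01-01T00:00:00Z', '2018-03-31T00:00:00Z')
```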

The full source for this example can be found here: – it’s a bit rough around the edges (i.e. hacked together quickly). Suggestions to improve the code are welcome.

What next?
The power of the Microsoft Graph API is that it opens up your office data in a traversable way, making it easy to combine with sophisticated analysis libraries and services using simple coding, significantly lowering the bar to make effective productivity and reporting tools.


Azure Cloud Shell, Python, and Container Instances

Despite being fairly new, the Azure Cloud Shell has quickly become my go-to place for scripting and programming with Azure. This article describes some of the features that make the cloud shell useful, and how to use the environment with Python.

What’s to like about Cloud Shell?

Here are my top 5 cloud shell features that make it a useful environment for cloud scripting and programming:

5. It’s an interactive container-on-demand, running an up-to-date patched Linux shell with the commands you’d expect, in a few seconds.


4. The latest version of the Azure CLI is pre-installed, connected, and ready to run. In some cases the cloud shell CLI has access to preview features not available elsewhere; at the time of writing, for example, Container Instances.

3. Your home directory is backed by an Azure storage account – any files you create are there for you next time you connect, from any device.

2. The shell includes Python, and it’s Python 3 by default – thank you. You can install any Python libraries into the user environment. It also has go, and much more.

1. Automatic cloud authentication – each time you start a cloud shell, it puts a current Azure authentication token in the CLI cache. You can use this token for more than CLI, for example calling the Azure REST API directly.

Note: The latest CLI versions now also include a command to display the current authentication token. Try:

    az account get-access-token
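Building on that, here's a sketch of calling the Azure REST API directly with the CLI's token, using only the Python standard library (the subscriptions list call and API version are illustrative):

```python
import json
import subprocess
from urllib import request

def cli_token():
    '''Read the current access token from the Azure CLI cache (requires a logged-in CLI)'''
    out = subprocess.check_output(['az', 'account', 'get-access-token'])
    return json.loads(out)['accessToken']

def arm_get(token, path, api_version='2017-05-10'):
    '''Build an authenticated GET request to the Azure management endpoint'''
    req = request.Request('https://management.azure.com' + path +
                          '?api-version=' + api_version)
    req.add_header('Authorization', 'Bearer ' + token)
    return req

# in a cloud shell, list your subscriptions:
# subs = json.loads(request.urlopen(arm_get(cli_token(), '/subscriptions')).read())
```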

Python programming with the cloud shell

To successfully install Python libraries in the cloud shell, make sure you install them in the user environment so you don’t need root access, e.g. by adding the --user argument to pip.

In this example I want to call the Azure REST API, so I’ll start by installing the azurerm REST wrapper library (not to be confused with the Azure client libraries for Python).

pip install --user --upgrade azurerm

The azurerm library can authenticate using an Azure Service Principal, but it also includes a couple of handy functions for getting an authentication token and default subscription ID directly from the CLI cache. I.e. get_access_token_from_cli() and get_subscription_from_cli().

A Python script to create a new Azure resource group using azurerm in Azure cloud shell looks like this:

import azurerm

auth_token = azurerm.get_access_token_from_cli()
subscription_id = azurerm.get_subscription_from_cli()
rgname = 'containergroup'
location = 'westus'

response = azurerm.create_resource_group(auth_token, subscription_id, rgname, location)
if response.status_code == 200 or response.status_code == 201:
    print('Resource group: ' + rgname + ' - created successfully.')
else:
    print('Return code ' + str(response.status_code) + ' from create_resource_group')


Creating container instances

Both the cloud shell CLI and the azurerm library support the Container Instance groups preview.

Here is an example that creates an nginx based container group using the same resource group:

import azurerm
import json

auth_token = azurerm.get_access_token_from_cli()
subscription_id = azurerm.get_subscription_from_cli()
rgname = 'containergroup'
location = 'westus'
container_name = 'ngx'
container_group_name = 'ngxgroup'
image = 'nginx'
iptype = 'public'
port = 80

container_def = azurerm.create_container_definition(container_name, image, port=port)
container_list = [container_def]
response = azurerm.create_container_instance_group(auth_token, subscription_id, rgname,
    container_group_name, container_list, location, port=port, iptype=iptype)
if response.status_code == 200 or response.status_code == 201:
    print('create_container_group: ' + container_group_name + ' - called successfully.')
else:
    print('Return code ' + str(response.status_code) + ' from create_container_group')

From here you could use CLI or azurerm to do a GET on the container group to get the public IP address and other details.


The cloud shell provides a self-contained development environment to make direct calls to the Azure management endpoint, without requiring extra authentication or needing to mess with Service Principals to use it.

Cloud shell wish list

The main thing I’d love to see improved is copying and pasting text to and from the shell. This sometimes works for me, and sometimes crashes the shell. It is a small, and hopefully temporary, inconvenience.

Update: ctrl-insert and shift-insert can be used as copy/paste shortcuts in the cloud shell.

Posted in Cloud, Containers, Linux, Python, Ubuntu | Tagged , , , , , | 1 Comment

Figuring out Azure VM scale set machine names

How do the individual VM host names in a scale set get their names?

Can you choose the names, or the naming convention?

How do you correlate a machine name with a VM instance ID?

When you create a scale set, every virtual machine instance in the scale set has several ways to reference it.

In the portal Instances blade, the VMs are shown by their Azure Resource names, consisting of the scale set name, underscore, instance ID:


VM Hostname

If you log in to a scale set VM, you’ll see a hostname like myprefix0000VU.

The hostname is composed of 2 parts:

The computerNamePrefix is a scale set property you can set when creating the scale set. Many scale set Azure templates default it to the same value as the scale set name, but it doesn’t have to be the same.

The number is a base-36, also known as hexatrigesimal, representation of the VM instance ID, filled with leading zeroes to give it a fixed length of 6 characters. For example, VM ID 10 would be represented as 00000A, VM ID 35 would be 00000Z, and VM ID 1146 would be 0000VU.

Why hexatrigesimal?

Base-36 is the most compact way to represent a number using single-case alphanumeric characters (i.e. only uppercase letters and digits).

Some platforms (like Windows) have a limited maximum hostname length of 15 characters.

Using the compact hexatrigesimal numbering system to represent the VM ID is a compromise between allowing the maximum length for a computer name prefix, and allowing the maximum number of unique IDs before wrapping around. On Windows, this leaves a maximum computerNamePrefix length of 9 characters. The prefix can be longer on Linux.

The maximum machine name of myprefixZZZZZZ represents a VM ID of 2176782335. You would have to do a lot of scaling in and out to reach that value, but if you did, after that, the name would start wrapping around to reuse values from deleted VMs.
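You can cross-check these base-36 examples with Python's built-in int(), which accepts bases up to 36:

```python
# verify the base-36 hostname examples from the text
assert int('00000A', 36) == 10
assert int('00000Z', 36) == 35
assert int('0000VU', 36) == 1146
assert int('ZZZZZZ', 36) == 2176782335  # the largest 6-character value
print('all base-36 examples check out')
```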

Note: The only part of the naming mechanism you can change is the computerNamePrefix. You can’t pick the number of leading zeroes, or change other aspects of the hexatrigesimal numbering scheme.

Correlating VMSS hostname with instance ID

There is a direct correlation between a VMSS VM hostname and its instance ID. You just convert the hexatrigesimal number to decimal. Here’s a Python 3 function to convert a hostname to a decimal instance ID:

def hostname_to_vmid(hostname):
    '''Convert a VMSS hostname to a decimal VM instance ID'''
    # get last 6 characters and remove leading zeroes
    hexatrig = hostname[-6:].lstrip('0')
    multiplier = 1
    vmid = 0
    # reverse string and process each char
    for x in hexatrig[::-1]:
        if x.isdigit():
            vmid += int(x) * multiplier
        else:
            # convert letter to corresponding integer ('A' = 10)
            vmid += (ord(x) - 55) * multiplier
        multiplier *= 36
    return vmid

Here’s a function to convert an instance ID and machine name prefix to a hostname:

def vmid_to_hostname(vmid, prefix):
    '''Convert a decimal VM instance ID and machine name prefix to a hostname'''
    hexatrig = ''
    # convert decimal vmid to hexatrigesimal (base 36)
    while vmid > 0:
        vmid_mod = vmid % 36
        if vmid_mod > 9:
            # convert int to corresponding letter (10 = 'A')
            char = chr(vmid_mod + 55)
        else:
            char = str(vmid_mod)
        hexatrig = char + hexatrig
        vmid = vmid // 36
    return prefix + hexatrig.zfill(6)

You can find an example command line script to convert these names here:


Here’s a handy summary of the various ways you can reference a VM in a scale set, where each is found, and what it’s used for:

  • VMSS VM->instanceId: a decimal value, incremented each time a new VM is created.
  • VMSS VM->name: the Azure resource name, shown in the portal Instances blade.
  • VMSS VM->properties->osProfile->computerName: the hostname used by the VM operating system.
  • VMSS VM->properties->vmId: a unique GUID used by the metrics pipeline (MDM).

How to change the password on a VM scale set

Problem: You forgot the VM password on an Azure VM scale set, or you need to change it. How do you do it?

At first glance this can be difficult, because every new VM sets its password from the VMSS model you created when the scale set was first deployed, and before Microsoft.Compute API version 2017-12-01 there was no option to change the password in the model. If you wrote a fancy sshpass script to change the password remotely on all the existing VMs, new VMs would still come up with the old password.

Update: This problem is now solved. Starting with Compute API 2017-12-01 and later, you can change the admin credentials in the virtual machine scale set model directly. See the VMSS FAQ – “Update the admin credentials directly in the scale set model (for example using the Azure Resource Explorer, PowerShell or CLI). Once the scale set is updated, all new VMs have the new credentials. Existing VMs only have the new credentials if they are reimaged.”

Before this version of the API you could set a password using a VM extension. Extensions are a good way to modify scale sets because all the VMs in the scale set, including new VMs will run the same extension. Here’s a PowerShell example which uses the VMAccess extension, also known as VMAccessAgent.

$vmssName = "myvmss"
$vmssResourceGroup = "myvmssrg"
$publicConfig = @{"UserName" = "newuser"}
$privateConfig = @{"Password" = "********"}
$extName = "VMAccessAgent"
$publisher = "Microsoft.Compute"
$vmss = Get-AzureRmVmss -ResourceGroupName $vmssResourceGroup -VMScaleSetName $vmssName
$vmss = Add-AzureRmVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $publisher -Setting $publicConfig -ProtectedSetting $privateConfig -Type $extName -TypeHandlerVersion "2.0" -AutoUpgradeMinorVersion $true
Update-AzureRmVmss -ResourceGroupName $vmssResourceGroup -Name $vmssName -VirtualMachineScaleSet $vmss

This script updates the scale set model to add an extension which will always set the password on any new VMs. If your scale set upgradePolicy property is set to “Automatic”, it will start working on all existing VMs too. If the property is set to “Manual”, then you need to apply it to the existing VMs (e.g. with Update-AzureRmVmssInstance in PowerShell, az vmss update-instances in CLI 2.0, or Upgrade in the Instances tab in the Azure portal).

Hopefully in the future there will be a “set password” easy-button in the portal which updates the VMSS behind the scenes. 


How to add autoscale to an Azure VM scale set

It is easy to create an Azure VM scale set in the Azure portal, and there is an option to create it with basic CPU-based autoscale settings. But suppose you created a scale set without any autoscale settings and now you want to add some autoscale rules. How do you do that?

This article shows 3 different ways to add autoscale settings to an existing scale set:

  1. Add autoscale rules using Azure PowerShell
  2. Create an Azure template to add scaling rules
  3. Call the REST API or a managed SDK

In these examples, the scale metric used is “Percentage CPU”. You can find a list of valid metrics to scale on here: under the heading Microsoft.Compute/virtualMachineScaleSets.

1. Add autoscale rules using PowerShell

There are three main Azure PowerShell commands required to set up autoscale. First create some rules with New-AzureRmAutoscaleRule, then create a profile with New-AzureRmAutoscaleProfile, and finally add a setting with Add-AzureRmAutoscaleSetting. There are more commands available to do things like set up notification addresses and webhooks, but the script below shows a basic setup to scale horizontally based on CPU usage: scaling out when average CPU is greater than 60%, and scaling in when average CPU is less than 30%.

$subid = "yoursubscriptionid"
$rgname = "yourresourcegroup"
$vmssname = "yourscalesetname"
$location = "yourlocation" # e.g. southcentralus

$rule1 = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -Operator GreaterThan -MetricStatistic Average -Threshold 60 -TimeGrain 00:01:00 -TimeWindow 00:05:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionValue 1
$rule2 = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -Operator LessThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -TimeWindow 00:05:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionValue 1
$profile1 = New-AzureRmAutoscaleProfile -DefaultCapacity 2 -MaximumCapacity 10 -MinimumCapacity 2 -Rules $rule1,$rule2 -Name "autoprofile1"
Add-AzureRmAutoscaleSetting -Location $location -Name "autosetting1" -ResourceGroup $rgname -TargetResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -AutoscaleProfiles $profile1 

After running this script you can look at the VM scale set properties in the Azure portal and it will show that Autoscale is set to On. Autoscale will start kicking in, and if your scale set isn’t doing any work your VMs will start getting deleted based on the rules above.

There are a couple of important things to point out about this script.

Firstly, the MetricName “Percentage CPU” is a host metric name, i.e. the autoscale service gets the information from the hypervisor host rather than from an agent running in the VM. This is recommended because the host metric pipeline (aka the MDM pipeline) is faster and more reliable than the older method of gathering autoscale metrics with a diagnostic agent running inside the VM. See Autoscaling VM scale sets with host metrics for more information.

Secondly, make sure you’re using the latest version of Azure PowerShell, which at the time of writing is 3.6.0. Check here: – various parameters get added or deprecated as versions progress. If you get errors or warnings when running scripts, a good first troubleshooting step is to upgrade to the latest Azure PowerShell.

2. Create an Azure template to add scaling rules

An alternative way of adding autoscale settings to a scale set is by deploying a specialized template. Look at an Azure Resource Manager template like this one: – it’s used to create a VM scale set, install a Python Bottle webserver on each VM, and autoscale based on CPU usage. You can isolate the autoscale component and deploy it as its own template if you wish. I.e. the part that starts with:

  "type": "Microsoft.Insights/autoscaleSettings",

The important values in the autoscaleSettings which reference back to the scale set are: “targetResourceUri”, and “metricResourceUri”. Here’s an example template which isolates the autoscale settings and can be deployed against an existing scale set: – this template could be further improved by parameterizing the autoscale metric names and thresholds.
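For reference, here's the shape of that autoscaleSettings resource sketched as a Python dict, mirroring the thresholds from the PowerShell example above; treat the exact field set as illustrative rather than a complete schema:

```python
# Hypothetical resource ID for the target scale set
vmss_id = ('/subscriptions/yoursubscriptionid/resourceGroups/yourresourcegroup'
           '/providers/Microsoft.Compute/virtualMachineScaleSets/yourscalesetname')

def cpu_rule(operator, threshold, direction):
    '''One autoscale rule: compare average CPU over 5 minutes to a threshold'''
    return {'metricTrigger': {'metricName': 'Percentage CPU',
                              'metricResourceUri': vmss_id,
                              'timeGrain': 'PT1M', 'statistic': 'Average',
                              'timeWindow': 'PT5M', 'timeAggregation': 'Average',
                              'operator': operator, 'threshold': threshold},
            'scaleAction': {'direction': direction, 'type': 'ChangeCount',
                            'value': '1', 'cooldown': 'PT5M'}}

autoscale_settings = {
    'type': 'Microsoft.Insights/autoscaleSettings',
    'apiVersion': '2015-04-01',
    'name': 'autosetting1',
    'location': 'southcentralus',
    'properties': {
        'name': 'autosetting1',
        'targetResourceUri': vmss_id,   # ties the setting to the scale set
        'enabled': True,
        'profiles': [{'name': 'autoprofile1',
                      'capacity': {'minimum': '2', 'maximum': '10', 'default': '2'},
                      'rules': [cpu_rule('GreaterThan', 60, 'Increase'),
                                cpu_rule('LessThan', 30, 'Decrease')]}],
    },
}
```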

3. Call the REST API or managed SDK directly

When you add autoscale settings using PowerShell, behind the scenes it is calling the Azure REST API. You can see the calls it makes by passing the --debug parameter. Here is an example of a Python program which adds autoscale settings using the REST API:

It uses the azurerm library which is basically a set of REST wrappers. Note that in order to use this library you need a Service Principal (described in steps 1-4 here: ).

All the official Azure SDKs (Python, Java etc.) have function calls to create autoscale rules and settings.

4. What about the Azure portal?

Ideally you should be able to edit autoscale settings for an existing scale set directly in the portal. That’s on the portal roadmap, but not in place yet (at the time of writing).
