Exploring the Microsoft Graph with Python and AI

[Image: the primary resources and relationships that are part of the graph]

The Microsoft Graph is a programmability model and API platform to access Office 365 data in a unified and coordinated way.

This article will show you how to combine the Graph API with Azure Cognitive Services to derive insights from your email habits.

What’s with the name?
Not everyone likes the name “Graph”, but it’s so named because the Graph is a collection of resources (vertices) connected by relationships (edges). You can use the API to traverse these relationships through a single endpoint.
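For example, starting from the signed-in user you can follow edges to related resources with plain REST calls (the endpoints below are illustrative):

```
GET https://graph.microsoft.com/v1.0/me            # the signed-in user (a vertex)
GET https://graph.microsoft.com/v1.0/me/manager    # follow the 'manager' edge
GET https://graph.microsoft.com/v1.0/me/messages   # traverse to the user's mail
```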

What’s it good for?
With unified developer access to O365 data you can enrich applications with calendar, collaboration, and organizational data; analyze email content; and automate office workflows. Microsoft provides some Graph examples here: What can you do with Microsoft Graph.

Exploring the Graph API

You can get to know the Graph API without writing code by using the Graph Explorer. This is a good way to test whether your authentication is set up correctly, and to start learning the calls. Click on Sign in with Microsoft to see your data, or use the demo data without signing in. The explorer provides a set of sample queries to get started.


Getting started with Graph and Python
You’ll need an Office 365 subscription to get started. Additional data is available with a work or organizational account. The simple connect example below should work with any type of subscription (e.g. O365 Home). The mail analysis example assumes you have an organizational (i.e. work or school) account, as well as an Azure subscription.

To write a Graph application, start by registering an app at the Application Registration Portal and make a note of its Application Id and password. The steps are outlined here: https://github.com/microsoftgraph/python-sample-auth/blob/master/installation.md#configuration.

Connecting to the Graph requires a valid Azure Active Directory access token, which the app sends to the Graph API endpoint in the HTTP header. To do this, the app forwards a connecting user to an authorization endpoint to log on, exchanges the returned code at a token endpoint, and then sends the resulting token to the Graph endpoint. Simple Python web frameworks like Flask and Bottle can integrate with Python OAuth2 libraries to enable this workflow.

A good explanation of the workflow can be found on the Microsoft Graph GitHub page here: Python authentication samples for Microsoft Graph. This repo includes Python examples for both Flask and Bottle app servers.

A good way to figure out how something works is to take an example and strip it down to be as simple as possible…


Here is a bare bones derivation of the Microsoft Graph Bottle example called simplegraph. It reads the application ID and password created on the Application Registration Portal from a JSON configuration file you create called graphconfig.json.

simplegraph uses OAuth2 to log in:

@bottle.route('/login')
def login():
    """Prompt user to authenticate."""
    auth_state = str(uuid.uuid4())  # random state value to protect against CSRF
    SESSION.auth_state = auth_state

    prompt_behavior = 'none'
    #prompt_behavior = 'select_account'  # uncomment to force account selection

    params = urllib.parse.urlencode({'response_type': 'code',
                                     'client_id': client_id,
                                     'redirect_uri': redirect_uri,
                                     'state': auth_state,
                                     'resource': resource_uri,
                                     'prompt': prompt_behavior})

    # send the user to the Azure AD authorization endpoint to log on
    return redirect(authority_url + '/oauth2/authorize?' + params)

@bottle.route('/login/authorized')  # must match the app's registered Redirect URI
def authorized():
    """Handler for the application's Redirect Uri."""
    code = request.query.code
    auth_state = request.query.state
    if auth_state != SESSION.auth_state:
        raise Exception('state returned to redirect URL does not match!')
    # exchange the authorization code for an access token via ADAL
    auth_context = adal.AuthenticationContext(authority_url, api_version=None)
    token_response = auth_context.acquire_token_with_authorization_code(
        code, redirect_uri, resource_uri, client_id, client_secret)
    # attach the bearer token to all subsequent Graph calls in this session
    SESSION.headers.update({'Authorization': f"Bearer {token_response['accessToken']}",
                            'User-Agent': 'adal-sample',
                            'Accept': 'application/json',
                            'Content-Type': 'application/json',
                            'SdkVersion': 'sample-python-adal',
                            'return-client-request-id': 'true'})
    return redirect('/maincall')

and then sets up a basic API call to a standard Graph alias called /me to return information about the signed-in user:

@bottle.route('/maincall')
def maincall():
    """Confirm user authentication by calling Graph and displaying data."""
    apicall = '/me'  # standard alias for the signed-in user
    endpoint = resource_uri + api_version + apicall
    http_headers = {'client-request-id': str(uuid.uuid4())}
    graphdata = SESSION.get(
        endpoint, headers=http_headers, stream=False).json()
    return display_payload(graphdata, apicall)

To display the JSON output from this call as an HTML table, the json2html library is used.

See the display_payload() function in the source.


The HTML output also displays a form which lets you call any Graph API function using the graphcall() route:

@bottle.route('/graphcall', method='POST')
def graphcall():
    """Call the API specified in an HTML form."""
    apicall = request.forms.get('apicall')
    endpoint = resource_uri + api_version + apicall
    http_headers = {'client-request-id': str(uuid.uuid4())}
    graphdata = SESSION.get(
        endpoint, headers=http_headers, stream=False).json()
    return display_payload(graphdata, apicall)

The full source for this example can be found here: https://github.com/gbowerman/graph/blob/master/simplegraph/gbottle.py

Integrating Graph data with AI services

Let’s combine Graph data with Microsoft Cognitive Services to derive some meaningful insights based on text analytics. The following example gets email data from a folder over a selected timeframe, performs a text analytics sentiment analysis to determine how positive or negative each email was, and displays a summary of the data as a word cloud.


You can also search using a string to show the email sentiment for a specific topic during that time period.

The Cognitive Services API

The Azure Cognitive Services API provides a range of services to apply AI to image processing, speech, semantic search and language recognition. This example uses text analytics to extract summary statements and analyze sentiment for a body of text.

Text Analytics
You can play with the text analytics API without needing to write code by pasting some text in the form here: https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/. Here’s what it makes of Coleridge..


With a sentiment score of 99%, Xanadu is confirmed to be a pleasure-dome. How about Edgar Allan Poe?


Certified bleak.

The wordcloud text analytics example

To use the Text Analytics API you need an endpoint and access key. See How to find endpoints and access keys. There are some simple Python examples of using it here: Quickstart for Text Analytics API with Python.

The wordcloud example below also uses the wordcloud Python library to visualize text based on word frequency.
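As a rough sketch of that visualization step (the stopword list below is illustrative, not the app’s actual list), word frequencies can be computed with the standard library and handed to the wordcloud library:

```python
import re
from collections import Counter

STOPWORDS = frozenset({'the', 'a', 'an', 'and', 'of', 'to', 'in'})  # illustrative

def word_frequencies(text):
    """Count word occurrences, skipping common stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

freqs = word_frequencies('Graph data, more Graph data, and email')
# Hand the counts to the wordcloud library (assumed installed), e.g.:
# from wordcloud import WordCloud
# WordCloud(width=800, height=400).generate_from_frequencies(freqs).to_file('cloud.png')
```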

The basic steps of the app are:

1. Authenticate to the Microsoft Graph.
2. Use a web form to select a mail folder, timeframe and optional search string.
3. Call the /me/mailFolders/{id}/messages Graph API to get the selected emails.
4. Call the text analytics API to summarize the text from all the emails.
5. Display the sentiment summary.
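The requests in steps 3 and 4 can be sketched as follows. This is a minimal sketch: the folder ID, the $top value, and the v2.0 sentiment endpoint mentioned in the usage note are assumptions to show the shape of the calls, not the app’s exact code.

```python
from datetime import datetime, timedelta

GRAPH_BASE = 'https://graph.microsoft.com/v1.0'

def build_mail_query(folder_id, days_back, search=None):
    """Build the Graph URL for messages in a folder over a timeframe."""
    endpoint = GRAPH_BASE + '/me/mailFolders/' + folder_id + '/messages'
    if search:
        # note: Graph does not combine $search with $filter on messages
        return endpoint + '?$search="' + search + '"&$top=50'
    since = (datetime.utcnow() - timedelta(days=days_back)).strftime('%Y-%m-%dT%H:%M:%SZ')
    return endpoint + '?$filter=receivedDateTime ge ' + since + '&$top=50'

def build_sentiment_payload(email_bodies):
    """Shape email text into the Text Analytics 'documents' request format."""
    return {'documents': [{'id': str(i), 'language': 'en', 'text': body}
                          for i, body in enumerate(email_bodies, start=1)]}
```

The mail URL is sent with the Graph bearer token from the login flow; the documents payload is POSTed to your Text Analytics endpoint’s /text/analytics/v2.0/sentiment path with the access key in the Ocp-Apim-Subscription-Key header.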

The full source for this example can be found here: https://github.com/gbowerman/graph/blob/master/wordcloud/gbottle.py – it’s a bit rough around the edges (i.e. hacked together quickly). Suggestions to improve the code are welcome.

What next?
The power of the Microsoft Graph API is that it opens up your office data in a traversable way. Combined with sophisticated analysis libraries and services, a little code goes a long way, significantly lowering the bar for building effective productivity and reporting tools.

Posted in Cloud, Computers and Internet, Graph, Python | Tagged , | 1 Comment

Azure Cloud Shell, Python, and Container Instances

Despite being fairly new, the Azure Cloud Shell has quickly become my go-to place for scripting and programming with Azure. This article describes some of the features that make cloud shell useful, and how to use the environment with Python.

What’s to like about Cloud Shell?

Here are my top 5 cloud shell features that make it a useful environment for cloud scripting and programming..

5. It’s an interactive container-on-demand, running an up-to-date patched Linux shell with the commands you’d expect, in a few seconds.


4. The latest version of Azure CLI is pre-installed, connected, and ready to run. In some cases the cloud shell CLI has access to preview features not available elsewhere; this week, for example, Container Instances.

3. Your home directory is backed by an Azure storage account – any files you create are there for you next time you connect, from any device.

2. The shell includes Python, and it’s Python 3 by default – thank you. You can install any Python libraries into the user environment. It also has go, and much more.

1. Automatic cloud authentication – each time you start a cloud shell, it puts a current Azure authentication token in the CLI cache. You can use this token for more than CLI, for example calling the Azure REST API directly.

Note: The latest CLI versions now also include a command to display the current authentication token. Try:

    az account get-access-token
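The command returns a small JSON document. Here’s a sketch of pulling that token into a REST Authorization header (the field values below are placeholders, not real credentials):

```python
import json

# Placeholder output in the shape `az account get-access-token` returns
sample = '''{
    "accessToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9...",
    "expiresOn": "2018-01-01 12:00:00.000000",
    "subscription": "00000000-0000-0000-0000-000000000000",
    "tenant": "00000000-0000-0000-0000-000000000000",
    "tokenType": "Bearer"
}'''

token = json.loads(sample)
# Ready to attach to calls against https://management.azure.com
auth_header = {'Authorization': token['tokenType'] + ' ' + token['accessToken']}
```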

Python programming with the cloud shell

To successfully install Python libraries in the cloud shell, make sure you install them in the user environment, so you don’t need root access. E.g. add the --user argument to pip.

In this example I want to call the Azure REST API, so I’ll start by installing the azurerm REST wrapper library (not to be confused with the Azure client libraries for Python).

pip install --user --upgrade azurerm

The azurerm library can authenticate using an Azure Service Principal, but it also includes a couple of handy functions for getting an authentication token and default subscription ID directly from the CLI cache. I.e. get_access_token_from_cli() and get_subscription_from_cli().

A Python script to create a new Azure resource group using azurerm in Azure cloud shell looks like this:

import azurerm

auth_token = azurerm.get_access_token_from_cli()
subscription_id = azurerm.get_subscription_from_cli()
rgname = 'containergroup'
location = 'westus'

response = azurerm.create_resource_group(auth_token, subscription_id, rgname, location)
if response.status_code == 200 or response.status_code == 201:
    print('Resource group: ' + rgname + ' - created successfully.')
    print('Return code ' + str(response.status_code) + ' from create_resource_group')


Creating container instances

Both the cloud shell CLI and the azurerm library support the Container Instance groups preview.

Here is an example that creates an nginx based container group using the same resource group:

import azurerm
import json

auth_token = azurerm.get_access_token_from_cli()
subscription_id = azurerm.get_subscription_from_cli()
rgname = 'containergroup'
location = 'westus'
container_name = 'ngx'
container_group_name = 'ngxgroup'
image = 'nginx'
iptype = 'public'
port = 80

container_def = azurerm.create_container_definition(container_name, image, port=port)
container_list = [container_def]
response = azurerm.create_container_instance_group(auth_token, subscription_id, rgname,
    container_group_name, container_list, location, port=port, iptype=iptype)
if response.status_code == 200 or response.status_code == 201:
    print('create_container_group: ' + container_group_name + ' - called successfully.')
    print('Return code ' + str(response.status_code) + ' from create_container_group')

From here you could use CLI or azurerm to do a GET on the container group to get the public IP address and other details.
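For example, here’s a sketch of building that GET request URL against the ARM endpoint (the api-version shown matched the Container Instance preview at the time of writing and may need updating):

```python
def container_group_url(subscription_id, rgname, group_name,
                        api_version='2017-08-01-preview'):
    """Build the ARM REST URL for a container group GET."""
    return ('https://management.azure.com/subscriptions/' + subscription_id +
            '/resourceGroups/' + rgname +
            '/providers/Microsoft.ContainerInstance/containerGroups/' +
            group_name + '?api-version=' + api_version)

url = container_group_url('sub-id', 'containergroup', 'ngxgroup')
# GET this URL with the Authorization header from the CLI token cache;
# the assigned address appears under properties -> ipAddress -> ip in the response.
```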


The cloud shell provides a self-contained development environment for making direct calls to the Azure management endpoint, without requiring extra authentication or needing to mess with Service Principals.

Cloud shell wish list

The main thing I’d love to see improve would be to make it easier to copy/paste text to and from the shell. This sometimes works for me, and sometimes crashes the shell. This is a small and hopefully temporary inconvenience however.

Update: ctrl-insert and shift-insert can be used as copy/paste shortcuts in the cloud shell.


Figuring out Azure VM scale set machine names

How do the individual VM host names in a scale set get their names?

Can you choose the names, or the naming convention?

How do you correlate a machine name with a VM instance ID?

When you create a scale set, every virtual machine instance in the scale set can be referenced in several ways.

In the portal Instances blade, the VMs are shown by their Azure Resource names, consisting of the scale set name, underscore, instance ID:


VM Hostname

If you log in to a scale set VM, you’ll see a hostname like myprefix0000VU.

The hostname is composed of 2 parts:

The computerNamePrefix is a scale set property you can set when creating the scale set. Many scale set Azure templates default it to the same value as the scale set name, but it doesn’t have to be the same.

The number is a base-36, also known as hexatrigesimal, representation of the VM instance ID, filled with leading zeroes to give it a fixed length of 6 characters. For example, VM ID 10 would be represented as 00000A, VM ID 35 would be 00000Z, and VM ID 1146 would be 0000VU.
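You can check these examples with a few lines of Python (a quick sanity check using the standard library, not part of any scale set tooling):

```python
DIGITS = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def to_base36(n, width=6):
    """Render n in base 36, zero-filled to a fixed width."""
    s = ''
    while n > 0:
        n, r = divmod(n, 36)
        s = DIGITS[r] + s
    return s.rjust(width, '0')

assert to_base36(10) == '00000A'
assert to_base36(35) == '00000Z'
assert to_base36(1146) == '0000VU'
assert int('0000VU', 36) == 1146   # int() parses base 36 directly
```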

Why hexatrigesimal?

Base-36 is the most compact way to represent a number using single-case alphanumeric characters (i.e. only uppercase letters and digits).

Some platforms (like Windows) have a limited maximum hostname length of 15 characters.

Using the compact hexatrigesimal numbering system to represent the VM ID provides a compromise between allowing the maximum length for a computer name prefix, and allowing for the maximum number of unique IDs before wrapping around. On Windows, this leaves a maximum computerNamePrefix length of 9 characters. The prefix can be longer on Linux.

The maximum machine name of myprefixZZZZZZ represents a VM ID of 2176782335. You would have to do a lot of scaling in and out to reach that value, but if you did, the name would then wrap around to reuse values from deleted VMs.

Note: The only part of the naming mechanism you can change is the computerNamePrefix. You can’t pick the number of leading zeroes, or change other aspects of the hexatrigesimal numbering scheme.

Correlating VMSS hostname with instance ID

There is a direct correlation between a VMSS VM hostname and its instance ID. You just convert the hexatrigesimal number to decimal. Here’s a Python 3 function to convert a hostname to a decimal instance ID:

def hostname_to_vmid(hostname):
    # get last 6 characters and remove leading zeroes
    hexatrig = hostname[-6:].lstrip('0')
    multiplier = 1
    vmid = 0
    # reverse string and process each char
    for x in hexatrig[::-1]:
        if x.isdigit():
            vmid += int(x) * multiplier
        else:
            # convert letter to corresponding integer
            vmid += (ord(x) - 55) * multiplier
        multiplier *= 36
    return vmid

Here’s a function to convert an instance ID and machine name prefix to a hostname:

def vmid_to_hostname(vmid, prefix):
    hexatrig = ''
    # convert decimal vmid to hexatrigesimal (base 36)
    while vmid > 0:
        vmid_mod = vmid % 36
        if vmid_mod > 9:
            # convert int to corresponding letter
            char = chr(vmid_mod + 55)
        else:
            char = str(vmid_mod)
        hexatrig = char + hexatrig
        vmid = vmid // 36
    return prefix + hexatrig.zfill(6)

You can find an example command line script to convert these names here: https://github.com/gbowerman/vmsstools/tree/master/vmssvmname


Here’s a handy summary of the various ways you can reference a VM in a scale set:


Where found                                    Used for
VMSS VM->instanceId                            Decimal value incremented each time a new VM is created.
VMSS VM->name                                  Azure resource name, shown in the portal Instances blade.
VMSS VM->properties->osProfile->computerName   Hostname used by the VM operating system.
VMSS VM->properties->vmId                      Unique GUID used by the metrics pipeline (MDM).

How to change the password on a VM scale set

Problem: You forgot the VM password on an Azure VM scale set, or you need to change it. How do you do it?

At first glance this can be difficult because every new VM sets its password from the VMSS model you created when the scale set was first deployed, and before Microsoft.Compute API version 2017-12-01, there were no options to change the password in the model. If you write a fancy sshpass script to change the password remotely on all the existing VMs, new VMs would still come up with the old password.

Update: This problem is now solved. Starting with Compute API 2017-12-01 and later, you can change the admin credentials in the virtual machine scale set model directly. See the VMSS FAQ – “Update the admin credentials directly in the scale set model (for example using the Azure Resource Explorer, PowerShell or CLI). Once the scale set is updated, all new VMs have the new credentials. Existing VMs only have the new credentials if they are reimaged.”

Before this version of the API you could set a password using a VM extension. Extensions are a good way to modify scale sets because all the VMs in the scale set, including new VMs will run the same extension. Here’s a PowerShell example which uses the VMAccess extension, also known as VMAccessAgent.

$vmssName = "myvmss"
$vmssResourceGroup = "myvmssrg"
$publicConfig = @{"UserName" = "newuser"}
$privateConfig = @{"Password" = "********"}
$extName = "VMAccessAgent"
$publisher = "Microsoft.Compute"
$vmss = Get-AzureRmVmss -ResourceGroupName $vmssResourceGroup -VMScaleSetName $vmssName
$vmss = Add-AzureRmVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $publisher -Setting $publicConfig -ProtectedSetting $privateConfig -Type $extName -TypeHandlerVersion "2.0" -AutoUpgradeMinorVersion $true
Update-AzureRmVmss -ResourceGroupName $vmssResourceGroup -Name $vmssName -VirtualMachineScaleSet $vmss

This script will update the scale set model to add an extension which will always set the password on any new VMs. If your scale set upgradePolicy property is set to “Automatic” it will start working on all existing VMs too. If the property is set to “Manual” then you need to apply it to existing VMs (e.g. using Update-AzureRmVmssInstance if using PowerShell, az vmss update-instances using CLI 2.0, or upgrade in the Instances tab in the Azure portal).

Hopefully in the future there will be a “set password” easy-button in the portal which updates the VMSS behind the scenes. 


How to add autoscale to an Azure VM scale set

It is easy to create an Azure VM scale set in the Azure portal, and there is an option to create it with basic CPU based autoscale settings. But suppose you created a scale set without any autoscale settings and now you want to add some autoscale rules. How do you do that?

This article shows 3 different ways to add autoscale settings to an existing scale set:

  1. Add autoscale rules using Azure PowerShell
  2. Create an Azure template to add scaling rules
  3. Call the REST API or a managed SDK

In these examples, the scale metric used is “Percentage CPU”. You can find a list of valid metrics to scale on here: https://azure.microsoft.com/en-us/documentation/articles/monitoring-supported-metrics/ under the heading Microsoft.Compute/virtualMachineScaleSets.

1. Add autoscale rules using PowerShell

There are three main Azure PowerShell commands required to set up autoscale. First create some rules with New-AzureRmAutoscaleRule, then create a profile with New-AzureRmAutoscaleProfile, and finally add a setting with Add-AzureRmAutoscaleSetting. There are more commands available to do things like set up notification addresses and webhooks, but the script below shows a basic setup to scale horizontally based on CPU usage, scaling out when average CPU is greater than 60% and scaling in when average CPU is less than 30%:

$subid = "yoursubscriptionid"
$rgname = "yourresourcegroup"
$vmssname = "yourscalesetname"
$location = "yourlocation" # e.g. southcentralus

$rule1 = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -Operator GreaterThan -MetricStatistic Average -Threshold 60 -TimeGrain 00:01:00 -TimeWindow 00:05:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionValue 1
$rule2 = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -Operator LessThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -TimeWindow 00:05:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionValue 1
$profile1 = New-AzureRmAutoscaleProfile -DefaultCapacity 2 -MaximumCapacity 10 -MinimumCapacity 2 -Rules $rule1,$rule2 -Name "autoprofile1"
Add-AzureRmAutoscaleSetting -Location $location -Name "autosetting1" -ResourceGroup $rgname -TargetResourceId /subscriptions/$subid/resourceGroups/$rgname/providers/Microsoft.Compute/virtualMachineScaleSets/$vmssname -AutoscaleProfiles $profile1 

After running this script you can look at the VM scale set properties in the Azure portal and it will show that Autoscale is set to On. Autoscale will start kicking in, and if your scale set isn’t doing any work your VMs will start getting deleted based on the rules above.

There are a couple of important things to point out about this script.

Firstly the MetricName “Percentage CPU”. This is a host metric name. I.e. the autoscale service is getting the information from the hypervisor host rather than from any agent running in the VM. This is recommended because the host metric pipeline (aka the MDM pipeline) is faster and more reliable than the older method of getting autoscale metrics based on running a diagnostic agent inside the VM. See Autoscaling VM scale sets with host metrics for more information.

Secondly make sure you’re using the latest version of Azure PowerShell, which at the time of writing is 3.6.0. Check here: https://github.com/Azure/azure-powershell/releases/ – various parameters get added or deprecated as versions progress. If you get errors or warnings when running scripts a good initial troubleshooting step is to upgrade to the latest Azure PowerShell.

2. Create an Azure template to add scaling rules

An alternative way of adding autoscale settings to a scale set is by deploying a specialized template. Look at an Azure Resource Manager template like this one: https://github.com/Azure/azure-quickstart-templates/blob/master/201-vmss-bottle-autoscale/azuredeploy.json – it’s used to create a VM scale set, install a Python Bottle webserver on each VM, and autoscale based on CPU usage. You can isolate the autoscale component and deploy it as its own template if you wish. I.e. the part that starts with:

  "type": "Microsoft.Insights/autoscaleSettings",

The important values in the autoscaleSettings which reference back to the scale set are: “targetResourceUri”, and “metricResourceUri”. Here’s an example template which isolates the autoscale settings and can be deployed against an existing scale set: https://github.com/gbowerman/azure-myriad/tree/master/autoscale – this template could be further improved by parameterizing the autoscale metric names and thresholds.
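For reference, here is a stripped-down sketch of what such an isolated autoscaleSettings resource looks like. The names, capacities, thresholds and apiVersion are illustrative; the linked template is the working version:

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2015-04-01",
  "name": "autosetting1",
  "location": "[resourceGroup().location]",
  "properties": {
    "name": "autosetting1",
    "enabled": true,
    "targetResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', parameters('vmssName'))]",
    "profiles": [
      {
        "name": "autoprofile1",
        "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', parameters('vmssName'))]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT5M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 60
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```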

3. Call the REST API or managed SDK directly

When you add autoscale settings using PowerShell, behind the scenes it is calling the Azure REST API. You can see the calls it makes by passing the --debug parameter. Here is an example of a Python program which adds autoscale settings using the REST API: https://github.com/gbowerman/vmsstools/blob/master/cpuload/addautoscalerules.py

It uses the azurerm library which is basically a set of REST wrappers. Note that in order to use this library you need a Service Principal (described in steps 1-4 here: https://msftstack.wordpress.com/2016/01/03/how-to-call-the-azure-resource-manager-rest-api-from-c/ ).

All the official Azure SDKs (Python, Java etc.) have function calls to create autoscale rules and settings.

4. What about the Azure portal?

Ideally you should be able to edit autoscale settings for an existing scale set directly in the portal. That’s on the portal roadmap, but not in place yet (at the time of writing).


VIP Swap – blue-green deployment in Azure Resource Manager

How do you automate deployment of a multi-VM application from staging to production?

Three high-level approaches might be:

1. Rolling deployment: update one or more VMs at a time. This can be a good approach to avoid downtime, as only a subset of machines is down during the update. It assumes different versions of the application can coexist. See: https://msftstack.wordpress.com/2016/05/17/how-to-upgrade-an-azure-vm-scale-set-without-shutting-it-down/

2. Create a staging cluster and move it to production by swapping the network endpoints. Also known as a blue-green deployment, this is a good way to publish an application as a consistent or immutable set, but can involve downtime during the transition.

3. Use Application Gateway with two backend pools and a routing rule: one stage pool containing the stage VMSS, and one prod pool containing the prod VMSS. Depending on whether you want stage or prod to receive traffic, change the single routing rule to point at the appropriate backend address pool.

This article describes option 2: how to swap the public IP addresses between two Azure load balancers in Azure Resource Manager (ARM). You can use this method if, for example, you have two VM scale sets behind load balancers, one production and one staging, and you want to move the staging scale set into production.

The Azure Cloud Service (classic) deployment method included an asynchronous Swap Deployment operation, which was a fast and powerful way to initiate a virtual IP address swap between staging and deployment environments. Azure Resource Manager doesn’t have an equivalent built-in VIP swap function, so if you have a staging environment behind a load balancer, and want to swap it with a production environment behind another load balancer, you have to do something like:

  • Unassign the public IP address from load balancer 1’s front-end IP configuration.
  • Assign load balancer 2’s public IP address to load balancer 1.
  • Assign load balancer 1’s old public IP address to load balancer 2.

Since you can’t assign a public IP address to another resource until it has been unassigned from its current resource, one way to do this is to create a temporary IP address as a float. E.g.:

  • Create a temporary public IP address.
  • Assign the temp IP address to LB1.
  • Assign LB1’s old IP address to LB2.
  • Assign LB2’s old IP address to LB1.
  • Delete the temp IP address.

One caveat to be aware of is that these operations involve some downtime. Unassigning a public IP address can take around 30 seconds (at the time of writing), so the total downtime for your app could be at least 60 seconds as temp/staging/production IP addresses are moved around.

Test environment

To test VIP swap in ARM I created two load balancers in the same resource group and VNET, and associated each one with a different VM scale set. The Azure Resource Manager templates used to set up this infrastructure can be found here: https://github.com/gbowerman/azure-myriad/tree/master/vip-swap

PowerShell VIP swap example

# put your load balancer names, resource group and location here
$lb1name = 'vipswap1lb'
$lb2name = 'vipswap2lb'
$rgname = 'vipswap'
$location = 'southcentralus'
# create a new temporary public ip address
"Creating a temporary public IP address"
New-AzureRmPublicIpAddress -Name 'floatip' -ResourceGroupName $rgname -Location $location -AllocationMethod Dynamic
$floatip = Get-AzureRmPublicIpAddress -Name 'floatip' -ResourceGroupName $rgname
# get the LB1 model
$lb1 = Get-AzureRmLoadBalancer -Name $lb1name -ResourceGroupName $rgname
$lb1_ip_id = $lb1.FrontendIpConfigurations.publicIpAddress.id
# set the LB1 IP addr to floatip
"Assigning the temporary public IP address id " + $floatip.id + " to load balancer " + $lb1name
$lb1.FrontendIpConfigurations.publicIpAddress.id = $floatip.id
Set-AzureRmLoadBalancer -LoadBalancer $lb1
# get the LB2 model
$lb2 = Get-AzureRmLoadBalancer -Name $lb2name -ResourceGroupName $rgname
$lb2_ip_id = $lb2.FrontendIpConfigurations.publicIpAddress.id
# set the LB2 IP addr to lb1 IP
"Assigning the public IP address id " + $lb1_ip_id + " to load balancer " + $lb2name
$lb2.FrontendIpConfigurations.publicIpAddress.id = $lb1_ip_id
Set-AzureRmLoadBalancer -LoadBalancer $lb2
# set the LB1 IP addr to old lb2 IP
"Assigning the public IP id " + $lb2_ip_id + " to load balancer " + $lb1name
$lb1.FrontendIpConfigurations.publicIpAddress.id = $lb2_ip_id
Set-AzureRmLoadBalancer -LoadBalancer $lb1
# now delete the floatip
"Deleting the temporary public IP address"
Remove-AzureRmPublicIpAddress -Name 'floatip' -ResourceGroupName $rgname -Force

Python VIP swap example

Here’s a Python example based on the azurerm REST wrapper library, which follows the same logic, and adds some timing code: https://github.com/gbowerman/vmsstools/blob/master/vipswap/vip_swap.py

In this case downtime was measured at 61 seconds.

Next steps

If your application requires immutable deployment approaches, for further reading take a look at Azure Spinnaker, an Azure port of Netflix’s open source continuous delivery platform: http://www.codechannels.com/video/microsoft/azure/host-spinnaker-on-azure/ and associated templates: https://azure.microsoft.com/en-us/resources/templates/?term=Spinnaker.


    Deploying Minecraft Server on Azure

    Update 4/14/2018: Happy to report the recent problems with the Minecraft Server solution template on the Azure Marketplace are now fixed. There was a problem with the way it was picking up the latest Minecraft version, which was compounded by a Marketplace/Developer account publishing problem. All now resolved.

    The Azure Marketplace now has a Minecraft Server offering which deploys a customizable Minecraft Server to an Azure virtual machine. This replaces an older Azure Gallery Minecraft item which was often out of date, sometimes broken, and eventually removed.

    The new solution template creates an Azure virtual machine running Ubuntu 16.04 and installs Minecraft Server running on it. It also creates the other cloud infrastructure components you need, including: a resource group, virtual network, public IP address, DNS name, network security group (NSG). Additionally it lets you set several Minecraft server configuration parameters at deployment time. What follows is a brief user guide covering how to deploy and operate your Minecraft Server running in Azure.

    For a Spanish guide to deploying a Minecraft server in Azure, go here: Cómo crear un servidor Minecraft en Azure (thanks @platter5_).

    What you need to get started

    You’ll need an Azure subscription, a Minecraft account (a valid Minecraft username) and the Minecraft desktop client.

    Finding the Minecraft Server

    You can deploy the Minecraft Server Marketplace solution to a new VM directly from the Azure Marketplace or by searching for it from the Azure portal.

    From the Marketplace

    Go to the Minecraft server Marketplace product page and click GET IT NOW.


    From there click Continue and you’ll be taken to the Azure portal.

    From the Portal

    You can also deploy the Minecraft Server directly from the Azure Portal. Click the ‘+’ sign and type “minecraft” in the search.


    Deploying the Minecraft Server

    Once you’ve selected the Minecraft Server product in the portal you’ll see a description and a Create button.

    Click Create.

    From here the portal takes you through a set of forms known as portal blades to configure the deployment. Let’s go through them one by one:



    VM username – A Linux user name you’ll use if you need to log on to the virtual machine.

    Password – The password you’ll use if you need to log on to the virtual machine. Don’t forget this.

    Subscription – If you have access to more than one Azure subscription, choose the one you want to use for this deployment.

    Resource group – A logical container for your resources. It’s a good idea to create a new one here, that way if you want to remove everything you created at some point you can simply delete the resource group and it won’t affect any other deployments.

    Location – Pick the region where the resources will be created. Picking somewhere local to you is usually a good idea to minimize network round-trip time.

    Click OK.

    Virtual Machine Settings


    Public IP address resource name – Leave the default setting.

    Domain name label – Pick a unique domain name for your server. No punctuation, spaces etc., just letters and/or numbers. Later, when you connect to this server, the full domain name will look like: <domain name label>.<region>.cloudapp.azure.com.

    Size – The size of the virtual machine. You can accept the default or choose a larger machine with more CPU/memory, but choosing a smaller machine like A0 is not recommended – with a small VM size the server might get laggy.

    Click OK.

    Minecraft Server Settings


    Minecraft username – This is your Minecraft username – get it right, or you won’t be an operator of the Minecraft server that gets created. Don’t enter an email address or anything other than a valid Minecraft username.

    Minecraft server version – This should default to the latest version of the Minecraft server and hence not need changing. The field is editable just in case a new server version comes out before the Azure Marketplace product has been updated; if so, you can enter the new version here.

    difficulty – The default difficulty of 1 means easy mode (0 is peaceful, 2 is normal, 3 is hard). Check here for the values you can use for this and the next few parameters: http://minecraft.gamepedia.com/Server.properties

    level-name – Whatever you want your new Minecraft world to be called, or leave the default value. Don’t use any special characters which could interfere with a bash script, e.g. no spaces, single quotes, exclamation marks or backslashes (unless you’re deliberately escaping a character).

    game-mode – The default value is 0 – Survival mode. 1 is creative.

    white-list – Set this to true to make the server invite-only. Initially only you will have access; for other people to join, you’ll have to use operator commands to add them to the white-list.

    enable-command-block – If this is true you can create command blocks in the server, which empowers you to build a near limitless array of quasi-magical operations.

    spawn-monsters – Controls whether monsters show up at night or not.

    generate-structures – Controls whether your world will have temples and villages.

    level-seed – Leave this blank to use a random seed. The seed controls which of the 18,446,744,073,709,551,616 possible worlds is generated. As with level-name, avoid characters which could interfere with the bash script that installs Minecraft: no spaces or punctuation like quotes, exclamation marks or backslashes.

    Click OK
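    The free-text fields above (level-name and level-seed) are the ones that can break the install script. Here’s a small, hypothetical Python helper that checks a value against the “no shell-special characters” rule described above – the function name and the exact character set are my own illustration:

```python
import re

# characters that are safe to pass into the install script unquoted:
# letters, digits, underscores and hyphens (no spaces, quotes,
# exclamation marks or backslashes, per the guidance above)
SAFE_VALUE = re.compile(r"^[A-Za-z0-9_-]*$")

def is_safe_setting(value):
    """Return True if a level-name or level-seed value is bash-safe."""
    # an empty string is allowed: a blank seed means a random world,
    # one of 2**64 == 18,446,744,073,709,551,616 possibilities
    return bool(SAFE_VALUE.match(value))
```

    For example, "MyWorld" passes but "my world!" does not.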


    Next you’ll see a summary of the options you picked…


    Click OK.


    At the purchase blade you confirm that you’re deploying the Azure resources. Note there are no extra charges on top of the Azure resources, just the basic compute and storage charges.


    Once you click the Purchase button your Minecraft server will start deploying and you’ll see a progress icon on the dashboard. Allow several minutes for this to deploy.


    Connecting to the Minecraft Server

    Once deployment is complete you’ll see the resources that were created in the portal.


    Click on the Public IP address and you’ll see the DNS name you’ll use to connect to from your Minecraft launcher:


    Start your Minecraft desktop client. Use the regular desktop app, not a pocket edition, which won’t connect to regular multiplayer servers like the one you just created. Click Play->Multiplayer and then select Direct Connect or Add Server like this:


    Once connected you (and only you) should be an operator. You should be able to type commands like /gamemode 1 to switch to Creative mode:


    The full list of operator commands can be found on the Minecraft wiki:  http://minecraft.gamepedia.com/Commands#Summary_of_commands

    Troubleshooting and managing
    Troubleshooting may involve logging on to the VM directly. To do this, SSH to the DNS address. If you’re connecting from Windows you’ll need an SSH client like PuTTY (see this article for more examples), or better still SSH directly from the Windows Subsystem for Linux, which is available in the Windows 10 Anniversary Update and later. In the example above the SSH command would be:
    ssh mcserver211.southcentralus.cloudapp.azure.com -l mcuser
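    The DNS name always follows the pattern <domain name label>.<region>.cloudapp.azure.com, so the SSH command can be built from the values you chose at deployment time. A trivial sketch (the function name is my own):

```python
def ssh_command(label, region, user):
    """Build the SSH command line for an Azure VM's public DNS name."""
    # Azure public DNS names are <label>.<region>.cloudapp.azure.com
    return f"ssh {label}.{region}.cloudapp.azure.com -l {user}"
```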

    1. Manually stopping/starting the Minecraft server on the VM
      ssh to the VM
      sudo systemctl stop minecraft-server
      sudo systemctl start minecraft-server

    2. If you need to edit any of the Minecraft server files, they are in /srv/minecraft_server

    3. If you cannot SSH to the VM, you can reset credentials with the VMAccess extension: https://github.com/Azure/azure-linux-extensions/tree/master/VMAccess

    4. Upgrading Minecraft to the latest server version
    Use this script: https://msftstack.wordpress.com/2016/06/25/upgrading-minecraft-on-an-azure-vm/.

    Next Steps

    There is another Minecraft server in the Azure Marketplace called Multicraft. I’ve not tried it myself yet, but Bitnami produces good solutions, and it’s an advanced hosting platform for Minecraft, so it’s worth trying.

    If you have any ideas to improve the Azure Marketplace Minecraft Server template, the source files are here: https://github.com/gbowerman/azure-minecraft/tree/master/azure-marketplace/minecraft-server-ubuntu – please take a look and log issues or submit a PR. 

    Posted in Cloud, Computers and Internet, Games, Ubuntu | 56 Comments