Deploying Azure Container Service using the azurerm Python library


Azure Container Service is an easy-to-deploy container framework for Azure. It’s an open framework that, among other things, lets you choose whether to deploy DCOS- or Swarm-based cluster orchestration. You can deploy ACS directly from the Azure Portal or command line, and it has a convenient set of REST APIs to deploy and manage the service programmatically, which is supported by the standard Azure SDKs. The azurerm Python library of Azure REST wrappers also recently added support for ACS.

Here’s an example showing how you can deploy a new Container Service with azurerm. You can see a similar example in the examples section of the azurerm github repo: create_acs.py, and see all the ACS API calls exercised in the azurerm ACS unit tests.

import azurerm
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend
from haikunator import Haikunator  # used to generate random word strings
import json
import sys

tenant_id = "your tenant id"
app_id = "your application id"
app_secret = "your application secret"
subscription_id = "your Azure subscription id"

# authenticate
access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)

# set Azure data center location
location = 'eastus'

# create resource group - use Haikunator to generate a random name
rgname = Haikunator.haikunate() 
print('Creating resource group: ' + rgname)
response = azurerm.create_resource_group(access_token, subscription_id, rgname, location)
if response.status_code != 201:
    print(json.dumps(response.json(), sort_keys=False, indent=2, separators=(',', ': ')))
    sys.exit('Expecting return code 201 from create_resource_group(): ' + str(response.status_code))

# create Container Service name and DNS values - random names again
service_name = Haikunator.haikunate(delimiter='')
agent_dns = Haikunator.haikunate(delimiter='')
master_dns = Haikunator.haikunate(delimiter='')

# generate RSA Key for container service - put your own public key here instead
key = rsa.generate_private_key(backend=default_backend(), public_exponent=65537, \
    key_size=2048)
public_key = key.public_key().public_bytes(serialization.Encoding.OpenSSH, \
    serialization.PublicFormat.OpenSSH).decode('utf-8')

# create container service (orchestrator will default to DCOS)
agent_count = 3                # the container hosts which will do the work
agent_vm_size = 'Standard_A1'
master_count = 1               # use 3 for production deployments
admin_user = 'azure'
print('Creating container service: ' + service_name)
print('Agent DNS: ' + agent_dns)
print('Master DNS: ' + master_dns)
print('Agents: ' + str(agent_count) + ' * ' + agent_vm_size)
print('Master count: ' + str(master_count))

response = azurerm.create_container_service(access_token, subscription_id, \
    rgname, service_name, agent_count, agent_vm_size, agent_dns, \
    master_dns, admin_user, public_key, location, master_count=master_count)
if response.status_code != 201:
    sys.exit('Expecting return code 201 from create_container_service(): ' + str(response.status_code))

print(json.dumps(response.json(), sort_keys=False, indent=2, separators=(',', ': ')))
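
The create call returns while provisioning continues in the background. As a minimal sketch (assuming the standard ARM response body, and assuming azurerm exposes a get_container_service() wrapper with the argument order shown; check the azurerm ACS unit tests for the exact call), you can check progress like this:

# provisioningState moves from Creating to Succeeded as the cluster deploys
print('Provisioning state: ' + response.json()['properties']['provisioningState'])

# re-query the service later - the function name and argument order are assumptions
acs = azurerm.get_container_service(access_token, subscription_id, rgname, service_name)
print(json.dumps(acs, sort_keys=False, indent=2, separators=(',', ': ')))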

Generating RSA keys with Python 3

I was looking for a quick way to generate an RSA key in Python 3 for some unit tests which needed a public key as an OpenSSH string. It ended up taking longer than expected because I started by trying to use the pycrypto library, which is hard to install on Windows (weird dependencies on specific Visual Studio runtimes) and has unresolved bugs with Python 3.

If you’re using Python 3 it’s much easier to use the cryptography library.

Here’s an example which generates an RSA key pair, prints the private key as a string in PEM container format, and prints the public key as a string in OpenSSH format.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

# generate private/public key pair
key = rsa.generate_private_key(backend=default_backend(), public_exponent=65537, \
    key_size=2048)

# get public key in OpenSSH format
public_key = key.public_key().public_bytes(serialization.Encoding.OpenSSH, \
    serialization.PublicFormat.OpenSSH)

# get private key in PEM container format
pem = key.private_bytes(encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption())

# decode to printable strings
private_key_str = pem.decode('utf-8')
public_key_str = public_key.decode('utf-8')

print('Private key = ')
print(private_key_str)
print('Public key = ')
print(public_key_str)
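
If you want the private key protected with a passphrase, the same private_bytes() call can encrypt the PEM output. A minimal sketch (the passphrase is just an example):

# encrypt the private key with a passphrase before writing it anywhere
encrypted_pem = key.private_bytes(encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.BestAvailableEncryption(b'my secret passphrase'))
print(encrypted_pem.decode('utf-8'))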


Install Azure CLI 2.0 on the Windows 10 bash on Ubuntu shell

Microsoft Azure CLI 2.0 is an excellent Python-based reboot of the Azure CLI which is now GA. If you want to install and run it on the bash on Ubuntu shell provided as a developer feature with the Windows 10 Anniversary update, here’s what you need to do.

1. Install the bash on Ubuntu shell. Follow these instructions if you don’t have it. If you already have it but have installed random libraries and want to reset, you can do that by uninstalling (after saving any data) with the ‘lxrun /uninstall’ command from a command window, and then clicking on the bash on Ubuntu icon again to reinstall.

2. Get the Ubuntu subsystem up to date:

sudo apt-get update
sudo apt-get upgrade

3. Make sure you can ping your host name. The first CLI install instruction in the next step will fail if you can’t ping your hostname. To fix it I edited my /etc/hosts file and added the hostname to the 127.0.0.1 localhost line:

127.0.0.1 localhost PONDLIFE

I might change this later depending on what I need to do with networking and name resolution, but this works for now.
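
A quick way to confirm name resolution is working before moving on:

ping -c 1 $(hostname)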

4. Follow the apt-get install instructions for a 64-bit system from the Azure CLI github install notes:

$ sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
$ sudo apt-get install apt-transport-https
$ sudo apt-get update && sudo apt-get install azure-cli

Once the azure-cli package is installed you can run the ‘az’ command and see the options:
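
For example, after logging in you can point the CLI at a subscription and list your resource groups (the subscription name below is just a placeholder):

az login
az account set --subscription "My Subscription"
az group list --output table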



Creating an Azure VM Scale Set with the azurerm Python library

There are various ways to create an Azure VM Scale Set. The easiest methods are directly in the Azure portal, using the CLI quick-create command, or deploying an Azure template. If instead you want to create a VMSS programmatically and imperatively, that is, by creating each resource one call at a time, here’s how to do it using the azurerm library of REST wrapper functions.

Before using this library you need to create a service principal. These steps are covered here.

The azurerm library added a create_vmss() function in version 0.6.12. The initial implementation has some limitations, notably:

  • Doesn’t support creating VMs with certificates. Only user/password.
  • Expects a load balancer with an inbound NAT pool to be created and its ID to be provided as a function argument.
  • Only supports VM platform images, not custom images.
    These are easy to fix. Let me know if you need one of these features.

    Here’s an example program which creates a resource group, VNet, public IP address, load balancer, storage accounts, and NSG, and uses them to create a scale set. You can find a similar example program in the azurerm examples folder here: create_vmss.py. You can also see how the azurerm unit tests create a VM and a VMSS in the same VNet here: compute_test.py.

    # simple program to do an imperative VMSS quick create from a platform image
    # Arguments:
    # -name [resource names are defaulted from this]
    # -image
    # -location [same location used for all resources]
    import argparse
    import azurerm
    import json
    import sys
    from random import choice
    from string import ascii_lowercase
    from haikunator import Haikunator
    
    # validate command line arguments
    argParser = argparse.ArgumentParser()
    
    argParser.add_argument('--name', '-n', required=True, action='store', help='Name of vmss')
    argParser.add_argument('--capacity', '-c', required=True, action='store',
                           help='Number of VMs')
    argParser.add_argument('--location', '-l', action='store', help='Location, e.g. eastus')
    argParser.add_argument('--verbose', '-v', action='store_true', default=False, help='Print operational details')
    
    args = argParser.parse_args()
    
    name = args.name
    location = args.location
    capacity = args.capacity
       
    tenant_id = 'put your tenant id here'
    app_id = 'put your app id here'
    app_secret = 'put your app secret here'
    subscription_id = 'put your subscription id here'
    
    # authenticate
    access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)
    
    # create resource group
    print('Creating resource group: ' + name)
    rmreturn = azurerm.create_resource_group(access_token, subscription_id, name, location)
    print(rmreturn)
    
    # create NSG - not strictly necessary
    nsg_name = name + 'nsg'
    print('Creating NSG: ' + nsg_name)
    rmreturn = azurerm.create_nsg(access_token, subscription_id, name, nsg_name, location)
    nsg_id = rmreturn.json()['id']
    print('nsg_id = ' + nsg_id)
    
    # create NSG rule
    nsg_rule = 'ssh'
    print('Creating NSG rule: ' + nsg_rule)
    rmreturn = azurerm.create_nsg_rule(access_token, subscription_id, name, nsg_name, nsg_rule, \
        description='ssh rule', destination_range='22')
    
    # create set of storage accounts, and construct container array
    print('Creating storage accounts')
    container_list = []
    for count in range(5):
        sa_name = ''.join(choice(ascii_lowercase) for i in range(10))
        print(sa_name)
        rmreturn = azurerm.create_storage_account(access_token, subscription_id, name, sa_name, \
            location, storage_type='Standard_LRS')
        if rmreturn.status_code == 202:
            container = 'https://' + sa_name + '.blob.core.windows.net/' + name + 'vhd'
            container_list.append(container)
        else:
            print('Error ' + str(rmreturn.status_code) + ' creating storage account ' + sa_name)
            sys.exit()
    
    # create VNET
    vnetname = name + 'vnet'
    print('Creating VNet: ' + vnetname)
    rmreturn = azurerm.create_vnet(access_token, subscription_id, name, vnetname, location, \
        nsg_id=nsg_id)
    print(rmreturn)
    
    subnet_id = rmreturn.json()['properties']['subnets'][0]['id']
    print('subnet_id = ' + subnet_id)
    
    # create public IP address
    public_ip_name = name + 'ip'
    dns_label = name + 'ip'
    print('Creating public IP address: ' + public_ip_name)
    rmreturn = azurerm.create_public_ip(access_token, subscription_id, name, public_ip_name, \
        dns_label, location)
    print(rmreturn)
    ip_id = rmreturn.json()['id']
    print('ip_id = ' + ip_id)
    
    # create load balancer with nat pool
    lb_name = vnetname + 'lb'
    print('Creating load balancer with nat pool: ' + lb_name)
    rmreturn = azurerm.create_lb_with_nat_pool(access_token, subscription_id, name, lb_name, ip_id, \
        '50000', '50100', '22', location)
    be_pool_id = rmreturn.json()['properties']['backendAddressPools'][0]['id']
    lb_pool_id = rmreturn.json()['properties']['inboundNatPools'][0]['id']
    
    # create VMSS
    vmss_name = name
    vm_size = 'Standard_A1'
    publisher = 'Canonical'
    offer = 'UbuntuServer'
    sku = '16.04.0-LTS'
    version = 'latest'
    username = 'azure'
    
    # this example creates a random password. You might want to change this or at
    # least save the random password that gets created somewhere
    password = Haikunator.haikunate(delimiter=',') 
    
    print('Creating VMSS: ' + vmss_name)
    rmreturn = azurerm.create_vmss(access_token, subscription_id, name, vmss_name, vm_size, capacity, \
        publisher, offer, sku, version, container_list, subnet_id, \
        be_pool_id, lb_pool_id, location, username=username, password=password)
    print(rmreturn)
    print(json.dumps(rmreturn.json(), sort_keys=False, indent=2, separators=(',', ': ')))
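
    To run the script, pass the arguments defined by argparse above, for example (assuming you saved the program as create_vmss.py; the name, capacity and location values are just examples):

    python create_vmss.py --name myvmss --capacity 3 --location eastus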
    

    Next steps

    Next on my to-do list:

    – Add a VMSS create example using the official Azure Python SDK.

    – Add Azure ACS wrappers to the azurerm library.

    – Make the azurerm create_vmss() and create_vm() functions support certificates.


    How to create an Azure VM with the azurerm Python library

    There are at least two ways to work with Azure infrastructure using Python. You can use the official Azure SDK for Python which supports all Azure functionality, or the azurerm REST wrapper library, which is unofficial and supports a subset of the Azure REST API.

    When to use which? You might use azurerm when you need something very lightweight that is easy to extend and contribute to. Use the official SDK if you’re creating a production app or service. Use azurerm if you’re writing a quick ops script, like figuring out which VMs are in which fault domains.

    Here’s a simple azurerm example which goes through the steps to create a virtual machine. Note: since creating a VM imperatively with the Azure Resource Manager deployment model requires several steps, in most cases it is easier to simply deploy an ARM template to create a set of resources declaratively. When deploying a template, Azure Resource Manager takes care of parallelizing resource creation, so your program doesn’t need multithreading or checks for resource completion (that isn’t required for a simple VM created imperatively like this, but it would be for creating a whole set of VMs or scale sets). The azurerm library also includes functions to deploy templates.

    This example first creates the VM resources, including a resource group, storage account, public IP address, VNet, and NIC, and then creates the VM. The current azurerm.create_vm() function creates a pretty simple VM and lacks options for data disks, disk encryption, Key Vault integration, etc., but you’re welcome to extend it.

    import azurerm
    import json
    
    tenant_id = 'your-tenant-id'
    app_id = 'your-application-id'
    app_secret = 'your-application-secret'
    subscription_id = 'your-subscription-id'
    
    # base name used for the resource group and other resources - it's also used as the
    # storage account name, so keep it lowercase, alphanumeric and globally unique
    name = 'myvmdemo'
    location = 'eastus'
    
    # authenticate
    access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)
    
    # create resource group
    print('Creating resource group: ' + name)
    rmreturn = azurerm.create_resource_group(access_token, subscription_id, name, location)
    print(rmreturn)
    
    # create NSG
    nsg_name = name + 'nsg'
    print('Creating NSG: ' + nsg_name)
    rmreturn = azurerm.create_nsg(access_token, subscription_id, name, nsg_name, location)
    nsg_id = rmreturn.json()['id']
    print('nsg_id = ' + nsg_id)
    
    # create NSG rule
    nsg_rule = 'ssh'
    print('Creating NSG rule: ' + nsg_rule)
    rmreturn = azurerm.create_nsg_rule(access_token, subscription_id, name, nsg_name, nsg_rule, description='ssh rule',
                                      destination_range='22')
    print(rmreturn)
    
    # create storage account (Standard_LRS here - premium storage isn't supported on Standard_A1 VMs)
    print('Creating storage account: ' + name)
    rmreturn = azurerm.create_storage_account(access_token, subscription_id, name, name, location, storage_type='Standard_LRS')
    print(rmreturn)
    
    # create VNET
    vnetname = name + 'vnet'
    print('Creating VNet: ' + vnetname)
    rmreturn = azurerm.create_vnet(access_token, subscription_id, name, vnetname, location, nsg_id=nsg_id)
    print(rmreturn)
    # print(json.dumps(rmreturn.json(), sort_keys=False, indent=2, separators=(',', ': ')))
    subnet_id = rmreturn.json()['properties']['subnets'][0]['id']
    print('subnet_id = ' + subnet_id)
    
    # create public IP address
    public_ip_name = name + 'ip'
    dns_label = name + 'ip'
    print('Creating public IP address: ' + public_ip_name)
    rmreturn = azurerm.create_public_ip(access_token, subscription_id, name, public_ip_name, dns_label, location)
    print(rmreturn)
    ip_id = rmreturn.json()['id']
    print('ip_id = ' + ip_id)
    
    # create NIC
    nic_name = name + 'nic'
    print('Creating NIC: ' + nic_name)
    rmreturn = azurerm.create_nic(access_token, subscription_id, name, nic_name, ip_id, subnet_id, location)
    print(rmreturn)
    nic_id = rmreturn.json()['id']
    
    # create VM
    vm_name = name
    vm_size = 'Standard_A1'
    publisher = 'Canonical'
    offer = 'UbuntuServer'
    sku = '16.04.0-LTS'
    version = 'latest'
    os_uri = 'https://' + name + '.blob.core.windows.net/vhds/osdisk.vhd'
    username = 'rootuser'
    password = 'myPassw0rd'
    
    print('Creating VM: ' + vm_name)
    rmreturn = azurerm.create_vm(access_token, subscription_id, name, vm_name, vm_size, publisher, offer, sku, \
        version, name, os_uri, nic_id, location, username=username, password=password)
    print(rmreturn)
    print(json.dumps(rmreturn.json(), sort_keys=False, indent=2, separators=(',', ': ')))
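
    When you’ve finished experimenting, the simplest cleanup is to delete the resource group, which removes everything the script created. A minimal sketch, assuming azurerm exposes a delete_resource_group() wrapper (check the azurerm source for the exact name and arguments):

    # delete the resource group and everything in it (wrapper name is an assumption)
    rmreturn = azurerm.delete_resource_group(access_token, subscription_id, name)
    print(rmreturn)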
    

    Compare this azurerm example with an Azure Python SDK example to create a VM.


    Upgrading Minecraft on an Azure VM

    When you deploy an Azure Minecraft VM using the Azure Resource Manager template it should be running the latest Minecraft server version, but the Mojang folks update the server fairly often, and before you know it your Minecraft launcher is complaining that the server is no longer on the latest version. If you deploy the Azure Marketplace Minecraft image rather than the ARM template, the server is more likely to be out of date. Here’s how you can upgrade the Minecraft server on the Azure VM to the latest version.

    The basic steps are:

    • ssh to the VM.
    • Download the latest Minecraft server JAR file.
    • Update the minecraft-server systemctl service to point to the new JAR file.
    • Restart the minecraft-server service.

    For convenience here’s a script that performs all those steps automatically. I’ll paste it below, though go here for the latest version: https://github.com/gbowerman/azure-minecraft/blob/master/scripts/mineserverupgrade.sh.

    To upgrade a Minecraft server on Azure (as long as it was originally deployed using the ARM template or Marketplace image), copy the script to the virtual machine and run it as root using sudo. Since it’s a fairly short script, a simple way to get it onto the VM is to start vi (or nano, or whatever), paste the script into the editor, and save it. Then remember to run chmod +x on the file to make it executable, and run it, e.g. like this (if you’re upgrading to Minecraft server version 1.10.2):

    sudo bash
    ./mineserverupgrade.sh 1.10.2
    

    Here’s the listing:

    #!/bin/bash
    # Minecraft server upgrade script for Azure
    # $1 = new version (e.g. 1.10.2)
    
    # check for a command line argument
    if [[ ! $# -eq 1 ]] ; then
        echo The Minecraft server version needs to be passed as a command line argument, e.g. sudo $0 1.10.2
        exit 1
    fi
    
    # server values
    minecraft_server_path=/srv/minecraft_server
    server_jar=minecraft_server.$1.jar
    SERVER_JAR_URL=https://s3.amazonaws.com/Minecraft.Download/versions/$1/minecraft_server.$1.jar
    
    # adjust memory usage depending on VM size
    totalMem=$(free -m | awk '/Mem:/ { print $2 }')
    if [ $totalMem -lt 1024 ]; then
        memoryAlloc=512m
    else
        memoryAlloc=1024m
    fi
    
    cd $minecraft_server_path
    
    # download the server jar
    # download the server jar, retrying until the download succeeds
    while ! wget $SERVER_JAR_URL; do
        sleep 10
    done
    
    # stop the service
    systemctl stop minecraft-server
    
    # move the old service file
    mv /etc/systemd/system/minecraft-server.service /tmp/minecraft-server.service.old
    
    # recreate the service
    touch /etc/systemd/system/minecraft-server.service
    printf '[Unit]\nDescription=Minecraft Service\nAfter=rc-local.service\n' >> /etc/systemd/system/minecraft-server.service
    printf '[Service]\nWorkingDirectory=%s\n' $minecraft_server_path >> /etc/systemd/system/minecraft-server.service
    printf 'ExecStart=/usr/bin/java -Xms%s -Xmx%s -jar %s/%s nogui\n' $memoryAlloc $memoryAlloc $minecraft_server_path $server_jar >> /etc/systemd/system/minecraft-server.service
    printf 'ExecReload=/bin/kill -HUP $MAINPID\nKillMode=process\nRestart=on-failure\n' >> /etc/systemd/system/minecraft-server.service
    printf '[Install]\nWantedBy=multi-user.target\nAlias=minecraft-server.service' >> /etc/systemd/system/minecraft-server.service
    
    # reload systemd so it picks up the new unit file, then start the service
    systemctl daemon-reload
    systemctl start minecraft-server
    
    # closing message
    echo Upgrade completed. If any problems, you can revert to the previous version by running\:
    echo sudo systemctl stop minecraft-server
    echo sudo cp /tmp/minecraft-server.service.old /etc/systemd/system/minecraft-server.service
    echo sudo systemctl daemon-reload
    echo sudo systemctl start minecraft-server
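
    After the script finishes, you can confirm the new server version came up cleanly by checking the service status and its recent log output:

    sudo systemctl status minecraft-server
    sudo journalctl -u minecraft-server -n 20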
    


    How to convert an Azure virtual machine to a VM Scale Set

    If you have a regular Azure Resource Manager virtual machine, you can convert it to be a source image for a VM Scale Set.

    Update 3/21/17: Since Azure Managed Disks were introduced, it’s now recommended to create scale sets based on Managed Disks instead of the traditional storage account method. You get several advantages:

  • No more need to manage a bunch of storage accounts.
  • You can attach data disks to scale sets.
  • You can create scale sets based on platform images of up to 1000 VMs.
  • Scale sets based on custom images can be up to 100 VMs.

    For a description of how to convert an Azure virtual machine to a VM Scale Set based on Managed Disks, see Derek Martin’s ARM Templates and Managed Disks article.

    The rest of this post is based on creating a scale set using self-managed storage. Hopefully I’ll update it at some point 🙂

    In this example I’ll convert an Azure VM running a Minecraft server to a load-balanced VM scale set of servers. Multiple Minecraft client connections will hit a load balancer and be routed to different VMs in the set. This would enable you to start with a single VM and scale it out to handle a much larger load.

    In a nutshell, to convert a single VM into a scale set you need to: capture a generalized image of the VM and copy that image into the storage account you’ll use for the set, then deploy a VM Scale Set with a custom image pointing to the generalized image. The steps are:

    1. Generalize the VM (e.g. run sysprep on Windows or waagent -deprovision on Linux).

    2. Stop deallocate the VM.

    3. Set the VM state as generalized.

    4. Save the image to a storage account.

    5. Copy the image to the storage account where you want to create the scale set.

    6. Deploy a VM Scale Set template with the image->uri property set to the image location.

    Now let’s go through those steps in more detail.

    Before starting make sure you have an Azure VM that you can log in to. For the Minecraft scenario the starting point would be to deploy a Minecraft server VM using the Minecraft Server Azure template. See Creating a Minecraft server using an Azure Resource Manager template for more information on how to do that.

    Note: in the steps below I will mostly use Azure CLI examples. Steps 1-4 for PowerShell are described in detail in Stephane Laponte’s excellent blog post STEP BY STEP: HOW TO CAPTURE YOUR OWN CUSTOM VIRTUAL MACHINE IMAGE UNDER AZURE RESOURCE MANAGER. Another useful PowerShell resource for steps 1-4, particularly if you have a Windows VM is the Azure documentation: How to capture a Windows virtual machine in the Resource Manager deployment model.

    1. Generalize the VM

    The first step in preparing a VM to be a source image for new VM deployments is to log in to the machine and generalize the image so it can be assigned a new name/user/password/certificate etc. at VMSS deployment time.

    On Windows that means running sysprep. On Linux call the VM agent with the -deprovision argument: sudo waagent -deprovision.


    2. Stop deallocate the VM

    Stop deallocate the VM so the OS drive image can be captured.

    For PowerShell the command is Stop-AzureRmVm. The Azure CLI command is azure vm deallocate, e.g.

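
    For example, with the Azure CLI (the resource group and VM name placeholders are illustrative):

    azure vm deallocate <resource-group> <vm-name>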

    3. Set the VM state as generalized

    Now tell Azure that the VM is generalized.

    The PowerShell command is:
    Set-AzureRmVM -ResourceGroupName -Name -Generalized

    The Azure CLI command is azure vm generalize, e.g.

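
    For example (placeholder names again):

    azure vm generalize <resource-group> <vm-name>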

    4. Save the image to a storage account

    Now it’s time to capture the generalized image and save it in a storage account. The PowerShell command is Save-AzureRmVMImage. The CLI command is azure vm capture (azure vm capture mineset in this example). You’ll get back a template for the captured image which includes the properties->storageProfile->osDisk->image->uri setting; that URI is the link to the captured image that you’ll need when copying it to a new storage account.

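
    For example (placeholder names again; the -t option for writing the capture template to a file is from memory, so check azure vm capture --help for the exact options in your CLI version):

    azure vm capture <resource-group> <vm-name> <vhd-name-prefix> -t capturedtemplate.json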

    5. Copy the image to the storage account where you’ll create the scale set

    If the generalized VM image is already in the storage account and container you want it to be in, fine. In most cases, though, at this point you’ll probably want to create a new storage account to use for the scale set you’ll be creating. You can copy the image to the new storage account using PowerShell, the CLI, or a storage explorer like CloudBerry Explorer. I like the CloudBerry tool because it offers a nice split screen showing two storage accounts at a time, which makes it easy to copy blobs between them.


    Make a note of the URI for the new image as it will be used when deploying the scale set.

    6. Deploy a VM Scale Set template with the image->uri property set to the new image location

    The last step is to create a VM Scale Set with the image URI property set to the new image. There are some example ARM templates which allow you to specify a custom image, like this one in the Azure Quickstart Templates: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-windows-customimage. For the Minecraft server scenario, besides needing a Linux image, I also wanted to create a public IP address and a load balancer with a rule to load balance incoming requests on the default Minecraft server port of 25565 to every VM in the set. The specialized template I created is here: vmss-minecraft-custom.json.

    Deploying this template to Azure as a new custom deployment in the portal allows the URI of the new image to be specified as a deployment parameter (along with the number of VMs, VM size, etc.).
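
    If you prefer the command line to the portal, the same template can be deployed with the CLI. This is a sketch with placeholder names and a hypothetical parameters file, and the option names are from memory, so check azure group deployment create --help:

    azure group deployment create -f vmss-minecraft-custom.json -e vmss-minecraft-params.json -g <resource-group> -n <deployment-name>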


    Once the template is successfully deployed, a VM Scale Set of 10 Minecraft servers is now running behind a load balancer. Yay! Now the set of Minecraft servers can handle 10 times the incoming load, or I could scale this out to 40 servers.

    Note the Minecraft world on each VM in the scale set is exactly how it was when I generalized the original VM, with the same operators, whitelist settings etc. When users start making changes to different VMs the worlds will diverge, but I can always reimage the VMs in the scale set to set them back to the source image.

    One manual thing I had to do was start the Minecraft server on each VM (i.e. SSH to each VM using the inbound NAT rules defined in the template and run sudo systemctl start minecraft-server). This shouldn’t be necessary, and it may have been because I had shut down the Minecraft server before generalizing the image.
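
    For reference, connecting to the first instance through the NAT rules and starting the service looks something like this (the front-end port, 50000 for the first VM here, and the admin user name come from the template, so treat both as assumptions):

    ssh -p 50000 <admin-user>@<load-balancer-public-ip>
    sudo systemctl start minecraft-server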


    Next steps

    This was a basic walkthrough of converting a standalone Azure VM to a VM Scale Set. A next logical step would be to configure the VMSS template to use Azure autoscale. That way, instead of launching a fixed number of VMs and manually scaling in or out, you could save costs by automatically scaling out and in depending on a workload metric such as average CPU usage.
