Introduction
Machine learning and AI are everywhere. More and more people are discovering the advantages that learning algorithms can provide – decreasing costs, improving efficiency, eliminating menial tasks, etc. Knowing at least a little bit about how these algorithms work is quickly becoming a necessity.
While building, training, and evaluating ML models is the subject of numerous courses, talks, and tutorials, one crucial part of working with ML gets much less attention – bringing trained models to production (also known as operationalization).
There are many ways of accomplishing this, depending on your technology stack of choice. A lot of workflows are currently moving to the cloud, and it’s no surprise machine learning is among them – having on-demand compute makes scaling demanding machine learning models a breeze.
Microsoft Azure is currently testing a new set of capabilities for bringing models to production. After seeing this in action at the WeAreDevelopers conference in Vienna, I wanted to get some hands-on experience with it before it goes to general availability. This set of articles is about what I learned along the way.
The scenario
A customer I’m working for is currently exploring a way of finding and extracting a piece of information from a photo of an official document. I created a small convolutional neural network for this purpose and reached a very good level of accuracy (localization error below 3 px).
The model was written in Python using Keras on top of Google’s TensorFlow and was trained in-house (not in the cloud) on a single GPU. When saved to disk, the model is about 1.2 MB.
I wanted to operationalize this model using Azure to see how it would perform in the wild.
Enter Azure Machine Learning Workbench
Azure ML Workbench (AMLW), as the name suggests, is a set of tools and libraries for creating, testing, and deploying Azure-based ML models. Since our model is already created and trained, we don’t really need the experimentation features and the actual workbench application – for us, the most important part of the toolset is the Azure ML CLI.
This CLI contains the necessary commands and integrations that will allow us to take our pre-trained model, expose it via a web service, package the whole thing into a Docker image, and either test it locally or run it in an Azure Kubernetes cluster for production-grade deployment at scale.
Quite a few steps are required to go from a local model to a Kubernetes-deployed model cluster. I will go over each step in detail and explain the how and why of it.
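To give a sense of where we are heading, the core of the flow boils down to a handful of CLI commands, roughly like the sketch below. The service and file names (doclocator, score.py, model.h5) are placeholders of mine, and the exact flags are quoted from memory, so treat this as illustrative – the following parts of the series walk through the real invocations.
az ml env set -n amlwenv -g AMLW
az ml service create realtime -n doclocator -f score.py -m model.h5 -r python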
Getting the required tools
First things first – in order to follow the steps, we need to install the necessary software. If you use an Azure Data Science Virtual Machine, all of these tools should already be available there, so you can skip to the next section.
Since we are going to use Azure, the first thing you need is an Azure account. A free one should be sufficient, but if you want to do the Kubernetes deployment, a paid subscription works much better.
In the Azure portal, we create a new Machine Learning Experimentation resource, including a Model Management account. We will need to provide names for various resources – keep these at hand as we will need them later. There is a free tier of the Model Management service that should serve well for testing the process.
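If you prefer scripting over clicking through the portal, the same resources can also be created from the CLI – the resource group with the standard az command and, if I recall the ML extension correctly, the Model Management account as well (the names below are placeholders):
az group create -n AMLW -l westeurope
az ml account modelmanagement create -n amlwmodels -g AMLW -l westeurope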
With the Azure resources in place, we can install the Azure ML CLI and its dependencies using pip:
pip install -r https://aka.ms/az-ml-o16n-cli-requirements-file
After the installation finishes, install Docker. Since the final output of the whole process is a Docker image, having Docker locally will help with debugging and testing.
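A quick sanity check that Docker is installed and its daemon is running never hurts:
docker --version
docker run hello-world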
Next, we need to log into the Azure account:
az login
This will display a link to open in a browser, where you enter the registration code shown in the command line.
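Once the login completes, we can verify which account and default subscription the CLI is now bound to:
az account show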
Setting up the CLI environment
We now need to bind the CLI to the Azure account and resource groups:
az provider register -n Microsoft.MachineLearningCompute
az provider register -n Microsoft.ContainerRegistry
az provider register -n Microsoft.ContainerService
Registering these three providers might take a minute or two; we can monitor the progress using:
az provider show -n [provider name]
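Since only the registration state is interesting here, we can also ask for just that field using the CLI’s built-in JMESPath querying:
az provider show -n Microsoft.MachineLearningCompute --query registrationState -o tsv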
Once all three providers show as “Registered”, we can proceed to the Azure environment setup:
az ml env setup -l westeurope -n amlwenv -g AMLW
If you have more than one Azure subscription available, the tool will ask which one to use. The example command above creates a new environment called “amlwenv” in the West Europe Azure region, within a resource group called AMLW (the same resource group we used for the Experimentation resource above). Provisioning a new environment will take a minute or two.
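To avoid the interactive prompt, you can also pre-select the subscription before running the setup (substitute your own subscription name or ID):
az account set -s "My Azure Subscription"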
Once the provisioning succeeds, we need to activate the environment:
az ml env set -n amlwenv -g AMLW
You should get a message similar to this one:
Setup phase complete!
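As far as I can tell, you can double-check which environment is currently active at any point with:
az ml env show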
Conclusion
In this first part of the series, I introduced the Azure Machine Learning Workbench toolset as a new way of bringing machine learning models to production.
We saw how to set up an Azure Machine Learning Experimentation resource and its Model Management account. We also installed the required CLI tools, then created and activated a new ML environment that we will use for testing the model’s service locally.
In the next part of the series, we will see how to package, describe, containerize, and test the ML service, complete with code examples.
Read next article: Operationalizing machine learning models 2/3 – From model to service