
Azure’s new machine learning features embrace Python


26 September 2018

Microsoft has announced several new additions to its Azure ML machine learning offering, including better integration with Python and automated self-tuning features for faster model development.

Python is a staple language for machine learning, thanks to its low barrier to entry and its wide range of machine learning libraries and support tools. Azure’s Python support centres on a new SDK that connects Azure ML to a developer’s existing Python environment.

The SDK is distributed as the azureml-sdk package, which can be installed using Python’s pip package manager. Most Python environments, from a generic Python install to data-science tools such as the Anaconda distribution or a Jupyter notebook, can connect to Azure ML this way.
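As a rough sketch of that setup (the workspace name, subscription ID, and resource group below are placeholders, not values from the article), installing the package and attaching a script to a workspace takes only a few lines:

    # Install the SDK into an existing Python environment:
    #   pip install azureml-sdk

    from azureml.core import Workspace

    # Connect to an existing Azure ML workspace; all three values
    # below are placeholders for details of your own subscription.
    ws = Workspace.get(name="my-workspace",
                       subscription_id="<subscription-id>",
                       resource_group="my-resource-group")
    print(ws.name, ws.location)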

Tools offered through the SDK include data preparation, logging of experiment runs, saving and retrieving experiment data from Azure blob storage, automatic distribution of model training across multiple nodes, and automatic creation of execution environments for jobs, such as remote VMs, Docker containers, and Anaconda environments.
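Run logging, for instance, works roughly as in the following sketch, which reuses the workspace object from above; the experiment name and the logged metric are illustrative, not taken from the article:

    from azureml.core import Experiment

    # Create (or reuse) a named experiment in the workspace.
    experiment = Experiment(workspace=ws, name="demo-experiment")

    # Start an interactive run, record a metric, and close the run;
    # logged values are stored with the run in the workspace.
    run = experiment.start_logging()
    run.log("accuracy", 0.91)  # illustrative metric name and value
    run.complete()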

Another new Azure ML feature supported by the Python SDK is automated machine learning. The underlying concept isn’t new—it’s a form of hyperparameter optimisation, a way of automatically tuning the settings a model training process uses so that it yields better results.

Microsoft describes it as “a recommender system for machine learning pipelines. Similar to how streaming services recommend movies for users, automated machine learning recommends machine learning pipelines for data sets.” Microsoft also claims the automation can be done without looking directly at sensitive data, thereby preserving users’ privacy.
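In the Python SDK, automated machine learning is driven by an AutoMLConfig object submitted as an experiment. The sketch below reflects the SDK roughly as released at the time, and parameter names may differ in later versions; X_train and y_train are placeholder training arrays:

    from azureml.core import Experiment
    from azureml.train.automl import AutoMLConfig

    # Describe the search: the task type, the metric to optimise,
    # and a cap on how many candidate pipelines to try.
    automl_config = AutoMLConfig(task="classification",
                                 primary_metric="accuracy",
                                 iterations=20,
                                 X=X_train,   # placeholder features
                                 y=y_train)   # placeholder labels

    # Submitting the config runs the pipeline recommender.
    run = Experiment(ws, "automl-demo").submit(automl_config)
    run.wait_for_completion()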

Other new features include:

  • Distributed deep learning, to allow models to be trained automatically on a cluster of machines without having to configure the cluster.
  • Hardware-accelerated inferencing, which uses FPGAs to speed up the serving of inferences from models.
  • Model management via CI/CD, so that Docker containers can be used to manage trained models (a registration sketch follows this list).
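On the model-management side, the first step is registering a trained model with the workspace. A minimal sketch, assuming a serialised model file whose path and name here are placeholders:

    from azureml.core.model import Model

    # Register a serialised model file with the workspace so it can
    # be versioned and packaged into a Docker image for deployment.
    model = Model.register(workspace=ws,
                           model_path="outputs/model.pkl",  # placeholder path
                           model_name="demo-model")         # placeholder name
    print(model.name, model.version)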


IDG News Service
