Kubeflow brings Kubernetes to ML workloads
3 September 2018
Now in beta, the open source Kubeflow project aims to help deploy a machine learning stack on the Kubernetes container orchestration system.
The Kubeflow machine learning toolkit project is intended to help deploy machine learning workloads across multiple nodes, a setting in which breaking up and distributing a workload can add computational overhead and complexity. Kubernetes itself is tasked with making it easier to manage distributed workloads, while Kubeflow centres on making the running of these workloads portable, scalable, and simple. Scripts and configuration files are part of the project: users can customise their configuration and run the scripts to deploy containers to a chosen environment.
To help manage deployments, Kubeflow works with Version 0.11.0 or later of the Ksonnet framework, used for writing and deploying Kubernetes configurations to clusters. A Kubernetes cluster running Version 1.8 or later is also required. Kubeflow works with the following technologies:
- TensorFlow machine learning models, which can be trained for use on premises or in the cloud.
- Jupyter notebooks, to manage TensorFlow training jobs.
- Seldon Core, a platform for deploying machine learning models on Kubernetes.
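The ksonnet-based workflow described above can be sketched as a short sequence of commands. This is an illustrative outline only: the application name, registry tag, and component versions shown here are assumptions for the sake of example, not prescribed by the article, and the commands require ksonnet and a running Kubernetes cluster.

```shell
# Create a ksonnet application directory for the deployment
# ("my-kubeflow" is a placeholder name).
ks init my-kubeflow
cd my-kubeflow

# Point ksonnet at the Kubeflow package registry
# (the version tag here is illustrative).
ks registry add kubeflow github.com/kubeflow/kubeflow/tree/master/kubeflow
ks pkg install kubeflow/core

# Generate the core Kubeflow components, then apply them
# to the cluster environment named "default".
ks generate kubeflow-core kubeflow-core
ks apply default -c kubeflow-core
```

Because ksonnet keeps configuration as parameterised templates rather than raw YAML, the same application directory can be applied to different environments (for example, a local cluster and a cloud cluster) with per-environment overrides.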
Kubeflow extends the Kubernetes API by adding custom resource definitions to a cluster, so Kubernetes can treat machine learning workloads as first-class citizens. Described by the open source project as cloud-native, Kubeflow also integrates with the Ambassador project for ingress and with Pachyderm for management of data science pipelines. Plans call for extending Kubeflow beyond TensorFlow, with backing considered for the PyTorch and MXNet deep learning frameworks.
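To show what "machine learning workloads as first-class citizens" means in practice, the sketch below submits a TensorFlow training job as a custom resource, so it can be created and inspected with ordinary `kubectl` commands. The resource schema shown follows the general shape of Kubeflow's early TFJob definition, but the field names, API version, and container image are assumptions for illustration and require a cluster with the Kubeflow CRDs installed.

```shell
# Submit a TFJob custom resource (schema and values are illustrative).
kubectl apply -f - <<EOF
apiVersion: kubeflow.org/v1alpha1
kind: TFJob
metadata:
  name: example-training-job
spec:
  replicaSpecs:
    - replicas: 1
      tfReplicaType: MASTER
      template:
        spec:
          containers:
            - name: tensorflow
              # Hypothetical training image; substitute your own.
              image: my-registry/my-tf-model:latest
          restartPolicy: OnFailure
EOF

# Because TFJob is a registered custom resource, standard tooling works:
kubectl get tfjobs
kubectl describe tfjob example-training-job
```

The point of the custom resource definition is exactly this symmetry: training jobs are listed, described, and deleted with the same commands used for built-in Kubernetes objects.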
Kubeflow can be downloaded from GitHub.
IDG News Service