AWS unveils open source model server for PyTorch

TorchServe supports multi-model serving and model versioning for A/B testing

27 April 2020

Amazon Web Services (AWS) has unveiled TorchServe, an open source tool for serving PyTorch machine learning models. TorchServe is maintained by AWS in partnership with Facebook, which developed PyTorch, and is available as part of the PyTorch project on GitHub.

Released on 21 April, TorchServe is designed to make it easy to deploy PyTorch models at scale in production environments. Goals include lightweight serving with low latency and high-performance inference.

The key features of TorchServe include:

  • Default handlers for common applications such as object detection and text classification, sparing users from having to write custom code to deploy models.
  • Multi-model serving.
  • Model versioning for A/B testing.
  • Metrics for monitoring.
  • RESTful endpoints for application integration (illustrated in the sketch after this list).

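As a rough illustration of those RESTful endpoints, the Python sketch below sends an image to TorchServe's inference API, which listens on port 8080 by default. The model name "densenet161", the image file, and the localhost address are assumptions made for the example; the shape of the response depends on the handler the model was packaged with.

```python
import requests

# TorchServe's inference API listens on port 8080 by default.
# "densenet161" is an assumed model name; any registered model works.
INFERENCE_URL = "http://localhost:8080/predictions/densenet161"

# POST the raw image bytes; a built-in handler such as image_classifier
# decodes the image, runs inference, and returns JSON predictions.
with open("kitten.jpg", "rb") as f:
    response = requests.post(INFERENCE_URL, data=f)

response.raise_for_status()
print(response.json())  # e.g. a mapping of class labels to scores
```
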
TorchServe can be used in any deployment environment, including Kubernetes, Amazon SageMaker, Amazon EKS, and Amazon EC2. It requires Java 11 on Ubuntu Linux or macOS; detailed installation instructions are available on GitHub.
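Once a TorchServe instance is up in one of those environments, models can also be registered and inspected at runtime through its management API, which listens on port 8081 by default. The Python sketch below is a minimal illustration of multi-model serving; the archive name "fastrcnn.mar" and the localhost address are assumptions, and the archive is expected to be available in the server's configured model store.

```python
import requests

# TorchServe's management API listens on port 8081 by default.
MANAGEMENT_URL = "http://localhost:8081/models"

# Register an additional model archive (.mar) from the model store
# and spin up one worker for it. "fastrcnn.mar" is an assumed name.
resp = requests.post(
    MANAGEMENT_URL,
    params={"url": "fastrcnn.mar", "initial_workers": 1},
)
resp.raise_for_status()
print(resp.json())

# List every model the server currently holds (multi-model serving).
print(requests.get(MANAGEMENT_URL).json())
```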

IDG News Service
