Petuum Unveils Enterprise MLOps Platform
Petuum helps enterprise AI/ML teams operationalize and scale their machine learning pipelines to production with the world’s first composable platform for MLOps. After years of development at CMU, Berkeley, and Stanford, along with dozens of client engagements in finance, healthcare, energy, and heavy industry, Petuum announced a limited release of its platform through an exclusive private beta for select clients.
“We’ve spent the past five years working with clients on tough MLOps problems and learned how to multiply the productivity of AI teams through extensive research. The Petuum platform helps AI teams do more with less.” – Aurick Qiao, CEO
Petuum’s enterprise MLOps platform is built around the principles of composability, openness, and infinite extensibility. With universal standards for data, pipelines, and infrastructure, AI applications can be created from reusable building blocks and managed through a repeatable assembly-line process. Petuum users don’t have to worry about DevOps infrastructure or expertise, glue code, or tuning, and can instead focus on rapidly deploying more projects in less time, with fewer resources and less help from others.
“In training alone, we have seen a 3 to 8x return on investment. The Pythonic infrastructure orchestration and deployment system is quite easy for a data scientist to use.” – Tong Wen, Director of Engineering
The end-to-end platform includes an AI operating system built on Kubernetes, optimized for AI workloads with low/no-code tooling. Universal Pipelines lets even inexperienced users compose and run DAGs with modular DataPacks for any type of data. The low/no-code deployment manager can upgrade, reuse, and reconfigure pipelines in production, with built-in observability and user management. The platform also hosts a revolutionary experiment manager for amortized auto-tuning and optimization of model and system pipelines.
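To make the idea of composing pipelines from reusable building blocks concrete, here is a minimal, purely illustrative sketch of running a DAG of dependent steps over a shared data object. The names (`Step`, `Pipeline`, the example step functions) are hypothetical and are not Petuum’s actual API; this only shows the general pattern of dependency-ordered execution that such a system automates.

```python
# Hypothetical sketch of a composable pipeline DAG.
# Names and structure are illustrative only, not Petuum's real API.
from collections import deque

class Step:
    """A reusable building block: a named function with declared dependencies."""
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

class Pipeline:
    """Runs steps in dependency order (Kahn's topological sort)."""
    def __init__(self, steps):
        self.steps = {s.name: s for s in steps}

    def run(self, data):
        indegree = {n: len(s.deps) for n, s in self.steps.items()}
        ready = deque(n for n, d in indegree.items() if d == 0)
        results = {}
        while ready:
            name = ready.popleft()
            step = self.steps[name]
            # Each step receives the raw data plus its dependencies' outputs.
            inputs = {d: results[d] for d in step.deps}
            results[name] = step.fn(data, inputs)
            # Unlock steps whose dependencies are now all satisfied.
            for other in self.steps.values():
                if name in other.deps:
                    indegree[other.name] -= 1
                    if indegree[other.name] == 0:
                        ready.append(other.name)
        return results

# Example: ingest -> clean -> train, composed from independent steps.
pipeline = Pipeline([
    Step("ingest", lambda data, _: [x * 2 for x in data]),
    Step("clean", lambda data, ins: [x for x in ins["ingest"] if x > 2],
         deps=["ingest"]),
    Step("train", lambda data, ins: sum(ins["clean"]), deps=["clean"]),
])
print(pipeline.run([1, 2, 3]))
# {'ingest': [2, 4, 6], 'clean': [4, 6], 'train': 10}
```

Because each step declares only its name, function, and dependencies, steps can be swapped, reused, or reconfigured without rewriting the rest of the pipeline, which is the essence of the assembly-line approach described above.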
Petuum’s award-winning team is from the open-source CASL consortium and includes thought leaders in all categories of machine learning operations. Petuum customers have seen improvements of 50% or more in time to value and ML team and resource productivity. These unparalleled efficiencies only increase with scale.
“It’s the omniverse of Petuum. With Petuum AI OS, you can abstract anything and everything, as long as it works with Docker and normal computing systems. In that sense, you don’t just have this graph system, you also want to standardize all your pipes.” – Guowei He, Inception Institute of Artificial Intelligence
To learn how your team can #ScaleMLOps, apply for the free private beta at petuum.com or email [email protected].
About the speakers
Aurick Qiao is the CEO of Petuum. Aurick obtained his doctorate from Carnegie Mellon University, where he studied distributed machine learning systems. His work on elastic planning for deep learning training recently won the Jay Lepreau Best Paper Award at OSDI 2021. With his experience at top tech companies such as Microsoft, Facebook, and Dropbox, Aurick develops products to support the next generation of AI/ML operations.
Tong Wen is an architect and Director of Engineering at Petuum. Tong joined Petuum from Microsoft, where he was a founding team member of Azure Machine Learning. Tong has over 10 years of experience building innovative, high-impact AI/ML and HPC platforms with a proven track record. Prior to his first startup experience in 2008, Tong was a research scientist in computer science and engineering at IBM Research and Lawrence Berkeley National Lab. He holds a doctorate in applied mathematics from MIT.
Follow Petuum for news and updates