
MLOps for Teams On a Mission
Scales from Laptop to Cloud, On-Prem & Cross- or Hybrid-Cloud
Enabled for Azure, AWS, Kubernetes and Edge Deployments

Built on widely used, state-of-the-art components of the distributed computing stack, omega|ml is easily deployed to any cloud. It works best when your cloud runs Kubernetes, however bare-metal, Docker or any other container technology that supports Python, RabbitMQ and MongoDB will work.

With the Open Source core, omega|ml is readily deployable by any DevOps team familiar with Docker. The Commercial Edition adds ready-made support for Kubernetes.
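To make the Docker deployment path concrete, here is a minimal sketch of a Compose file wiring up the stack named above (Python, RabbitMQ, MongoDB). The `omegaml/omegaml` image name and the `OMEGA_MONGO_URL`/`OMEGA_BROKER` settings are assumptions for illustration only; consult the project's documentation for the supported setup:

```yaml
# hypothetical docker-compose sketch -- image and variable names are
# illustrative, not the project's official deployment file
version: "3"
services:
  mongodb:
    image: mongo:4.4
  rabbitmq:
    image: rabbitmq:3
  omegaml:
    image: omegaml/omegaml        # assumed image name
    environment:
      OMEGA_MONGO_URL: mongodb://mongodb:27017/omega   # assumed setting
      OMEGA_BROKER: amqp://rabbitmq:5672//             # assumed setting
    depends_on:
      - mongodb
      - rabbitmq
```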

For on-premise deployments, or if you need enterprise-ready security without managing the complexity of a large-scale data & compute cluster, the omega|ml Commercial Edition is available.

Setup is a breeze: omega|ml Commercial Edition is readily configured for any Kubernetes cloud. Deployed using our GitOps-enabled Cloud Manager, omega|ml is ready to run AI & ML workloads at any scale.

The built-in JupyterHub, AppHub and serverless lambda-style Runtime Worker services are instantly ready to use, providing notebooks for straightforward team collaboration and interactive computing at scale. All while integrating your DBMS and existing data science pipelines to run from the same easy-to-use API.

Fun Fact: Unlike most vendors, we do not license by the number or size of compute nodes - with unlimited compute capacity, your license cost is known in advance and stays fixed.

DELIVER BUSINESS VALUE
HELLO PLATFORM ENGINEERS

Leverage the Cloud, no Vendor Lock-in

All globally leading cloud providers offer their version of ready-to-go AI services. However, they come at the cost of unavoidable vendor lock-in.

Don't let this happen to you: use the vendor-independent omega|ml platform to stay flexible.

Make Platform Engineering & DevOps Happy

The omega|ml MLOps platform provides a cost-effective and scalable approach to running AI and ML workloads - all while keeping infrastructure, data and processes under your control. No Lock-In. Your Data Center. Ready today.

Enable AI Teams To Deliver Business Value

Analytics teams need MLOps to leverage the full potential of AI and machine learning. There is no value in building this capability in-house. Use omega|ml and start delivering.

Any Cloud

Data Products Platform

Open & Flexible Integration from Development Lab to Production

omega|ml comes with batteries included, but new requirements are not a problem: alternate data sources & sinks, third-party applications and services can easily be added thanks to omega|ml's flexible multi-layer and microservices architecture.

The following extension points are available:

feature store and model repository

STORAGE BACKEND

Extend what objects can be stored and retrieved

pre-deployed model runtime

RUNTIME BACKEND

Add any compute backend to run any model through the same API

any cloud or on-premises

COMPUTE & DATAFRAME

Add processing options such as AutoML or Model Visualization
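The storage-backend extension point above can be pictured with a minimal, self-contained sketch. The class and method names below illustrate the plugin pattern (a backend declares which objects it supports and how to serialize them, and the store dispatches to the first match); they are not omega|ml's actual backend API:

```python
# illustrative sketch of a pluggable storage backend -- names are
# hypothetical, not omega|ml's real extension interface
import json

class StorageBackend:
    """Base contract: can this backend handle the object, and how?"""
    def supports(self, obj):
        raise NotImplementedError
    def put(self, obj, name, store):
        raise NotImplementedError
    def get(self, name, store):
        raise NotImplementedError

class JsonDictBackend(StorageBackend):
    """Example backend that stores plain dicts as JSON documents."""
    def supports(self, obj):
        return isinstance(obj, dict)
    def put(self, obj, name, store):
        store[name] = json.dumps(obj)
    def get(self, name, store):
        return json.loads(store[name])

class ObjectStore:
    """Minimal store that dispatches put/get to registered backends."""
    def __init__(self):
        self.backends = []
        self.data = {}
    def register(self, backend):
        self.backends.append(backend)
    def put(self, obj, name):
        # first backend that supports the object wins
        for backend in self.backends:
            if backend.supports(obj):
                return backend.put(obj, name, self.data)
        raise TypeError(f'no backend supports {type(obj)}')
    def get(self, name):
        # a real store records which backend wrote each object;
        # with a single backend, dispatching is trivial
        return self.backends[0].get(name, self.data)

om_store = ObjectStore()
om_store.register(JsonDictBackend())
om_store.put({'model': 'forecast', 'version': 1}, 'meta/forecast')
print(om_store.get('meta/forecast'))
```

Registering a new backend extends what the store accepts without touching the store itself, which is the point of this extension style.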

Architecture

Works with any machine learning framework

# Python and R frameworks

# — scikit-learn models and pipelines

pipl = Pipeline([...])
om.models.put(pipl, 'forecast')

om.runtime.model('forecast').predict('datax')

# — R models

om$models$put(model, 'mtcars-model')
om$runtime$model('mtcars-model')$fit('datax')

# — MLflow projects

mlflow_path = 'mlflow:///path/to/mlflow/project'
meta = om.scripts.put(mlflow_path, 'myproject', kind='mlflow.project')

om.runtime.script('myproject').run()

Examples