Any Cloud - Data Science Platform As A Service, On Premise or Hybrid Cloud

Thanks to its unique design, based on widely used, state-of-the-art components in the distributed computing stack, omega|ml is easily deployed to any cloud. It works best when your cloud supports Docker; however, bare metal or any other container technology supporting Python, RabbitMQ and MongoDB will also work.

With its Open Source core, omega|ml is readily deployable by any DevOps team familiar with Docker.

For on-premise deployments or if you need enterprise-ready security and do not want to manage the complexity of a large-scale data & compute cluster, omega|ml Enterprise Edition is available as a service or on-premise. Bring Your Own Cloud or subscribe to our compute capacity.


Setup is a breeze: omega|ml EE comes preconfigured for any Kubernetes cloud; simply add an omega|ml cluster management node and we take care of the rest. Deployed within minutes, JupyterHub and a serverless Script/Lambda service are then ready to use, providing notebooks for straightforward team collaboration and interactive computing at scale, all while enabling data storage and any packaged data science algorithm to run from the same easy-to-use API as shown above.

Fun fact: unlike most vendors, we do not license by the number or size of compute nodes. With unlimited compute nodes, only your imagination limits what you can achieve.


There is untapped growth potential.  

All globally leading cloud providers offer their own version of a Ready-To-Go Data Science and Machine Learning Service, at a level of complexity most of their local competitors cannot match.


Get competitive today. 

With omega|ml you can offer your clients an even more convenient way to process their AI and machine learning workloads while keeping all data storage local. No Lock-In. Your Data Center. Ready today.

Why Act Now? 

As customers of all sizes look to leverage AI and machine learning, IaaS and PaaS providers face the challenge of losing out on valuable growth potential for their compute and storage resources to the incumbent large-scale providers. Don't let this happen to you; act today.


Extensible Architecture

Storage & Compute Backends, DataFrame Operations

omega|ml comes with batteries included; however, new requirements are not a problem: alternate data sources or sinks and data pipelines can easily be added.

The following extension points are available:

Storage backends

Extend what objects can be stored and retrieved
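To illustrate the idea, here is a minimal sketch of what a custom storage backend could look like. The class and method names (`BaseDataBackend`, `supports`, `put`, `get`) are illustrative assumptions for this sketch, not omega|ml's actual extension API; the example is self-contained and keeps objects in memory only.

```python
# Illustrative sketch only: the base class below is a hypothetical stand-in
# for a storage backend interface, not omega|ml's actual extension API.

class BaseDataBackend:
    """minimal stand-in for a storage backend interface"""
    def supports(self, obj, name, **kwargs):
        raise NotImplementedError
    def put(self, obj, name, **kwargs):
        raise NotImplementedError
    def get(self, name, **kwargs):
        raise NotImplementedError

class CsvTextBackend(BaseDataBackend):
    """stores raw CSV text under a given name"""
    def __init__(self):
        self._store = {}
    def supports(self, obj, name, **kwargs):
        # claim only raw CSV strings stored under a .csv name
        return isinstance(obj, str) and name.endswith('.csv')
    def put(self, obj, name, **kwargs):
        self._store[name] = obj
        return name
    def get(self, name, **kwargs):
        return self._store[name]

backend = CsvTextBackend()
if backend.supports("a,b\n1,2", "sales.csv"):
    backend.put("a,b\n1,2", "sales.csv")
print(backend.get("sales.csv"))
```

The `supports` check is the key design element: a store can ask each registered backend whether it can handle a given object, so new object types plug in without changing the core API.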

Runtime backends

Add any compute backend to run any omega|ml-stored models through the same API
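The dispatch pattern behind a runtime backend can be sketched as follows. The registry and backend classes here are hypothetical illustrations of the concept, not omega|ml's actual runtime API.

```python
# Hypothetical sketch of runtime-backend dispatch: names are illustrative,
# not omega|ml's actual API.

class RuntimeRegistry:
    """routes a model method call to the first backend that supports it"""
    def __init__(self):
        self._backends = []
    def register(self, backend):
        self._backends.append(backend)
    def run(self, model, method, *args):
        for backend in self._backends:
            if backend.supports(model):
                return getattr(backend, method)(model, *args)
        raise ValueError("no backend supports this model")

class SklearnLikeBackend:
    """handles any model exposing a predict() method"""
    def supports(self, model):
        return hasattr(model, "predict")
    def predict(self, model, X):
        return model.predict(X)

class Doubler:
    """toy model for demonstration"""
    def predict(self, X):
        return [2 * x for x in X]

rt = RuntimeRegistry()
rt.register(SklearnLikeBackend())
print(rt.run(Doubler(), "predict", [1, 2, 3]))  # [2, 4, 6]
```

Because every backend answers the same `supports`/`predict` contract, models from different frameworks run through one uniform entry point.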

Compute and DataFrame mixins

Add processing options such as AutoML or Model Visualization
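The mixin extension point follows a common Python pattern: optional features are composed onto a core class at runtime. The names below are illustrative assumptions for this sketch, not omega|ml's actual classes.

```python
# Sketch of the mixin idea: optional processing features are composed onto
# a core class at runtime. All names here are illustrative assumptions.

class CoreDataFrameProxy:
    """minimal stand-in for a lazy dataframe-like object"""
    def __init__(self, rows):
        self.rows = rows

class StatsMixin:
    """adds a summary operation, e.g. as a building block for visualization"""
    def mean(self, column):
        values = [row[column] for row in self.rows]
        return sum(values) / len(values)

def apply_mixin(cls, mixin):
    # compose the mixin into the class, as a plugin system might do
    return type(cls.__name__, (mixin, cls), {})

EnhancedProxy = apply_mixin(CoreDataFrameProxy, StatsMixin)
df = EnhancedProxy([{"x": 1}, {"x": 3}])
print(df.mean("x"))  # 2.0
```

Composing classes this way means new processing options can be shipped as independent plugins, without modifying the core object.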


Works with any machine learning framework

# scikit-learn and SparkML supported out of the box, more to follow


# — any scikit-learn model or pipeline
pipl = Pipeline([…])
om.models.put(pipl, 'forecast')
om.runtime.model('forecast').predict('datax')


# — create spark models from specification

om.models.put('pyspark.mllib.clustering.KMeans', 'kmeans', params=dict(k=8))



# — store a spark model as instantiated in your spark context

kmSpark = KMeans(k=8)
om.models.put(kmSpark, 'kmeans')