
Web-scale Model Deployment and Much More

When we began building Domino, we believed we would make the best product by deeply understanding the pain that data scientists feel on a day-to-day basis. That approach led to a product, first launched in early 2014, that was ahead of its time and saw rapid adoption among some of the most advanced data science organizations. Over the last three years, we have been privileged to work closely with these organizations, and we have used their feedback to drive the next wave of improvements to our product.

Today we’re announcing Domino 2.0 with major improvements in four key areas of the product:

  1. Model Deployment - Domino 2.0 includes a complete re-architecture of our “API Endpoints” functionality, which lets data scientists deploy models as REST APIs. The new architecture supports web-scale use cases at low latencies, and provides more advanced capabilities like A/B testing model variations.
  2. Environment Management - We overhauled our functionality for managing “compute environments” of packages and dependencies. Domino leverages Docker to make environments that are now more shareable, revisioned, and much easier to reuse.
  3. Experiment Tracking and Meta-Analysis - To streamline data scientists’ workflows, we built a new user interface for finding and reviewing past results and tracking progress as research projects iterate.
  4. Result Publication - To support rapid iteration with business stakeholders, we improved Domino’s functionality for publishing automated reports and interactive applications or dashboards, such as Shiny apps.
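To make the environment-management idea concrete, here is a minimal sketch of what a Docker-based compute environment definition can look like. This is an illustrative example only: the base image and the pinned package versions are hypothetical, not Domino’s actual defaults.

```dockerfile
# Hypothetical compute environment: pinning exact package versions is
# what makes the environment revisioned, shareable, and reusable
# across projects and team members.
FROM python:3.6-slim

RUN pip install \
    numpy==1.12.0 \
    pandas==0.19.2 \
    scikit-learn==0.18.1
```

Because the entire environment is captured in a versioned definition like this, two collaborators can be confident they are running against identical dependencies.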

Domino 2.0 is available to select beta customers, and will be generally available later this spring. Let us know if you’d like to get a sneak peek.

Over the following weeks, we’ll explore each of these new capabilities in more depth. Today, we want to take a deep dive into our new Model Deployment capability, because it addresses a clear need of almost every data science organization we talk to.

Ship Models to Production Faster with Enterprise-Grade APIs

Domino 2.0 allows teams to deploy Python and R models in seconds as horizontally-scalable, high-availability API endpoints. Whether you’re embedding a predictive model in an internal tool (e.g., a CRM) or a customer-facing application, or publishing it for external consumers, Domino helps you push your model improvements into production faster. By avoiding time-consuming translation steps and infrastructure setup, Domino drastically reduces the time from insight to impact.
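To illustrate what consuming a deployed model looks like from an application’s side, here is a minimal client sketch. The endpoint URL, token, and JSON request shape are all hypothetical placeholders, not Domino’s documented API; the point is simply that a deployed model is an ordinary HTTPS endpoint any application can call.

```python
import json
import urllib.request

# Hypothetical values -- substitute your real endpoint and token.
ENDPOINT_URL = "https://domino.example.com/models/churn/latest/score"
API_TOKEN = "YOUR_API_TOKEN"

def build_request(features):
    """Package feature values as a JSON POST to the model endpoint."""
    body = json.dumps({"data": features}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )

# To actually score a record, send the request and read the response:
# req = build_request({"tenure_months": 14, "plan": "pro"})
# with urllib.request.urlopen(req) as resp:
#     prediction = json.load(resp)
```

Because the interface is plain HTTP and JSON, engineering teams can integrate the model from any language without re-implementing it.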

Here are three highlights of what’s new in Domino 2.0:

1. Scalable, High-Performance Deployment

We completely re-architected our model deployment technology. Our new architecture is built on Kubernetes, which offers a host of performance and reliability advantages. In our benchmarks, popular machine learning models in Python and R handled 6,000 requests per minute with latencies under 50ms. This lets data science teams work with their engineering counterparts to embed data science results into business-critical processes rapidly, without being forced to “dumb down” models or re-translate them into another language.

Preview of Domino 2.0

Please note that the performance is based on Domino’s internal testing and does not represent a performance guarantee. Model latency depends on a number of factors such as external service dependencies, network topology, and hardware configuration.

2. A/B Testing

Domino now supports A/B testing to optimize model performance and encourage agile best practices. Software development has long embraced rapid experimentation in production, and there’s no reason data science should be any different. If one team member builds a recommendation engine in R and another builds a challenger in Python, you can easily create an Experiment and route real request traffic between the different versions to measure performance.
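The traffic-splitting idea behind an Experiment can be sketched as a simple weighted router. Domino handles this server-side; the variant names and weights below are hypothetical, and this standalone snippet only illustrates the routing logic itself.

```python
import random

# Hypothetical experiment: an incumbent R model receives 90% of
# traffic while a Python challenger receives 10%.
VARIANTS = [
    ("recommender-r-v1", 0.9),
    ("recommender-py-v2", 0.1),
]

def route():
    """Pick a model variant for one incoming request, in proportion
    to the configured traffic weights."""
    names = [name for name, _ in VARIANTS]
    weights = [weight for _, weight in VARIANTS]
    return random.choices(names, weights=weights, k=1)[0]
```

Over many requests, roughly 10% of traffic reaches the challenger, which is enough to compare the two models’ live performance before promoting one.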

Preview of Domino 2.0

3. Models as First-Class Entities

We made “Models” a first-class entity and, along with that, imbued them with several important properties. Models now have their own, more flexible security model, distinct from the “projects” that produced them. This makes it easier to define roles and governance around how models get updated in production. You can attach documentation to models for internal or compliance purposes. And with Domino’s Reproducibility Engine, you can easily trace back to the exact version of the project and datasets from which your model was built.

Preview of Domino 2.0

We are excited to tell you more about Domino 2.0 over the coming weeks. Let us know if you’d like to get a sneak peek.