
Watson Machine Learning

Deploying and monitoring machine learning models in production at scale

Project description

IBM’s Watson Machine Learning (WML) first existed as a backend service for Watson Studio, a tool designed for data scientists to explore data and to develop and train models. As an engine, WML allowed data scientists to deploy those models.

Now, WML exists as a standalone product, available in the cloud or on premises, that allows data scientists and DevOps engineering teams to configure, deploy, and monitor their analytics assets, including models, notebooks, and Python functions.

Project details

Role: sole UX designer

Duration: 1 year

Meet Rohan, the DevOps engineer.

Rohan works on a team with other DevOps engineers tasked with deploying models and analytics assets. After a data scientist trains a model, it’s Rohan’s job to optimize, deploy, and monitor it throughout its lifespan while maintaining healthy system performance.

But organizing all of these assets can get tricky. The projects Rohan works on range from deploying a single model to managing hundreds of different analytics assets. Rohan needs an effective way to collaboratively keep track of all of his models and deployments, and of how they are running on his infrastructure.

Typical DevOps workflow

Rohan needs to be able to accomplish these tasks efficiently in order to keep his team productive and his organization’s applications afloat.  

Watson Machine Learning (WML) is a tool created for DevOps engineering teams to configure, deploy, and monitor their models at scale.

WML simplifies the handoff between data scientists and DevOps engineers, streamlines the DevOps workflow, and works either as an independent tool or as an addition to Watson Studio.

Add collaborators to your deployment space

Deployment spaces host your assets and deployments. Adding collaborators allows other DevOps team members to work with the assets and monitor them. You can search for collaborators and invite them to the space, or add them from the linked Watson Studio project.

 

Add assets to your deployment space

Connect your deployment space to your Watson Studio project and promote the assets intended for deployment, including their dependencies, or upload assets directly from GitHub or from local files. This way, only the assets you need end up in your deployment environment.

Configure model performance monitoring

Set up recurring evaluations by specifying the runtime, connecting feedback data, and configuring the evaluation schedule.

Add the metrics to score the model against and set a threshold for each metric, with the option to retrain the model automatically if an evaluation result falls below its threshold.
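For illustration, here is a minimal sketch in Python of the kind of configuration this flow captures. It is not the product’s actual API; the runtime name, feedback data source, metrics, and thresholds are hypothetical placeholders.

```python
# Illustrative only: a hypothetical evaluation configuration capturing the
# choices made in the UI (runtime, feedback data, schedule, metrics, thresholds).
evaluation_config = {
    "runtime": "spark-2.4",                       # placeholder runtime name
    "feedback_data": {
        "connection": "db2-feedback-warehouse",   # hypothetical feedback data source
        "table": "LOAN_FEEDBACK",
    },
    "schedule": {"repeat": "daily", "time": "02:00"},
    "metrics": [
        {"name": "areaUnderROC", "threshold": 0.80},
        {"name": "accuracy", "threshold": 0.85},
    ],
    "retrain_on_failure": True,                   # retrain if a metric falls below its threshold
}
```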

 

Monitor model evaluations

The model’s evaluations are plotted and can be monitored throughout the continuous learning cycle. 

 

Deploy models for online scoring

You can deploy models for online scoring so that developers and other model consumers can access them through a scoring endpoint. Apply a custom hardware configuration or select from preset sizes.
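As a rough sketch of the choices involved, the settings below are purely illustrative; the names, preset sizes, and fields are assumptions, not the product’s deployment API.

```python
# Illustrative only: hypothetical settings for an online deployment,
# mirroring the hardware choices offered in the UI.
online_deployment = {
    "name": "churn-model-online",
    "type": "online",
    "hardware_spec": {"size": "M"},                   # preset size chosen in the UI
    # "hardware_spec": {"cpu": 4, "memory_gb": 16},   # or a custom configuration
}
```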

Test the online scoring deployment API

To interface with the deployment and test the API, provide input data either through a graphical form or as JSON. View model predictions as JSON or as a graphical visualization.
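A minimal sketch of what such a JSON scoring request might look like, assuming a v4-style WML scoring endpoint; the URL, token, and field names are placeholders, and details vary by product version.

```python
import requests

# Placeholder values: the real scoring URL and access token come from the deployment details.
scoring_url = "https://<wml-host>/ml/v4/deployments/<deployment-id>/predictions"
headers = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# JSON input equivalent to what the graphical form collects (field names are examples).
payload = {
    "input_data": [
        {"fields": ["age", "income", "tenure"], "values": [[34, 52000, 5]]}
    ]
}

response = requests.post(scoring_url, headers=headers, json=payload)
print(response.json())  # model predictions returned as JSON
```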

Deploy for batch scoring and run jobs

To deploy a model for batch scoring, specify the input data sources and output data locations, and configure a run schedule.

You can run jobs against the deployment through the scoring endpoint or within the interface, on demand or on a recurring schedule, then view the output data asset and review the run’s log.
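A sketch of the choices a batch job brings together, again purely illustrative rather than the product’s actual job API; asset names, locations, and schedule fields are placeholders.

```python
# Illustrative only: a hypothetical batch job definition reflecting the choices
# described above (input sources, output location, run schedule).
batch_job = {
    "deployment": "churn-model-batch",
    "input_data_references": [
        {"type": "data_asset", "location": "customer_records.csv"}    # placeholder input source
    ],
    "output_data_reference": {
        "type": "data_asset", "location": "predictions_output.csv"    # placeholder output location
    },
    "schedule": {"repeat": "weekly", "day": "sunday", "time": "01:00"},
}
```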

 

Monitor model health

Scan overall model performance at a glance through visualized evaluation results and a list of recent results.

 

Monitor deployment performance

View which of your deployments are requested the most, and the average response time for each deployment.

Monitor job runs

View the history of jobs run across assets, as well as their completion status.

Drawing ideas

I consistently provide explanatory diagrams to explore different options at each stage of design development. These help my design team and other stakeholders understand concepts more easily and choose a direction for the solution.

(Some content is obscured for intellectual property reasons.)

Information architecture

These diagrams also serve as a way to structure information within the product, making all capabilities intuitively accessible and artifacts easily searchable. 

Sketches

I rapidly iterate on hand sketches throughout the process, denoting UX, UI, and content iterations for development and user testing. 

Wireframes

I convert sketches into multiple iterations of wireframes to further explore UX and content options with users, offering managers, developers, and fellow designers.