
Introduction

 

This is the last blog post in a six-part series on how to use the Prevision.io Python SDK to build production-ready and fully monitored AI models using your real-world business data. If you already have a Prevision.io project called “Electricity Forecast” containing a scheduled pipeline, then you are ready to go. Otherwise, head over to the fifth blog post, follow the instructions and come back!
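
As a quick refresher, here is a minimal sketch of how to reconnect to the platform and fetch that project with the Python SDK. The URL and master token are placeholders for your own instance values, and looking the project up by filtering Project.list() is an assumption on my part; your SDK version may expose a dedicated lookup helper, so check the SDK documentation.

```python
import previsionio as pio

# Connect to your Prevision.io instance (replace the URL and master token
# with your own values, as set up in the previous posts of this series).
pio.client.init_client(
    prevision_url="https://cloud.prevision.io",
    token="<YOUR_MASTER_TOKEN>",
)

# Fetch the "Electricity Forecast" project created earlier in the series.
# Filtering Project.list() by name is an assumption; a dedicated lookup
# helper may exist in your SDK version.
project = next(p for p in pio.Project.list() if p.name == "Electricity Forecast")
print(f"Working on project: {project.name}")
```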

 

What Are We Doing Today?

 

In this blog post, we are going to talk about Model Monitoring (what it is and how Prevision.io addresses it, especially through the Python SDK) and then we will conclude our journey.

 

PS: I am so proud that you’ve made it this far. You’ll be missed, and I hope we’ll go on another adventure in the ML world soon!

 

I hope you enjoyed the series and that you are as excited as I am for the next one!

 

Some Context Before Starting

What Is A Model Lifecycle?

When talking about Machine Learning, people mostly think that “algorithm creation” is the thing that we, data scientists, have to deal with. While true, this vision is somewhat incomplete. At Prevision.io, we believe that the typical Machine Learning project looks like this:

The Machine Learning spaghetti as envisioned by Prevision.io

In this blog post series, we have covered almost everything (in more or less detail) up to the deployment of models (and of the applications that come with them!). However, as you can see on the chart above, there is still a final step, mostly about monitoring (data and model), alerting and retraining, which we can sum up with the buzzword MLOps.

While it’s true that this field is fairly new, some companies are trying to offer solutions to the problems and questions that arise after the deployment phase, which are:

  • For IT folks:

    • Is the model still running?

    • Is the model still accessible to external and allowed users?

    • Is the application still running?

    • Is the application accessible to external and allowed users?

    • How many resources (CPU, RAM, DISK, GPU,…) does all of this consume?

    • How can I manage API keys, especially if one is compromised?

  • For Data Science folks:

    • Is the model still predicting accurately?

    • What does the prediction of my model look like? (Concept drift)

    • What does the data entering my model look like? (Data drift)

    • Is there any data integrity violation? (Data schema changing)

    • How can I check all of that?

    • Can I be alerted if something goes wrong?

    • Can I A/B test my model?

    • Can I challenge my champion model with a challenger?

    • Is the challenger behaving better?

    • How can I retrain a new version of my model, and change it without breaking everything and being buried alive by my business users and my manager?

Well, a lot is going on here, and we could write a whole new blog post series on that alone (and we will), but you’ve got the idea: knowing whether things work, and how to improve them without breaking everything, is the topic of interest.

What Is Model Monitoring?

Model monitoring refers to the control and evaluation of the performance of an ML model to determine whether or not it is operating efficiently. When the ML model experiences some performance decay, appropriate maintenance actions should be taken to restore performance. You can think of the process as bringing your car in for maintenance from time to time and changing the vehicle’s tires or oil for better performance.

Why Is Model Monitoring Important?

Many companies make their strategic decisions based on ML applications. However, the performance of ML models degrades over time. This can lead to sub-optimal decisions for the company, which end up as performance degradation, profit or revenue declines, and so on. To prevent such a devastating effect, companies should consider the ML model’s performance threshold as a KPI that must always be met. Consequently, they should monitor their ML models regularly.

 

Let The Fun Begin: Prevision.io Monitoring

As we have just seen, monitoring revolves around two main components and personas: IT & Data Science. For each of them, we offer some capabilities inside the Prevision.io platform.

Also, I’d like to be very transparent here: while MLOps is a fairly new topic, Prevision.io has been working on it for years and will have greatly improved features in upcoming releases. So, I’m going to showcase features that are available in version 11.3.3 of Prevision.io, but stay tuned! More to come in the coming weeks!

Step 1. IT Monitoring

As stated above, IT monitoring is a real issue that needs to be addressed, especially for model deployment.

In this sense, the first step is to examine the previously deployed models. This can be achieved simply within the Prevision.io solution, as follows:

Summary of deployed model

As you can see, some KPIs sum up the deployment. Regarding IT monitoring in particular, you can see the health status of each model that composes the deployment (the main model and, if available, the challenger model). Here, everything is running perfectly.
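
If you prefer to do this check from Python rather than from the UI, a rough sketch could look like the following. The list_experiment_deployments helper and the attributes read on each deployment are assumptions on my part, not the documented API, so refer to the Python SDK documentation for the exact names in your version.

```python
# Hypothetical sketch: list the deployments of the project and print a quick
# health summary. The helper and attribute names below are assumptions made
# for illustration; the real SDK API may differ.
for deployment in project.list_experiment_deployments():   # assumed helper
    print(
        deployment.name,                                    # assumed attribute
        getattr(deployment, "deploy_state", "unknown"),     # assumed health/status field
    )
```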

Amazing! Now the second step is to access and analyze the IT metrics, such as:

  • The number of predictions (calls) done for the main and the challenger

  • The number of errors for the main and the challenger (hopefully 0)

  • The average response time for the main & the challenger

All of these metrics can be retrieved and accessed, as follows: 

Calls, errors and response time of deployed models
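
For completeness, here is what retrieving those numbers could look like from Python. The get_usage() helper and the keys of the returned dictionary are hypothetical placeholders rather than the documented SDK API; the UI remains the reference for these metrics.

```python
# Hypothetical sketch: pull IT usage metrics (calls, errors, response time)
# for a deployment. `get_usage()` and the returned keys are placeholders for
# illustration only, not the documented SDK API.
usage = deployment.get_usage()                              # assumed helper
print("calls:", usage.get("calls"))
print("errors:", usage.get("errors"))
print("average response time (ms):", usage.get("response_time"))
```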

Step 2. Data Science Monitoring 

Now that we have covered IT monitoring, we can go into a little bit more detail about Data Science monitoring.

Disclaimer: a new version of Data Science monitoring is actively being developed at Prevision.io. Also, the current version isn’t well covered by our SDKs (R or Python).

Moving back to the general tab of a deployed model, under the health checks, you can find (provided you have already sent predictions) the global drift. It is a chart displaying the drift of the main model, alongside the challenger if available, aggregated by day.

Furthermore, we do offer concept drift (target drift) and data drift analytics in the Monitoring > Distribution tab.

 

Global Drift, Concept Drift & Data Drift Analytics

 

As stated above, as of today, this level of analytics isn’t yet available in any SDK because it is being reworked, with way more detail, a consistent UI and more KPIs being monitored.

Here is a sneak peek of the upcoming analytics:


Example of feature distribution for main & challengers vs production

Also, analytics about metric evolution over time are coming soon, and they will be part of a completely new blog article about model monitoring.

Step 3. Leveraging monitoring capabilities

Now that we have seen the KPIs and analytics provided by Prevision.io, the right question to ask is: how do I leverage this information? What do I do if the monitoring shows something wrong?

Well, in fact, you already know the answer!

Is the model inaccessible, or has the level of access rights become insufficient? Feel free to redeploy it with the desired parameters.

Is the model starting to drift? (And trust me, it will.) Investigate which feature is causing the model to drift. Once identified, check whether it’s an IT bug (a feature being renamed, badly fed, …) and show it to the right person. On the other hand, if the feature seems to follow a natural drift (e.g. a new modality, a distribution slowly shifting, …), feel free to retrain a new version of your experiment and deploy it! You have all the tools available within Prevision.io thanks to pipelines with automatic retraining and deployment, as sketched right below.
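
To make that last point concrete, here is a very rough sketch of what a “retrain and promote” reaction could look like in Python. Every helper below (get_experiment, new_version, create_new_version) is an assumption named for illustration only; in practice, the scheduled pipeline from the previous post already automates this flow, and the SDK documentation has the authoritative method names.

```python
# Hypothetical sketch of reacting to natural drift: retrain a new version of
# the experiment and promote its best model to the deployment. Every helper
# name below is an assumption for illustration; the scheduled pipeline from
# the previous post can automate the same flow.
experiment = project.get_experiment("electricity-forecast")           # assumed lookup helper
new_version = experiment.latest_version.new_version()                 # assumed: retrain with the same config
new_version.wait_until(lambda v: len(v.models) > 0)                   # wait until at least one model is trained
deployment.create_new_version(main_model=new_version.best_model)      # assumed promotion helper
```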

 

Conclusion

If you are still with me, you definitely made it! We have seen together that the AI model management platform provided by Prevision.io, alongside its Python SDK, can be a great help for us Python coders: creating projects, importing data from different sources, building experiments and pipelines, deploying and fully monitoring all of that. If you want to go deeper into the usage of the Python SDK (and trust me, there is a lot more to explore), I strongly recommend you check out the product documentation and the Python SDK documentation if you haven’t done so already. Also keep an eye on the release notes: things are changing fast, and you can expect the Python SDK to follow this trend!

 

 

 

About the author

Zina Rezgui

Data Scientist