Hi guys! Thanks for sticking with me for the Prevision.io R SDK blog post series! If you just landed here without knowing much about Prevision.io or this series I strongly encourage you to start at the beginning with blog post #1. All of our publications are available on the blog directly @ Prevision.io.

We are at the final article of the R SDK series. In this one, we are going to talk about the model life cycle (what it is and how Prevision.io addresses it, especially through the R SDK) and then we will conclude our journey.

Hopefully this was you at the beginning of the blog post series 🙃

Model life cycle

When talking about Machine Learning, people mostly think that “algorithm creation” is the thing that we, data scientists, have to deal with. While true, this vision is somewhat incomplete. At Prevision.io, we believe that the typical Machine Learning project looks like this:

The Machine Learning spaghetti as envisioned by Prevision.io

In this blog post series, we have covered almost everything (more or less in detail), up to the deployment of models (and of the applications that come with them!). However, as you can see on the chart above, there is still a final step, mostly about monitoring (data and model), alerting, and retraining, which we can sum up with the buzzword MLOps.

While it’s true that this field is fairly new, some companies are trying to offer solutions to the problems and questions that arise after the deployment phase, which are:

  • For IT folks:
    • Is the model still running?
    • Is the model still accessible to external and allowed users?
    • Is the application still running?
    • Is the application accessible to external and allowed users?
    • How many resources (CPU, RAM, DISK, GPU,…) does all of this consume?
    • How can I manage API keys, especially if one gets compromised?
  • For Data Science folks:
    • Is the model still predicting accurately?
    • What does the prediction of my model look like? (Concept drift)
    • What does the data entering my model look like? (Data drift)
    • Is there any data integrity violation? (Data schema changing)
    • How can I check all of that?
    • Can I be alerted if something goes wrong?
    • Can I A/B test my model?
    • Can I challenge my champion model with a challenger?
    • Is the challenger behaving better?
    • How can I retrain a new version of my model, and change it without breaking everything and being buried alive by my business users and my manager?

Well, a lot of stuff is going on here, and we could write a whole new blog post series on that alone (and we will 🙏), but you get the idea: knowing whether things work, and how to improve them without breaking everything, is the topic of interest.

Said no Data Scientist ever (at least I hope so)

Prevision.io monitoring

As we have just seen, monitoring revolves around two main components and personas: IT & Data Science. For each of them, we offer some capabilities inside the Prevision.io platform.

Also, I’d like to be very transparent here: while MLOps is a fairly new topic, Prevision.io has started to work on it and will greatly improve these features in upcoming releases. So, I’m going to showcase features that are available in version 11.3.3 of Prevision.io, but stay tuned! More to come in the coming weeks!

IT monitoring

As stated above, IT monitoring is a real issue that needs to be addressed, especially for model deployment.

In Prevision.io, you can go to the model we previously deployed and see the following:

Summary of deployed model

As you can see, some KPIs sum up the deployment. Regarding IT monitoring in particular, you can see the health status of each model that composes the deployment (the main model and, if available, the challenger model). Here, everything is running perfectly.

If you want to retrieve this information thanks to the R SDK, please type the following:

get_deployment_info(deploy$`_id`)

Or the following (if you lost your object from the previous SDK blog post):

get_deployment_info(get_deployment_id_from_name(project_id = get_project_id_from_name("Electricity Forecast"),
                                                name = "R SDK Deployment",
                                                type = "model"))
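If you prefer to explore the result programmatically (for instance in a scheduled health-check script), here is a minimal sketch; the exact fields returned (health status, URLs, …) depend on the SDK version, so it is worth inspecting the structure first:

# Store the deployment information and inspect its structure;
# the exact field names depend on the SDK version
deployment_info <- get_deployment_info(deploy$`_id`)
str(deployment_info)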

Furthermore, by clicking on the Monitoring > Usage link in the UI, you can access IT metrics, such as:

  • The number of predictions (calls) done for the main and the challenger
  • The number of errors for the main and the challenger (hopefully 0)
  • The average response time for the main & the challenger

Calls, errors and response time of deployed models

All of these metrics can be retrieved and plotted directly thanks to the R SDK. For instance, to see how many predictions have been made, type the following:

get_deployment_usage(deployment_id = deploy$`_id`,
                     usage_type = "calls")

This will display a usage chart in plotly:

Example of a deployed model’s usage. Note that the two curves are superimposed
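The other metrics listed above should be retrievable with the same function by changing the usage_type argument. The exact accepted values may differ from the UI labels, so double-check the R SDK documentation; as a sketch, assuming the values mirror the UI naming:

# Number of prediction errors (usage_type value assumed to mirror the UI naming)
get_deployment_usage(deployment_id = deploy$`_id`,
                     usage_type = "errors")

# Average response time (usage_type value assumed to mirror the UI naming)
get_deployment_usage(deployment_id = deploy$`_id`,
                     usage_type = "response_time")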

Data Science monitoring

Now that we have covered IT monitoring, we can go into a little bit more detail about Data Science monitoring.

Disclaimer: a new version of Data Science monitoring is actively being developed at Prevision.io. Also, the current version isn’t fully covered by our SDKs (R or Python).

Moving back to the general tab of a deployed model, under the health checks you can find (provided predictions have been made) a chart displaying the drift of the main model, alongside the challenger if available, aggregated by day.

High drift on the first day of deployment, especially for the challenger; everything is back to normal afterwards.

This chart actually displays the global drift, defined as the average drift weighted by feature importance. Let me explain: if a feature is drifting a lot (think: its distribution changes radically) but that feature isn’t very important in the model, then it’s not a big deal. However, if the #1 feature is drifting, even slightly, it can have a significant impact on the model’s performance. That’s why weighting is crucial here.
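To make the weighting idea concrete, here is a small illustrative sketch in plain R (not an SDK call), with made-up drift and importance values: the global drift is simply the average of the per-feature drift scores weighted by feature importance, which is the logic described above.

# Hypothetical per-feature drift and importance values, for illustration only:
# "holiday" drifts a lot but matters little, so the global drift stays low
feature_drift      <- c(temperature = 0.05, weekday = 0.10, holiday = 0.80)
feature_importance <- c(temperature = 0.70, weekday = 0.20, holiday = 0.10)

# Global drift = average drift weighted by feature importance
global_drift <- weighted.mean(feature_drift, feature_importance)
global_drift  # 0.05*0.70 + 0.10*0.20 + 0.80*0.10 = 0.135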

Furthermore, we do offer concept drift (target drift) and data drift analytics in the Monitoring > Distribution tab.

Example of target distribution in production vs the training dataset (concept drift)

As stated above, as of today, this level of analytics isn’t yet available in any SDK, because these analytics are being reworked with more detail, a consistent UI and more KPIs being monitored.

Here is a sneak peek of upcoming analytics

Example of feature distribution for main & challengers vs production

Also, analytics about metric evolution over time are coming soon, and they will be part of a completely new blog article about model monitoring.

Leveraging monitoring capabilities

Now that we have seen that KPIs and analytics are provided by Prevision.io, the right question to ask is: how do I leverage this information? What do I do if the monitoring shows something wrong?

Well, in fact, you already know the answer!

The model is inaccessible, or the access rights have become insufficient? Feel free to redeploy it with the desired parameters.

The model starts to drift? (And trust me, it will.) → Investigate which feature is causing the model to drift. Once identified, check whether it’s an IT bug (a feature being renamed, badly fed, …) and flag it to the right person. On the other hand, if the feature seems to follow a natural drift (e.g. a new modality, a distribution slowly shifting, …), feel free to retrain a new version of your experiment and deploy it! You have all the tools available within Prevision.io thanks to pipelines with automatic retraining and deployment 😎

Conclusion

If you are still with me, you definitely made it! We have seen together that the AI model management platform provided by Prevision.io, alongside its R SDK, can be a great help for us R coders! Creating projects, importing data from different sources, building experiments and pipelines, deploying all of that and even custom web applications: all of this is at your fingertips 💪

If you want to go deeper into the usage of the R SDK (and trust me, there is a lot more to explore 🧐), I strongly recommend checking out the product documentation and the R SDK documentation if you haven’t done so already. Also, keep an eye on the release notes: things are changing fast, and you can expect the R SDK to follow this trend!