As we have seen previously, experiment tracking allows data scientists to identify the factors that affect a model's performance, compare results, and select the optimal version. It is very important to stay organized throughout the iterative learning process. This is also the case in the production phase, where you must track the production version and possibly challenger models, their performance, and their usage. In this post, we are going to answer this question:

  • How is experiment tracking done with Prevision, without a single line of code?

How is experiment tracking done with Prevision? Prevision offers a platform in which you can centralize, organize, and visualize the results of all your experiments, throughout both the training and production phases, whether you have used Prevision's AutoML for training or trained your own models outside the platform.


In what follows, we will distinguish two situations:

  • you are using Prevision to train your models,
  • you use another environment to train your models and wish to benefit from the experiment tracking solutions offered by Prevision. This part will be covered in the third blog post.


In Prevision, the first level of organization is the data science project. The project covers the entire life cycle, from design to operational use to periodic monitoring, including any remodeling.

View of one project in Prevision

  • The Home tab allows you to return to the project summary.
  • The Data tab provides access to all the data sources used in the project, whether flat files or files from third-party connections such as an SQL database, Google Cloud Storage, or Amazon S3.
  • The Experiments tab allows you to access all the iterations relating to this project.
  • The Pipelines tab allows you to access all of the data preparation and release flows that you have developed in your project.
  • The Deployments tab allows you to access all the deployed models relating to this project.
  • The Collaborators tab allows you to define the rights of the various stakeholders in the project, from project administrator, to contributor, to simple viewer.

For each of the points mentioned above about experiment tracking, here is what Prevision offers:

●     Dataset metadata (dataset name, number of rows, number of columns, size in MB/GB),

General Datasets view


This view is used to display all datasets used in the project either uploaded by the user or coming from a datasource.

The available information includes: name of the dataset, number of rows and columns, volume, date of creation, person at the origin of the dataset, type of source, status, and available actions such as viewing, downloading, renaming, deleting, or even exporting the dataset to another workspace.

View of one version of a dataset


This interface provides additional information on a particular dataset, such as the number of numerical and categorical variables, the correlation matrix, and the experiments run using this dataset.

View of features for one dataset


This screen gives you a deep understanding of the dataset, providing detailed information per variable, including the number of missing values and a univariate representation.

View of linear feature information and distribution


This screen shows statistical information about linear features: minimum, maximum, mean, and standard deviation. The feature distribution is also represented as a histogram of frequencies.
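Outside the platform, the same per-feature statistics are easy to reproduce. Here is a minimal Python sketch (the example values are made up for illustration):

```python
import statistics

def linear_feature_summary(values):
    """Compute the summary statistics shown for a linear (numerical) feature."""
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "std": statistics.stdev(values),  # sample standard deviation
    }

# Hypothetical "age" column
ages = [23, 35, 41, 29, 35, 52, 41]
summary = linear_feature_summary(ages)
```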

View of categorical feature information and distribution

This screen shows some information about categorical features: unique values and the feature distribution, represented by a pie chart.

Preview of dataset


You can browse sample data from the Sample tab.

●     Experiment settings:

The dataset you use, feature engineering, feature selection, the list of algorithms you try, ensemble models, and the role of each feature (target, id, fold, weight, selected features, and dropped features).

New experiment settings with Execution Graph Preview

Fields configuration


An experiment in Prevision starts with the experiment configuration interface, in which you specify the training dataset, an optional validation dataset, a description for the experiment (do not neglect this point: the description lets you note the key differentiating factors, especially when you are on your n-th iteration), the metric you use, and the variables used in the configuration: target (required), id (strongly recommended; this id must be unique), weight (optional, if you want to weight specific rows), and fold (optional, if you want to set your own stratification for cross-validation).
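To make this concrete, here is a hypothetical illustration of the information this screen captures. The field names and column names below are invented for the example; they are not an API:

```python
# Hypothetical summary of an experiment configuration (illustration only;
# every field and column name here is made up, not a platform API).
experiment_config = {
    "train_dataset": "churn_train_v3",
    "validation_dataset": None,   # optional holdout dataset
    "description": "v7: added tenure-based features, dropped raw dates",
    "metric": "auc",
    "columns": {
        "target": "churned",      # required
        "id": "customer_id",      # strongly recommended, must be unique
        "weight": None,           # optional row weighting
        "fold": None,             # optional custom CV stratification
    },
}
```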

Columns configuration


The columns configuration panel allows you to ignore some columns of your dataset. To do so, just deselect the unwanted features. Note that you can search by feature name for the columns you want to unselect, and use the “select/unselect all” checkbox to apply your choice to the selection. In most cases, you should not drop any features. The Prevision platform can handle up to several thousand features and will keep those that are meaningful to build its models.

In which situations might it be interesting to uncheck some features?

  • Some features have too much importance in the model and you suspect that one is in fact a proxy for the target, or that you have an important leakage. If you see that your simplest model (Simple LR) performs as well as a complex one and that some feature has a huge feature importance, you may drop it.
  • You want to get results fast. Dropping features allows for faster training. If you suspect that some features carry little signal, drop them at the start of your project to iterate faster.

Models configuration


The Models configuration panel allows you to select/unselect algorithms you want to train on your data.

Simple models are models built with no complex optimization, using simple algorithms: a linear regression and a decision tree with fewer than 5 splits. They let you check whether the problem can be handled by a very simple algorithm instead of fancy machine learning.

Moreover, each simple model generates:

  • A human-readable chart that explains the model
  • Python code to implement it
  • SQL code to implement it

You can unselect simple models, but it is recommended to keep them when starting a project and watch how well they perform versus more complex models. If a simple decision tree performs as well as a blend of XGBoost models, or only marginally worse, do favor the simple model.

Normal models are models built with a fixed hyperparameter setting, defined by default by Prevision from previous experiments. Normal models use the simplest preprocessing: label encoding for categorical variables, normalization for numeric fields, and no use of textual data.
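As an illustration of what these simplest preprocessings do, here is a minimal hand-rolled sketch of label encoding and normalization (not the platform's actual implementation):

```python
def label_encode(values):
    """Map each category to an integer id, in first-seen order."""
    mapping = {}
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping)
    return [mapping[v] for v in values], mapping

def normalize(values):
    """Scale numeric values to zero mean and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against constant columns
    return [(v - mean) / std for v in values]

codes, mapping = label_encode(["red", "blue", "red", "green"])
scaled = normalize([10.0, 20.0, 30.0])
```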

Advanced models are models built with hyperparameter tuning. The number of trials used to optimize the hyperparameters depends on the performance option.

Performances configuration in Basics Panel


Performances: Quick for fast results, Advanced for the best results. Be aware that the advanced option can take 4 times longer to train than the quick one and only marginally improve the overall performance. Choose wisely 😉

Blending models is a powerful technique to get the most performance, yet it can take quite a long time to train. A blend uses a model on top of all the others to merge and combine them in order to get the best performance. Switch the option on if you want to blend your models, but be aware that the resulting training will last much longer.
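To give an intuition of what a "model over models" does, here is a toy sketch: a single convex weight fit on held-out predictions of two base models. The real blend is far more sophisticated; this only illustrates the idea.

```python
def blend_weight(pred_a, pred_b, y_true, steps=100):
    """Grid-search the convex weight w minimizing squared error of
    w*pred_a + (1-w)*pred_b on held-out data (a tiny meta-model)."""
    best_w, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        err = sum((w * a + (1 - w) * b - y) ** 2
                  for a, b, y in zip(pred_a, pred_b, y_true))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Model A is perfect, model B is noisy: the blend should lean fully on A.
y_true = [1.0, 2.0, 3.0]
pred_a = [1.0, 2.0, 3.0]
pred_b = [1.5, 1.0, 3.5]
w = blend_weight(pred_a, pred_b, y_true)
```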

Feature engineering configuration


Four kinds of feature engineering are supported by the platform:


Date features: dates are detected, and operations such as information extraction (day, month, year, day of the week, etc.) and differences (if at least two dates are present) are performed automatically.
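These date operations are easy to picture. A minimal sketch with Python's standard library:

```python
from datetime import date

def date_features(d, reference=None):
    """Extract the kind of information derived from a date column."""
    feats = {"day": d.day, "month": d.month, "year": d.year,
             "weekday": d.weekday()}  # 0 = Monday
    if reference is not None:
        # Difference between two dates, in days
        feats["delta_days"] = (d - reference).days
    return feats

feats = date_features(date(2021, 6, 15), reference=date(2021, 6, 1))
```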

Textual features:

  • Statistical analysis using term frequency–inverse document frequency (TF-IDF). Words are mapped to numeric values generated using the tf-idf metric. The platform integrates fast algorithms that make it possible to keep the tf-idf encoding of all uni-grams and bi-grams without having to apply dimension reduction.

More information about TF-IDF is available online.

  • Word embedding approach using Word2Vec/GloVe. Words are projected into a dense vector space. Prevision trains a word2vec algorithm on the actual input corpus to generate the corresponding vectors. More information about word embeddings is available online.

  • Sentence embedding using a transformer approach. Prevision integrates BERT-based transformers as pre-trained contextual models that capture word relationships in a bidirectional way. A BERT transformer makes it possible to generate more efficient vectors than word-embedding algorithms, as it has a linguistic “representation” of its own. For text classification, these vector representations can be used as input to basic classifiers. BERT (base/uncased) is used on English text and Multilingual BERT (base/cased) is used on French text. More information about transformers is available online.
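As a rough intuition for the first of these approaches, here is a minimal uni-gram tf-idf computation in plain Python. It is a simplified variant for illustration only; the platform also keeps bi-grams and uses optimized algorithms.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()  # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # tf-idf = (term frequency in doc) * log(n_docs / doc frequency)
        vectors.append({w: (tf[w] / len(doc)) * math.log(n / df[w])
                        for w in tf})
    return vectors

docs = [["churn", "risk", "high"], ["churn", "low"], ["low", "risk"]]
vecs = tfidf(docs)
```

A word appearing in few documents ("high") scores higher than one spread across many ("churn"), which is the core idea behind tf-idf weighting.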


Categorical features:

  • Frequency encoding: modalities are converted to their respective frequencies.
  • Target encoding: modalities are replaced by the average of the target grouped by modality for a regression, and by the proportion of each target modality within the feature modality for a classification.
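Both encodings are simple to sketch. The following toy example (with made-up data) shows frequency encoding and target encoding for the regression case:

```python
from collections import Counter

def frequency_encode(values):
    """Replace each modality by its relative frequency in the column."""
    counts = Counter(values)
    n = len(values)
    return [counts[v] / n for v in values]

def target_encode(values, target):
    """Replace each modality by the mean of the target over that modality
    (regression case)."""
    sums, counts = Counter(), Counter()
    for v, t in zip(values, target):
        sums[v] += t
        counts[v] += 1
    return [sums[v] / counts[v] for v in values]

city  = ["paris", "lyon", "paris", "paris"]
spend = [100.0,    50.0,   200.0,  300.0]
freq = frequency_encode(city)       # paris -> 0.75, lyon -> 0.25
enc  = target_encode(city, spend)   # paris -> mean spend of paris rows
```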

Advanced features:

  • Polynomial features: features based on products of existing features are created. This can greatly help linear models, since they do not naturally take interactions into account, but is less useful for tree-based models.
  • PCA: features based on the main components of a PCA are added.
  • K-means: cluster numbers coming from a K-means method are added as new features.
  • Row statistics: features based on row-by-row counts are added as new features (number of zeros, number of missing values, etc.).
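Of these, row statistics are the simplest to picture. A minimal sketch:

```python
def row_statistics(row):
    """Derive simple row-wise counts to append as extra features."""
    return {
        "n_zero": sum(1 for v in row if v == 0),
        "n_missing": sum(1 for v in row if v is None),
    }

stats = row_statistics([0, None, 3.5, 0, None, None])
```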

Feature selection configuration


In this part of the screen you can choose to enable feature selection (off by default).

This operation is important when you have a high number of features (a few hundred) and can be critical when the number of features is above 1,000, since the full dataset may no longer fit in RAM.

You can choose to keep a percentage or a count of features, and you can give Prevision a time budget to perform the search for the optimal features given the target and all other parameters. Within this budget, Prevision will subset the features of the dataset and then start the classical process.
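As an intuition for a time-budgeted feature search, here is a toy sketch that scores features by absolute correlation with the target until a deadline passes. The platform's actual search is more elaborate; this only illustrates the "budget, then subset" idea.

```python
import time

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(columns, target, keep=2, budget_s=1.0):
    """Score features until the time budget runs out, keep the top `keep`."""
    deadline = time.monotonic() + budget_s
    scores = {}
    for name, values in columns.items():
        if time.monotonic() > deadline:
            break  # budget exhausted: rank whatever was scored so far
        scores[name] = abs(correlation(values, target))
    return sorted(scores, key=scores.get, reverse=True)[:keep]

cols = {
    "useful":  [1.0, 2.0, 3.0, 4.0],   # perfectly correlated with target
    "noise":   [5.0, 1.0, 4.0, 2.0],
    "inverse": [4.0, 3.0, 2.0, 1.0],   # perfectly anti-correlated
}
target = [10.0, 20.0, 30.0, 40.0]
kept = select_features(cols, target, keep=2)
```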

Once the configuration of your experiment is complete, you can start the training by clicking on the button at the bottom right of the screen.

During training, you can follow the progress of the modelling in real time.

The progress of the various tiles making up your execution graph is displayed and changes as the tasks are completed.

Track training tasks


The Track training tasks panel gives you a detailed view of the different parts of your execution graph, with execution logs if a task does not complete as expected. You will find information about when each task started, when it finished, its duration, and its status.

Once the experiment has trained at least one model, you can access a comparative page of the trained models' performance and a dedicated page per model describing its characteristics.

Comparative page of the performances of the trained models


The models panel gives you a detailed view of all trained models: the technology used, the score (on the metric you chose), the training and prediction durations, and the completion time.

In what follows, I present very briefly the analysis screens of a model (as we will see later, these analyses are also available for models trained outside the platform).

Model information and hyperparameters

Feature importance


Confusion matrix


When you have determined the model you want to deploy, Prevision makes it easy for you.

In what follows, I will show you how to go from an experiment to a production model.

Deployments are the last step of a Machine Learning Project.


Deployment experiments User Interface

The Deployment experiments panel is the space where you will find all the models deployed in your project.

How to Deploy a new experiment

To deploy an experiment, you need to define a name for your deployment, an experiment, a version, and a model.

2 minutes later, your models are in production

Once an experiment is deployed:

  • you can schedule batch predictions
  • you can monitor the performance of one main model and one challenger
  • external users can use your model for unit predictions from a URL
  • external applications can call your model through a REST API
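As an illustration of such a REST call, here is a sketch that builds the request in Python. The URL, payload schema, and authentication header below are hypothetical; replace them with the values shown on your deployment page.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape, for illustration only:
# the real URL and schema come from your deployment's documentation.
DEPLOYMENT_URL = "https://example.com/deployments/churn-model/predict"

payload = {"customer_id": "c-1042", "tenure_months": 18, "monthly_spend": 42.5}
request = urllib.request.Request(
    DEPLOYMENT_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <your-api-key>"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment with a real endpoint
```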


Once your model is deployed, Prevision facilitates its periodic use (hourly, daily, monthly, etc.).

You upload new data into Prevision.

You go to the prediction page of your experiment, set your model and your dataset, and click Launch prediction.

You can download the predictions.

You can automate the periodic calculation of your forecasts using the Pipeline module and the Scheduler function.

Pipeline and scheduled run Overview


If you want to know more about the pipeline and the scheduler, I invite you to read the dedicated documentation.



In the present blog post, I explained step by step how to train an experiment in the platform without writing a single line of code, how to analyze the different models trained, and how to deploy a model.

How can I benefit from experiment tracking and the other advantages included in the platform while continuing to build my experiments outside the platform and/or with third-party solutions?

Thanks for reading.

Mathurin Aché

About the author

Mathurin Aché

Expert Data Science Advisory