Google BigQuery allows you to use SQL queries to answer your organization’s biggest questions with zero infrastructure management. Prevision enables users to experiment, build, deploy, and monitor AI applications at scale, also with zero infrastructure management.

This sounds like a good match for companies that want actionable insights from terabytes of data in the blink of an eye, with no infrastructure required.

In this tutorial you will learn how easy it is to connect the two and get your AI applications running in production with a 100% no-code approach (code is also available, if you prefer, using our SDK).

First you will need to set up your BigQuery database and your Prevision platform. Good news: both can be enabled from the Google Cloud Platform (GCP) Marketplace:

Go to GCP, search for “BigQuery” and enable the BigQuery service.

Go to GCP, search for “Prevision”. Good news for you, registration includes a 14-day free trial. Don’t worry about licensing fees: there are none! If you decide to continue using Prevision, you are billed just as you would be for any Google application.

The even better news: the offering is less expensive and more performant than Google’s native machine learning offerings.


To illustrate this tutorial, we will use a classic dataset: “house price” prediction. So let’s say you are a real estate company that wants to estimate home values based on your historical sales data…


If you want to follow along with this data, you can download the house pricing dataset from here:


Assuming your historical data is available as tables in a BigQuery dataset, it should look like this:

Every row corresponds to the sale of a home at a target price, with features such as number of bedrooms, number of floors, living-room square footage, geolocation, and so on.
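As a rough sketch (the column names and values below are illustrative assumptions, not the exact schema of the dataset), a few rows might look like this in Python:

```python
# Illustrative sample of the house-sales data (column names and values
# are assumptions, not the exact schema of the original dataset).
sales = [
    {"price": 221900, "bedrooms": 3, "floors": 1.0, "sqft_living": 1180},
    {"price": 538000, "bedrooms": 3, "floors": 2.0, "sqft_living": 2570},
    {"price": 180000, "bedrooms": 2, "floors": 1.0, "sqft_living": 770},
]

# The target is "price"; every other column is a feature.
features = [{k: v for k, v in row.items() if k != "price"} for row in sales]
targets = [row["price"] for row in sales]
print(targets)  # [221900, 538000, 180000]
```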

The objective is to expose this BigQuery data to Prevision to create a home-values model that estimates the price of future house sales. Thanks to Prevision’s automated-machine-learning feature, you’re just a few clicks away from this goal.

Step 1: Grant permission to an external reader in BigQuery

To enable access to BigQuery at the API level, you will need to access BigQuery from a Google service account. To do this, create a service account from the Google Cloud Platform console. Please follow this tutorial to create a service account key and download the corresponding JSON credentials:

Important: your JSON credential file should look like the example below. Please save this file in a safe place; we will need it later to enable the connector:

{
  "type": "service_account",
  "project_id": "project-id",
  "private_key_id": "key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-email",
  "client_id": "client-id",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}

Now, in BigQuery, select the data you want to share and add access for the service account you just created.

Click on “Share”:

Then click on “Add principal” and add the service account with the “BigQuery Data Viewer” role:
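If you prefer the command line, the equivalent grant can be made by exporting the dataset’s metadata with `bq show --format=prettyjson`, appending a READER entry (the legacy equivalent of the Data Viewer role at dataset level) for the service account to its `access` list, and pushing it back with `bq update --source`. A minimal sketch of the JSON edit, with placeholder names:

```python
import json

def grant_reader(dataset_meta: dict, service_account_email: str) -> dict:
    """Append a READER access entry for a service account to a BigQuery
    dataset's metadata (as exported by `bq show --format=prettyjson`)."""
    entry = {"role": "READER", "userByEmail": service_account_email}
    access = dataset_meta.setdefault("access", [])
    if entry not in access:  # avoid duplicate grants
        access.append(entry)
    return dataset_meta

# Example with a minimal metadata stub (placeholder emails):
meta = {"access": [{"role": "OWNER", "userByEmail": "owner@example.com"}]}
meta = grant_reader(meta, "service-account-email")
print(json.dumps(meta["access"][-1]))
```

You would then write the edited metadata back to a file and run `bq update --source dataset.json project:dataset`.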

Step 2: Create a native BigQuery connection in Prevision

From the Prevision platform, create a connector for the Google Cloud Platform service account you just created.

To do this, choose a project, go to the Data section in the left-side menu, and select the Connectors section in the top bar menu:

Click on the “New Connector” button and fill in the GCP form using the credentials JSON file you just created.

Click on “Test Connector”; if you did everything correctly, you should get a success notification:

Then click on “Save Connector” to finally create the GCP connector:

After this, you should have a brand new GCP connector sitting in your connectors list:

Now you need to create a Prevision DataSource that maps directly to your BigQuery table.

To do this, select the “DataSource” section in the top bar menu and click on the “New datasource” button:

Select the GCP connector you just created and choose the “BigQuery” datasource form type:

Fill in the form. The “Dataset BQ” field should match the BigQuery dataset name and the “Table” field should match the BigQuery table name.

Pretty straightforward, isn’t it?
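Behind the scenes, those two fields combine with your GCP project to identify a fully-qualified BigQuery table. A small sketch of how the reference is assembled (the project, dataset, and table names below are placeholders):

```python
def table_reference(project: str, dataset: str, table: str) -> str:
    """Build the fully-qualified `project.dataset.table` identifier
    BigQuery uses to address a table."""
    return f"{project}.{dataset}.{table}"

# Placeholder names, for illustration only.
ref = table_reference("my-project", "real_estate", "house_sales")
query = f"SELECT * FROM `{ref}` LIMIT 10"
print(query)  # SELECT * FROM `my-project.real_estate.house_sales` LIMIT 10
```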


Click on “Test Datasource”; if you did everything correctly, you should get a success notification:

Then click on “Save Datasource” to finally create your BigQuery datasource:

After this, you should have a brand new datasource sitting in your datasources list:

Congratulations! You have successfully built a dynamic mapping in Prevision of your data table hosted in BigQuery.

Note that this mapping is dynamic: every time you access the datasource, all updates to your data in BigQuery will be reflected in Prevision. This is especially useful when the DataSource is built from continuously updated data, whether historical or real-time.

Step 3: Import and start experimenting with Prevision

Now it’s time to start experimenting with Prevision! First you will need to import a snapshot of your BigQuery datasource as a dataset.

To do this, select the “Datasets” section in the top bar menu and click on the “Import dataset” button:

Fill the “Import new dataset” form using the datasource you just created:

When you’re done, click on the Import button on the bottom right:

Well done, you did great. You will now see your imported dataset in your datasets list, ready to experiment with!

You can now start using your dataset inside Prevision and run machine learning experiment tracking directly on your BigQuery data.

Going further

First, I hope that you successfully connected your BigQuery data within Prevision and found these steps easy to follow. If you run into any issues, we are here to help at [email protected].

As the automated-machine-learning feature is beyond the scope of this tutorial, if you are interested in experiment tracking using automated machine learning, here are some tutorials on experiment tracking I advise you to follow:


About the author

Nicolas Gaude

Chief Technical Officer & Co-founder