Using Dotscience in remote mode with Python scripts

If you want to develop Python scripts for model training – as opposed to Jupyter notebooks – the best way to do that is by using the Dotscience Python library in 'remote' mode, known as Dotscience Anywhere.

In this mode you use the Dotscience Python library in your scripts and run them locally, in your IDE of choice or just a Terminal. The library communicates with the Dotscience Hub, registers models, and performs deployments on the cloud. This is the fastest way to deploy models into production. For more information on the different modes in which Dotscience can be used, see the reference section on Dotscience modes.

With this approach it is not possible to track provenance, version data, or use Dotscience collaboration features.

We are working on making script-based development easier within the platform, so that it becomes possible to track data and provenance while iterating on scripts. Please let us know if this is important to you!

This tutorial walks through basic script-based development using an IDE on a macOS system.

Download tutorial scripts and dependencies

Clone the dotscience-python repository and go to the examples folder

git clone
cd dotscience-python/examples

Next, install the Python libraries required for this tutorial.

Install TensorFlow and the dotscience library in your local environment with the following command

pip3 install tensorflow dotscience

Optionally, we recommend using Python's venv module to isolate your Python environment.

Create a Dotscience account

Create a free Dotscience account, or if you already have an account, sign in to it.

Set up the environment

For this step you will need credentials from your Dotscience Hub account. Navigate to the Account panel and copy your API key.

Fill in your username, your API key, and a new project name, then export the environment variables in your Terminal session

export DOTSCIENCE_USERNAME=**your username here**
export DOTSCIENCE_APIKEY=**your API key from Account panel here**
export DOTSCIENCE_PROJECT_NAME=my-new-project
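The dotscience library picks these variables up from the environment when your script runs. As a minimal sketch of how a script could verify they are present before training starts (the helper below is illustrative, not part of the dotscience API; the placeholder values are assumptions for demonstration):

```python
import os

# Hypothetical helper: check the settings exported above. The variable
# names match the tutorial; the dotscience library reads them for you.
def read_dotscience_env():
    settings = {}
    for key in ("DOTSCIENCE_USERNAME", "DOTSCIENCE_APIKEY", "DOTSCIENCE_PROJECT_NAME"):
        value = os.environ.get(key)
        if not value:
            raise RuntimeError(f"{key} is not set; export it before running the script")
        settings[key] = value
    return settings

# Placeholder values for demonstration only
os.environ.setdefault("DOTSCIENCE_USERNAME", "alice")
os.environ.setdefault("DOTSCIENCE_APIKEY", "example-api-key")
os.environ.setdefault("DOTSCIENCE_PROJECT_NAME", "my-new-project")

print(read_dotscience_env()["DOTSCIENCE_PROJECT_NAME"])
```

Failing fast like this gives a clearer error than a failed connection to the Hub later in the run.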

Train the machine learning model

Run the tutorial script from the examples folder.

You will notice that the script connects to the Dotscience environment specified, then validates, trains, builds, and deploys the model into production!

Looking closely at the script, you will notice that it performs the following steps

ds.model(tf, "mnist", "model", classes="classes.json")
ds.publish("trained mnist model", deploy=True)

ds.model() annotates the model, specifying the library used (here TensorFlow), a name, the model directory, and the classes file associated with the model.

ds.publish() instructs Dotscience to build and publish the model; this also creates monitoring dashboards for the model.
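The classes.json file passed to ds.model() maps class indices to human-readable labels. The tutorial's examples folder ships its own copy; as a hedged sketch of generating such a file for MNIST (the exact schema is an assumption, check the file in the repository):

```python
import json

# Hypothetical sketch: generate a classes.json for an MNIST model,
# mapping each class index to its label. For MNIST the labels are
# simply the digits 0-9.
classes = {str(i): str(i) for i in range(10)}

with open("classes.json", "w") as f:
    json.dump(classes, f)

# Read it back to confirm the contents
with open("classes.json") as f:
    loaded = json.load(f)
print(len(loaded))
```

For a model with named categories, the values would be the category names instead of digits.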

Explore the model run

Navigate to the project in Dotscience

You will notice the new project, and that the runs tab contains at least one run, which generated the model. You will also be able to see the provenance of the run.

Explore the metrics on the model training set

The Explore tab shows the metrics that were published by the script.

Model build and logs

The top-level model tab shows the new model that was generated by the script.

Clicking on the model links you back to the provenance of the run that generated the model.

Monitor the behaviour of the model in production

On the deployments tab, find the model you just deployed and open the monitoring dashboard for it by clicking ‘Monitor’. The monitoring dashboard for your model tracks requests to your model and its behaviour on real-world data.

This is a prototype to demonstrate monitoring. For enterprise and other use cases, please contact us so we can enable monitoring at a user or project level. The credentials for the prototype Grafana dashboard are:

Username: playground
Password: password

Initially the monitoring dashboard will be empty, as there are no requests being sent to the model.

For the convenience of this tutorial, we have a demo app that sends requests to the model. You will need the model deployment URL, found under the ‘Host’ heading for your model. Copy the URL by clicking ‘Copy to clipboard’ (note that the entire URL is not displayed).

Select the demo app (in this case, the MNIST Predictor), paste the model deployment URL into the app, and send requests to it by clicking the road signs. Observe the requests being sent to the model, and its behaviour, on the monitoring dashboard above.
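You can also send requests to the deployed model from your own code. A hedged sketch of building such a request, assuming a TensorFlow Serving-style JSON endpoint (the host value is a placeholder; use the ‘Host’ URL you copied from the deployments tab, and check your deployment's details for the exact endpoint path):

```python
import json

# Hypothetical sketch: construct a prediction request for the deployed
# MNIST model. The host and the /v1/models/model:predict path are
# assumptions, not confirmed details of this deployment.
def build_predict_request(host, pixels):
    url = "https://{}/v1/models/model:predict".format(host)
    body = json.dumps({"instances": [pixels]})
    return url, body

# A flattened 28x28 all-zero image as a stand-in input
url, body = build_predict_request("my-model.example.com", [0.0] * 784)
print(url)
```

The returned url and body could then be POSTed with any HTTP client; the responses would appear on the monitoring dashboard alongside the demo app's traffic.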

More information about collaboration, deployment and monitoring can be found in further sections of the tutorial.