Siren Platform User Guide

Installing Siren ML Docker

Siren ML is installed from a Docker image that is downloaded from Docker Hub. Therefore, to run Siren ML for the first time you will need a working Internet connection.

Two Docker images are available for Siren ML:

  • sirensolutions/siren-ml:<version> - for Linux, Mac and Windows.

  • sirensolutions/siren-ml:<version>-gpu - currently only available for Linux.

If you would like to use your GPU for training and activating your machine learning models, follow these prerequisite steps:

  1. Make sure you have a compatible GPU:

    You can check if your device is supported here. Typically if you have an Nvidia GPU, you can use it for machine learning.

  2. Install nvidia-docker (and Nvidia drivers if required):

    Follow the instructions here to install nvidia-docker and to test the installation.

  3. Follow the instructions for installing Siren ML GPU using Docker directly (see Docker section below).
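    After step 2, you can sanity-check the nvidia-docker installation with the standard smoke test of running nvidia-smi inside a CUDA container. The CUDA image tag below is an example, not taken from this guide; pick one that is compatible with your installed driver.

```shell
# Should print the same GPU table as running nvidia-smi on the host.
# The CUDA image tag is illustrative; choose one that matches your driver.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```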

Docker Compose

We recommend using Docker Compose to manage the Siren ML Docker container.

  1. Create a docker-compose.yaml file:

    version: "3"
    services:
      siren-ml:
        image: sirensolutions/siren-ml:latest
        network_mode: bridge
        volumes:
          - /var/lib/sirenml:/var/lib/sirenml
          - /etc/sirenml:/etc/sirenml
        restart: unless-stopped
        ports:
          - 5001:5001

    For more options on how to configure the container, such as limiting CPU usage, see the Docker Compose documentation.

  2. Run Siren ML:

    docker-compose up -d

    To stop the container, use the following command:

    docker-compose down

    If you run docker-compose from a directory other than the one containing your compose file, or if the file has a different name, use the -f flag:

    docker-compose -f /path/to/composefile.yaml up -d
  3. If running Elasticsearch locally, you may need to change the property in your elasticsearch.yml to the following:
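    The property itself appears to be missing from this step. Given the note below about "this IP", it is most likely Elasticsearch's network.host setting, bound to all interfaces; treat this as an assumption and confirm against your Elasticsearch version.

```yaml
# elasticsearch.yml - binds Elasticsearch to all network interfaces so
# that Docker containers can reach it; see the firewall caveat below.
network.host: 0.0.0.0
```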

    This allows Elasticsearch to be reached from Docker containers. This IP should only be used if the server's firewall has been appropriately configured to prevent unauthorized access from external sources.

  4. Set the Elasticsearch URI in the Siren ML configuration (/etc/sirenml/sirenml.yml):
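    The configuration snippet for this step did not survive; a minimal sketch, assuming the elasticsearch.uri key named later in this step and Elasticsearch's default port of 9200. The host is a placeholder; substitute your Elasticsearch address.

```yaml
# /etc/sirenml/sirenml.yml - <elasticsearch-host> is a placeholder;
# if Elasticsearch runs locally, use the local IP found in the next step.
elasticsearch.uri: "http://<elasticsearch-host>:9200"
```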


    If you are running Elasticsearch locally, you must find your local IP address. To do this, run the command for your operating system and look for the associated attribute:

      • Linux and macOS: run ifconfig. Attribute to look for: inet

      • Windows: run ipconfig. Attribute to look for: IPv4 Address
    Use the IP address of this attribute as the value for elasticsearch.uri in the Siren ML configuration.

  5. If using Docker Toolbox, configure Investigate (investigate.yml) to use the default Docker Toolbox address:
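    The investigate.yml snippet is missing here. A sketch follows, using Docker Toolbox's well-known default VM address, 192.168.99.100, and the Siren ML port mapped above; the configuration key name is illustrative, not confirmed by this guide, so check your Investigate documentation for the exact key.

```yaml
# investigate.yml - 192.168.99.100 is Docker Toolbox's default VM IP;
# the key name below is illustrative and may differ in your version.
siren_ml.url: "http://192.168.99.100:5001"
```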


    This points Investigate to the IP of the Siren ML Docker container.


Docker

An alternative to using Docker Compose is using Docker directly. Run the following command in your terminal to launch Siren ML:

docker run --restart unless-stopped -d -p 5001:5001 -v /var/lib/sirenml:/var/lib/sirenml -v /etc/sirenml:/etc/sirenml sirensolutions/siren-ml:latest

or if you are using the GPU version:

docker run --restart unless-stopped --gpus all -d -p 5001:5001 -v /var/lib/sirenml:/var/lib/sirenml -v /etc/sirenml:/etc/sirenml sirensolutions/siren-ml:latest-gpu
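Once launched, you can confirm that the container is running and inspect its logs with standard Docker commands; the filter value matches the image name used above.

```shell
# List running containers started from the Siren ML image.
docker ps --filter "ancestor=sirensolutions/siren-ml:latest"

# Follow the container's logs; replace <container-id> with the ID
# reported by docker ps.
docker logs -f <container-id>
```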

Tips and pitfalls

If you change the mounted volumes between runs, Siren ML will not be able to access any models that were created under the previous volume mounts. Therefore, use the same volume mounts for every run.