Installation

This web application is packaged using docker compose. All the following instructions assume you have a working Docker/Docker Compose installation on your system. If you need help setting up Docker, please refer to the Docker documentation.

Software Architecture

As previously mentioned, this web application is packaged using docker compose. Each component is split into its own docker container:

  • web: contains the reverse proxy nginx and the built version of the frontend

  • backend: contains a flask REST API backend served through nginx from the web container

  • database: contains the postgresql database with the postgis extension (for geometry data)

  • redis: contains the redis task queue message broker (used by backend to delegate long tasks)

  • worker: contains one celery worker processing tasks (some directly, some through rootless docker-in-docker)

  • docker: contains the rootless docker-in-docker daemon used to run the unsafe code of long tasks delegated by the backend

This application also needs one transient container, which is only spawned when starting the complete app and stops after running once. This container (named change-vol-ownership in the example docker-compose.yml file) is necessary to make the docker certificates available to both the worker and docker containers.

Those containers should be on the same network so they can communicate.

web

It contains the nginx reverse proxy and serves the compiled frontend. It also puts the frontend and the backend behind the same address. This part can be configured either by modifying the nginx.conf file before building the application, or by modifying the file at its final location in the container, /etc/nginx/conf.d/default.conf. In this file, the backend and the frontend are configured to sit behind the nginx reverse proxy.

Some other variables can be set in the config.json, which can be provided by creating an instance folder:

  • APPLICATION_TITLE: Title shown in the browser tab

  • API_ROOT: Root of the backend, used by the frontend to connect and communicate with it (for ease, please use the default value)

  • BASE_PATH: Root of the whole application (by default, it’s set as /)

  • BASIC_AUTHENTICATION_HEADER: Useful for authentication (for ease, please use the default value)

  • JWT_AUTHENTICATION_HEADER: Useful for authentication (for ease, please use the default value)
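
For reference, here is a minimal sketch of what an instance config.json could look like. The values shown are purely illustrative (in particular API_ROOT and the two authentication header names are assumptions, not the documented defaults):

{
    "APPLICATION_TITLE": "Deseasion",
    "API_ROOT": "/api",
    "BASE_PATH": "/",
    "BASIC_AUTHENTICATION_HEADER": "Authorization",
    "JWT_AUTHENTICATION_HEADER": "Authorization"
}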

Apart from configuration, this container needs to expose port 80 (done in the dockerfile), and to bind this port to one of the machine's ports in the docker-compose.yml:

ports:
    - <MACHINE_PORT>:80

The frontend should be on the same network as the backend container.

backend

It can be configured in two ways: either you provide it with an instance folder containing well-formatted config.py and config.json files, or you provide some environment variables to the container so it can build the configuration files for you:

  • DB_URI: URI for the postgresql database

  • CELERY_BROKER_URL: celery broker endpoint

  • API_TOKEN: token used to salt the JWT tokens (generated if not set)

  • PROFILE_DIR: profiler directory (set as ./profiler if not set)

  • CELERY_RESULT_BACKEND: celery result backend endpoint (equal to CELERY_BROKER_URL if not set)

If you want, you can configure the backend locally by running the script configure-backend.sh with the environment variables set. You then need to mount the resulting instance folder as a volume in the container. Be careful to set the variables for the intended deployment of the instance (docker or local).
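
As a sketch only, assuming configure-backend.sh reads the variables listed above from its environment (as described here), a local configuration run could look like the following; the values are illustrative and the script path depends on where it lives in the repository:

DB_URI=postgresql://aileron:pass@localhost/aileron \
CELERY_BROKER_URL=redis://localhost:6379/0 \
./configure-backend.sh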

If you want to persist the backend instance configuration, you should mount the container /app/instance directory as a volume.

When this container starts, it performs the following tasks before launching the backend:

  • backend configuration using the environment variables (if instance/config.py is absent)

  • database tables creation (if the database has no alembic version)

  • database tables upgrade (if the database is not up-to-date)

  • extraction of the OpenAPI specification from the API code (performed at each start)

It exposes a uwsgi endpoint on port 3031 (which, in most cases, should be routed through the nginx reverse proxy of the web container).
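
As an illustration only (the shipped nginx.conf already handles this), forwarding such a uwsgi endpoint from nginx could look like the snippet below; the /api location and the backend host name are assumptions based on the example docker-compose.yml and the Services section at the end of this page:

location /api {
    include uwsgi_params;
    uwsgi_pass backend:3031;
}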

database

This container runs a postgresql database with PostGIS extension. It initially contains only generic default data and will create an empty database using the following credentials (given as environment variables):

  • POSTGRES_USER: database user for the application data

  • POSTGRES_PASSWORD: database password for the application data

  • POSTGRES_DB: database name for the application data

In the current state of the software, the application data tables are created by the backend container upon initialization.

The example docker-compose.yml file sets up a volume for the database /var/lib/postgresql/data/ folder so its container can be destroyed without affecting the data.
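
To check that the database is up and see which tables exist, you can open psql inside the container; the credentials below are the ones used in the example configuration later on this page:

docker compose exec database psql -U aileron -d aileron -c '\dt'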

redis

This container runs redis as a message broker between the backend and the worker containers. It is used to delegate long and unsafe tasks to the worker (which may use rootless docker-in-docker to run them safely) and to retrieve their results.

worker

This container runs a celery worker to handle long and/or unsafe tasks sent by the backend.

Some environment variables are necessary for this container to work:

  • DOCKER_HOST: URL for the rootless docker-in-docker daemon

  • DOCKER_CERT_PATH: client certificates directory necessary for communication with the docker-in-docker daemon

  • DOCKER_TLS_VERIFY: whether or not to verify TLS when communicating with the docker-in-docker daemon (should be 1)

It needs read access to the docker certificates directory (the one set in DOCKER_CERT_PATH). This can be accomplished with a volume mounted as read-only. It also needs access to the backend configuration folder (instance/). This can be accomplished with a volume binding those directories between the containers.

This container comes preloaded with the sandbox docker image, which it will upload to the docker container registry during initialization.

docker

This container runs a rootless docker-in-docker daemon waiting for jobs sent by the worker. It runs unsafe tasks safely, as well as other long tasks, in temporary containers (destroyed after completion).

It needs the following environment variable:

  • DOCKER_TLS_CERTDIR: certificates directory (should be the parent folder of the client certificates directory shared with the worker container).

It exposes its API on port 2376, which should be used by the worker container.

Usage

Install required docker images

You can either build the required docker images from source, or simply use the pre-built remote images. In the former case, you need to clone the repository and build all docker images within it:

git clone https://gitlab.com/decide.imt-atlantique/deseasion.git
cd deseasion
make docker

You can also replace the git clone command by downloading the source code archive from one of our releases: Deseasion Releases

In the latter case, the images will be pulled from the registry automatically. You can still pull them manually if you wish:

docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/frontend:latest
docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/database:latest
docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/backend:latest
docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/worker:latest

Note

You can also pull all required docker images directly using docker compose (once you have set up the docker-compose.yml file):

docker compose pull

Set up application

You need to set up your docker-compose.yml file. Here is an example of such a file:

services:
  web:
    image: registry.gitlab.com/decide.imt-atlantique/deseasion/frontend:latest
    restart: always
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    image: registry.gitlab.com/decide.imt-atlantique/deseasion/backend:latest
    restart: always
    environment:
      - DB_URI=postgresql://aileron:pass@database/aileron
      - CELERY_BROKER_URL=redis://redis:6379/0
    volumes:
      - backend-instance:/app/instance
    depends_on:
      - database
      - worker
  database:
    image: registry.gitlab.com/decide.imt-atlantique/deseasion/database:latest
    restart: always
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=aileron
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=aileron
  redis:
    image: redis:7.4.2
    restart: always
  worker:
    image: registry.gitlab.com/decide.imt-atlantique/deseasion/worker:latest
    restart: always
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
    volumes:
      - backend-instance:/app/instance
      - docker-certs:/certs/client:ro
    depends_on:
      - redis
      - docker
  change-vol-ownership:
    # We can use any image we want as long as we can chown
    image: ubuntu:24.04
    # Need a user privileged enough to chown
    user: "root"
    volumes:
      # The volume to chown
      - docker-certs:/tmp/change-ownership
    command: chown -R 1000:1000 /tmp/change-ownership
  docker:
    image: docker:28.0.4-dind-rootless
    restart: always
    privileged: true
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - docker-certs:/certs/client
    depends_on:
      change-vol-ownership:
        # Wait for the ownership to change
        condition: service_completed_successfully
volumes:
  postgres_data_prod:
  backend-instance:
  docker-certs:

Some of the commands below (such as database backup and restore) rely on these credentials as environment variables; they can be supplied by a .env file present in the same directory. Here is a working example:

# PostgreSQL aileron DB settings
POSTGRES_USER="aileron"
POSTGRES_PASSWORD="pass"
POSTGRES_DB="aileron"
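
Once both files are in place, you can ask docker compose to validate them and print the resolved configuration:

docker compose config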

For deployments of the application stack with special requirements, we provide alternative configurations. See this page.

Start

Then, once you set up your docker-compose.yml file, you can start the complete application:

  • To start in daemon mode (better for production):

docker compose up -d
  • To start in the terminal (the application will be shut down when the terminal session ends):

docker compose up

Stop

You can stop the application in two ways (if it was launched in daemon mode).

  • To stop the application but keep the data and configuration:

docker compose down
  • To stop the application and remove all data/configuration:

docker compose down --volumes

Monitoring

You can use any of the docker compose commands to help you monitor your application state:

  • See running containers of the application and their state:

docker compose ps
  • See logs

docker compose logs [SERVICE]

Add user

You can add a user interactively by connecting to the backend container and running the command flask user create. You can do it in two commands:

docker compose exec backend /bin/bash
flask user create

Or in one single command:

docker compose exec -it backend flask user create

Backup database

You can back up the database like any other postgresql database by running pg_dump inside the database container.

First make sure this container is running:

docker compose ps

If it is not running, you can start it on its own (the other containers are not needed for backups):

docker compose up database

Then you can back up the whole database into an SQL file:

docker compose exec database pg_dump -U $POSTGRES_USER $POSTGRES_DB > backup.sql

Note

You can replace backup.sql with any other file name; it will be written to your local directory.

Note

This command needs environment variables, which can be supplied by the .env file present in the same directory:

source .env

You can also replace them directly with their values, taken from the docker-compose.yml or .env file.
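
For example, with the credentials from the example files above, the backup command becomes:

docker compose exec database pg_dump -U aileron aileron > backup.sql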

Restore database

When restoring the database, you need a backup SQL file (let’s say it’s backup.sql).

You then need to bring all services down:

docker compose down

You need to recreate a blank database. The simplest way to do that is to remove the database volume postgres_data_prod (this destroys all databases stored in it).

Note

docker compose appends the directory name as a prefix to the volumes it creates, unless they are declared external. If the application is in a directory called deseasion, that volume will be called deseasion_postgres_data_prod (don't hesitate to check with docker volume ls).

docker volume rm deseasion_postgres_data_prod

Note

You could also have removed all volumes of the application with docker compose down --volumes if you did not modify the configuration files manually. They will all be recreated when the application starts again.

Afterwards, you need to recreate the blank database that was removed; you can simply start the database container, which initializes the database:

docker compose up -d database

Then wait for the initialization process to complete (it takes less than 30s). You should see lines about creating the database, completing init process, then starting PostgreSQL in the logs (check with docker compose logs database).

Now we can effectively restore the database using the backup SQL file:

docker compose exec -T database psql -U $POSTGRES_USER $POSTGRES_DB < backup.sql

Note

This command needs environment variables, which can be supplied by the .env file present in the same directory:

source .env

You can also replace them directly with their values, taken from the docker-compose.yml or .env file.

Afterwards, you should be able to start the whole application. The backend will perform necessary database migrations if you updated the application containers since the backup was made.

docker compose up -d

Note

If you want to make sure there are no problems with this migration, you can start the backend interactively and check the logs:

docker compose up backend

Services

This software contains multiple services, which are exposed through the nginx reverse proxy:

  • /: web application frontend

  • /api: backend (REST API)

  • /api/apidocs: Swagger-UI documentation of the backend
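
Once the stack is running, you can quickly check from the host that these endpoints respond (assuming the example 80:80 port mapping; adapt the port if you changed it, and note that the exact HTTP status returned may vary):

curl -I http://localhost/
curl -I http://localhost/api/apidocs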