============
Installation
============
.. role:: bash(code)
   :language: bash

This web application is packaged using docker compose.
All the following instructions assume that you have a working docker/docker compose
installation on your system.
If you need any help setting up docker, please visit the
`docker documentation <https://docs.docker.com/>`_.
Software Architecture
=====================
As previously mentioned, this web application is packaged using docker compose.
Each component runs in its own docker container:

* :ref:`web <web>`: contains the nginx reverse proxy and the built version of the frontend
* :ref:`backend <backend>`: contains a flask REST API backend, served through nginx from the web container
* :ref:`database <database>`: contains the postgresql database with the postgis extension (for geometry)
* :ref:`redis <redis>`: contains the redis task queue message broker (used by the backend to delegate long tasks)
* :ref:`worker <worker>`: contains one celery worker processing tasks (some directly, some through rootless docker-in-docker)
* :ref:`docker <docker>`: contains the rootless docker-in-docker daemon used to run unsafe code from backend-delegated long tasks

This application also needs one transient container, which is only spawned when
starting the complete app and stops after running once. This container (named
`change-vol-ownership` in the example `docker-compose.yml` file) is necessary to make the docker
certificates available to both the worker and docker containers.
All those containers should be on the same network so they can communicate, as sketched below.
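
As an illustration, a minimal sketch of such a shared network in a `docker-compose.yml` could look as follows (the network name `deseasion-net` is only an example; the real example file may organize this differently):

.. code:: yaml

   services:
     web:
       networks:
         - deseasion-net
     backend:
       networks:
         - deseasion-net
     # ... the other services join the same network ...
   networks:
     deseasion-net: {}
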

.. _web:

web
---
It contains the nginx reverse proxy, and serves the compiled frontend.
It also puts the frontend and the :ref:`backend <backend>` behind the same address.
This part can be configured by modifying the `nginx.conf` file before building the application,
or by modifying the file at its final location in the container, `/etc/nginx/conf.d/default.conf`.
In this file, the :ref:`backend <backend>` and the frontend are configured to sit behind the same nginx reverse proxy.
Some other variables can be set in the `config.json`, which can be provided by creating an `instance` folder:
* `APPLICATION_TITLE`: Title shown in the browser tab
* `API_ROOT`: Root of the backend; allows the frontend to reach and communicate with it (for ease, please use the default value)
* `BASE_PATH`: Root of the whole application (by default, it's set as `/`)
* `BASIC_AUTHENTICATION_HEADER`: Useful for authentication (for ease, please use the default value)
* `JWT_AUTHENTICATION_HEADER`: Useful for authentication (for ease, please use the default value)
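
For illustration only, a minimal `instance/config.json` could look like the following sketch; the values shown are placeholders, not the shipped defaults, so keep the defaults unless you have a specific need:

.. code:: json

   {
       "APPLICATION_TITLE": "DESEASION",
       "API_ROOT": "/api",
       "BASE_PATH": "/",
       "BASIC_AUTHENTICATION_HEADER": "Authorization",
       "JWT_AUTHENTICATION_HEADER": "Authorization"
   }
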
Otherwise, this container needs to expose port 80 (done in the dockerfile)
and to bind this port to one of the host machine's ports in the `docker-compose.yml`:

.. code:: yaml

   ports:
     - "<host-port>:80"

The frontend should be on the same network as the
:ref:`backend <backend>` container.

.. _backend:

backend
-------
It can be configured in two ways.
Either you provide it with an `instance` folder with well-formatted `config.py` and `config.json`
files, or you provide some environment variables to the container so it can build
the configuration files for you:
* `DB_URI`: URI for the postgresql database
* `CELERY_BROKER_URL`: celery broker endpoint
* `API_TOKEN`: token used to salt the JWT tokens (generated if not set)
* `PROFILE_DIR`: profiler directory (set as `./profiler` if not set)
* `CELERY_RESULT_BACKEND`: celery result backend endpoint (equal to `CELERY_BROKER_URL` if not set)
If you want, you can configure the backend locally by running the `configure-backend.sh` script
with the environment variables set, then mount the resulting `instance` folder as a volume in the container.
Be careful to set the variable values for the deployment the configuration is intended for (docker or local).
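
For illustration, a local configuration run could look like the following sketch (the exact invocation and the variable values are assumptions; adapt them to your deployment):

.. code:: bash

   # Assumed invocation: the script reads the variables from the environment.
   # The values below are placeholders for a local (non-docker) deployment.
   export DB_URI="postgresql://user:password@localhost:5432/deseasion"
   export CELERY_BROKER_URL="redis://localhost:6379/0"
   ./configure-backend.sh
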
If you want to persist the backend instance configuration, you should mount the container `/app/instance`
directory as a volume.
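
For illustration, a hedged sketch of the corresponding service definition (the variable values, the volume name `backend_instance`, and the service names `database` and `redis` are placeholders consistent with the architecture described above):

.. code:: yaml

   services:
     backend:
       image: registry.gitlab.com/decide.imt-atlantique/deseasion/backend:latest
       environment:
         DB_URI: postgresql://user:password@database:5432/deseasion
         CELERY_BROKER_URL: redis://redis:6379/0
       volumes:
         - backend_instance:/app/instance  # persist the generated configuration
   volumes:
     backend_instance:
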
When starting this container, it will perform the following tasks before launching the backend:
* backend configuration using the environment variables (if `instance/config.py` is absent)
* database tables creation (if the database has no alembic version)
* database tables upgrade (if the database is not up-to-date)
* openAPI specification extraction from the API code (performed at each start)
It exposes a uwsgi endpoint on port 3031 (in most cases it should be routed by the nginx reverse proxy
from the :ref:`web <web>` container).
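
As an illustration, the routing in the :ref:`web <web>` container's nginx configuration could look like the following sketch inside the `server` block (the `backend` service name and the `/api` location are assumptions based on the layout described in this document):

.. code:: nginx

   location /api {
       include uwsgi_params;      # standard uwsgi parameters shipped with nginx
       uwsgi_pass backend:3031;   # forward API requests to the backend uwsgi endpoint
   }
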

.. _database:

database
--------
This container runs a postgresql database with PostGIS extension.
It initially contains only generic default data and will create an empty database
using the following credentials (given as environment variables):
* `POSTGRES_USER`: database user for the application data
* `POSTGRES_PASSWORD`: database password for the application data
* `POSTGRES_DB`: database name for the application data
In the current state of the software, the application data tables are created by
the :ref:`backend <backend>` container upon initialization.
The example `docker-compose.yml` file sets up a volume for the database `/var/lib/postgresql/data/` folder,
so its container can be destroyed without affecting the data.
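
For illustration, a hedged sketch of such a database service, with placeholder credentials and the data volume:

.. code:: yaml

   services:
     database:
       image: registry.gitlab.com/decide.imt-atlantique/deseasion/database:latest
       environment:
         POSTGRES_USER: deseasion
         POSTGRES_PASSWORD: change-me
         POSTGRES_DB: deseasion
       volumes:
         - postgres_data_prod:/var/lib/postgresql/data/
   volumes:
     postgres_data_prod:
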

.. _redis:

redis
-----
This container runs redis as a message broker between the :ref:`backend <backend>`
and the :ref:`worker <worker>` containers.
It is used to delegate long and unsafe tasks to the worker (which may use rootless docker-in-docker
to safely run those) and to retrieve their results.
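
No application-specific configuration is described for this container, so a minimal sketch of the service can be as simple as the following (the image and tag are assumptions):

.. code:: yaml

   services:
     redis:
       image: redis:7-alpine
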

.. _worker:

worker
------
This container runs a celery worker to handle long and/or unsafe tasks sent by the :ref:`backend <backend>`.
The following environment variables are necessary for this container to work:
* `DOCKER_HOST`: URL for the rootless docker-in-docker daemon
* `DOCKER_CERT_PATH`: client certificates directory necessary for communication with the docker-in-docker daemon
* `DOCKER_TLS_VERIFY`: whether or not to verify TLS when communicating with the docker-in-docker daemon (should be 1)
It needs read access to the :ref:`docker <docker>` certificates
directory (the one set in `DOCKER_CERT_PATH`). This can be accomplished with a volume
mounted as read-only.
It also needs access to the :ref:`backend <backend>` configuration folder
(`instance/`). This can be accomplished with a volume binding this directory between the two containers, as in the sketch below.
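
Putting these requirements together, a hedged sketch of the worker service could look as follows (the certificate path, volume names, and the `docker` service name are placeholders consistent with the rest of this document):

.. code:: yaml

   services:
     worker:
       image: registry.gitlab.com/decide.imt-atlantique/deseasion/worker:latest
       environment:
         DOCKER_HOST: tcp://docker:2376
         DOCKER_CERT_PATH: /certs/client
         DOCKER_TLS_VERIFY: "1"
       volumes:
         - docker_certs:/certs/client:ro   # read-only access to the docker-in-docker certificates
         - backend_instance:/app/instance  # shared backend configuration folder
   volumes:
     docker_certs:
     backend_instance:
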
This container comes preloaded with the sandbox docker image, which it will upload to the
:ref:`docker <docker>` container registry during initialization.

.. _docker:

docker
------
This container runs a rootless docker-in-docker daemon waiting for jobs sent by the :ref:`worker <worker>`.
It safely runs unsafe tasks, as well as other long tasks, in temporary containers
(destroyed after completion).
It needs the following environment variable:
* `DOCKER_TLS_CERTDIR`: certificates directory (should be the parent folder of the client certificates directory shared with
  the :ref:`backend <backend>` and :ref:`worker <worker>` containers).
It exposes its API on port 2376, which should be used by the :ref:`worker <worker>` container.
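
A corresponding hedged sketch of this service, using the official rootless docker-in-docker image (the certificate paths and volume name match the placeholders used in the worker example above):

.. code:: yaml

   services:
     docker:
       image: docker:dind-rootless
       privileged: true              # required by docker-in-docker, even in the rootless flavour
       environment:
         DOCKER_TLS_CERTDIR: /certs  # parent folder of the client certificates directory
       volumes:
         - docker_certs:/certs/client
       expose:
         - "2376"
   volumes:
     docker_certs:
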
Usage
=====
Install required docker images
------------------------------
You can either build the required docker images from source, or you can simply
use the remote built images. In the former case, you need to clone the repository
and build all docker images within it:
.. code:: bash

   git clone https://gitlab.com/decide.imt-atlantique/deseasion.git
   cd deseasion
   make docker

You can also replace the `git clone` command by downloading the source code archive from one of our releases: `Deseasion Releases <https://gitlab.com/decide.imt-atlantique/deseasion/-/releases>`_.
In the latter case, the images will be pulled from the repository automatically.
You can nevertheless pull them manually if you wish:
.. code:: bash

   docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/frontend:latest
   docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/database:latest
   docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/backend:latest
   docker pull registry.gitlab.com/decide.imt-atlantique/deseasion/worker:latest

.. note::

   You can also pull all required docker images manually using docker compose
   (once you have set up the `docker-compose.yml` file) with the following command: :bash:`docker compose pull`

Set up application
------------------
You need to set up your `docker-compose.yml` file.
Here is an example of such a file:
.. literalinclude:: docker-compose.yml
   :language: yaml

This file needs environment variables; they can be supplied by a `.env` file present in the same directory.
Here is a working example:
.. literalinclude:: .example.env
   :language: bash

For users with particular deployment needs for the application stack, we provide alternative
configurations. See this :doc:`page `.
Start
-----
Once you have set up your `docker-compose.yml` file, you can start the complete application:
* To start in daemon mode (better for production):

  .. code:: bash

     docker compose up -d

* To start in the terminal (the application will shut down when the terminal is closed):

  .. code:: bash

     docker compose up

Stop
----
You can stop the application in two ways (if launched in daemon mode).
* To stop the application while keeping the data and configuration:

  .. code:: bash

     docker compose down

* To stop the application and remove all data/configuration:

  .. code:: bash

     docker compose down --volumes

Monitoring
----------
You can use any of the docker compose commands to help you monitor your application state:
* See the running containers of the application and their state:

  .. code:: bash

     docker compose ps

* See the logs:

  .. code:: bash

     docker compose logs [SERVICE]

Add user
--------
You can add a user interactively by connecting to the :ref:`backend <backend>` container and using the command :bash:`flask user create`.
You can do it in two commands:
.. code:: bash

   docker compose exec backend /bin/bash
   flask user create

Or in one single command:
.. code:: bash

   docker compose exec -it backend flask user create

Backup database
---------------
You can back up the database like any postgresql database by running `pg_dump` inside the
:ref:`database <database>` container.
First make sure this container is running:
.. code:: bash

   docker compose ps

If it is not running, you can start it alone (the other containers are not needed for backups):
.. code:: bash

   docker compose up database

Then you can back up the whole database into a SQL file:

.. code:: bash

   docker compose exec database pg_dump -U $POSTGRES_USER $POSTGRES_DB > backup.sql

.. note:: You can replace `backup.sql` with any other file name; it will be written in your local directory.
.. note::

   This command needs environment variables; they can be supplied by the `.env` file present in the same directory:

   .. code:: bash

      source .env

   You can also directly replace them with their values, taken from the `docker-compose.yml` or `.env` file.

Restore database
----------------
When restoring the database, you need a backup SQL file (let's say it's `backup.sql`).
You then need to take all services down:
.. code:: bash

   docker compose down

You need to recreate a blank database; the simplest way to do that is to remove
the database volume `postgres_data_prod` (**this destroys all databases**).
.. note::

   docker compose appends the directory name as a prefix to the volumes it creates, unless they are external.
   If the application is in a directory called `deseasion`, that volume will be called
   `deseasion_postgres_data_prod` (don't hesitate to check with :bash:`docker volume ls`).

.. code:: bash

   docker volume rm deseasion_postgres_data_prod

.. note::

   You could also have removed all volumes of the application with
   :bash:`docker compose down --volumes` if you did not modify the configuration
   files manually. They will all be recreated when starting the application again.

Afterwards, you need to recreate the blank database that was removed; simply
start the :ref:`database <database>` container, which initializes it:
.. code:: bash

   docker compose up -d database

Then wait for the initialization process to complete (it takes less than 30s).
In the logs, you should see lines about creating the database, completing the init process, and then starting PostgreSQL
(check with :bash:`docker compose logs database`).
Now we can effectively restore the database using the backup SQL file:
.. code:: bash

   docker compose exec -T database psql -U $POSTGRES_USER $POSTGRES_DB < backup.sql

.. note::

   This command needs environment variables; they can be supplied by the `.env` file present in the same directory:

   .. code:: bash

      source .env

   You can also directly replace them with their values, taken from the `docker-compose.yml` or `.env` file.

Afterwards, you should be able to start the whole application.
The backend will perform the necessary database migrations if you have updated the application containers
since the backup was made.
.. code:: bash

   docker compose up -d

.. note::

   If you want to make sure there are no problems with such a migration, you can
   start the :ref:`backend <backend>` container interactively and check the logs:

   .. code:: bash

      docker compose up backend

Services
========
This software contains multiple services, which are exposed through an nginx reverse proxy:
* `/`: web application frontend
* `/api`: backend (REST API)
* `/api/apidocs`: Swagger-UI documentation of the backend
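
Once the application is running, you can quickly check these endpoints from the host with `curl`, for example (the host port `8080` is only an assumption and depends on the port mapping chosen in your `docker-compose.yml`):

.. code:: bash

   curl -I http://localhost:8080/             # frontend
   curl -I http://localhost:8080/api/apidocs  # Swagger-UI documentation of the backend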