Docker Compose deployment

Deployment of Allure TestOps using docker compose

Docker compose deployment is not recommended for environments with a high workload. If the number of test results in a single launch exceeds 3000, or you have several runs with more than 30000 test results within 24 hours, then you need to deploy Allure TestOps in a Kubernetes cluster.

Docker compose is a good, reliable, and fast option to deploy Allure TestOps for a proof of concept, a trial, a small team's projects, or any system with a relatively low workload (see the warning above).

Production deployment hints

The docker compose configuration files created by Qameta Software include all the additional services required to run Allure TestOps.

However, for a production deployment using docker compose, we strongly recommend having

  • a dedicated PostgreSQL database server.
    • The UAA and report databases can reside on the same server or on different ones.
  • a dedicated RabbitMQ deployment.
  • an external cloud object storage service (AWS, Google) or a locally deployed S3 service such as min.io backed by SSDs.

In practice, this means you need to disable these services in the docker compose configuration file and use dedicated deployments of the services on separate machines in your environment.

Production deployment diagram

A production system deployed via docker compose should be set up as shown in the diagram below.


Target OS

  • We recommend using dedicated bare metal or a VM with a Linux family OS (Ubuntu, CentOS, RedHat) for the deployment.
  • We do not recommend using servers with Windows OS for the deployment.
  • We strongly recommend avoiding installation on a laptop or PC, as Allure TestOps is server-grade software requiring considerable resources when processing automated test results.

HW requirements

For the docker compose deployment on a single server you need to have at least the following HW resources:

CPU     RAM    Storage size     Storage type
4 vCPU  8 GB   at least 50 GB   SSD

These figures assume modern hardware. If your server is five years old or older, the requirements will be different.

The resources described above are enough to start Allure TestOps without hiccups; however, the final resource requirements are defined by your workload profile.

Using HDD instead of SSD will dramatically degrade the performance.

Prerequisites

The following items need to be in place before you can deploy Allure TestOps using docker compose:

  1. You need to have docker compose version 2+ installed on your server.
  2. You need to have the username and the password (also referred to as a token) to pull the docker images from our private docker hub registry. These are usually provided with the trial or commercial licence (see the next item).
  3. You need to have a trial or commercial licence to be able to start Allure TestOps.

Images registry

Here, we assume you have a licence and username and password (token) for pulling images (see items #2 and #3 in prerequisites).

Images for the docker compose deployment need to be downloaded from docker hub registry as follows:

Please read the two items below carefully, and use a terminal application for all the actions described.

  • Login via docker web UI won’t work.

  • Login via docker desktop UI won’t work.

The docker registry to download images from is a private one, so you must use login and password provided with your licence (see item #2 in the prerequisites).

Log in to docker hub

This can be done using the terminal application only.

In the terminal application, execute the following command:

sudo docker login --username qametaaccess

Depending on the configuration of your server, sudo may not be required.

qametaaccess is the standard login for our private registry at the moment.

Next, docker will prompt you to enter the password; provide the password received from our sales team with your trial or commercial licence. This password may also be referred to as a token.

If you've used the correct command and provided correct credentials, you will see something like this:

user@server:~$ sudo docker login --username qametaaccess
[sudo] password for user: 
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

This is it. Now, we can start the configuration.
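If you prefer not to type the token interactively, docker can also read it from standard input. A sketch, assuming the token is saved in a file named token.txt (a hypothetical filename):

```shell
# Read the registry token from a file instead of typing it at the prompt.
# token.txt is a hypothetical file containing the password/token you received.
sudo docker login --username qametaaccess --password-stdin < token.txt
```

This avoids the token appearing in your shell history or on screen.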

Docker compose configuration files

Docker compose configuration consists of 2 files:

  1. .env file where you keep the configuration parameters specific to your deployment. It is prepared by you (don’t worry, we already have a good template prepared).
  2. docker-compose.yml which contains all the settings needed to run the Allure TestOps services and some auxiliary components needed for Allure TestOps to start and run. This file is managed by Qameta Software, and you should not apply any changes to it unless specifically recommended by our support team.

Getting configuration template

The Allure TestOps docker compose deployment template is available as a GitHub repository: https://github.com/qameta/allure-docker-compose

Execute (copy all lines to the clipboard, paste to your terminal app and hit Enter) the following in your terminal on a machine (preferably, a bare metal server or a VM):

git clone https://github.com/qameta/allure-docker-compose.git && \
mkdir ~/allure-testops && \
cp ${PWD}/allure-docker-compose/docker-compose.yml ~/allure-testops && \
cp ${PWD}/allure-docker-compose/env-example ~/allure-testops/.env && \
cp -r ${PWD}/allure-docker-compose/configs ~/allure-testops/ && \
cd ~/allure-testops

What these lines do:

  1. Clone our repository with the docker compose configuration templates to the machine where you are going to deploy Allure TestOps.
  2. Copy docker-compose.yml, .env, and the configs directory to a new directory called allure-testops in your home directory.
  3. Change into the allure-testops directory.

Now, we can start the configuration.

Configuration profiles

The deployment consists of several services (three Allure TestOps services and the auxiliary ones referred to below) needed to successfully run the business logic.

Sometimes the auxiliary services we need are already present in your infrastructure and can be reused. In that case, the corresponding service included in the configuration is not needed, and its profile must be excluded from the list of used profiles.

A profile needs to be included in the list of used profiles only if the corresponding system is absent from your infrastructure.

A profile defines a service which is started alongside Allure TestOps as part of the configuration.

Some profiles are incompatible with each other, so be careful with the configuration.

Available profiles

  1. default
    • must be disabled if the ldap profile is used
    • must be used in all other cases
  2. postgres
    • Enable this profile if you want to use a Postgres database in a container started alongside Allure TestOps.
    • Don’t enable this profile if you have a dedicated PostgreSQL server.
  3. redis
    • Enable this profile if you want to use Redis in a container started alongside Allure TestOps. In the vast majority of cases, you don’t need a dedicated Redis server, as Redis is used to store session information only.
  4. rabbit
    • Enable this profile if you want to use RabbitMQ in a container started alongside Allure TestOps.
    • Don’t enable it if there is a dedicated RabbitMQ server in the infrastructure and you are allowed to use it with Allure TestOps.
  5. minio-local
    • Enable this profile if you want to use min.io (an S3 solution for storing test artifacts) in a container started alongside Allure TestOps.
    • Don’t enable it if you have a dedicated S3 service in your network or you are buying S3 services from a cloud provider (AWS, GCS).
    • If you have dedicated services, then you need additional configuration for the S3 integration (.env).
  6. minio-proxy
    • Enable this profile if you want to use min.io as a caching proxy before storing files in your S3 solution. It allows you to save some traffic and avoid unnecessary operations with S3.
  7. ldap
    • Enable if you are going to integrate Allure TestOps with your LDAP (AD) and use LDAP authentication.
    • This profile must not be enabled simultaneously with the default profile.
  8. metrics
    • Enable this profile if you are going to collect metrics from Allure TestOps.
    • This works with Prometheus, Grafana, and exporters.
    • Make sure ./configs/prometheus/prometheus.yml is tuned for the minio scraping location.

Here is the line with profiles to run Allure TestOps without using any existing infrastructure:

export COMPOSE_PROFILES=default,postgres,redis,rabbit,minio-local

Preparing your configuration file .env

  1. Open the .env file with your favourite text editor.
  2. Add/update the line with the needed profiles. The default set of profiles is export COMPOSE_PROFILES=default,postgres,redis,rabbit,minio-local.
  3. Update the Allure TestOps release to be installed using the ALLURE_VERSION variable.
  4. Update ALLURE_HOST and ALLURE_INSTANCE_PORT according to your network settings.
  5. Update the password for the admin user using the ALLURE_ADMIN_PASS variable.
  6. Walk through the .env file and update the parameters that have # Update comments.
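Putting the steps above together, a minimal .env fragment could look like the following. All values here are illustrative examples, not defaults; the exact syntax of each line follows the env-example template:

```shell
# Illustrative .env fragment -- replace every value with your own.
export COMPOSE_PROFILES=default,postgres,redis,rabbit,minio-local
ALLURE_VERSION=3.177.2            # release to install (example version)
ALLURE_HOST=testops.example.com   # hostname users will open in the browser (example)
ALLURE_INSTANCE_PORT=8080         # port the web UI will listen on (example)
ALLURE_ADMIN_PASS=change-me       # initial admin password (example, change it)
```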

Start Allure TestOps

Once you have completed all of your configuration options, you can bring the configuration up with docker compose:

docker-compose pull       # will download all necessary images
docker-compose up -d      # will start the configuration

The very first start usually takes up to 5 minutes. Further starts should take less than 1 minute.

Accessing Web interface

Allure TestOps web UI is available at http://<your-hostname-here>:<port>

The port is defined by you in the .env configuration file.

Initial login

Log in to Allure TestOps using username admin and password from your configuration file.

admin log-in information

Allure TestOps requires an admin user account to be created and kept in the system.

This user’s name and password are defined in the configuration file, and the account will be restored to the state described in the configuration to ensure you won’t lose access to your Allure TestOps instance.

You cannot delete/disable this user, and you cannot remove the admin rights for this user – during the next start of Allure TestOps, the user will be recreated with the full set of available rights and the password defined in the configuration file.

If you omit provisioning of the initial admin password, then the default user admin will be created with a strong password generated by the system, and the generated password will be written to the logs of the uaa service.

Each time Allure TestOps is restarted, the admin user account is restored to its initial state defined in the config file .env
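If you did not set an admin password and need to retrieve the generated one from the uaa logs, a sketch (the exact wording of the log line may differ between releases, so the grep pattern here is only a guess):

```shell
# Search the uaa service logs for the generated admin password.
docker compose logs allure-uaa | grep -i password
```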

Provide the license for your Allure TestOps instance

The next thing you will see is a modal window where you need to provide the licence you acquired.


After the correct license is provided, you’ll be able to start your work.

If you aren’t able to log in within 10 minutes after running the up -d command, please check the troubleshooting section for logs generation and create a support ticket with the logs. Please always use your corporate email address (the email address of the company for which the licence is issued).

Release upgrades

If your release of Allure TestOps is lower than 3.177.2, please refer to the special procedure for updating to 3.177.2 and later releases.

Once your Allure TestOps instance has started, configuration changes and image updates should be done using the following set of commands:

Update release version in the file .env for the variable ALLURE_VERSION, then run the commands:

docker-compose pull
docker-compose down

Wait until all the services have stopped and run Allure TestOps via docker-compose:

docker-compose up -d

Troubleshooting

In case of any trouble, do not immediately bring Allure TestOps down and up. We need the log files of each component to understand what has happened.

What is not needed for tech support

Please do not

  • provide screenshots of logs in your terminal window. These are completely useless for the analysis.
  • paste a full log into a support ticket. Such logs are impossible to analyse in the web UI of the help desk.
  • provide a single log for all components together.

What is needed for support

Please always provide the following information:

  • Allure TestOps release (see URL/status page of your Allure TestOps instance)
  • Releases of all the tools/integrations you have problems with (Allure TestOps plugins, allurectl, IDE plugins)
  • logs for each Allure TestOps service as a separate text file

Getting the logs

You can get all the logs for your Allure TestOps instance by executing the following command:

docker compose logs -f
  • this will continuously stream the logs from all the services
  • this log is good for a quick analysis on your side; please do not provide it to technical support.

Logs of specific component

To get logs for a specific component, you need to specify the name of the service whose logs you want to see.

To get the service names of your docker compose deployment, either check your docker-compose.yml file or execute the following command in your terminal (your current directory must contain the docker compose config files).

docker compose ps

This command will produce output similar to:

user@host folder-name % docker compose ps
NAME                 COMMAND                  SERVICE                    STATUS               PORTS
allure-gateway       "/bin/sh -c /entrypo…"   allure-gateway             running (healthy)   0.0.0.0:10777->8080/tcp
allure-report        "/bin/sh -c /entrypo…"   allure-report              running (healthy)
allure-uaa           "/bin/sh -c /entrypo…"   allure-uaa                 running (healthy)
autoheal             "/docker-entrypoint …"   autoheal                   running (healthy)
minio-local          "/opt/bitnami/script…"   minio-local                running              0.0.0.0:9000->9000/tcp
minio-provisioning   "/bin/sh -c 'mc conf…"   minio-local-provisioning   exited (0)
rabbit               "/opt/bitnami/script…"   rabbitmq                   running              25672/tcp
redis                "/opt/bitnami/script…"   redis                      running (healthy)    6379/tcp
report-db            "docker-entrypoint.s…"   report-db                  running              5432/tcp
uaa-db               "docker-entrypoint.s…"   uaa-db                     running              5432/tcp

The column we need is SERVICE.

To get the logs of all Allure TestOps services saved to separate files, execute the following command:

docker compose logs allure-report > report-logs.txt && \
docker compose logs allure-uaa > uaa-logs.txt && \
docker compose logs allure-gateway > gateway-logs.txt
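If the service names in your deployment differ, the per-service dumps above can be generalized with a loop. A sketch, assuming docker compose v2:

```shell
# Dump the logs of every compose service into its own text file.
# `docker compose ps --services` prints one service name per line.
for svc in $(docker compose ps --services); do
  docker compose logs "$svc" > "${svc}-logs.txt"
done
```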

What else is worth checking

Also check the following:

  1. Available disk space on the artifact storage: df -h --total
  2. Available inodes on the artifact storage: df -i
  3. Available RAM: free -h, top
  4. Logs of the other services.

Alternatively you can use this script to get the needed logs.

Uninstall

This will remove all the volumes with their data; no data will remain on your system, and the data cannot be restored.

To uninstall Allure TestOps, run the following:

docker compose down -v --rmi local

Using existing infrastructure

This section is for you if you’re going to use resources from your existing infrastructure, such as

  • a dedicated PostgreSQL DB server
  • a dedicated RabbitMQ server
  • a dedicated Redis
  • a dedicated S3 solution such as a min.io server, AWS S3, Google Cloud Storage, etc.

All the settings to connect external services are to be made in the .env file.
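The profile list shrinks accordingly. For example, if PostgreSQL, RabbitMQ, and S3 are all provided by your existing infrastructure, only the bundled Redis would remain containerized (an illustrative set; adjust to the services you actually reuse):

```shell
# External Postgres, RabbitMQ, and S3; only Redis runs in a container.
export COMPOSE_PROFILES=default,redis
```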

Mandatory Allure TestOps services

The mandatory Allure TestOps services are the following:

  • uaa (allure-uaa)
  • report (allure-report)
  • gateway (allure-gateway)

These three services must remain in the docker compose configuration files.

Using dedicated PostgreSQL database server for uaa and report

Dedicated PostgreSQL database server is a must in a production environment.

Connecting the external database involves the following steps:

The minimal supported release of PostgreSQL is 12.

  1. Creating the databases for the uaa and report services using the recommended Postgres DB version (see the line above).
  2. Migrating the data from the existing database to the standalone one. Skip this step if you’re deploying Allure TestOps from scratch.
  3. Updating the settings to connect the uaa and report services to the standalone database.

Creating the databases

Ask your Postgres DBA to create two databases: one for the uaa service and one for the report service.

You cannot store the data for these two services in one database; this will brick the system.

  1. The uaa database stores all the data related to users’ profiles and licensing.
  2. The report database stores test data.

Creating uaa database

The following script will create an empty database; during the first start, the uaa service will execute a series of SQL update scripts to create the actual database structure.

CREATE DATABASE uaa TEMPLATE template0 ENCODING 'utf8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE USER uaa with encrypted password '<uaa-db-password>';
GRANT ALL PRIVILEGES ON database uaa to uaa;

Creating report database

The following script will create an empty database; during the first start, the report service will execute a series of SQL update scripts to create the actual database structure.

CREATE DATABASE report TEMPLATE template0 ENCODING 'utf8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE USER report with encrypted password '<report-db-password>';
GRANT ALL PRIVILEGES ON database report to report;
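Before pointing the services at the new databases, you can verify that both are reachable. A sketch, assuming the psql client is installed and 192.168.0.100 is a hypothetical database host:

```shell
# Check connectivity to both databases; \conninfo prints the active connection details.
psql "postgresql://uaa:<uaa-db-password>@192.168.0.100:5432/uaa" -c '\conninfo'
psql "postgresql://report:<report-db-password>@192.168.0.100:5432/report" -c '\conninfo'
```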

Migration from the DB in a container to standalone DB

The migration of the existing data to the standalone database is described in the FAQ section.

You need to create dumps of the uaa and report databases and then restore each dump to the appropriate database. Restoring a dump to the wrong database will corrupt the data, as the uaa and report databases use different scripts to bring them to the actual database structure.

Updating the settings of services

UAA service

For the uaa service to work with an external database, you need to update the following in the .env file (only the items to be updated are shown):

ALLURE_UAA_DB_HOST=uaa-db # <<< your hostname here. e.g. 192.168.0.100
ALLURE_UAA_DB_NAME=uaa # <<< your db name for UAA service, uaa is used by default, it's better to keep it the same 
ALLURE_UAA_DB_USERNAME=uaa # <<< uaa DB username (see above the creation of the DB for UAA)
ALLURE_UAA_DB_PASS=password # <<< uaa DB password (see above the creation of the DB for UAA)
ALLURE_UAA_DB_PORT=5432 # <<< uaa DB port, 5432 is the default one

report service

For the report service to work with an external database, you need to update the following in the .env file (only the items to be updated are shown):

ALLURE_REPORT_DB_HOST=report-db # <<< your hostname here, e.g. 192.168.0.100
ALLURE_REPORT_DB_NAME=report # <<< your DB name for the report service; `report` is used by default, it's better to keep it the same
ALLURE_REPORT_DB_USERNAME=report # <<< report DB username (see above the creation of the DB for report)
ALLURE_REPORT_DB_PASS=password # <<< report DB password (see above the creation of the DB for report)
ALLURE_REPORT_DB_PORT=5432 # <<< report DB port, 5432 is the default one

Configuring external S3 storage for artifacts

S3 storage is more reliable in terms of handling collisions when saving and deleting data during batch operations with files, so it is recommended to use at least min.io instead of storing artifacts directly on the file system.

If you are going to connect S3 storage for test case and test result artifacts (which is recommended for production), you need to add the following parameters for the report service (only the parameters you need to update in the .env file are shown):

# S3
ALLURE_S3_PROVIDER=s3
ALLURE_S3_URL=http://minio-local:9000
# Leave as is
ALLURE_S3_MINIO_URL=http://minio-local:9000 
ALLURE_S3_BUCKET=allure-testops
ALLURE_S3_REGION=qameta-0
ALLURE_S3_ACCESS_KEY=WBuetMuTAMAB4M78NG3gQ4dCFJr3SSmU
ALLURE_S3_SECRET_KEY=m9F4qupW4ucKBDQBWr4rwQLSAeC6FE2L
ALLURE_S3_SECRET_PATHSTYLE=true

Connecting S3 storage

The integration with external storage is configured using environment variables.

If you’re deploying via containers, the connection to S3 storage is configured by adding environment variables to the report container as follows:

For AWS’ S3

      ALLURE_BLOBSTORAGE_TYPE: "S3"
      ALLURE_BLOBSTORAGE_S3_ENDPOINT: "s3.amazonaws.com" # leave as it is here
      ALLURE_BLOBSTORAGE_S3_REGION: "us-east-1" # region, update to real region you are using
      ALLURE_BLOBSTORAGE_S3_BUCKET: "<your-bucket-name>" # replace string in <> with your bucket's name 
      ALLURE_BLOBSTORAGE_S3_ACCESSKEY: "<ACCESS-KEY>" # replace string in <> with your Access key ID 
      ALLURE_BLOBSTORAGE_S3_SECRETKEY: "<SecretKey>" # replace string in <> with your Secret access key 
      ALLURE_BLOBSTORAGE_S3_PATHSTYLEACCESS: "true"

If your AWS region is different from us-east-1, it’s recommended to use a regional S3 endpoint such as s3.<region>.amazonaws.com

Access rights settings for S3 bucket

For the S3 integration to work smoothly with Allure TestOps, the following access rights need to be configured for the S3 bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3assets",
            "Effect": "Allow",
            "Action": [
                "s3:PutObjectAcl",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucket",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:GetBucketAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<ALLURE_S3_BUCKET_NAME>/*",
                "arn:aws:s3:::<ALLURE_S3_BUCKET_NAME>"
            ]
        }
    ]
}

For Google’s S3


      ALLURE_BLOBSTORAGE_TYPE: "S3"
      ALLURE_BLOBSTORAGE_S3_ENDPOINT: "https://storage.googleapis.com" # leave as it is here
      ALLURE_BLOBSTORAGE_S3_REGION: "europe-west3" # region, update according to your real region
      ALLURE_BLOBSTORAGE_S3_BUCKET: "<your-bucket-name>" # replace string in <> with your bucket's name 
      ALLURE_BLOBSTORAGE_S3_ACCESSKEY: "<ACCESS-KEY>" # replace string in <> with your Access key ID 
      ALLURE_BLOBSTORAGE_S3_SECRETKEY: "<SecretKey>" # replace string in <> with your Secret access key 
      ALLURE_BLOBSTORAGE_S3_PATHSTYLEACCESS: "true"

For Google S3 you need to use fine-grained access control with public access set to “Subject to object ACLs”; this is needed for interoperability with the AWS SDK, otherwise files in the S3 bucket won’t be fully accessible to Allure TestOps and the report service will fail to start.

If you have any doubts, please consult our Support

Preparation for the production deployment

Before putting your system to the production, please refer to the recommendations here.

If you have any doubts, please consult our Support before running into trouble.

Back to installation