S3-compatible storage
Allure TestOps relies on an object storage service for storing test results artifacts (see Architecture).
Requirements and recommendations for S3-compatible storage
Production environment
Do not store artifacts and the Allure TestOps database on the same disk. Separating them prevents input/output resource contention and ensures stable performance for both the storage system and the database.
If you use Kubernetes, connect the storage using a CSI driver.
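If you run the storage in Kubernetes, the volume is typically requested through a StorageClass backed by the CSI driver available on your platform. The following is a minimal sketch assuming the AWS EBS CSI driver and an SSD-backed gp3 volume type; the class name is a hypothetical placeholder, and the provisioner and parameters depend on your cluster.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: allure-artifacts-ssd    # hypothetical name, pick your own
provisioner: ebs.csi.aws.com    # CSI driver installed in your cluster
parameters:
  type: gp3                     # SSD-backed volume type
reclaimPolicy: Retain           # keep the data if the claim is deleted
volumeBindingMode: WaitForFirstConsumer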
Hardware
Use only SSDs for storing artifacts in S3-compatible storage, preferably enterprise-grade. HDDs will degrade Allure TestOps performance as the number of stored artifacts grows.
Storage class
Use a standard storage class optimized for frequent data access. "Cold" storage classes (such as Reduced Redundancy Storage or Cold Tier) can significantly slow down access to artifacts and negatively affect overall system performance.
S3-compatible storage solutions
The most reliable and well-performing object storage for use with Allure TestOps is Amazon S3, which is recommended for large deployments with high workloads. Object storage can also be implemented using MinIO, Google Cloud Storage, or any other S3-compatible storage solution.
Amazon S3
To use Amazon S3 with Allure TestOps, create the following JSON policy in the AWS Console:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3assets",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<ALLURE_S3_BUCKET_NAME>/*",
                "arn:aws:s3:::<ALLURE_S3_BUCKET_NAME>"
            ]
        }
    ]
}
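If you prefer to script the setup instead of using the console, the same bucket and policy can be created with the AWS CLI. The sketch below assumes the policy above is saved locally as allure-s3-policy.json and that a dedicated IAM user (here called allure-testops) holds the access keys used by Allure TestOps; all names are placeholders.

# Create the bucket for artifacts (bucket name and region are placeholders)
aws s3api create-bucket --bucket <ALLURE_S3_BUCKET_NAME> --region us-east-1

# Register the policy shown above from a local file
aws iam create-policy --policy-name AllureTestOpsS3 --policy-document file://allure-s3-policy.json

# Attach the policy to the IAM user whose access keys Allure TestOps will use
aws iam attach-user-policy --user-name allure-testops --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AllureTestOpsS3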
Google Cloud Storage
To use Google Cloud Storage with Allure TestOps, you need to create a bucket with fine-grained access control and use ACLs to manage permissions.
As the service URL, specify https://storage.googleapis.com in your Allure TestOps configuration file.
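As a sketch, the bucket and credentials can be prepared with the gsutil CLI; the bucket name, project ID, and service account below are placeholders, and the HMAC key pair produced in the last step is what you supply to Allure TestOps as the S3 access and secret keys.

# Create a bucket with fine-grained (per-object ACL) access control
gsutil mb -b off gs://<ALLURE_GCS_BUCKET_NAME>

# Grant the Allure TestOps service account write access through an ACL
gsutil acl ch -u allure-testops@<PROJECT_ID>.iam.gserviceaccount.com:WRITE gs://<ALLURE_GCS_BUCKET_NAME>

# Create HMAC credentials for the S3-compatible (interoperability) API
gsutil hmac create allure-testops@<PROJECT_ID>.iam.gserviceaccount.com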
Migrating data to MinIO
If you have used the Docker Compose demo deployment (testops-demo) and are now switching to a production environment, follow the instructions below to migrate your data to a dedicated MinIO storage. You may also want to do this if you have been working with a Docker Compose deployment for an extended period and the artifacts are stored in Docker volumes as file system data.
Copying the files directly might result in incorrect access permissions and inaccurate MinIO metadata for the artifacts. It is strongly recommended to use the MinIO Client (mc) to perform the bulk migration.
Make sure that the target storage service is running and accessible from your machine.
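For example, if the target is a MinIO server, a quick way to confirm it is reachable is its liveness endpoint (host and port are placeholders); an HTTP 200 response means the server is up:

curl -I http://<MINIO_HOST>:9000/minio/health/live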
In your .env file, add the following parameters for connecting to the storage service:
TESTOPS_S3_URL_NEW — server URL.
TESTOPS_S3_BUCKET_NEW — S3 bucket name.
TESTOPS_S3_ACCESS_KEY_NEW — access key for connecting to the bucket.
TESTOPS_S3_SECRET_KEY_NEW — secret key for connecting to the bucket.
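For reference, the new entries in .env could look like the following; the URL, bucket name, and credentials are placeholder values for a hypothetical target MinIO server:

TESTOPS_S3_URL_NEW=http://minio-new:9000
TESTOPS_S3_BUCKET_NEW=allure-testops
TESTOPS_S3_ACCESS_KEY_NEW=<ACCESS_KEY>
TESTOPS_S3_SECRET_KEY_NEW=<SECRET_KEY>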
In your docker-compose.yml file, add the following service configuration:
services:
  minio-migrate:
    restart: "no"
    image: minio/mc
    container_name: minio-migrate
    depends_on:
      - minio-local
    networks:
      - testops-net
    entrypoint: "/bin/sh -c"
    command: >
      "mc config host add minio-old ${DEMO_INSTANCE_S3_URL} ${DEMO_INSTANCE_S3_ACCESS_KEY} ${DEMO_INSTANCE_S3_SECRET_KEY} --api S3v4
      && mc config host add s3-new ${TESTOPS_S3_URL_NEW} ${TESTOPS_S3_ACCESS_KEY_NEW} ${TESTOPS_S3_SECRET_KEY_NEW} --api S3v4
      && mc cp -r minio-old/${DEMO_INSTANCE_S3_BUCKET}/v2 s3-new/${TESTOPS_S3_BUCKET_NEW}/"
  # ...
Navigate to the directory where your docker-compose.yml and .env files are located, then run the following command:
docker compose run minio-migrate
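After the command finishes, you may want to verify that the artifacts were copied. One way to do this, assuming the MinIO Client (mc) is installed on your machine, is to point an alias at the target storage and list the copied v2 prefix; the values are the same placeholders as in .env:

mc alias set s3-new <TESTOPS_S3_URL_NEW> <TESTOPS_S3_ACCESS_KEY_NEW> <TESTOPS_S3_SECRET_KEY_NEW>
mc ls s3-new/<TESTOPS_S3_BUCKET_NEW>/v2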