Troubleshooting
Integration errors
Connection problems
Test connection does not pass successfully
Why it could happen
- Credentials you provided aren't correct.
- No connection to the integrated system.
- API was updated on the integration side.
How to fix
Credentials you provided aren't correct
- Check that you have copied or entered the correct password or token required by the integration.
- Tokens and passwords must not have leading or trailing spaces; spaces are treated as characters, so adding them makes your password or token invalid for authentication.
- Check that the username string is what the integration expects.
- Carefully read the integration description: it states exactly what must be used as the username. That could be a username or an email address, and they are never interchangeable.
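If you suspect stray whitespace, here is a minimal shell sketch for inspecting a value before pasting it into the integration settings; the TOKEN variable and its value are placeholders for illustration only.
# Prints the value between brackets, so leading or trailing spaces become visible
TOKEN='  my-secret-token '
printf '[%s]\n' "$TOKEN"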
No connection to the integrated system
Fixing this requires involving support and your DevOps team or the engineers responsible for your infrastructure.
This could result in the message Can't establish connection in the UI.
To troubleshoot the issue, do the following:
- Collect the logs from the testops service.
- Check the logs for errors like java.net.UnknownHostException.
If no such messages are available in the logs, enable more detailed logging for integrations as described below.
Enable debug logging for the integrations:
- Add a new environment variable LOGGING_LEVEL_IO_QAMETA_ALLURE_TESTOPS_INTEGRATION with the value debug.
- Restart the deployment (the service must receive the new parameters).
- Retry the operation that failed.
- Collect the logs from the testops service.
- Check the logs for errors such as 400 bad request, HTTP FAILED, java.net.UnknownHostException (see the example below this list).
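As an illustration, here is a minimal sketch of checking the variable and searching the logs, assuming a Docker Compose deployment with a service named testops; adjust the commands to your setup (for Kubernetes, use kubectl logs on the corresponding pod).
# Confirm the debug variable reached the container
docker compose exec testops env | grep LOGGING_LEVEL_IO_QAMETA_ALLURE_TESTOPS_INTEGRATION
# Search the service logs for typical integration errors
docker compose logs testops | grep -iE "UnknownHostException|HTTP FAILED|400"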
The errors above usually have a more detailed description on the lines right above and below the error line. java.net.UnknownHostException generally means Allure TestOps has no connection to the system you specified in the integration settings.
If UnknownHostException occurs, either check that you provided the correct URL for the integration, or ask your network team for assistance in troubleshooting the issue.
If you are struggling to make sense of the log files, create a support request at our help portal, using your corporate email address during registration.
API was updated on the integration side
This can be fixed only by involving your DevOps team or the engineers responsible for your infrastructure together with Allure TestOps tech support; it might also require additional development and could take considerable time.
- Perform all the actions described above in No connection to the integrated system.
- Debug logging for the integrations needs to be enabled.
- If the logs do not contain symptoms of connection trouble, create a support request at our help portal, using your corporate email address during registration.
Test results
Conventions
To keep phrases shorter and easier to read, we'll use Allure TestOps agent, or simply agent, to refer to an Allure TestOps plug-in and/or the allurectl command-line tool.
Test results are marked as orphaned
Reason
The test results in an Allure TestOps launch will be marked as Orphaned in the following cases.
An invalid AS_ID (ALLURE_ID) is received in the results
- The Allure ID does not yet exist in your instance of Allure TestOps; this attribute is managed by Allure TestOps, and you cannot use random IDs.
- There is no valid ALLURE_ID, and there is no full path (test case selector) in the test results. The full path (selector) is used to unmistakably match a test result to a test case. If you are using a third-party integration, it could disregard this parameter; that will still work with Allure Report, but it's crucial for working with Allure TestOps.
Possible solution
No full path data
To import results that contain no full path data, valid AS_ID (ALLURE_ID) attributes must be provided via code so that they are available in the test results.
Also check that you are using the most recent dependencies in your project, then verify the generation of the results again.
For the dependencies, check the official repositories of the Allure Report project on GitHub: https://github.com/allure-framework
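For instance, assuming a Maven project, a hedged way to spot outdated Allure dependencies is the Maven versions plugin; the command below only reports versions and changes nothing.
# List dependencies that have newer versions available and filter for Allure artifacts
mvn versions:display-dependency-updates | grep -i allure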
Incorrect ALLURE_ID (AS_ID) and no full path data
You can only assign an existing Allure ID to a test via the code. If your test framework or programming language isn't supported via the JetBrains IDE plug-ins, create a manual test case, copy its ID, and provide that ID in the code; upon arrival, the manual test case will be replaced by the data from the automated test result.
Missing results - Empty launch - Wrong or empty test results source directory
Symptom: your launch is completely empty. You need to check that you are referring to the correct test results directory.
This is checked in the settings of the Allure TestOps agent used to upload the test results.
You can also execute the ls -a command during the build for the path used in the Allure TestOps agent settings to see whether there are any test results in that directory. If there aren't any, you need to check for the right paths; see the example below.
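For example, a quick check added as a build step could look like this; the build/allure-results path is only an assumed example, so substitute the directory configured for your agent.
# Hypothetical path: replace it with the directory configured in your Allure TestOps agent settings
ls -a ./build/allure-results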
How to fix
The only way to fix this is to point to the right directory with test results.
Check the path where your tests actually save the results, then update the Allure TestOps agent settings.
Missing results - Empty launch - Files exist on CI, paths are correct but files aren't uploaded
If you see the following log entries in a plug-in's log or in the logs of allurectl:
Launch [LL], job run [rr], session [ss]:files ignored [0], indexed [XX], processed [0], errors [XX]
or
[Allure] [xxx] Session [xx] total indexed [N], ignored [0], processed [0], errors [N]
These records (indexed is greater than zero, processed is zero, and errors is greater than zero) mean that the files were already in the allure-results directory before the Allure TestOps agent started, and no files were added after it started. This happens, for example, when you run your tests, collect the results, copy them to a directory, and only then start the Allure TestOps agent.
How to fix
If your test results are copied from somewhere before the Allure TestOps agent starts and no updates happen afterwards, you need to index and upload the existing files.
For a plug-in (e.g. for Jenkins or TeamCity), add the parameter indexExistingFiles: true to index the existing files, as in the Jenkins pipeline example below.
withAllureUpload(indexExistingFiles: true, name: '${SOME_JOB_NAME}', projectId: '', serverId: '', tags: '') {
// some block
}
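If you upload with allurectl instead of a CI plug-in, a one-shot upload of the already collected files is a simpler option. The sketch below is hedged: the environment variable names are allurectl's commonly documented ones and the values are placeholders, so check allurectl upload --help for the exact flags in your version.
# Assumed allurectl environment variables; replace the placeholder values with your own
export ALLURE_ENDPOINT=https://allure.example.com
export ALLURE_TOKEN=<your API token>
export ALLURE_PROJECT_ID=1
# Upload the results that already exist in the directory
allurectl upload ./build/allure-results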
Missing results - Launch is not empty - Some results are missing - Muted tests
If you or someone from your team marks a test as muted, the results from these tests won't be shown in the Launches.
Muted tests consequences
- A muted test won't appear on the launch's main page.
- Nevertheless, you will see the results of a muted test if you go to the Tree section of a particular launch containing it.
- You will see the Unmute button in the test result description panel (at the bottom).
- A muted test won't participate in analytics and statistics calculations.
- A muted test will be marked as resolved even if no defect is linked to it.
How to check - Dashboards - Mute Trend
Go to your project's dashboard and scroll down to the last widget, called Mute Trend. If there are any bars in the view for the current date, then you have muted tests, and this could be part of the reason why you see different numbers.
How to check - Test cases - Filter
- Go to the Test cases section of your project.
- Activate the filters panel.
- Create a filter and set Mute = True.
- In the test cases list you will see the muted test cases.
- Each muted test case's description will contain information about the person who muted the test and the date when that was done.
How to check - Test results - Test description panel
In the test result description panel for a test result in an open launch, you will see the active Unmute button.
How to fix
To be honest, you don't need to fix this: if a test is muted, it was done intentionally by someone from your team, and muting is one of the ways to resolve failures in your tests.
When the problem with a failing test is fixed, the mute can be removed. Otherwise, you can always ask the person who muted the test why they did it.
Missing results - Size of a result file
There is a 2,000,000-byte limit for a test result file (*-result.json).
If the size of a result file exceeds 2,000,000 bytes, the file won't be processed as a test result but will be treated as an attachment by Allure TestOps. This is done intentionally to avoid failures due to insufficient resources when processing a big number of files.
How to check
Check your allure-results folder to see if there are any *-result.json files larger than 2,000,000 bytes; see the example below.
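For instance, assuming the results are in a local allure-results directory, the following lists oversized result files.
# List *-result.json files larger than 2,000,000 bytes (the limit mentioned above)
find allure-results -name "*-result.json" -size +2000000c -exec ls -lh {} \;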
How to fix
Consider moving some of the information you're adding directly into test results to attachments instead. Allure TestOps, with the Allure framework under the hood, allows adding textual information as strings or CSV tables. This will reduce the load on the system and, in most cases, will present the information better than simply adding it to a test case's scenario.
Missing results - Errors during the upload
Actually, this one should be item #1 in the list, but it is what it is.
Tests could have missing results due to the settings of network equipment (routers, firewalls) and network software (a reverse proxy such as nginx, etc.).
How to check
You need your network administrator, DevOps, or any other person responsible for the network, reverse proxy, and firewall configuration.
You need to check the following
- settings for the reverse proxy (if any) described here
- timeouts
- data transfer limitations
- network timeouts on routers/firewalls similar to reverse proxy timeouts
- network settings on routers/firewalls similar to reverse proxy limits for data transfer size
- blocking rules (black lists, white lists etc.)
How to fix
Adjust the limits for timeouts and data transfer size to the recommended values.
Missing test results - Retries
Symptoms: there are several or many *-result.json files for several tests, but in the test results of a launch there is just one test result instead.
That could happen if...
- your tests are from different projects (code-wise) but have the same fully qualified name for some reason.
- your test is parameterized, but you don't provide the parameters in the test results.
How to check
- Go to the test results of a launch: Launches => specific launch => Tree tab.
- Select your test case.
- In the test case's panel, go to the Retries tab.
- If your test result has retries although there was only one test run, you likely need to update your code.
How to fix
If a test result has retries and this is not a parameterized test, you need to ensure that the full paths of your tests differ. There are several ways to do that:
- Explicitly assign an Allure ID to both of your tests in your code.
- Make sure the test methods are named differently.
- It's better to apply both of the above.
If a test result has retries and this is a parameterized test, then you need to explicitly provide the parameters in the test results, as some test frameworks do not do that by default.
Here is an example for Java:
<snip>
private static final String OWNER = "allure-framework";
private static final String REPO = "allure2";
<snip>

@Story("Create new issue")
@ParameterizedTest(name = "Create issue via api")
@ValueSource(strings = {"First Note", "Second Note"})
public void shouldCreateUserNote(String title) {
    parameter("owner", OWNER);
    parameter("repo", REPO);
    parameter("title", title);
    steps.createIssueWithTitle(OWNER, REPO, title);
    steps.shouldSeeIssueWithTitle(OWNER, REPO, title);
}
For the example above, the information about the parameters owner, repo and title will land in the test result file in the parameters section:
"parameters": [
{
"name": "owner",
"value": "allure-framework"
},
{
"name": "repo",
"value": "allure2"
},
{
"name": "title",
"value": "First Note"
}
],
These parameters will be processed by Allure TestOps, and for the same test case you will see several test results; the corresponding test case will then be created with these parameters.
Cropped values in the test result fields
When working with automated tests in Allure TestOps, note that any field of an automated test can contain a maximum of 255 characters; anything beyond that will be cut off when the test result is loaded into Allure TestOps.
Automated tests are stuck on "In progress" status when run or rerun
This can happen if the received test result parameters or environment variables differ from the expected values.
For example, suppose you have set two environment variables when triggering a test run, say OS=iOS and VER=123, but the value of one of them changed during the run (say, it's assigned dynamically and always gets a new value). Allure TestOps will create a new test result upon receiving the result file and will still wait for a result with the correct values (which will never arrive).
The same will happen if Allure TestOps receives a test result with a different parameter value.
If you are using test parameters or environment variables to store dynamic values, consider using attachments for this purpose instead. Alternatively, some integration modules allow you to mark parameters as excluded, so that changing their values does not lead to creating a new test result instead of the one expected.
Test results attachments
Attachments are missing in Allure TestOps but are available in Allure Report
I can see attachments in Allure Report but I cannot see them in Allure TestOps
How to check: attachments in folders
- Go to the source of your test result (CI pipeline, local folder, IDE project, etc.).
- For the sake of this example, we assume the results are stored in the allure-results folder.
- Check that all attachments are located in the root of the allure-results folder. If your allure-results folder contains subfolders with attachments, then it is expected that you don't see them in Allure TestOps; see the check below.
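A quick way to spot attachments hidden in subfolders, assuming the results are in a local allure-results directory:
# Any output here means files sit in subfolders and won't be linked to test results
find allure-results -mindepth 2 -type f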
How to fix: attachments in folders
Configure your project so that all files are stored in the allure-results folder without subfolders.
Allure Report renders the attachments as files on a file system. Allure TestOps, however, has no information about folders, and the path to an attachment file will be considered part of the file name. Consequently, the attachment won't be linked to the test results and will be deleted by Allure TestOps as not linked to any test results.
How to check: allurectl configuration
Check the allurectl settings on the CI side. If the option that skips attachments of passed tests is enabled, allurectl will not upload attachments for successfully passed tests.
How to fix: allurectl configuration
Use the test results from the CI pipeline, or temporarily enable the upload of attachments for successfully passed tests.
How to check: cleanup rules
Allure TestOps has an internal routine that deletes old artifacts to free up storage. This could be the reason you cannot see attachments for old launches or test results.
How to fix: cleanup rules
You can use attachments for newer launches or gather them from the CI artifacts if the attachments have already been deleted from the Allure TestOps storage.
Services do not start
Allure TestOps service does not start (database lock)
If you find one of the following messages in the log
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/springframework/boot/autoconfigure/liquibase/LiquibaseAutoConfiguration$LiquibaseConfiguration.class]:
Invocation of init method failed;
nested exception is liquibase.exception.LockException:
Could not acquire change log lock.
liquibase.lockservice: Waiting for changelog lock...
liquibase.lockservice: Waiting for changelog lock...
liquibase.lockservice: Waiting for changelog lock...
liquibase.lockservice: Waiting for changelog lock...
it means that the database is locked and the service cannot access it. This can happen if the service was abruptly shut down due to an infrastructure or network failure.
To fix this:
Stop the Allure TestOps service. Keep the PostgreSQL server running.
Docker example:
docker compose stop testops
Log in to the testops database using psql: psql -h <host> -U <user> <database>.
Docker example:
docker exec -it <container id> psql -h 127.0.0.1 -U testops testops
Run the following query to display all database locks.
SELECT * FROM databasechangeloglock;
Find and remove the lock that is preventing the services from starting.
UPDATE databasechangeloglock SET locked=FALSE, lockgranted=NULL, lockedby=NULL WHERE id=1;
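As a convenience, the same unlock can be run in one shot from the host. This is a sketch assuming the Docker setup shown above; the container id and credentials are placeholders.
# One-shot variant of the queries above
docker exec -it <container id> psql -h 127.0.0.1 -U testops -d testops -c "UPDATE databasechangeloglock SET locked=FALSE, lockgranted=NULL, lockedby=NULL WHERE id=1;"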
Start the previously stopped Allure TestOps services.
Docker example:
docker compose start testops
Allure TestOps service crashes or uses too much CPU when making a large number of API calls
If you are using an API token that you have generated in your profile to access the API, you need to switch to OAuth authentication (using JWT as Bearer token). See API Authentication for more information.
UI errors with code 409
"Request failed with status code 409" when trying to move a test case to another project
This can happen when an automated test case with the same properties already exists in the target project.
For example, imagine the following situation:
- You upload two test results to Project A resulting in two new test cases being created. Let's say they have IDs of 1 and 2.
- You move one of the created test cases (ID = 2) to Project B.
- You upload results for the same two test cases to Project A. One of the test cases already exists in Project A (ID = 1), so one new test case will be created (ID = 3).
Now, if you try to move the test case with ID 2 from Project B back to Project A, an error will occur because an identical test case (except for the ID) already exists in Project A.
Getting binary files for the deployment
Cannot download packages from qameta.jfrog.io
We have moved all our packages to dl.qameta.io.
You can find the updated installation instructions here. Your old access credentials should work, but if they don't, contact our support.
Cannot log in to Docker Desktop using Qameta credentials
This is the correct behavior. The credentials you received from the Qameta sales team should only be used with the Docker command-line tool to download the Allure TestOps images as described in this article.
User login issues
Cannot log in to Allure TestOps as Admin
Make sure that you are trying to log in using the built-in administrator account and not a third-party authentication account.
- If you are using an external IAM system (LDAP, SAML2, OpenID) as the main authentication option, then logging in with the local account needs to be done via the https://{URL}/login/system address.
If you are using Allure TestOps Server, you can find the password in the configuration files of your instance.
If you are using Allure TestOps Cloud, you can find the password in the message you received upon creating your instance.
Error 401: CSRF Token Missing
If an end user cannot log in to Allure TestOps and you see an error in the browser's DevTools that looks like "Request failed with status code 401. The expected CSRF token could not be found", try the following.
The problem is most likely linked to a "stuck session" in Redis, which is used for storing user sessions. The root cause of such a stuck session can be the different time zones the user operates in, i.e. Allure TestOps is in TZ1, the user's computer is in TZ2, and the user's VPN endpoint is in TZ3. This soup of time zones can result in a session that, from the point of view of the Allure TestOps instance, is expired from its very start or not yet active.
- Try the redis-cli FLUSHALL command:
redis-cli FLUSHALL
Use this command to clear the session data stored in Redis. Remember that this action will reset sessions for all users, and they will be logged out from the UI.
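If Redis runs in a container, the same can be done from the host. This sketch assumes a Docker Compose deployment with the Redis service named redis; adjust the service name to your setup.
# Assumes the Redis service in docker-compose is named "redis"
docker compose exec redis redis-cli FLUSHALL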
Alternatively, you can:
- Restart the Redis service to get the same result as the FLUSHALL command.
Redis data used with Allure TestOps mustn't be persistent, so a restart will delete all the data on current user sessions.
Important: Both actions will cause user sessions to be reset. Consider informing users in advance or performing them after business hours.