Test results
In a launch, a test result represents one attempt of executing a test. It is created either manually or automatically by uploading a test result file. In the most basic situation, you can think of a test result as a simple record containing the time and date of an attempt and its status (e.g., Passed or Failed).
For a manual test with a scenario, a QA engineer sets individual statuses for all test steps, optionally including attachments. For the test result as a whole, they can also provide issue links, tags, and custom fields.
An automated test result may include such data, too. Check out the Allure Report documentation to learn more about the steps, attachments, and metadata that can be provided when running a test.
Each test result can also describe its environment — for example, the operating system and browser the QA engineer used. Depending on the configuration and how the launch was created, the environments may or may not be identical for all the test results in a launch. To learn more about this feature, see Environment.
When you close a launch, Allure TestOps processes all test results the launch contains. These test results are then used to create or update the relevant test cases and analytical data in the project.
Test statuses
Each completed test result has a test status. These statuses are reflected in various statistics and graphs throughout the Allure TestOps interface, and they can also be used for searching and filtering test cases.
When running a test manually, the newly created (incomplete) test result initially gets a temporary In progress pseudo-status, which is then changed to another status by a QA engineer. For automated test results, the status is loaded from a test result file (see also the Test statuses article in the Allure Report documentation).
There are five test statuses in Allure TestOps:
- Passed (green) — test finished successfully.
- Failed (red) — unexpected behavior was encountered during the test run. This means that the test itself seems to be valid (not Broken), but the test run ended with an invalid behavior (e.g., false assertion).
- Skipped (gray) — test is part of a test plan but was skipped for some reason.
- Broken (orange) — test run was interrupted because of a test defect. Unlike Failed, this status means that the test was unable to check the product behavior as intended; therefore, it may or may not indicate an actual problem with the product.
- Unknown (violet) — test status was not explicitly reported. In a manual test, this means that the launch was closed while the test was still In progress. In an automated test, this is most likely due to a bug in the Allure Report adapter that was used.
In manual test case scenarios, you can set individual statuses for each step and sub-step. These statuses do not directly affect the overall status of a test and may be used for different purposes by QA engineers.
Sorting and filtering test results
When viewing a launch, you can sort and filter the list of test results in the same way as the list of test cases.
To sort a list of test results:
- Click Display.
- From the Sort by menu, select the field by which you want to sort the test results.
- From the Direction menu, select the direction of sorting.
To filter a list of test results, click the search box above the list and select the field and value required.
For example, to show only automated tests in the list:
- Click the search box.
- Select Type in the drop-down list.
- Select Automated.
You can find more information on working with filters in the Test cases article.
Bulk actions
If a launch consists of a large number of tests, you can use the bulk actions menu to modify several test results at once. To do this:
- Select one or more test results by holding Ctrl while clicking them or by selecting the checkboxes next to them.
- Click Status in the bottom panel to change the status, or click ⋯ to select a different action.
Using the bulk actions menu you can:
- change test result statuses;
- assign team members;
- rerun tests;
- link tests to defects;
- mute test cases;
- add and remove tags;
- export test results to CSV;
- remove test results from a launch.
If a launch is closed, only the Export CSV action will be available in the menu.
Supported file formats
Allure TestOps supports several file formats for importing results of automated tests. You can upload these files via the web interface or using allurectl.
- Allure Report 2.x format — `*-result.json`, `*-container.json`, `*-attachment.*`;
- Allure Report 1.x format — `*.xml` (supported for compatibility with older adapters and integrations);
- Cucumber reporting format — `*.json`;
- JGiven reporting format — `*.json`;
- JUnit XML reporting format — `*.xml`;
- Visual Studio reporting format — `*.trx` (obsolete, not recommended to use);
- XCTest reporting format — `*.xcresult` (only files from Xcode 11 and higher are supported);
- xUnit.net reporting format — `*.xml`.
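Note that several of these formats share the same extension (`*.json`, `*.xml`), so a file's extension alone does not always identify its format. The following sketch is purely illustrative (it is not Allure TestOps code, and the helper name is hypothetical); it shows that only some formats can be recognized from the filename alone:

```python
import fnmatch

# Hypothetical mapping: only these formats have unambiguous filename patterns.
# Cucumber/JGiven (*.json) and JUnit/xUnit.net (*.xml) cannot be told apart by name.
UNAMBIGUOUS_PATTERNS = {
    "Allure 2.x": ["*-result.json", "*-container.json", "*-attachment.*"],
    "Visual Studio (TRX)": ["*.trx"],
    "XCTest": ["*.xcresult"],
}

def guess_format(filename):
    """Return the format name if the filename is unambiguous, else None."""
    for fmt, patterns in UNAMBIGUOUS_PATTERNS.items():
        if any(fnmatch.fnmatch(filename, pattern) for pattern in patterns):
            return fmt
    return None
```

In practice, ambiguous formats have to be distinguished by inspecting the file contents, which is why the import interface asks you to upload results rather than guessing from names alone.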
Matching test results to test cases
When you upload test results to Allure TestOps, for each uploaded result Allure TestOps does one of the following:
- Creates a new test case if this is the first uploaded result for that test case.
- Updates an existing test case if you have already uploaded results for it before.
- Marks the uploaded result as orphaned if it contains invalid data (see below for details).
After the initial upload, all subsequent uploads need to be matched with the created test case. To do this, Allure adapters generate a unique ID for each test case, separate from the ID you see in the Allure TestOps interface. This ID is a hash of the data provided by the testing framework.
If the testing framework does not provide a ready-made test case identifier, the adapter uses the signature of a test function combined with additional data, such as the package name, to reduce the chance of collisions.
This process can be represented as follows:
```
testCaseId = md5(fullName + sort(params))
```

where `fullName` is the name of the function with some extra information (e.g., `my_package#my_function`), and `params` are the function parameters.
Then, after the upload, Allure TestOps will try to find an existing test case with that ID and update it with the new data. If no such test case exists, it will create a new one.
If you don't use an Allure adapter and your test results don't have this field, Allure TestOps will generate this ID automatically when you upload the test results. It won't necessarily be the same as the value generated by an adapter.
A potential issue with identifiers generated this way is their sensitivity to code changes. For example, if a function's signature was used to create the ID and you later modify the function name or its parameters, the identifier will change. This results in Allure TestOps creating a new test case instead of updating the existing one.
To avoid this, after the first test result upload, you can modify your code to explicitly specify the test case ID from Allure TestOps:
You cannot specify an ID that does not yet exist in Allure TestOps. If you do that, the uploaded test results will be orphaned and no test cases will be created or updated.
Java:

```java
import io.qameta.allure.AllureId;
import org.junit.jupiter.api.Test;
import static io.qameta.allure.Allure.step;

@Test
@AllureId("123")
public void myTest() {
    step("Step 1", () -> {});
}
```

JavaScript:

```javascript
it("My Test @allure.id:123", async () => {
    await allure.step("Step 1", async () => {});
});
```

Python:

```python
import allure

@allure.id("123")
def my_test():
    with allure.step("Step 1"):
        assert 1 == 1
```
After you run the test, the adapter will create a test result file with the following fields:
"testCaseId": "db57b9d129d73c4b4d9a1d1023774f53",
"labels": [
{
"name": "as_id",
"value": "123"
}
]
Here, `testCaseId` is a test case identifier generated by the adapter (a hash of data provided by the testing framework). Since the example above also contains an explicitly specified test case ID from Allure TestOps, the `as_id` label (or `allure_id` in some adapters) will be used for matching instead of `testCaseId`.
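The resulting matching precedence can be sketched as follows. This is an illustrative simplification, not Allure TestOps' actual implementation:

```python
def matching_key(result):
    """Pick the identifier used to match an uploaded result to a test case:
    an explicit Allure ID label takes precedence over the generated hash."""
    for label in result.get("labels", []):
        if label.get("name") in ("as_id", "allure_id"):
            return ("allure_id", label["value"])
    return ("testCaseId", result.get("testCaseId"))
```

For the test result file shown above, this sketch would select the explicit ID `"123"` rather than the `testCaseId` hash.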