You are on the Xray Cloud documentation. If you are looking for the Xray Server documentation, you can find it on this page.

Learn more

Please read Understanding the calculation of coverage status and the status of Tests for an in-depth explanation of the calculation of these statuses.


Analysis Scopes

Tests and coverable issues can be analyzed from different perspectives/scopes.

The same Test or Story can be analyzed, for example, on version 1.0 and also on version 2.0. This will take into account the respective executions made for those versions.

Therefore, a Story may be OK on version 1.0 but NOK on version 2.0 due to a regression.

Tests and coverable issues can be analyzed from these scopes (or dimensions):

  • Latest
  • Version
  • Test Plan

They can also be analyzed with some additional criteria that affect the calculated values, such as:

  • Test Environment
  • Final statuses precedence over non-final ones

Latest

If you don't care about versions, or you're not using versions at all, and just want to see the calculated statuses based on the latest runs, the "Latest" scope can be used for that purpose.

This is useful to have a quick view of the latest results or calculated status for the Test or coverable issue.

Note that when analyzing by latest results, it also considers the latest results made in the different environments (i.e., when "All Environments" is selected as Test Environment).
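As a rough illustration of "latest per environment" (a simplified sketch, not Xray's actual algorithm; the data model and field names are assumptions for the example), picking the most recent run in each Test Environment could look like this:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified model of a Test Run; these field names are
# illustrative only and not Xray's actual data model.
@dataclass
class TestRun:
    status: str        # e.g. "PASSED", "FAILED", "TODO"
    environment: str   # "" means the run has no Test Environment set
    finished_on: date

def latest_run_per_environment(runs):
    """For the "Latest" scope with "All Environments" selected, keep
    only the most recent run in each Test Environment."""
    latest = {}
    for run in runs:
        current = latest.get(run.environment)
        if current is None or run.finished_on > current.finished_on:
            latest[run.environment] = run
    return list(latest.values())

runs = [
    TestRun("FAILED", "android", date(2023, 1, 10)),
    TestRun("PASSED", "android", date(2023, 1, 20)),  # supersedes the failure
    TestRun("PASSED", "ios", date(2023, 1, 15)),
]
print(sorted(r.status for r in latest_run_per_environment(runs)))
# → ['PASSED', 'PASSED']
```

Note how the older failed run on "android" is superseded by the newer passed run, while the "ios" result is kept separately.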

Version

The "Version" scope allows users to analyze Tests and coverable issues from a version perspective. It helps address questions such as:

  • "How is the requirement on version X?"
  • "How are these Tests performing on version Y?"

When analyzing by version, only the Test Executions made for the given version are considered (i.e., the Test Execution's Fix Version field).

As an example, a user story targeted at version 3.0 may be analyzed from the point of view of the executions made for version 3.0 or of those made afterwards for version 4.0.
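A minimal sketch of version-scoped filtering, assuming a simplified dictionary representation of Test Executions (the keys are illustrative, not Xray's schema):

```python
def executions_for_version(executions, version):
    """Keep only the Test Executions whose Fix Version matches the
    version being analyzed (simplified illustration)."""
    return [e for e in executions if e["fix_version"] == version]

executions = [
    {"key": "TE-1", "fix_version": "3.0", "status": "FAILED"},
    {"key": "TE-2", "fix_version": "4.0", "status": "PASSED"},
]

# The same story can look NOK on 3.0 but OK on 4.0:
print([e["status"] for e in executions_for_version(executions, "3.0")])  # ['FAILED']
print([e["status"] for e in executions_for_version(executions, "4.0")])  # ['PASSED']
```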

Test Plan

This gives you the ability to evaluate the Test status or coverage status based on some planned testing (i.e., on the Tests and related executions made in the scope of the selected Test Plan).

When analyzing by Test Plan, only the Test Executions linked to that Test Plan are considered.

This allows you to evaluate whether a given coverable issue is covered by the Tests of some Test Plan. If so, you can see how it is going based on the executions performed from the related planned Test Executions.
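The Test Plan scope can be sketched the same way, assuming each Test Execution carries a list of the Test Plans it is linked to (an assumption for this example, not Xray's schema):

```python
def executions_for_test_plan(executions, plan_key):
    """Keep only the Test Executions linked to the given Test Plan
    (simplified illustration; the link field is an assumption)."""
    return [e for e in executions if plan_key in e["test_plans"]]

executions = [
    {"key": "TE-1", "test_plans": ["TP-1"], "status": "PASSED"},
    {"key": "TE-2", "test_plans": [], "status": "FAILED"},  # ad-hoc run, outside the plan
]

# Only the planned execution counts toward the Test Plan scope:
print([e["key"] for e in executions_for_test_plan(executions, "TP-1")])  # ['TE-1']
```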

Test Environment

Analysis by Test Environment gives the ability to analyze the Test status or the coverage status of an issue for some Test Environment.

It addresses such questions as "How is this requirement doing on environment X?" 

When analyzing by Test Environment, only Test Executions made for the given Test Environment are considered.
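Filtering by Test Environment follows the same pattern; here each execution is assumed to carry a list of environments (again an illustrative representation, not Xray's schema):

```python
def executions_for_environment(executions, environment):
    """Keep only the Test Executions made for the given Test
    Environment (simplified illustration)."""
    return [e for e in executions if environment in e["environments"]]

executions = [
    {"key": "TE-1", "environments": ["android"], "status": "PASSED"},
    {"key": "TE-2", "environments": ["ios"], "status": "FAILED"},
]

# "How is this requirement doing on android?" — only TE-1 is considered:
print([e["key"] for e in executions_for_environment(executions, "android")])  # ['TE-1']
```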

Final Statuses Precedence

Final statuses precedence is used to perform the analysis based on "finished work" (non-intermediate Test Runs).

The flag "Final statuses have precedence over non-final statuses" gives the additional ability to consider just those Test Runs whose status is configured as being a final status.

This helps address questions such as:

  • "What is the current status of this requirement (or test)?"  (if final statuses precedence is unchecked)
  • "What is the status of this requirement (or test), considering just the finished work?"  (if final statuses precedence is checked)
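The effect of the flag can be sketched as below. The set of final statuses and the fallback behavior (considering all runs when no final one exists) are assumptions for this example; in Xray, which statuses are final is configurable:

```python
# Assumption: hard-coded here for the example; in Xray, final statuses
# are part of the status configuration.
FINAL_STATUSES = {"PASSED", "FAILED"}

def statuses_to_consider(statuses, final_precedence):
    """If the flag is checked and at least one final status exists,
    consider only the final ones; otherwise consider everything.
    (Simplified illustration, not Xray's exact rules.)"""
    if final_precedence:
        final = [s for s in statuses if s in FINAL_STATUSES]
        if final:
            return final
    return statuses

runs = ["PASSED", "EXECUTING"]
print(statuses_to_consider(runs, final_precedence=False))  # ['PASSED', 'EXECUTING']
print(statuses_to_consider(runs, final_precedence=True))   # ['PASSED']
```

With the flag checked, the intermediate "EXECUTING" run is ignored and only the finished work drives the calculated status.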

Analyzing Xray entities

Xray entities can be analyzed in the issue view screen as well as in specific reports containing those entities. 

Coverable issues

The status of coverable issues can be evaluated directly in the issue view screen as well as on some reports, including the Tests Coverage report.

Issue screen

Within the issue screen, the coverage status can be evaluated for the specified scope within the Test Coverage section. The calculated coverage status is shown on the right side. 

Reports

Test Coverage

Coverable issues can be analyzed for a given scope using the Test Coverage report. More info about this report can be found in Test Coverage Report. The chosen scope will be stored in the user preferences and will affect all issues in the same project.

Tests

The status of Tests can be evaluated directly in the Test issue view screen as well as on some reports, including the Tests List report and, indirectly, the Test Sets List report.

Tests can also be evaluated in the coverable issue screen, within the "Test Coverage" section.

Issue screen

Within the Test issue screen, the Test status can be evaluated for the specified scope. The calculated status is cached for performance reasons. Since data between Jira and Xray may not be in sync, it's possible to enforce a recalculation to display an up-to-date value. The chosen scope will be stored in the user preferences and will affect all issues in the same project.

Reports

Tests List

The Tests can be analyzed for a given scope using the Tests List report. 

Test Sets

The status of Test Sets (i.e., of the Tests contained within a Test Set) can be analyzed using the Test Sets List report.

Reports

Test Sets List

The Test Sets, and implicitly the Tests within them, can be analyzed for a given scope using the Test Sets List report.

Troubleshooting


"I have an Xray Project to Test my Requirements project. I've created a Test and I have already executed the Test, but I always get Requirements status Covered but Not Run."


You might be using Test Environments on Test Executions, which affect the aggregated status of the Tests. If you have two Test Executions, one with a specific environment (e.g., Android) and another without any environment, then Xray will also consider the empty environment when calculating the aggregated status (that is, of course, if all Test Executions are within the same version or Test Plan).
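To see why the empty environment matters, here is a simplified worst-status aggregation; the precedence order used is an assumption for the example, not Xray's exact rules:

```python
def aggregate_status(statuses):
    """Return the 'worst' status present, using a simplified
    precedence order (an assumption, not Xray's exact rules)."""
    for status in ("FAILED", "TODO", "PASSED"):
        if status in statuses:
            return status
    return "TODO"

# One run on "android" passed, but the execution with no environment
# ("") was never run, so the aggregate still looks not run:
latest_by_environment = {"android": "PASSED", "": "TODO"}
print(aggregate_status(latest_by_environment.values()))  # TODO
```

Even though the Android run passed, the never-executed run in the empty environment keeps the aggregated status at "TODO", which is why the requirement shows as Covered but Not Run.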

You probably have not configured your versions properly. Remember that the coverage status of an issue for a given version is calculated based on the Fix Version associated with the Test Execution issues.



