We provide performance testing as a service for Atlassian Data Center (DC) apps (Jira, Confluence, JSM). This testing is fully automated with Selenium and the official atlassian/dc-app-performance-toolkit.
During automated performance testing, several failure checks are in place to ensure the reliability and accuracy of your app's results. Below are the common reasons a test may fail, based on logs and validation scripts:
1. The overall execution status could not confirm success.
This typically indicates that the test terminated unexpectedly or did not finish all required steps.
🛠️ Action Required:
Review the attached logs in the results ZIP and ensure the pipeline completes all steps as expected.
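As an illustration, a check of this kind can simply verify that the expected artifacts made it into the results ZIP. This is a minimal sketch; the file names listed are assumptions and may differ between toolkit versions:

    import zipfile

    # File names are assumptions; adjust them to the artifacts your toolkit version produces.
    EXPECTED_ARTIFACTS = ["bzt.log", "results_summary.log", "results.csv"]

    def results_zip_is_complete(zip_path: str) -> bool:
        """Return True if every expected artifact appears somewhere inside the results ZIP."""
        with zipfile.ZipFile(zip_path) as archive:
            names = archive.namelist()
        return all(
            any(name.endswith(artifact) for name in names)
            for artifact in EXPECTED_ARTIFACTS
        )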
2. The test summary indicates a failure in one or more benchmark checks.
Logged as: Summary run status FAIL
🛠️ Action Required:
Investigate specific failed metrics or actions in the log and fix the root cause.
3. The system failed to extract the test outcome due to missing or malformed log output.
Logged as: Could not determine test status from logs.
🛠️ Action Required:
Ensure the test environment is correctly set up and logs are not truncated or corrupted.
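The two checks above boil down to reading a status line from the run log. A minimal sketch, assuming the status is reported on a single line containing the documented "Summary run status" wording (treating "OK" as the success wording is an assumption):

    import re
    from pathlib import Path

    # "Summary run status" is the pattern documented above; "OK" as the success value is an assumption.
    STATUS_PATTERN = re.compile(r"Summary run status\s+(OK|FAIL)")

    def determine_run_status(log_path: str) -> str:
        """Return 'OK', 'FAIL', or 'UNKNOWN' when no status line is present in the log."""
        text = Path(log_path).read_text(errors="replace")
        match = STATUS_PATTERN.search(text)
        if match is None:
            # Missing or malformed output maps to "Could not determine test status from logs."
            return "UNKNOWN"
        return match.group(1)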
4. The framework expected your custom test cases (app-specific actions) but did not detect them.
Logged as: App specific actions not found
🛠️ Action Required:
Ensure your dc-app-repo contains valid test cases and is zipped correctly, following the structure required by the dc-app-performance-toolkit (an example layout is sketched after this item).
This can also happen due to format issues or incorrect logging in the test script.
🛠️ Action Required:
Double-check the app-specific test output and verify it conforms to the expected format.
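For orientation, app-specific actions in a Jira fork of the toolkit normally live under app/extension. The layout below is illustrative only; confirm it against the dc-app-performance-toolkit documentation for your product and version (Confluence and JSM use their own subfolders and YAML files):

    dc-app-performance-toolkit/
    └── app/
        ├── jira.yml                     # toolkit config that enables your app-specific scenarios
        └── extension/
            └── jira/
                ├── extension_ui.py      # app-specific Selenium (UI) actions
                └── extension_locust.py  # app-specific HTTP (Locust) actions

The ZIP you attach should contain the complete repository, not only the extension files, so the toolkit can be run from it as-is.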
5. One or more of your app-specific test scenarios failed.
Detected pattern in log: Fail APP-SPECIFIC
🛠️ Action Required:
Review test case logic and results. Fix any failed conditions in your app or test cases before requesting a retest.
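A minimal sketch of this check, assuming each app-specific action result is written on its own log line containing the documented pattern:

    from pathlib import Path

    def failed_app_specific_actions(log_path: str) -> list[str]:
        """Return the log lines that report a failed app-specific action."""
        # "Fail APP-SPECIFIC" is the pattern documented above; the surrounding line format is an assumption.
        return [
            line.strip()
            for line in Path(log_path).read_text(errors="replace").splitlines()
            if "Fail APP-SPECIFIC" in line
        ]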
We use the latest stable release of the official performance toolkit: atlassian/dc-app-performance-toolkit, release/8.6.0.
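For reference, this corresponds to the release/8.6.0 ref of https://github.com/atlassian/dc-app-performance-toolkit, which can typically be fetched with: git clone --branch release/8.6.0 https://github.com/atlassian/dc-app-performance-toolkit.git (assuming release/8.6.0 is the branch or tag name, as listed above).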
Notes
After each test run, a ZIP archive of the test results is attached to the ticket for analysis and debugging.
Payment must be completed before testing begins.
Prior to testing, in the "Waiting for Review" stage, we run automated validation scripts to verify the provided parameters.
The Automation Review status is then updated based on the result: Validation Success or Validation Failed (a minimal sketch of this step follows these notes).
Upon successful review, the test progresses based on the selected environment (dev or prod).
If validation fails, the ticket will move back to the "Waiting for Customer" stage.
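The validation step referenced in the notes above can be pictured as follows; the parameter names are purely illustrative and do not reflect the exact fields requested on the ticket:

    def review_parameters(params: dict) -> str:
        """Return the Automation Review status for a ticket: 'Validation Success' or 'Validation Failed'."""
        # Field names are hypothetical examples; the real script checks the parameters requested on the ticket.
        required = ["application_url", "app_version", "dc_app_repo_zip"]
        missing = [field for field in required if not params.get(field)]
        return "Validation Failed" if missing else "Validation Success"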