Quality/Test processes

= Mer QA process =

Mer has two testing processes: package testing and release testing. See the overall process description.

== Package testing ==
Package tests should be fast to execute (no long-running or soak tests) and should test the changed package thoroughly.



A contributor makes and submits a change, which triggers the following process:
 * 1) BOSS notices the change in Gerrit
 * 2) Input: BOSS delivers the changed package to OBS, which builds it for all architectures. Output: the build result
 * 3) If the build has no errors, send the early notification. Output: a 'regress' / 'improvement' / 'no change' indicator from the vendors
 * 4) If the early notification shows no errors, send the later notification. Output: a 'regress' / 'improvement' / 'no change' indicator from the vendors
 * 5) Report the results back to Gerrit
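A minimal sketch of the gating logic above in Python (the function names and return values are hypothetical; in reality this flow is orchestrated by BOSS between Gerrit and OBS):

```python
# Hypothetical sketch of the Mer-side package-testing gate.
# Stage names and callables are illustrative only.

def run_package_gate(build, early_check, later_check):
    """Run the gate stages in order, stopping at the first failure.

    Each stage is a callable returning (ok, result), where result is
    the build result or a 'regress' / 'improvement' / 'no change'
    indicator. The returned dict is what gets reported back to Gerrit.
    """
    ok, result = build()
    if not ok:
        return {"stage": "build", "result": result}
    ok, indicator = early_check()
    if not ok:
        return {"stage": "early notification", "result": indicator}
    ok, indicator = later_check()
    return {"stage": "later notification", "result": indicator}

# Example run with stub stages standing in for OBS and the vendors:
report = run_package_gate(
    build=lambda: (True, "built for all architectures"),
    early_check=lambda: (True, "no change"),
    later_check=lambda: (True, "improvement"),
)
```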




On the vendor side, the test request is handled as follows:
 * 1) A vendor receives a test request notification from Mer QA (also the sources of the changed packages?)
 * 2) Input: the changed packages, updated to the vendor's OBS. Output: the repository where the packages can be found
 * 3) Input: the list of changed packages and the test stage. Output: kickstart files, a list of packages needed for testing, and test plans for OTS
 * 4) Input: a kickstart file and the needed packages. Output: URL(s) to the image(s)
 * 5) Input: the test plans and image locations. Output: the test results
 * 6) Return the 'regress' / 'improvement' / 'no change' indicator to Mer QA
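Step 3 produces kickstart files that drive the image build in step 4. A minimal illustrative fragment might look like the following (the repository URL and package names are invented for this sketch; they are not part of the actual process):

```
# Hypothetical kickstart fragment for building a test image.
# The repo URL and package names below are illustrative only.
lang en_US.UTF-8
timezone UTC
part / --size 2000 --ondisk sda --fstype ext4
repo --name=mer-core --baseurl=http://example.org/obs/Mer:/Core/standard/

%packages
mer-test-definition
changed-package-tests
%end
```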

== Release testing ==
(Pre-)release testing has two main test sets: the core set and the feature set. Both sets are executed for every release.

Release testing can take more time than package testing, and it should include non-functional tests.

=== Core set ===

The core set contains tests that verify the overall quality of a Mer Core release. It includes tests for all architecture domains as well as API tests. The core set content is stable; only minor changes are allowed between releases.

=== Feature set ===

The feature set includes tests for the packages that have changed since the previous release. It is essentially a collection of package tests, so its content changes with every release.

== Manual testing ==

 * QA tools support manual testing.
 * When can manual tests be executed, and how are the results reported?
 * What is the testing process for manual testing?

= Reference QA processes =

== MeeGo ==
This information is from the public side of MeeGo!


 * No CI testing in place
 * Only hourly, nightly and release testing
 * QA-Reports was used to report test results
 * QA tools were OTS, testrunner-lite and tdriver
 * MCTS tests were used

== Maemo ==
Please fix any mistakes you see here.


 * Active CI testing in place
 * Test packaging was used, and the naming policy was strict
 * A dedicated dashboard was used for managing test packages
 * A test package's Debian control file contained the following parameters: XB-Maemo-CI-Packages and XB-Maemo-CI-Stage
 * When a test package was updated in version control, the control file information was copied to a database
 * When a package changed, the tests that declared that package in their control file were executed
 * Test packages were installed into the testing image, and the test automation executed all tests found in the image
 * All tests from a test package were executed
 * This caused problems if a test package had long-running or many test cases
 * QA tools were testrunner-lite and OTS
 * OTS handled hundreds of test requests per day
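The control-file mechanism described above might have looked roughly like this (the package names and field values are invented for illustration; only the XB-Maemo-CI-* field names come from the description):

```
# Hypothetical excerpt from a Maemo test package's debian/control.
# Package names and values below are illustrative only.
Package: example-tests
XB-Maemo-CI-Packages: libexample, example-utils
XB-Maemo-CI-Stage: acceptance
Description: CI tests for the example packages
```

With such fields in the database, a change to libexample would trigger execution of example-tests at the acceptance stage.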