

Quality/Terminology

From Mer Wiki
Revision as of 12:45, 13 April 2012 by Esmietti


Glossary

Smoke Test

Smoke test (roughly the intake/smoke test from ISTQB). Smoke test (set): a subset of all defined/planned test cases that covers the main functionality of a daily build of the Mer distribution, to ascertain that the most crucial features of the distribution work, without bothering with finer details. The smoke test is carried out at the start of the test execution phase to decide whether the distribution is ready for further, more detailed testing.

The purpose of the sanity test for Mer is to:

  • Provide a basic health status of the whole system on a daily basis, so people have a basic understanding of where we are in terms of quality
  • Report outstanding issues and regressions and track them until they are fixed
  • Measure whether the Mer repositories are in good enough shape to decide if the software is ready for further testing or release. Of course, the release decision cannot be based on sanity test results alone; further testing results are also taken into account.

Test package

Test packaging is the mechanism for wrapping tests in RPM packages for manual and automated execution. A test package should:

  • contain all tests, scripts and configuration files required to run the tests
  • define its dependencies - the packages it tests, plus the test tools and test data it depends on (if any)
  • contain a test plan located at /usr/share/<packagename>-tests/tests.xml

Test plan

A test plan is an XML file that defines test cases and their properties. A test plan can include tests for multiple domain areas and/or test types.
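As an illustration, a minimal tests.xml might look like the following. This is a sketch only: the suite, set, case and step names here are hypothetical, and the exact attributes available depend on the test-definition schema used by the Mer QA tooling.

```xml
<testdefinition version="1.0">
  <!-- One suite per test package; the name conventionally matches the package -->
  <suite name="example-tests" domain="Core">
    <!-- A set groups related test cases -->
    <set name="smoke">
      <case name="boot-to-ui" type="Functional" timeout="120">
        <description>Device boots and the UI becomes responsive</description>
        <!-- Each step is a shell command; a non-zero exit status fails the case -->
        <step>/usr/share/example-tests/check_ui.sh</step>
      </case>
    </set>
  </suite>
</testdefinition>
```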

Test case

  • (Low level) Test case: A test case with concrete (implementation level) values for input data and expected results.
  • High-level test case (a.k.a. test idea): A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.

Test case verdict

QA verdict definitions:

  • Pass: A test is deemed to pass if its actual result matches its expected result.
  • Fail: A test is deemed to fail if its actual result does not match its expected result.
  • N/A (Not Applicable): can be seen as the initial value, which is left as the verdict for a test case when no other verdict can be given for some reason (e.g. the feature is not implemented yet, or the test case could not be executed due to a failure in the test infrastructure).

For all Fail verdicts it is preferable that a bug ID is given.

For all N/A verdicts it is preferable that the reason why a pass/fail verdict could not be given is documented, either with a comment ("Test case X verdict not given due to reason Y") or with a bug ID.

For the case where functionality is implemented but a test case is missing, it is recommended to find a substitute case to cover the check point. If no substitute is available, mark the case as N/A.

If a test run has one or more N/A verdicts, it is rendered as Fail.
