
Test definition overview

The basic principle of the test definition XML and its tool support is that you can use 'any' executable for testing. Test results are determined from exit codes (automated testing) or from a prompt (manual testing). Executing test plan XMLs produces XML results, wrapping a variety of test methods in the consistent format required by test automation and data processing.

The test plan information stored in Test Definition XML files consists of:

 1. suite, set, case: Hierarchical structure of the tests.
 2. feature, subfeature, requirement: Information about why the tested software has been implemented in the first place (which is why it’s being tested as well). 
 3. type: Information about the viewpoint of the tests (which quality aspect of the software they are testing, see DevelopmentTestArea in Agile Testing Wiki).
 4. level: Information about which test level the tests belong to.
 5. domain: Information about which architectural domains the tests are focused on.
 6. description: descriptions of the tests (what each test is, and what it is supposed to do).
 7. step: Execution instructions (for automated tests), which determine the actual commands to execute to run each test.

Note that not all of the above are mandatory. The mandatory fields for executing tests are defined by the test plan XML validation schema.
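
To see where each of these fields lives, here is a minimal sketch of the hierarchy (the names, attribute values and command are illustrative only; the following sections build up real examples step by step):

<testdefinition version="1.0">
   <suite name="example-suite" domain="Multimedia">
       <set name="example-set" feature="Video Playback">
           <case name="example-case" type="Functional" level="Component">
               <description>What this test is and what it is supposed to do</description>
               <step>run_some_test</step>
           </case>
       </set>
   </suite>
</testdefinition>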

Creating a Test Plan

There are certain mandatory things you’ll need to provide in a test definition XML.

The structure of the XML and the possible attributes/values are defined in the Test Definition XML schema. It’s important to validate your XML against the up-to-date schema.
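
For example, assuming the schema file is available locally (the file names here are illustrative), a standard tool such as xmllint can do the validation:

xmllint --noout --schema testdefinition.xsd my-test-definition.xml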

Test plans can be created either by using the Testplanner tool or manually.

Test Cases

The main thing in a test plan is the test cases, which define what to execute (test steps), what to expect from the execution (expected results), and some additional information for reporting purposes. An example test case could be:

<case name="my-first-case">
   <description>Creating my very first test case</description>
   <step>ls</step>
   <step>uname -r</step>
</case>

What did we just do? We created a simple test case named my-first-case, which executes two steps and expects them to return the common “success return value” 0. Want to check for some other return value? Use expected_result:

<step expected_result="-1">ls</step>

In case you feel that some test cases are insignificant, i.e. that they shouldn’t be taken into consideration when deciding whether the test run passed or failed, you can use the insignificant attribute (defaults to false, i.e. every case is considered significant):

<case name="not-so-important-test" insignificant="true">

In addition to test execution, the “story” of the test case should also be told, and for telling this story we have fields for test type (what quality characteristics are tested) and test level (what level of the system is being exercised). The possible values for both of these are listed in the appendices. Let’s add level and type to our test case:

<case name="my-first-case" level="Product" type="Functional">
   <description>Creating my very first test case</description>
   <step expected_result="-1">ls</step>
   <step>uname -r</step>
</case>

And that’s our first test case! Not that difficult...

Grouping Cases - Sets and Suites

Grouping cases into sets and suites makes your life easier and more organized. A good idea is to group test cases that test the same feature into the same set, and sets that test the same architectural domain under the same suite. E.g.:

<suite name="my-multimedia-tests" domain="Multimedia">
   <set name="video-playback-tests" feature="Video Playback">
       <description>Video</description>
       <case ...
       <case ...
   </set>
</suite>

So, sets contain test cases, and may also contain a description (the same way as cases do). More about sets in the Test Definition Execution part.

Note: There are several attributes that need to be defined on the test case, test set, or test suite level in the test definition XML.
You can find more information about these attributes in the Appendices section of this page.

Timeouts

Test steps time out by default after 90 seconds. You can change the default timeout by adding a timeout attribute to your test case, for example:

<case name="my-test-case" timeout="120">

Note that the timeout applies to a single test step, not to the whole case.

For [pre|post]-steps the default timeout is 180 seconds; it can also be changed:

<pre_steps timeout="600">
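
As a small illustration of the per-step semantics (the sleep commands are only examples), each step below gets its own 5 second budget:

<case name="timeout-demo" timeout="5">
   <step>sleep 3</step>   <!-- passes: completes within 5 seconds -->
   <step>sleep 10</step>  <!-- fails: this single step exceeds the 5 second timeout -->
</case>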

Putting It All Together

Now that you have some knowledge of grouping and test cases in general, it is time to put it all together. Before showing an example, it is important to note that all the mentioned case attributes (e.g. level, type) are inheritable. So, if you have e.g. a set which contains only a certain type of cases, you can define the type at the set level instead of writing it separately into each case, as the sketch below shows.
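
A minimal sketch of attribute inheritance (names are illustrative; an attribute set on a case presumably overrides the inherited value):

<set name="functional-component-tests" type="Functional" level="Component">
   <case name="case-a">
       <!-- inherits type="Functional" and level="Component" from the set -->
       <step>run_test_a</step>
   </case>
   <case name="case-b" level="Product">
       <!-- overrides the inherited level, keeps the inherited type -->
       <step>run_test_b</step>
   </case>
</set>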

Now, the example. Let’s first add the mandatory tags, which each test definition should have:

 <?xml version="1.0" encoding="UTF-8"?>
 <testdefinition version="1.0">
 </testdefinition>

That’s the mandatory part; we declared that we have an XML document, and that this particular one is a test definition, version 1.0. Let’s add more beef to the bones by adding one suite with a couple of sets:

<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
   <suite name="my-multimedia-tests" domain="Multimedia">
       <description>Testing AF stuff</description>
       <set name="video-playback-tests" feature="Video Playback">
           <description>Video playback tests</description>
       </set>
       <set name="video-recording-tests" feature="Video Recording">
           <description>Video recording tests</description>
       </set>
   </suite>
</testdefinition>

Okay, we have one suite named my-multimedia-tests with two sets, testing video playback and recording features. Cases are still missing:

<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
   <suite name="my-multimedia-tests" domain="Multimedia">
        <description>Testing AF stuff</description>
       <set name="video-playback-tests" feature="Video Playback">
           <description>Video playback tests</description>
           <case name="playback1" type="Functional" level="Component">
               <step>execute_playback_test</step>
           </case>
       </set>
       <set name="video-recording-tests" feature="Video Recording">
            <description>Video recording tests</description>
           <case ...
       </set>
   </suite>
</testdefinition>

And that’s the basic story. The next chapters cover more details about the definition, but you should now have a general understanding of the subject!

Controlling Environment for Execution

Setup and Teardown

In case you want to do some setup and cleanup before and after the cases are executed, sets may have pre- and post-steps:

<set ...>
   <pre_steps>
       <step>do_some_setup</step>
   </pre_steps>
   <case ...
   <post_steps>
       <step>clean_up</step>
   </post_steps>
</set>

If you want to ensure that a pre-step is executed properly before starting the real testing, you can use an expected result in those as well:

<pre_steps>
   <step expected_result="1">do_some_setup_that_may_fail</step>
   ...
</pre_steps>

Warning: When using an expected result in this context, the process return value will be waited for; thus you cannot use this with daemon or background processes, since they basically never return and would cause the execution to hang. Another thing to note is that the steps are executed in separate shells, so it is not possible to e.g. set environment variables or change directories in pre-steps, as those changes will be lost.
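
To illustrate the separate-shell behaviour (a minimal sketch; the variable name is made up), any state a step needs must be set up within that same step, e.g. by chaining commands:

<pre_steps>
   <!-- Does NOT work as setup for later steps: the variable dies with this step's shell -->
   <step>export MYVAR=value</step>
</pre_steps>
...
<!-- Works: both commands run in the same shell within one step -->
<step>export MYVAR=value; run_test_that_needs_myvar</step>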

Filtering Based on Hardware Identifier

If different test sets for different hardware are required, the hwiddetect feature can be utilised. The user can define, within the hwiddetect tag, a command used to get a hardware identifier. The hardware identifier returned by the command is matched against the optional hwid attribute of a test set. If they are not equal, the test cases in the set are skipped and are not written to the result file. A test set is never skipped if the hwid attribute has not been defined for it. You can also define multiple hwid values for a set, separated by commas.

The command defined by hwiddetect can be a shell command or a separate executable. The executable should be included in the test package. Testrunner-lite removes extra whitespace and linefeeds from the output of the hwiddetect command, so the test developer does not need to care about it.

Example usage of hwiddetect:

<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <hwiddetect>/usr/bin/getmyhwid</hwiddetect>
  <suite name="suite1">
    <set name="test_feature_X_on_hw_bar" hwid="bar">
      <case name="test_X_1">
        <step>echo "hwid is bar"</step>
      </case>
    </set>
    <set name="test_feature_X_on_hw_foo" hwid="foo">
      <case name="test_X_1">
        <step>echo "hwid is foo"</step>
      </case>
    </set>
    <set name="test_feature_X_on_hw_foo_or_bar" hwid="foo,bar">
      <case name="test_X_1">
        <step>echo "hwid is foo or bar"</step>
      </case>
    </set>
  </suite>
</testdefinition>

Fetching Additional Files

In addition to the normal result file, you can also fetch whatever files you need with the get tag:

<set ...>
   ...
   <get>
       <file>/tmp/myadditionalresult.1</file>
       <file delete_after="true">/tmp/myadditionalresult.2</file>
   </get>
</set>

In the example above, the myadditionalresult.2 file is deleted after fetching it (think of the mv command).

Measurement Data

If a file is tagged as measurement data, as in the example below, the data will be evaluated and transferred to the results.

<case ...>
   ...
   <get>
       <file measurement="true">/path/to/measurement/measurement.txt</file>
   </get>
</case>

The measurement data has the following CSV format:

name;value;unit;
name;value;unit;target;failure;
name;value;unit;target;failure;

Where name and unit are strings, and value, target and failure are floating point numbers. Example:

bt.upload;1.4123432;MB/s;
cpu.load;23.41;%;5;90;
mem.load;80.16;%;80;99;

If target and failure are specified, the measurement can affect (fail) the test case result. When target is smaller than failure, the measured value must stay below failure, and vice versa. For example, the cpu.load measurement above passes as long as the measured load stays below 90.

When a measurement consists of a series of data, the series attribute can be used in the file element:

<file measurement="true" series="true">/path/to/measurement/series.txt</file>

The CSV file format for a measurement series is different from the one for single measurement values (optional parts are enclosed in brackets):

name;unit[;target;failure]
[yyyy-mm-ddThh:mm:ss[.ssssss];]value
[yyyy-mm-ddThh:mm:ss[.ssssss];]value
[yyyy-mm-ddThh:mm:ss[.ssssss];]value
...

The first line specifies the series name and unit. In addition, target and failure limits can be specified. The following lines list the measurement values, each with an optional timestamp (conforming to ISO 8601). A CSV file can contain only a single measurement series.
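
For example, a series file for a hypothetical frame-rate measurement (all names and values made up) could look like:

fps;frames/s;30;25
2012-03-20T10:15:00;31.2
2012-03-20T10:15:01;29.8
2012-03-20T10:15:02;30.5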

Manual, Automatic and Semi-automatic Test Cases

Unless otherwise specified with the manual attribute, all cases are automatic. The value of the attribute is inherited from the higher entity (set -> case -> step). By a semi-automatic test case we mean a manual case that has some automatic parts, the idea being that only those steps that have to be manual are. Note that this does not work the other way around: we cannot have an automatic test case with manual steps (it would be semantically weird, so our tools do not support it).

The example below tries to clarify the above. The "example_set" contains a manual, an automatic and a semi-automatic case. The example works with testrunner-lite/testrunner-ui, so give it a try: File:Example definition.xml.

<?xml version="1.0" encoding="UTF-8"?>
<testdefinition version="1.0">
  <suite name="example_suite">
  <set name="example_set">
    <description>Example test set with manual, automatic and semi-automatic case.</description>
    <case manual="true" name="manual_case">
      <description>Manual test case with three steps inside one step tag.</description>
      <step>Step 1: execute command ttt on shell.
            Step 2: write something into edit box.
            Step 3: press ok button.
            Expected: Text should be updated into label.
      </step>
    </case>
    <case timeout="96" name="automatic_case">
      <description>Automatic test case that executes some shell commands.</description>
      <step>ls /tmp</step>
      <step expected_result="2">ls /nosuchfile</step>
      <step>pwd</step>
    </case>
    <case manual="true" name="semi_automatic_case">
      <description>A case with two automatic and two manual steps.</description>
      <step manual="false">xcalc &amp;</step>
      <step>Step: Type in 2 + 2 =. Expected: 4 is displayed </step>
      <step>Press x² button. Expected: 16 is displayed.</step>
      <step manual="false">killall xcalc</step>
    </case>
  </set>
 </suite>
</testdefinition>

(A real test developer / tester might come up with nicer examples; any input is welcome.)

Test Plan Execution

Test plans are run with test execution tools such as Testrunner and testrunner-lite, which read the plan and produce a result XML file.
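
For instance, a testrunner-lite invocation could look roughly like this (the flags are given from memory; check testrunner-lite --help for the authoritative options):

testrunner-lite -f my-test-definition.xml -o results.xml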

About name attribute

Note that the name attribute is of type anyURI. Do not use the following characters in suite, set or case names:

  • ; / ? : @ & = + $ , (reserved)

or

  • { } | \ ^ [ ] ` (unwise)

Also, the use of <space> is not recommended, since some validators accept it for anyURI and some don't. Use _ or - instead.
