Test Automation and Test Data - Microsoft Dynamics NAV Team Blog

This post discusses the use of test data when developing automated tests based on the testability features that were released with Microsoft Dynamics NAV 2009 SP1. The practices outlined here have been applied during the development of the tests included in the Application Test Toolset for Microsoft Dynamics NAV 2009 SP1.

Overall, the execution of automated tests proceeds according to a four-phase pattern: setup, exercise, verify, and teardown. The setup phase is used to put the system into a state in which it can exhibit the behavior that is the target of a test. For data-intensive systems such as Dynamics NAV, test data is an important part of that system state. To test the awarding of discounts, for instance, the setup phase may include creating a customer and setting up a discount for that customer.
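Expressed as a test function, such a scenario might look as follows. This is only a sketch: CreateCustomer, CreateItemLineDiscount, and CreateSalesOrderLine are hypothetical helper functions that create the records involved; they are not part of the released toolset.

    [Test]
    PROCEDURE CustomerGetsLineDiscount();
    VAR
      Customer : Record Customer;
      Item : Record Item;
      SalesHeader : Record "Sales Header";
      SalesLine : Record "Sales Line";
    BEGIN
      // Setup: a customer and a 10% line discount on an item for that customer
      CreateCustomer(Customer);
      CreateItemLineDiscount(Customer,Item,10);

      // Exercise: enter a sales order line for the customer and the discounted item
      CreateSalesOrderLine(SalesHeader,SalesLine,Customer,Item);

      // Verify: the discount has been applied to the sales line
      SalesLine.TESTFIELD("Line Discount %",10);

      // Teardown: nothing to do here; the fixture is reset before the next test runs
    END;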

One of the biggest challenges of testing software systems is deciding what test data to use and how and when to create it. In software testing, the term fixture describes the state the system under test must be in for a particular outcome to be expected when it is exercised. The fixture is created in the setup phase. In the case of an NAV application, that state is mostly determined by the values of all fields in all records in the database. Essentially, there are two options for creating a test fixture:

  1. Create the test fixture from within each test.
  2. Use a prebuilt test fixture that is loaded before the tests execute.
The advantage of creating the fixture from within each test is that the test developer has maximum control over the fixture at the moment the test executes. On the other hand, creating the complete test fixture from within a test might not be feasible in terms of development effort and performance. For example, consider all the records that need to be created for a simple test that posts a sales order: G/L Setup, G/L Account, General Business Posting Group, General Product Posting Group, General Posting Setup, VAT Business Posting Group, VAT Product Posting Group, Customer, Item, and so forth.

The advantage of using a prebuilt test fixture is that most of the data required to start executing test scenarios is already present. In the NAV test team at Microsoft, for instance, much of the test automation is executed against the same demonstration data (the CRONUS company) that is installed when the Install Demo option is selected in the product installer. That data is reloaded when necessary as tests execute to ensure a consistent starting state.

In practice, a hybrid approach is often used: a common set of test data is loaded before each test executes and each test also creates additional data specific to its particular purpose.

To reset the common set of test data (the default fixture), one can either execute code that (re)creates that data or restore a previously created backup of that default fixture. The Application Test Toolset contains a codeunit named Backup Management, which implements a backup-restore mechanism at the application level. It may be used to back up and restore individual tables, sets of tables, or an entire company. Table 1 lists some of the function triggers available in the Backup Management codeunit. The DefaultFixture function trigger is particularly useful for recreating a fixture.

Table 1 Backup Management
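From within a test (or its shared setup code), restoring the default fixture then amounts to a single call. A minimal sketch, assuming the DefaultFixture function trigger takes no parameters; the RestoreDefaultFixture wrapper is only there for illustration:

    LOCAL PROCEDURE RestoreDefaultFixture();
    VAR
      BackupMgt : Codeunit "Backup Management";
    BEGIN
      // Restore the common test data (the default fixture) to a known, consistent state
      BackupMgt.DefaultFixture;
    END;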

Fixture creation has both a maintainability and a performance aspect. First, there is the code that creates the fixture. When multiple tests use the same fixture, it makes sense to prevent code duplication and share that code by modularizing it into separate (creation) functions and codeunits.
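For example, a shared creation function such as the CreateCustomer helper used in the sketch above might look like this (an illustrative sketch, not the toolset's actual implementation):

    PROCEDURE CreateCustomer(VAR Customer : Record Customer);
    BEGIN
      Customer.INIT;
      // INSERT(TRUE) runs the OnInsert trigger, which assigns a "No." from the customer number series
      Customer.INSERT(TRUE);
      Customer.VALIDATE(Name,'Customer ' + Customer."No.");
      Customer.MODIFY(TRUE);
    END;

Collecting such functions in library codeunits allows every test codeunit to reuse them.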

From a performance perspective, there is the time required to create a fixture. For a large fixture this time is likely to be a significant part of the total execution time of a test. In such cases, consider sharing not only the code that creates the fixture but also the fixture itself (i.e., the instance) by running multiple tests without restoring the default fixture in between. A shared fixture, however, introduces the risk of dependencies between tests, which may result in hard-to-debug test failures. This problem can be mitigated by applying test patterns that minimize data sensitivity.

A test fixture strategy should clarify how much data is included in the default fixture and how often it is restored. Deciding how often to restore the fixture is a balance between the performance and the reliability of tests. In the NAV test team we've tried the following strategies for restoring the fixture, once for each:

  1. test function,
  2. test codeunit,
  3. test run, or
  4. test failure.
With the NAV test features there are two ways of implementing a fixture resetting strategy: the fixture (re)creation code, which is modularized in all cases, is called either from within the test codeunits themselves or from the test runner. The difference between the approaches is where the fixture creation code is called.

The fixture can be reset from within each test function or, when a test runner is used, from its OnBeforeTestRun trigger. This method of frequent fixture resets overcomes the problem of interdependent tests, but it is really only suitable for very small test suites or for situations in which the default fixture can be restored very quickly.
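For example, a test runner codeunit (Subtype = TestRunner) can restore the default fixture before every test function from its OnBeforeTestRun trigger. A sketch in C/AL editor style, assuming the Backup Management codeunit's DefaultFixture function takes no parameters (the exact trigger parameter declarations may differ):

    OnBeforeTestRun(CodeunitID : Integer;CodeunitName : Text[30];FunctionName : Text[128]) : Boolean
    VAR
      BackupMgt : Codeunit "Backup Management";
    BEGIN
      // Restore the default fixture before each test function is executed
      BackupMgt.DefaultFixture;
      EXIT(TRUE);  // TRUE = do run the test function
    END;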

An alternative strategy is to recreate the fixture only once per test codeunit. The obvious advantage is the reduced execution time. The risk of interdependent tests is limited to the boundaries of a single test codeunit; as long as test codeunits do not contain too many test functions and are owned by a single tester, this should not cause too many problems. This strategy may be implemented in the test runner, in the test codeunit's OnRun trigger, or by using the Lazy Setup pattern.

With the Lazy Setup pattern, an initialization function trigger is called from each test in a test codeunit; only the first time it executes does it restore the default fixture. As such, the fixture is restored only once per test codeunit. This pattern works even if not all tests in a test codeunit are executed or when they are executed in a different order. The Lazy Setup pattern may be implemented as shown below.
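A minimal sketch, assuming IsInitialized is a global Boolean and BackupMgt a global variable of type Codeunit "Backup Management" in the test codeunit:

    [Test]
    PROCEDURE PostSalesOrder();
    BEGIN
      Initialize;
      // Setup, Exercise, and Verify steps for this particular test follow here
    END;

    LOCAL PROCEDURE Initialize();
    BEGIN
      IF IsInitialized THEN
        EXIT;

      // Only the first test to call Initialize restores the default fixture
      BackupMgt.DefaultFixture;
      IsInitialized := TRUE;
      // COMMIT so that an error in a later test does not roll the restored fixture back
      COMMIT;
    END;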


As the number of test codeunits grows the overhead of recreating the test fixture for each test codeunit may still get too large. For a large number of tests, resetting the fixture once per codeunit will only work when the tests are completely insensitive to any change in test data. For most tests this will not be the case.
The last strategy recreates the test fixture only when a test fails. To detect failures caused by (hidden) data sensitivities, the failing test is then rerun against the newly created fixture. In this way, certain false positives (i.e., test failures not caused by a product defect) can be avoided. To implement this strategy, the test results need to be recorded in a dedicated table from the OnAfterTestRun trigger of the test runner. When the execution of a test codeunit is finished, the recorded results can be examined to determine whether a test failed and the test codeunit should be run again.
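The recording part could be sketched as follows, where "Test Result" is a hypothetical log table keyed by an integer "Entry No." field (the toolset's own implementation may differ):

    OnAfterTestRun(CodeunitID : Integer;CodeunitName : Text[30];FunctionName : Text[128];Success : Boolean)
    VAR
      TestResult : Record "Test Result";
      NextEntryNo : Integer;
    BEGIN
      IF TestResult.FINDLAST THEN
        NextEntryNo := TestResult."Entry No." + 1
      ELSE
        NextEntryNo := 1;

      // Log the outcome of every test function for later evaluation
      TestResult.INIT;
      TestResult."Entry No." := NextEntryNo;
      TestResult."Codeunit ID" := CodeunitID;
      TestResult."Function Name" := FunctionName;
      TestResult.Success := Success;
      TestResult.INSERT;
    END;

Once a test codeunit has finished, the runner can check this table for entries with Success = FALSE, restore the default fixture, and rerun the codeunit once against the fresh fixture.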

For the implementation of each of these fixture strategies it is important to consider any dependencies that are introduced between test codeunits and the test execution infrastructure (e.g., test runners). The consequence of implementing the setup of the default fixture in the test runner codeunit may be that it becomes more difficult to execute tests in other contexts. On the other hand, if test codeunits are completely self-contained it becomes really easy to import and execute them in other environments (e.g., at a customer's site).

