Rethinking user interface test automation - Gojko Adzic

Geoff Bache presented on “Making GUI testing productive and Agile” today at Agile Testing Days 2010. Bache started by saying that there is an assumption in the community at the moment that GUI testing is hard to pull off correctly, much harder than data-driven testing, and that many teams have given up on that approach. But instead of bypassing the user interface, he advised teams to re-think the entire user interface testing approach and make it productive.
Record/playback implementations are tightly coupled to user interface layouts, which makes the resulting tests brittle and hard to maintain, and that, according to Bache, is why teams gave up on user interface tests. The problems with the data-driven approach (his term for API-level testing with tools such as FitNesse, Concordion or Cucumber) are that it does not give non-programmers confidence that the whole system works, and that it deals in abstractions, and abstract tests require writers who can think in abstractions.
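To make the "abstractions" point concrete, here is a hypothetical example of what an API-level, data-driven test might look like: it drives the domain model directly and never touches the user interface. The Booking class and its methods are illustrative assumptions, not part of any tool mentioned above.

    # Hypothetical API-level test in the data-driven style: it exercises
    # the domain model directly, bypassing the user interface entirely.
    import unittest

    class Booking:
        """Illustrative domain object standing in for the real system."""
        def __init__(self):
            self.passengers = []

        def add_passenger(self, name):
            self.passengers.append(name)

    class BookingTest(unittest.TestCase):
        def test_added_passenger_is_listed(self):
            booking = Booking()
            booking.add_passenger("Ada Lovelace")
            self.assertIn("Ada Lovelace", booking.passengers)

    if __name__ == "__main__":
        unittest.main()

A test like this is precise and fast, but a non-programmer reading it gets no evidence that the actual GUI wires these abstractions together correctly, which is exactly Bache's objection.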

To get both confidence in the system and easy maintenance, Bache advised combining domain-language descriptions of use cases with recorded user interface actions. The pyUseCase recorder tool he built allows teams to record actions and assign domain names to them; the named actions can later be combined into scripts. This provides a level of abstraction that makes the test system more reusable and easier to maintain, and it also makes recorded test cases easier to understand. Similar tools exist for Java and .NET (though Bache said that the Java Use Case recorder is no longer maintained).
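As a rough illustration of that idea (a sketch of the concept only, not pyUseCase's actual API or file format), recorded widget interactions can be hidden behind a map keyed by domain names, so that tests read as domain-language scripts. All of the widget and action names below are made up:

    # Sketch of the use-case-recorder idea: a map from domain action
    # names to recorded low-level widget interactions.
    UI_MAP = {
        "open new booking":     ("click", "toolbar.newBookingButton"),
        "enter passenger name": ("type",  "bookingForm.nameField"),
        "save booking":         ("click", "bookingForm.saveButton"),
    }

    def replay(script, driver):
        """Replay a domain-language script against the UI via the map.

        `driver` is assumed to expose click(widget) and type(widget, text),
        e.g. a thin wrapper over a GUI toolkit or WebDriver.
        """
        for line in script:
            name, _, argument = line.partition(": ")
            gesture, widget = UI_MAP[name]
            if gesture == "click":
                driver.click(widget)
            else:  # "type"
                driver.type(widget, argument)

    # The test itself stays in domain language; only UI_MAP knows the
    # layout, so a layout change means editing the map once, not every test.
    booking_test = [
        "open new booking",
        "enter passenger name: Ada Lovelace",
        "save booking",
    ]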

Bache also advised teams to re-think the traditional approach of assertions as a way to validate actions in scripts. “Assertions mean variables, and variables mean programming. This breaks nice domain-language use cases and starts mixing them with programming concepts”, said Bache, and that reduces readability. Instead, he advised verification by logging: inspecting textual log outputs of the system to validate that a function was performed correctly. “Tests are much more about behaviour change management than assertions”, said Bache. “It’s more about: here is what my test does today, and I can manage a change of that tomorrow.” He uses TextTest to compare textual outputs of the system.
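As a minimal sketch of verification by logging (in the spirit of TextTest, not its actual implementation), a test run's textual output can be diffed against a stored expected file, with the diff shown to a human instead of an assertion failing. The baseline-file handling here is an assumption for illustration:

    # Minimal sketch of log-comparison testing: capture the system's
    # textual output and diff it against an approved baseline file.
    import difflib
    from pathlib import Path

    def check_against_baseline(actual_log: str, baseline: Path) -> bool:
        """Return True if the run's log matches the approved baseline.

        On a mismatch, print a diff so a human can judge whether the
        behaviour change was intended.
        """
        if not baseline.exists():
            baseline.write_text(actual_log)  # first run: record the baseline
            return True
        expected = baseline.read_text()
        if actual_log == expected:
            return True
        diff = difflib.unified_diff(
            expected.splitlines(keepends=True),
            actual_log.splitlines(keepends=True),
            fromfile="expected", tofile="actual",
        )
        print("".join(diff))
        return False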

This approach allows teams to record domain activities, combine them into readable scripts and detect behaviour changes by comparing log outputs, so they can focus on behaviour changes and domain-language use cases. “Often the devil is in the detail and you want to manage changes in detail when your system behaviour changes”, said Bache. Instead of defining assertions up front, teams inspect changes and decide whether they were expected or unexpected.
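Under that workflow, "managing a change" becomes a one-line operation: once a human has inspected the diff and decided the new behaviour is intended, the new log simply becomes the baseline. Continuing the hypothetical sketch above:

    from pathlib import Path

    def approve(actual_log: str, baseline: Path) -> None:
        """Accept the inspected change: the new output becomes the
        expected behaviour for all future comparisons."""
        baseline.write_text(actual_log)

An unexpected diff, by contrast, is simply a bug report with the evidence already attached.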

As additional advantages of this approach, Bache said it allows teams to create new tests with little or no programming, and the resulting tests are robust, readable and cheap to maintain when the user interface layout changes.

From my experience, aligning tests with the business domain model (which Bache called “thinking in abstractions”) works well for user interface tests too, and it allows teams to avoid the sine of death of UI test automation. More importantly, it allows teams to drive their business model using tests. But this is not at odds with the use case recording model. I can see use case recorders being very useful when recorded user interface testing is the only thing you can do, such as when implementing functional test automation on a legacy system, especially when the state of the system depends on a large number of factors.

 
Source: Rethinking user interface test automation - Gojko Adzic
