Thick Client Automation Using LeanFT - HPE

Recently at our STAG (Software Test Automation Group) meetup, Kate Droukman presented how we could learn and use LeanFT.


Test Automation Anti-Patterns


There is plenty of material out there explaining what testing anti-patterns are; here we will look at some of the most interesting test automation anti-patterns:

The first, and the most important, is the right amount of UI testing in the testing pyramid:

  • Whenever we talk to most stakeholders about test automation, they think about UI automation.
  • "A team is doing too much UI automation when they are finding NPEs (Null pointer exceptions) via their UI automated tests."
  • Manual session-based testing is a fancy way of saying exploratory testing.
  • Automated API tests are the best bang for the buck: they are easy to write, stable, and a reliable signal.
  • Integration tests are always valuable, but we need to ration how many we write, as they can be more difficult to debug and have a high noise factor due to all the moving pieces.
  • Automated component testing allows one to test a particular component (server) in isolation: you mock everything else out and ensure this component behaves as intended.
For more anti-patterns in the testing pyramid, like the ice-cream cone, the cupcake, the hourglass, and the dual pyramid, read this:

A similar catalog has been created for anti-patterns in unit testing.

Some other classic test automation anti-patterns:
  1. Including business logic at the test case level instead of building a business layer. Existing pattern: Page Object pattern.
  2. Declaring page elements with locator information inline in test cases instead of in the business layer. Existing pattern: Page Object pattern.
  3. Sleeping for arbitrary amounts of time. Existing pattern: polling, explicit/implicit waits.
  4. Assertions inside the PageObject class. Existing pattern: the PageObject only reports the element's status to the caller; the caller itself verifies it.
  5. Different stack: the automated tests (and test frameworks) are implemented using a different software stack than the SUT uses.
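Patterns 2, 3 and 4 above can be sketched in a few lines. This is a minimal illustration in Python, with a fake driver standing in for a real WebDriver; all class and method names here are made up for the sketch:

```python
import time

class FakeDriver:
    """Pretends to be a browser: the element 'appears' after a short delay."""
    def __init__(self):
        self._start = time.time()

    def find(self, locator):
        # Element becomes available roughly 0.2s after startup.
        if time.time() - self._start > 0.2:
            return {"locator": locator, "text": "Welcome, alice"}
        return None

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll instead of sleeping for an arbitrary amount of time (anti-pattern 3)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

class LoginPage:
    # Locators live in the page object, not inline in tests (anti-pattern 2).
    BANNER = "css=.welcome-banner"

    def __init__(self, driver):
        self.driver = driver

    def banner_text(self):
        # The page object only *reports* state; it never asserts (anti-pattern 4).
        element = wait_until(lambda: self.driver.find(self.BANNER))
        return element["text"]

# The test, not the page object, holds the assertion.
page = LoginPage(FakeDriver())
assert page.banner_text() == "Welcome, alice"
```

The same shape carries over directly to Selenium or LeanFT: only the driver and locator syntax change; the division of responsibilities stays the same.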
Good read:



Hermetic Testing?

Q. What are hermetic tests?

A. Tests should be hermetic: that is, they ought to access only those resources on which they have a declared dependency. If tests are not properly hermetic then they do not give historically reproducible results. This could be a significant problem for culprit finding (determining which change broke a test), release engineering auditability, and resource isolation of tests (automated testing frameworks ought not DDoS a server because some tests happen to talk to it).

Q. How are hermetic servers used for E2E testing?
A. Google uses this trick to design their end-to-end tests.
What is a hermetic server? The short definition would be a “server in a box”: if you can start up the entire server on a single machine that has no network connection AND the server works as expected, you have a hermetic server!

Q. How do you design a Hermetic server?
A. 1. All connections to other servers are injected into the server at runtime using a suitable form of dependency injection such as commandline flags or Guice.
2. All required static files are bundled in the server binary.
3. If the server talks to a datastore, make sure the datastore can be faked with data files or in-memory implementations.
4. Make sure those connection points which our test won’t exercise have appropriate fakes or mocks to verify this non-interaction.
5. Provide modules to easily populate datastores with test data.
6. Provide logging modules that can help trace the request/response path as it passes through the SUT.
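A minimal sketch of points 1, 3 and 5 above, in Python. The class and function names are illustrative, not from any real framework:

```python
class InMemoryDatastore:
    """In-memory fake standing in for a real datastore (point 3)."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)

class UserServer:
    def __init__(self, datastore):
        # The datastore connection is injected at runtime (point 1):
        # production code passes a real client, tests pass the fake.
        self._db = datastore

    def handle_get_user(self, user_id):
        user = self._db.get(user_id)
        return {"status": 200, "body": user} if user else {"status": 404}

# Point 5: a helper to populate the datastore with test data.
def seed_test_data(db):
    db.put("u1", {"name": "alice"})

db = InMemoryDatastore()
seed_test_data(db)
server = UserServer(db)  # the entire "server in a box", no network needed

assert server.handle_get_user("u1")["status"] == 200
assert server.handle_get_user("nope")["status"] == 404
```

In a Java/Guice setup the same idea would be expressed by binding the datastore interface to an in-memory implementation in the test module.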

An interesting slide deck by Spotify on "Hermetic environment for your functional tests":


How HP tested LeanFT using LeanFT?

Here is another amazing podcast on how LeanFT is used, straight from the HP LeanFT team:
Thanks to Joe who runs these wonderful podcasts.



Interesting stats on what happens in an internet minute:



Software Testing @ Microsoft

Continuing my series of posts on testing at a few of the top companies in the technology space.

After Software Testing @ Facebook and @ Google, we shall now look at Microsoft:

  • Software Development Engineers in Test (SDETs) are usually just called Test and sometimes Software Testing. SDETs are responsible for maintaining high testing and quality assurance standards for all Microsoft products.
  • Software Development Engineers (SDEs) are often referred to as Software Development. SDEs write the code that drives Microsoft products and upgrades.
  • Very few teams at Microsoft still use SDETs to do a significant amount of manual testing. Manual testing is mainly outsourced; as early as 2004, for example, most of the manual testing of MSN Explorer was outsourced. Think about it this way: it just doesn't make sense for a test manager to spend their headcount (SDETs) on something that can easily be outsourced.
  • Until 2005, Microsoft actually used two different titles for testers. Software Test Engineer (STE) and Software Development Engineer in Test (SDE/T). This dual-title process was very confusing. In some groups, the SDE/T title meant the employee worked on test tools, and in others it meant he had a computer science degree and wrote a lot of test automation.
  • There are no SDETs at Microsoft any more. In the "Combined Engineering" change last year (2014), SDET and SDE were merged into one job: Software Engineer. What SDETs previously did is now part of the Software Engineer's job.

  • The key question of how software is tested at MS is never really answered. For example:
    • Linux maintainers use Coverity on the Linux Kernel. Does MS use such tools on their Kernel?
    • What sort of scripting languages are used for automation testing of Office or Windows or any other MS product?
    • What sort of Unit Testing software do MS developers use? CppUnit? NUnit? The Unit testing feature in VS2008? What do some of these unit tests look like?
    • What does the typical test plan at MS look like?
    • What sort of white-box testing do developers perform? There are a few vague references to unit testing, but what about performance and coverage testing? What specific tools do they use? What do their result reports look like?
  • At the end of the day, when it comes to writing code, a software engineer at Microsoft (and at many other companies) may write three kinds of code: a) the product code, which makes money for the company; b) the test code, which makes sure the product code works as expected; c) the tools code, which helps in writing/running/maintaining the product code and test code. The SDE's job was to write the product code and tools code, while the SDET's job was to write the test code and tools code.
  • To deliver high-quality features at the end of each iteration, feature crews concentrate on defining "done" and delivering on that definition. This is most commonly accomplished by defining quality gates for the team that ensure that features are complete and that there is little risk of feature integration causing negative issues. Quality gates are similar to milestone exit criteria.
  • The Value of Automation - Nothing seems both to unite and divide software testers across the industry more than a discussion on test automation. To some, automated tests are mindless and emotionless substitutes for the type of testing that the human brain is capable of achieving. For others, anything less than complete testing using automation is a disappointment. In practice, however, context determines the value of automation. Sometimes it makes sense to automate every single test. On other occasions, it might make sense to automate nothing. Some types of bugs can be found only while someone is carefully watching the screen and running the application. Bugs that have an explanation starting with "Weird—when I dismiss this dialog box, the entire screen flashes" or "The mouse pointer flickers when I move it across the controls" are types of bugs that humans are vastly better at detecting than computers are. For many other types of bugs, however, automated tests are more efficient and effective.
    • Automate Everything - BVTs run on every single build, and need to run the same every time. If you have only one automated suite of tests for your entire product, it should be your BVTs.
    • Test a Little - BVTs are not all-encompassing functional tests. They are simple tests intended to verify basic functionality. The goal of the BVT is to ensure that the build is usable for testing.
    • Test Fast - The entire BVT suite should execute in minutes, not hours. A short feedback loop tells you immediately whether your build has problems.
    • Fail Perfectly - If a BVT fails, it should mean that the build is not suitable for further testing, and that the cause of the failure must be fixed immediately. In some cases there can be a workaround for a BVT failure, but all BVT failures should indicate serious problems with the latest build.
    • Test Broadly, Not Deeply - BVTs should cover the product broadly. They definitely should not cover every nook and cranny, but should touch on every significant bit of functionality. They do not (and should not) cover a broad set of inputs or configurations, and should focus as much as possible on covering the primary usage scenarios for key functionality.
  • An old software testing blog - Microsoft
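As a rough illustration of the BVT principles above, here is a minimal sketch using Python's unittest. The individual checks (build_loads, can_open_document) are hypothetical placeholders, not anything Microsoft actually runs:

```python
import time
import unittest

def build_loads():
    return True  # stand-in for: the application binary starts

def can_open_document():
    return True  # stand-in for: one primary scenario, tested shallowly

class BuildVerificationTests(unittest.TestCase):
    # "Test a little": each test verifies basic functionality only,
    # touching a key scenario without covering every input.
    def test_build_loads(self):
        self.assertTrue(build_loads())

    def test_primary_scenario(self):
        self.assertTrue(can_open_document())

start = time.time()
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(BuildVerificationTests))
elapsed = time.time() - start

# "Test fast": the whole suite must finish in minutes, not hours.
assert elapsed < 60
# "Fail perfectly": any failure means the build is unfit for further testing.
build_usable = result.wasSuccessful()
assert build_usable
```

The point is the gating logic at the end: a red BVT stops the build from progressing, rather than merely filing a bug.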
The book:

The Abuse and Misuse of Test Automation - Interview with Alan Page

P.S.: Based on my research and what's on the internet, these are just some very interesting snippets. I will try to keep this updated as and when I hear more.


Also shared on LinkedIn:

Happy Testing until the next Testing@ post.


Software Testing @ Google

Continuing my series of posts on testing at a few of the top companies in the technology space.

After Software Testing @ Facebook, we shall now look at Google:

  • Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest.
  • A product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection.
  • A services team that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise.
  • Testers are no different but the cadence of changing teams is left to the individual. I have testers on Chrome that have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to.
  • So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod giving all testers, no matter their product ties, access to the best technology within the company.
  • By far the biggest is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self reliance. Testers enable developers to test.
    • Canary Channel is used for code we suspect isn’t fit for release. Like a canary in a coalmine, if it failed to survive then we had work to do. Canary channel builds are only for the ultra tolerant user running experiments and not depending on the application to get real work done.
    • Dev Channel is what developers use on their day-to-day work. All engineers on a product are expected to pick this build and use it for real work.
    • Test Channel is the build used for internal dog food and represents a candidate beta channel build given good sustained performance.
    • The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.
  • Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests.
    • Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run but TEs often pick these tests up when they are trying to diagnose a particular failure.
      • The question a small test attempts to answer is does this code do what it is supposed to do?
    • Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors."
      • The question a medium test attempts to answer is does a set of near neighbor functions interoperate with each other the way they are supposed to?
    • Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features, but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests, and everything from automation to exploratory testing can be the vehicle to accomplish it.
      • The question a large test attempts to answer is does the product operate the way a user would expect?
  • Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.
  • Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation.
  • We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.
  • SETs and SWEs are on the same pay scale and virtually the same job ladder. Both roles are essentially 100% coding roles with the former writing test code and the latter doing feature development. From a coding perspective the skill set is a dead match. From a testing perspective we expect a lot more from SETs. But the overlap on coding makes SETs a great fit for SWE positions and vice versa.
  • I highly encourage anyone involved in end-to-end testing to read this post
  • TDD
  • TDI (Test Driven Integration) -
  • As an automation engineer if you don't know about GTAC 
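The small/medium distinction described above can be sketched in a few lines. The functions here are made up for illustration; a large test would drive a real user scenario end to end and is out of scope for a self-contained snippet:

```python
def normalize_query(q):
    """One small unit of code under test."""
    return q.strip().lower()

def search(index, q):
    """A 'near neighbour' that composes normalize_query with a lookup."""
    return index.get(normalize_query(q), [])

# Small test: does this code do what it is supposed to do?
assert normalize_query("  Hello ") == "hello"

# Medium test: do near-neighbour functions interoperate the way
# they are supposed to? Here, search must normalize before lookup.
index = {"hello": ["doc1", "doc2"]}
assert search(index, "  Hello ") == ["doc1", "doc2"]
```

Note how the vocabulary is about scope (how much code is exercised), not about form (unit vs. integration), which is exactly the shift the small/medium/large language makes.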
P.S.: Based on my research and what's on the internet, these are just some very interesting snippets. I will try to keep this updated as and when I hear more.


Also shared on LinkedIn:

Happy Testing until the next Testing@ post.


Software Testing @ Facebook

This topic has always been of interest to testers who like to understand how some of the best companies perform testing in their organizations. Based on my research and what's on the internet, here are some very interesting snippets. I will try to keep this updated as and when I hear more, but at the same time will run a series of posts targeting the top few companies in the technology space.
We shall start with Facebook here:


  • Places huge emphasis on individual engineers making sure their changes are legit.
  • Strong shame culture around making irresponsible changes to the site, the apps, etc. (see "What are the roots of 'clowning' or 'clowny behavior' or 'clowntown' at Facebook?" and, to a lesser but tone-setting extent, "When people who work at Facebook say 'clowntown', what do they mean?"). If you do something really bad (i.e., take the site down, kill the egress of the site in a significant way) you may get a photo of yourself posted to an internal group wearing a clown nose.
  • Puts huge emphasis on "dog fooding": changes are rolled out internally to the site for up to a week before the general user will see them. Every FB employee uses the site differently, which leads to surprisingly rich test coverage on its own.
  • Runs a laundry list of automated PHP and JavaScript (via Jasmine) unit tests based on what you're changing in the codebase.
  • Lint process that runs against all the changes an engineer is making. The lint process flags anti-patterns, known performance killers, bad style, and a lot more. Every change is linted, whether it's CSS, JS, or PHP. This prevents entire classes of bugs by looking for common bug causes like type coercion issues, for instance. It also helps prevent performance issues like box-shadow use in mobile browsers which is a pretty easy way to kill the performance of big pages.
  • WebDriver is used to run site behavior tests, like being able to post a status update or like a post. These tests help us make sure that changes that affect "glue code", which is pretty hard to unit test, don't cause major issues on the site.
  • Engineers can also use a metrics gathering framework that measures the performance impact of their changes prior to committing their changes to the code base. This framework (which is crazy bad ass btw) allows an engineer to understand what effects their changes have in terms of request latency, memcache time, processor time, render time, etc.
  • There is also a swath of testing done manually by groups of Facebook employees who follow test protocols. The results (or I should say issues) uncovered by this manual testing are aggregated and delivered to the teams responsible for them as part of a constant feedback/iteration loop.
  • Overall, the priorities are speed of testing, criticality (yes, it's not a word, meh meh meh) of what we test, and integrating testing into every place where test results might be affected or might guide decision making.
  • How about a war story instead? Read more in the links below.
  • Automation@ Facebook
    • For our PHP code, we have a suite of a few thousand test classes using the PHPUnit framework. They range in complexity from simple true unit tests to large-scale integration tests that hit our production backend services. The PHPUnit tests are run both by developers as part of their workflow and continuously by an automated test runner on dedicated hardware. Our developer tools automatically use code coverage data to run tests that cover the outstanding edits in a developer sandbox, and a report of test results is automatically included in our code review tool when a patch is submitted for review.
    • For browser-based testing of our Web code, we use the Watir framework. We have Watir tests covering a range of the site's functionality, particularly focused on privacy—there are tons of "user X posts item Y and it should/shouldn't be visible to user Z" tests at the browser level. (Those privacy rules are, of course, also tested at a lower level, but the privacy implementation being rock-solid is a critical priority and warrants redundant test coverage.)
    • In addition to the fully automated Watir tests, we have semi-automated tests that use Watir so humans can avoid the drudgery of filling out form fields and pressing buttons to get through UI flows, but can still examine what's going on and validate that things look reasonable.
    • We're starting to use JSSpec for unit-testing JavaScript code, though that's still in its early stages at this point.
    • For backend services, we use a variety of test frameworks depending on the specifics of the services. Projects that we release as open source use open-source frameworks like Boost's test classes or JUnit. Projects that will never be released to the outside world can use those, or can use an internally-developed C++ test framework that integrates tightly with our build system. A few projects use project-specific test harnesses. Most of the backend services are tied into a continuous integration / build system that constantly runs the test suites against the latest source code and reports the results into the results database and the notification system.
    • HipHop has a similar continuous-integration system with the added twist that it not only runs its own unit tests, but also runs all the PHPUnit tests. These results are compared with the results from the same PHP code base run under the plain PHP interpreter to detect any differences in behavior.
  • Our test infrastructure records results in a database and sends out email notifications on failure with developer-tunable sensitivity (e.g., you can choose to not get a notification unless a test fails continuously for some amount of time, or to be notified the instant a single failure happens.) The user interface for our test result browser is integrated with our bug/task tracking system, making it really easy to associate test failures with open tasks.
  • A significant fraction of tests are "push-blocking"—that is, a test failure is potential grounds for holding up a release (this is at the discretion of the release engineer who is pushing the code in question out to production, but that person is fully empowered to stop the presses if need be). Blocking a push is taken very seriously since we pride ourselves on our fast release turnaround.
  • This YouTube video might also be interesting; the major part of it is about (large-scale) testing:
    "Tools for Continuous Integration at Google Scale"
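The privacy tests described above ("user X posts item Y and it should/shouldn't be visible to user Z") can be sketched at the lower, non-browser level. This visibility model is a deliberate simplification for illustration, not Facebook's actual implementation:

```python
class Post:
    def __init__(self, author, audience):
        self.author = author
        self.audience = audience  # "public" or "friends"

def visible_to(post, viewer, friends):
    """The privacy rule under test: who may see a given post."""
    if viewer == post.author or post.audience == "public":
        return True
    return viewer in friends.get(post.author, set())

friends = {"alice": {"bob"}}
public_post = Post("alice", "public")
friends_post = Post("alice", "friends")

assert visible_to(public_post, "carol", friends)       # public: everyone sees it
assert visible_to(friends_post, "bob", friends)        # friend: visible
assert not visible_to(friends_post, "carol", friends)  # stranger: hidden
assert visible_to(friends_post, "alice", friends)      # author: always visible
```

The Watir tests described above exercise the same rules through the browser; duplicating them at this level is what gives the redundant coverage the post mentions.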


Also shared on LinkedIn:

Happy Testing until the next Testing@ post.


Internet of Things Landscape 2016 - what's in it for Testing

I am sure most of you who read blogs / LinkedIn posts would have seen this image:

If you can't see the image, open it in a new tab.

What's interesting here is to think of ways to test these next-generation apps/platforms/software/hardware, let alone automate them.


  • Personal 
    • Wearables
    • Fitness
    • Health
    • Entertainment
    • Family
    • Sports
    • Toys
    • Elderly
  • Home
    • Automation
    • Hubs
    • Security
    • Kitchen
    • Sensing
    • Consumer Robotics
    • Pets
    • Garden
    • Trackers
  • Vehicles
    • Automobiles
    • Autonomous
    • UAVs
    • Space
    • Bikes/Motorcycles
  • Enterprise
    • Healthcare
    • Retail
    • Payments/Loyalty
    • Smart Office
    • Agriculture
    • Infrastructure
  • Industrial Internet
    • Machines
    • Energy
    • Supply Chain
    • Robotics
    • Industrial Wearables
Platforms and Enablements
  • Platforms
    • Software
    • Full Stack
    • Developer
    • Analytics
    • Sensor Networks
    • Connectivity
    • Security 
    • Open Source
  • Interfaces
    • Virtual Reality
    • Augmented Reality
    • Others
  • 3D
    • Printing/Scanning
    • Content/Design
Building Blocks
  • Hardware
    • Processors/Chips
    • Sensors
    • Parts/Kits
    • Charging
  • Software
    • Cloud
    • Mobile OS
  • Connectivity
    • Protocols
    • Telecom
    • M2M
    • WiFi
  • Partners
    • Consultants/Services
    • Alliances
    • Retail
    • Manufacturing
    • Incubators
    • Funding