
Software Testing @ Google



Continuing my series of posts on testing at a few of the top companies in the technology space.

After Software Testing @ Facebook we shall now look at Google:

Testing@Google
  • Test exists within a Focus Area called Engineering Productivity. Eng Prod owns any number of horizontal and vertical engineering disciplines; Test is the biggest.
  • A product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. We build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases... The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection.
  • A services team that provides expertise to Google product teams on a wide array of topics including tools, documentation, testing, release management, training and so forth. Our expertise covers reliability, security, internationalization, etc., as well as product-specific functional issues that Google product teams might face. Every other FA has access to Eng Prod expertise.
  • Like other Googlers, testers are free to move between products, but the cadence of changing teams is left to the individual. I have testers on Chrome who have been there for several years and others who join for 18 months and cycle off. Keeping a healthy balance between product knowledge and fresh eyes is something a test manager has to pay close attention to.
  • So this means that testers report to Eng Prod managers but identify themselves with a product team, like Search, Gmail or Chrome. Organizationally they are part of both teams. They sit with the product teams, participate in their planning, go to lunch with them, share in ship bonuses and get treated like full members of the team. The benefit of the separate reporting structure is that it provides a forum for testers to share information. Good testing ideas migrate easily within Eng Prod giving all testers, no matter their product ties, access to the best technology within the company.
  • By far the biggest consequence of this structure is that testers are an external resource. Product teams can't place too big a bet on them and must keep their quality house in order. Yes, that's right: at Google it's the product teams that own quality, not testers. Every developer is expected to do their own testing. The job of the tester is to make sure they have the automation infrastructure and enabling processes that support this self-reliance. Testers enable developers to test.
    • Canary Channel is used for code we suspect isn’t fit for release. Like a canary in a coalmine, if it failed to survive then we had work to do. Canary channel builds are only for the ultra tolerant user running experiments and not depending on the application to get real work done.
    • Dev Channel is what developers use on their day-to-day work. All engineers on a product are expected to pick this build and use it for real work.
    • Test Channel is the build used for internal dog food and represents a candidate beta channel build given good sustained performance.
    • The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.
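The channel progression above is essentially a promotion gate: a build moves up only after it has soaked long enough and kept its tests green. Here is a minimal sketch of that idea (purely illustrative, not Google's actual release tooling; the soak thresholds are invented):

```python
# Illustrative only: a build advances to the next channel once it has
# "soaked" long enough under real usage and its tests are passing.
# Channel names match the post; the thresholds below are invented.
CHANNELS = ["canary", "dev", "test", "beta"]
MIN_SOAK_DAYS = {"canary": 1, "dev": 7, "test": 14}

def next_channel(build):
    """Return the channel this build should be in after evaluation."""
    ch = build["channel"]
    if ch == "beta":
        return "beta"  # already externally exposed; nowhere higher to go
    if build["tests_passing"] and build["soak_days"] >= MIN_SOAK_DAYS[ch]:
        return CHANNELS[CHANNELS.index(ch) + 1]
    return ch  # hasn't proven itself yet; stay put
```

For example, a dev-channel build that has soaked for 10 days with passing tests would be promoted to the test channel, while a failing canary build stays where it is.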
  • Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests.
    • Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run but TEs often pick these tests up when they are trying to diagnose a particular failure.
      • The question a small test attempts to answer is does this code do what it is supposed to do?
    • Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors."
      • The question a medium test attempts to answer is does a set of near neighbor functions interoperate with each other the way they are supposed to?
    • Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features but large tests tend to be more results driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests and everything from automation to exploratory testing can be the vehicle to accomplish it.
      • The question a large test attempts to answer is does the product operate the way a user would expect?
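To make the small/medium distinction concrete, here is a minimal sketch (not Google's code; the URL-shortener module and all names are invented for illustration). The small test exercises one function in isolation with a mocked dependency; the medium test lets two neighboring functions interoperate through a real in-memory store:

```python
import unittest
from unittest import mock

# Hypothetical module under test: a tiny URL shortener.
def shorten(url, store):
    """Store the URL under a generated short key and return the key."""
    key = "k%d" % (store.count() + 1)
    store.put(key, url)
    return key

def expand(key, store):
    """Look a short key back up."""
    return store.get(key)

class InMemoryStore:
    """A real (if trivial) dependency for medium tests."""
    def __init__(self):
        self._data = {}
    def count(self):
        return len(self._data)
    def put(self, key, url):
        self._data[key] = url
    def get(self, key):
        return self._data.get(key)

class SmallTest(unittest.TestCase):
    """Small test: one function, its dependency mocked out."""
    def test_shorten_stores_url(self):
        store = mock.Mock()
        store.count.return_value = 0
        key = shorten("http://example.com", store)
        self.assertEqual(key, "k1")
        store.put.assert_called_once_with("k1", "http://example.com")

class MediumTest(unittest.TestCase):
    """Medium test: shorten() and its nearest neighbor expand()
    interoperating through a real store."""
    def test_round_trip(self):
        store = InMemoryStore()
        key = shorten("http://example.com", store)
        self.assertEqual(expand(key, store), "http://example.com")
```

Run with `python -m unittest`. A large test of the same system would instead drive it end to end through its real UI or API, the way a user would.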
  • Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.
  • Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation.
  • We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.
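The culprit-finding flow described in that bullet can be sketched roughly as follows (a simplified illustration, not Google's actual tooling; the change/test data structures are invented):

```python
# Hypothetical sketch: when an automated test breaks, walk back through
# recent changes, pick the most likely culprit (the newest change that
# touched files the test depends on), and "file" a bug against its author.

def find_culprit(failing_test, changes):
    """Return the most recent change overlapping the test's dependencies."""
    for change in reversed(changes):  # changes are ordered oldest-first
        if change["files"] & failing_test["deps"]:
            return change
    return None

def file_bug(test_name, change):
    """Stand-in for automated bug filing and author notification."""
    return {
        "title": "%s broken by change %s" % (test_name, change["id"]),
        "assignee": change["author"],
    }
```

For example, if `send_mail_test` depends on `mail/send.py` and the latest change to touch that file was authored by bob, the generated bug is assigned to bob without any human triage.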
  • SETs and SWEs are on the same pay scale and virtually the same job ladder. Both roles are essentially 100% coding roles with the former writing test code and the latter doing feature development. From a coding perspective the skill set is a dead match. From a testing perspective we expect a lot more from SETs. But the overlap on coding makes SETs a great fit for SWE positions and vice versa.
  • I highly encourage anyone involved in end-to-end testing to read this post: http://googletesting.blogspot.com.au/2015/04/just-say-no-to-more-end-to-end-tests.html
  • TDD http://googletesting.blogspot.com.au/2008/09/test-first-is-fun_08.html
  • TDI (Test Driven Integration) - http://googletesting.blogspot.com.au/2010/06/test-driven-integration.html
  • If you are an automation engineer and don't yet know about GTAC (the Google Test Automation Conference), start here: http://googletesting.blogspot.com.au/2016/04/gtac-2016-save-date.html
  • https://developers.google.com/google-test-automation-conference/2013/presentations#OpeningRemarks
Update:
P.S.: Based on my research and what's available on the internet, these are just some very interesting snippets. I will try to keep this updated as and when I hear more.

Sources:
http://googletesting.blogspot.com.au/2011/01/how-google-tests-software.html
http://googletesting.blogspot.com.au/2011/03/how-google-tests-software-part-four.html
http://googletesting.blogspot.com.au/2011/03/how-google-tests-software-part-five.html
http://googletesting.blogspot.com.au/2011/05/how-google-tests-software-break-for-q.html
http://googletesting.blogspot.com.au/2015/04/just-say-no-to-more-end-to-end-tests.html
http://www.informit.com/articles/article.aspx?p=1854713

Also shared on LinkedIn:
http://www.linkedin.com/pulse/software-testing-google-aditya-kalra-ady-?trk=pulse_spock-articles

Happy Testing until the next Testing@ post.
