Software Testing @ Microsoft
This post continues my earlier series on how software testing is done at a few of the top technology companies.
- Software Development Engineers in Test (SDETs) are usually just called Test and sometimes Software Testing. SDETs are responsible for maintaining high testing and quality-assurance standards for all Microsoft products.
- Software Development Engineers (SDEs) are often referred to as Software Development. SDEs write the code that drives Microsoft products and upgrades.
- Very few teams at Microsoft still use SDETs to do a significant amount of manual testing; manual testing is mainly outsourced. As early as 2004, for example, most of the manual testing of MSN Explorer was outsourced. Think about it this way: it simply doesn't make sense for a test manager to spend headcount (SDETs) on something that can easily be outsourced.
- Until 2005, Microsoft actually used two different titles for testers: Software Test Engineer (STE) and Software Development Engineer in Test (SDE/T). This dual-title system was very confusing. In some groups, the SDE/T title meant the employee worked on test tools; in others it meant the employee had a computer science degree and wrote a lot of test automation.
- There is no longer an SDET title at Microsoft. In the "Combined Engineering" change last year (2014), SDET and SDE were merged into one job: Software Engineer. What SDETs previously did is now part of the Software Engineer's job.
- The key question of how software is tested at MS is never really answered. For example:
- Linux maintainers use Coverity on the Linux Kernel. Does MS use such tools on their Kernel?
- What sort of scripting languages are used for automation testing of Office or Windows or any other MS product?
- What sort of Unit Testing software do MS developers use? CppUnit? NUnit? The Unit testing feature in VS2008? What do some of these unit tests look like?
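To give a feel for what such unit tests can look like, here is a minimal sketch using Python's built-in `unittest` module (the post mentions CppUnit, NUnit, and the Visual Studio test tools; Python is used here only as a neutral illustration, and `word_count` is a hypothetical function under test, not a Microsoft API):

```python
import unittest

# Hypothetical product code under test.
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    """Each test method checks one small, well-named behavior."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("hello test world"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  one   two  "), 2)

# Run with: python -m unittest <module_name>
```

NUnit and the Visual Studio test frameworks follow the same shape: a test class grouping small methods, each asserting one expected behavior of the product code.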
- What does the typical test plan at MS look like?
- What sort of white-box testing do developers perform? There are a few vague references to unit testing, but what about performance and coverage testing? What specific tools do they use? What do their result reports look like?
- At the end of the day, when it comes to writing code, a software engineer at Microsoft (as at many other companies) may write three kinds of code: a) the product code, which makes money for the company; b) the test code, which makes sure the product code works as expected; and c) the tools code, which helps in writing, running, and maintaining the product code and the test code. The SDE's job was to write the product code and tools code, while the SDET's job was to write the test code and tools code.
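The three kinds of code can be sketched side by side in a few lines (all names here are hypothetical, chosen purely for illustration):

```python
# (a) Product code: the feature that ships to customers.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# (b) Test code: verifies the product code works as expected.
def test_apply_discount():
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(99.99, 0) == 99.99

# (c) Tools code: helps write/run/maintain (a) and (b),
#     e.g. a tiny runner that reports pass/fail per test.
def run_tests(tests):
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

results = run_tests([test_apply_discount])
```

Under the old titles, an SDE would own (a) and contribute to (c), while an SDET would own (b) and contribute to (c).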
- To deliver high-quality features at the end of each iteration, feature crews concentrate on defining "done" and delivering on that definition. This is most commonly accomplished by defining quality gates for the team that ensure that features are complete and that there is little risk of feature integration causing negative issues. Quality gates are similar to milestone exit criteria.
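In code form, a quality-gate check is just an all-or-nothing predicate over a handful of metrics. The sketch below is a hypothetical illustration; the gate names and thresholds are invented for this example, not Microsoft's actual criteria:

```python
# Hypothetical quality-gate evaluation: a feature is "done" only
# when every gate passes. Thresholds are illustrative.
def gates_passed(metrics, min_coverage=80.0, max_open_bugs=0):
    gates = {
        "bvt_green": metrics["bvt_failures"] == 0,
        "coverage":  metrics["code_coverage"] >= min_coverage,
        "bugs":      metrics["open_bugs"] <= max_open_bugs,
        "spec":      metrics["spec_reviewed"],
    }
    return all(gates.values()), gates

# Example: a feature crew's metrics at the end of an iteration.
done, detail = gates_passed({
    "bvt_failures": 0,
    "code_coverage": 86.5,
    "open_bugs": 0,
    "spec_reviewed": True,
})
```

The point of the structure is that "done" is a binary outcome: if any single gate fails, the feature does not integrate, which is exactly how milestone exit criteria work at a larger scale.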
- The Value of Automation - Nothing seems both to unite and divide software testers across the industry more than a discussion on test automation. To some, automated tests are mindless and emotionless substitutes for the type of testing that the human brain is capable of achieving. For others, anything less than complete testing using automation is a disappointment. In practice, however, context determines the value of automation. Sometimes it makes sense to automate every single test. On other occasions, it might make sense to automate nothing. Some types of bugs can be found only while someone is carefully watching the screen and running the application. Bugs that have an explanation starting with "Weird—when I dismiss this dialog box, the entire screen flashes" or "The mouse pointer flickers when I move it across the controls" are types of bugs that humans are vastly better at detecting than computers are. For many other types of bugs, however, automated tests are more efficient and effective.
- Automate Everything - Build verification tests (BVTs) run on every single build, and therefore need to run exactly the same every time. If you have only one automated suite of tests for your entire product, it should be your BVTs.
- Test a Little - BVTs are non-all-encompassing functional tests. They are simple tests intended to verify basic functionality. The goal of the BVT is to ensure that the build is usable for testing.
- Test Fast - The entire BVT suite should execute in minutes, not hours. A short feedback loop tells you immediately whether your build has problems.
- Fail Perfectly - If a BVT fails, it should mean that the build is not suitable for further testing, and that the cause of the failure must be fixed immediately. In some cases, there can be a workaround for a BVT failure, but all BVT failures should indicate serious problems with the latest build.
- Test Broadly, Not Deeply - BVTs should cover the product broadly. They definitely should not cover every nook and cranny, but they should touch on every significant bit of functionality. They do not (and should not) cover a broad set of inputs or configurations, and should focus as much as possible on the primary usage scenarios for key functionality.
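The five guidelines above can be captured in a tiny BVT harness sketch. Everything here is a hypothetical placeholder (the check functions and time budget are invented for illustration); the structure is what matters: a small set of broad smoke checks, a hard time budget, and an unambiguous usable/not-usable verdict:

```python
import time

# "Test a Little" / "Test Broadly, Not Deeply": a few broad smoke
# checks over key scenarios, not exhaustive input coverage.
def check_app_launches():
    return True  # placeholder: e.g. start the app, wait for main window

def check_file_saves():
    return True  # placeholder: e.g. save and reload a trivial document

BVT_CHECKS = [check_app_launches, check_file_saves]
TIME_BUDGET_SECONDS = 300  # "Test Fast": minutes, not hours

def run_bvt():
    """Run all BVT checks and return (build_usable, failed_check_names)."""
    start = time.monotonic()
    failures = [check.__name__ for check in BVT_CHECKS if not check()]
    elapsed = time.monotonic() - start
    # "Fail Perfectly": any failure (or budget overrun) rejects the build.
    build_usable = not failures and elapsed <= TIME_BUDGET_SECONDS
    return build_usable, failures

usable, failed = run_bvt()
```

"Automate Everything" then simply means wiring `run_bvt()` into the build pipeline so it gates every single build with no manual step.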
- An old software testing blog - Microsoft https://blogs.msdn.microsoft.com/micahel/
The book: http://www.wangyuxiong.com/wp-content/uploads/downloads/2013/02/HowWeTestSoftwareatMicrosoft.pdf
The Abuse and Misuse of Test Automation - Interview with Alan Page
P.S.: These are just some very interesting snippets based on my research and what's on the internet. I will try to keep this updated as and when I hear more.
Also shared on LinkedIn:
Happy Testing until the next Testing@ post.