I'm in software testing eBook - Shivakumar Mathivanan

Daily Testing Tip and the Software Testing Club recently launched an eBook. I'm extremely happy to see how one simple but powerful idea turned into a fabulous eBook for a great cause. The idea was a Twitter challenge, prompted by Daily Testing Tip's Anne-Marie Charrett, to complete the phrase "If I were a test case I would..." The response to this tribute for a cause was overwhelming.

The cause is to help Chandrashekar B.N (Chandru). Chandru is a passionate tester, and we testers are contributing our sincere prayers and money to help him beat his blood cancer. Yes, Chandru has been diagnosed with Acute Lymphoblastic Leukemia. For more information, please follow the web links here: | | . Your helping hands are as valuable as your praying lips. Please help Chandru. Thank you!

@xploresqa is Shiva Mathivanan:
I was patiently going through each section of the eBook page by page. However, I was impatient to check whether my five responses had made it into the eBook. I pressed CTRL + F on my QWERTY keyboard to get where I wanted to reach. Yes, I made it: my Twitter name @xploresqa, along with 3 of my responses, was featured in the software testing eBook. I smiled with pleasure :) . Thank you @charrett for this wonderful initiative.

I am glad to see my name in this popular software testing eBook. Out of my 5 responses, you can find 3 in the eBook, which you can download shortly. Those were:

Download eBook for Free:
It contains over 200 interesting responses and cartoons (created by the Cartoon Tester) from the testing community completing the following sentence: "If I were a test case, I would...". Please do have a read. There are some really great responses there. Download it for free here -
Enjoy reading the testing eBook!


Why do performance testers need to know business? - The Performance Testing Zone

At first glance, the question seems easy enough. Managers across the board have been crying hoarse about this: testers need to know the business. Or else how can they test? Agreed. But does this hold good for performance testers?
From a performance testing standpoint, what does a typical performance tester do? In almost all cases, the critical business flows are identified via the hits on the web server. If it is a new application, the business will usually have a fair idea of which flows will be critical, and those will be handed down to the performance testers.
Now our performance tester comes in, writes scripts, designs the scenario based on the requirements and executes the test. During the analysis, the following things are looked at:
1. Server Health
2. Performance issues in code
3. SLAs
So keeping this in mind, where does the business come in? Does it really matter for the performance tester to understand the business? The only things that matter to him are whether the SLA has been achieved and whether there are hidden performance bugs that may crop up, apart from the server health. Everything has to do with the technology and almost nothing with the business, except the SLAs, if there are any. Experienced performance testers do not even need to see the flow to make the script robust. A functional tester, on the other hand, cannot do without knowing the business, with the automation engineer falling somewhere between these two extremes. Is this the correct way of looking at it?
Now, coming to the answer to the question in the title of the post. Every business, for its survival, has to create a positive impression on its customers and clients. Any customer who interacts via a business transaction takes away an experience, and this experience is what makes the customer come back again and again, building a relationship with the business. In a retail shop, businesses control this with a good sales force, good ambience, etc. Online, however, the only experience a user gets is the look and feel of the application, the ease of traversing it, and the speed with which the customer's job is done. Performance testers are responsible for this user experience, so they directly impact the bottom line of the business. It thus becomes imperative for performance testers to know the business and where the revenue comes from, to ensure the application creates a good user experience, which in turn helps the business grow!
PS: The other kinds of testing are equally important. I just wanted to bring out the importance of business knowledge for performance testers, whose impact on the business is much greater than most performance testers tend to believe.

Source: Dishit D


QTP VS Selenium - QA Blog

My company had been using QTP for test automation for the past 3-4 years, but we are now moving to Cucumber and Selenium WebDriver. This blog is about why my company is moving away from QTP.

One of the main reasons we are moving away from QTP is that it encourages a reactive approach to test automation: the product is finished, then the tester starts writing the automated tests, and only the tester writes them. With Cucumber and Selenium, writing automated tests is a joint effort between the developers and the testers. The testers write the test cases in Cucumber scenario format, then the developers write the step definitions. This way, everyone is contributing and everyone is reviewing the test cases.
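For illustration, a scenario of the kind the testers write might look like this (the feature and step wording here are invented, not taken from our actual suite):

```gherkin
Feature: Login
  Scenario: Registered user signs in
    Given a registered user "alice" with password "secret"
    When she signs in on the login page
    Then she should see her account dashboard
```

The developers then implement each Given/When/Then line as a Ruby step definition that drives the browser through Selenium WebDriver.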

In our case, our QTP automation tester left the company, and the QTP VBScript code is hard to maintain and read compared to Cucumber and Ruby code. I have tried to read the QTP code in order to convert the tests to Cucumber, and it takes quite a lot of time just to understand what the scripts do. With Cucumber and Selenium WebDriver, reading what the tests do takes only 1-2 minutes, because Cucumber scenarios describe the behaviour of the system. And since everyone contributes to the automation code, knowledge is being transferred all the time.

QTP is very costly, while Cucumber and Selenium WebDriver are free.

Is there any version control in QTP? In our case, we keep the automation test code (Cucumber, Selenium WebDriver) as part of the system source code, so the automation code is source controlled.

We can develop the automation code on Windows, Ubuntu, Mac or any other platform you like. This is another topic, but in our case everyone is moving away from Windows as well, especially the developers, which makes QTP a bit useless as it runs on Windows only.

Continuous integration is also easy. We use Hudson to automatically run the automated tests whenever there is a new build, so the developers get instant feedback on their changes. QTP just can't do this.

Selenium WebDriver can handle a lot more tests than the previous version (Selenium RC), including JavaScript requests and Ajax tests.

Getting a support answer is very fast in the open source community, and you have access to the source code. Since writing the automated tests is a joint effort, we are able to solve a lot of issues together.

Other IT departments in our company are already ditching QTP in favor of Cucumber and Selenium, and we never want to go back to QTP after this.

I believe QTP has other benefits that I do not know about. However, in our case, using Cucumber and Selenium WebDriver is more relevant and works better than using QTP.

Source: QA Blog - Bagas


Shortcuts that help in automation!

There are a lot of situations where we use shortcuts as a workaround while automating. Here are some shortcuts for Windows and Microsoft apps:

P.S: Click on the image and download it for quick reference.


Code to get attribute values from an XML file

I was facing a problem finding the attributes in an XML file. Here is the code that solves it:

Const XMLDataFile = "C:\Test.xml"

Set xmlDoc = CreateObject("Microsoft.XMLDOM")
xmlDoc.Async = False
xmlDoc.Load XMLDataFile  ' load the document before querying it

Dim strAttribute
Set nodes = xmlDoc.SelectNodes("/Path till the node where the attribute is present")
For i = 0 To (nodes.Length - 1)
    strAttribute = nodes(i).getAttribute("Attribute Name")
    MsgBox "Node #" & (i + 1) & ": " & strAttribute
Next

Hope this helps!


QTP - Hybrid Automation Frame Work - Bharath

What is Automation Framework?

A framework is a wrapper around a complex internal architecture that makes it easy for the end user to interact with the system. It also defines guidelines and sets standards for all phases of the ADLC (Automation Development Life Cycle).

This "Dual Function" (Hybrid) framework is a combination of three frameworks (Functional Decomposition, Data Driven, Keyword Driven) and several techniques (re-usability, advanced error logic, custom QTP methods, dual function).

Key features of Dual function Framework

1. Re-usability and a low-maintenance design (Dual Function).
2. Supports different application environments (development, quality control, production) and custom client settings.
3. Externally configurable - execute all the test cases, only failed test cases, test cases by test case id, or test cases by flags... it can be configured in different ways.
4. Self-configurable and can run unattended without results being overwritten.
5. Better ROI (Return on Investment).
6. Real-time status monitoring: displays detailed status during test execution and automatically closes the status pop-up after x seconds.
7. Test status reporting in three ways: detailed, high level and a summary log file.
8. The system automatically sends email notifications with the test results once the test is complete.
9. Screen capture on a failed test step, noted in the test case itself, assigned a unique name and stored in a separate folder for easy access.
10. Automation test cases resemble manual test cases (just replace with keywords and test data).
11. Easy creation of test cases with test case generators (select keywords from a drop-down).
12. Easy to maintain and develop scripts (DP, custom methods and Dual Function).
13. Test execution time stamp for each step, so that we can easily trace when the test was executed.
14. The same script can be executed on QTP 9.2, 9.5 and 10.0: backward and forward compatibility.
15. If the test is run in a different location, the system automatically identifies the place of execution using the system IP. If the IP is unknown (not in the defined list) or has changed, it publishes the IP address in the results.
16. If an object is missing or object properties have changed on the application, the system notifies that the object is missing and proceeds with the next test case. This does not stop the test execution.
17. Calculates each page's response time automatically.
18. The framework is designed so that it can be initiated using an AOM script or run directly from the QTP GUI, which is especially useful during debugging or script creation.
19. It stops automatically if a specific number of test cases fail in sequence, which tells us there is a serious problem in the application.
20. Multi-language support.
21. Sending test results as SMS.
22. It automatically adds new library files.
23. As the entire code resides in library files, the total size of the framework is very small. This helps us easily copy, back up and upload it into version control tools.
24. Automatic results archiving facility; old results are not erased.
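Feature 19 (stopping after consecutive failures) is easy to sketch. The following is a minimal Python illustration of that logic rather than the framework's actual VBScript; the function and status names are invented:

```python
def run_with_exception_exit(test_cases, run_one, max_consecutive_failures):
    """Run test cases in order, but stop the whole suite early once
    max_consecutive_failures test cases fail in sequence."""
    results = {}
    streak = 0
    for tc in test_cases:
        status = run_one(tc)          # "Pass", "Fail" or "Bad Inputs"
        results[tc] = status
        streak = streak + 1 if status == "Fail" else 0
        if streak >= max_consecutive_failures:
            # Something has gone badly wrong; record why we stopped.
            results["termination"] = "By exception exit counter"
            break
    else:
        results["termination"] = "Completed by system"
    return results
```

With a counter of 2, two failures in a row abort the run and later test cases are never executed.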

Advantages of the Dual Function Framework and ROI (Return on Investment)

1. Reduce testing time.
2. Improve testing productivity.
3. Improve product quality.
4. Reduce QA costs.
5. Consistent test results.
6. Calculate page response time of all the pages.
7. Schedule test runs.

Framework Folder Structure

The framework contains different files associated with different functionality; all the files should be placed in the corresponding folders, so that it is easy to maintain.

Note: The framework can't identify the folders if the folder names are renamed, except the sub-folders created under "AutomationTestDocuments".

The AutomationTestDocuments folder contains sub-folders relating to each test. If we have two tests, there will be two sub-folders. Create new folders as per the test requirement. There is no restriction on the folder name.

Each test (ProductionLoginTesting in the above screenshot) contains 2 files (Startup, Control) and 3 sub-folders: AutomationTestCases contains the test cases; in ExecuationLog the system automatically creates two new files once the test is complete, based on the configuration (Test summary.txt and Response time.xls); ScreenShots contains application screenshots taken if there are any errors - the system automatically assigns each screenshot a unique name with a time stamp and notes it in the test step itself.

The Config folder contains xls files with custom client settings.

The ExecutedScripts folder contains a copy of the test folder after execution. If we run the same test again and again (scheduled on a daily basis), the results are overwritten. In order to preserve the test results, the framework copies the test folder into a newly created folder with a time stamp once the test is complete.

The Frame_WorkGep folder contains the QTP code of one action, "KickOff", with a single line of code: "Call StartUP". The entire project code resides in the VBS files.
The following are the reasons for choosing this kind of design:
1. VBS files are very light in terms of size.
2. The framework is entirely based on DP (Descriptive Programming).
3. We can easily implement any configuration management tool.

The Library folder contains the application libraries, which hold the code relating to the keyword functions, and a sub-folder called "CoreLibraries" that contains the files belonging to the actual framework only.

Dual Function Framework Architecture 

Note: The high-level design is intended to be shown in a simple manner. Many other scripts are called between intermediate stages.

QTP AOM Script
1. A vbs file created using AOM.
2. Attached to the Windows scheduler.
3. Identifies the place of execution using the system IP. If the script is executed in a new location, it considers the IP address as the place of execution.
4. As there are many tests under "AutomationTestdocuments", the script needs to be updated with the folder name and path of the test we are planning to execute.
5. The script opens QTP and associates the test specified in step 4.
6. Two parameters are passed as environment variables (location and test path).
7. Copies the results into the "ExecutedScripts" folder to protect them from overwriting (only when initiated through the AOM script).
8. The test can also be executed in standalone mode without the AOM script, which is especially useful during test creation and debugging.
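Step 3's location lookup amounts to a table lookup with the raw IP as a fallback. Here is a minimal Python sketch of that idea (the IPs and site names are made up, and the real framework does this in VBScript):

```python
def place_of_execution(ip, known_locations):
    """Map the machine's IP address to a site name.  If the IP is not
    in the defined list (or has changed), publish the raw IP instead,
    so the results still record where the test ran."""
    return known_locations.get(ip, ip)
```

An unknown address therefore appears verbatim in the results instead of silently breaking the report.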

KickOff Action
This is the only QTP action used in the framework; it contains "Call funKickOff" to initiate the KickOff script.

KickOff Script
Contains the global object declarations and initialization, and controls the test execution mode. This script initiates the StartUp script.

StartUp script
This script configures the framework as per the settings provided in the start-up spreadsheet.

Control Script
The start-up script initiates the control script. Based on the start-up spreadsheet settings, the script reads each test case from the control spreadsheet and calls the driver script with the test case id and test case name. This script also generates the real-time status of the test in the form of a pop-up message and updates the test status.

Driver script
Based on the test case id and test case name, it maps the test case's lower bound and upper bound, then reads each line of the test case and calls the corresponding keyword functions.

How is the lower bound computed?
Based on the test case id received from the control script, the driver scans for lines having the keyword "TestCaseID"; if the id under P1 (column F) matches, that line is treated as the lower bound.

How is the upper bound computed?
Once the script identifies the lower bound, it searches for the next "STOP" keyword; that line is considered the upper bound.

Then it reads all the keywords between the lower and upper bound, calling the corresponding keyword functions and updating each step.
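The bound computation described above can be sketched as follows. This is a Python illustration of the logic, not the framework's actual VBScript, and the row layout is simplified to (keyword, P1) pairs:

```python
def find_test_case_bounds(rows, test_case_id):
    """Locate a test case's span in the keyword sheet.

    Lower bound: the row whose keyword is "TestCaseID" and whose P1
    value matches the requested id.  Upper bound: the next "STOP" row.
    """
    lower = None
    for i, (keyword, p1) in enumerate(rows):
        if lower is None:
            if keyword == "TestCaseID" and p1 == test_case_id:
                lower = i          # start of the requested test case
        elif keyword == "STOP":
            return lower, i        # first STOP after the lower bound
    raise ValueError("test case id not found or STOP keyword missing")
```

The driver then executes every keyword row strictly between the two bounds.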

Application library
These functions actually manipulate the application (enter data or perform a certain action on a control) as per the design and send the status back to the driver script.

To understand the different variables that I have created in the framework, look into the functions available under this link.

Note: The above scripts contain complex validation and error logic techniques; this is just a high-level overview. The framework follows a top-down approach and each script is an independent entity, which helps us maintain the framework with ease and change functionality without affecting other scripts.

Start-up spreadsheet

Row 1 - Default URL - If you are using the same test in different environments (development, quality control, production), just change the URL and the scripts will run without any issues. This helps us run the tests in different environments with very little change.
Row 2 - DB Name - If you are accessing the database through QTP, you can specify the name after creating a DSN.
Row 3 - Test Execute Flag - Drop-down values: All, Yes, No, Empty - Execute Flag, Check, Test Case Ids.
These settings are applied by the control script on the control spreadsheet.
All - Execute all the test cases.
Yes (Y) - Execute test cases with the Y flag.
No (N) - Execute test cases with the N flag.
Empty - Execute Flag - Execute test cases with an empty flag.
Check - When selected, it just generates a test report with consolidated pass/fail status (if the test is run multiple times: initially all test cases, then the failed test cases) without actually executing the test.
Test Case Ids - Execute only specific test case ids.
Row 4 - Test Execution Status Flag - Drop-down values: All, Pass, Fail, Bad Inputs, Empty - Execution Flag, Check, Test Case Ids.
These settings are applied by the control script on the control spreadsheet.
If we want to control the test based on Pass/Fail status, we use this control.
All - Execute all the test cases.
Pass - Execute test cases with Pass status.
Fail - Execute test cases with Fail status.
Bad Inputs - Execute test cases with Bad Inputs status.
Empty - Execute Flag - Execute test cases with an empty flag.
Check - When selected, it just generates a test report with consolidated pass/fail status (if the test is run multiple times: initially all test cases, then the failed test cases) without actually executing the test.
Test Case Ids - Execute specific test case ids.
Rows 3 and 4 can be used in different combinations while executing the test, so that the desired test cases are executed. Most of the combinations are handled.
Row 5 - Execute Specific Test Case Ids - Mention the test case ids you want to execute; the "Test Case Ids" flag needs to be selected in Row 3 or 4. This overrides all other settings.
Row 6 - Tester Name - Name of the tester executing the script; the same name appears in the control and log files after test execution.
Row 7 - Release no / Module name.
Row 8 - Test Cycle - Mention the test cycle.
Row 9 - Exception exit counter - The test automatically stops once the failed test case count reaches the counter value. It doesn't make sense to keep executing a test with 50 or more failed test cases; something has gone badly wrong.
Row 10 - Test status pop-up display - Drop-down values: On, Off, Valid Test Case.
On - Always display the real-time test status pop-up.
Off - Don't display the real-time test status pop-up.
Valid Test Case - Display while executing a valid test case; don't display for an invalid test case, determined from the combination of the Row 3 and 4 flag settings.
Row 11 - Test status pop-up display (sec) - How many seconds the real-time test pop-up should be displayed; it closes automatically after the specified seconds.
Row 17 - Object Highlighting - On/Off.
On - Highlights all the objects during test execution; reduces test execution speed.
Off - No highlighting.
Row 18 - Silent Mode - On/Off.
On - The framework generates many messages during execution; when the test runs unattended, we can suppress these messages by selecting On.
Off - The user needs to click OK for each framework message; used during debugging.
Row 19 - Calculate average page response time - On/Off.
On - Generates an excel file with each page's response time and stores it in the "ExecuationLog" folder. Extra coding is required; the excel COM model function needs to be modified as per the requirements.
Off - Does not generate any page response file.
Row 26 - Beep on any click - On/Off.
On - Generates a beep sound when the system selects any control.
Off - No beep.
Reason for implementing this feature - One of the developers told me that I was not calculating page response time correctly and that there was a flaw in the page synchronization code. To prove that I was correct, I implemented this feature: when the system performs an action on a control it generates one beep, and once the page sync is complete it generates a different beep. After showing this to the developer, he agreed that my page response time code was correct.
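The Row 3 flag logic boils down to filtering the control sheet by the chosen setting. Here is a minimal Python sketch of one plausible implementation (not the framework's actual code; the row layout is simplified to (test case id, execute flag) pairs):

```python
def select_test_cases(cases, execute_setting, specific_ids=()):
    """Filter (test_case_id, execute_flag) rows from the control sheet
    according to the start-up sheet's Row 3 setting."""
    if execute_setting == "All":
        return [tc_id for tc_id, _ in cases]
    if execute_setting == "Test Case Ids":
        # Row 5 lists the specific ids; this overrides the flags.
        return [tc_id for tc_id, _ in cases if tc_id in specific_ids]
    # Map the drop-down choice to the flag value in Column 3.
    wanted = {"Yes": "Y", "No": "N", "Empty": ""}[execute_setting]
    return [tc_id for tc_id, flag in cases if flag == wanted]
```

The Row 4 status filter works the same way, but matches against the Pass/Fail/Bad Inputs column instead of the execute flag.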

Control Spreadsheet

Headers highlighted in YELLOW - the user needs to fill in these columns.
Headers highlighted in WHITE - the system fills in these columns automatically during execution.
There is one control spreadsheet for the entire test.

Column 1 - Test Case Id - Each test case is assigned a unique id, in ascending order only.
Column 2 - Test Case Description - Optional; a small description of the test case.
Column 3 - Execute Flag (Y/N/Empty) - The control script retrieves the setting from Row 3 of the start-up spreadsheet and checks the flag. If both flags match, the system executes the test case; otherwise it ignores it. This helps us quickly configure which test cases are to be executed.
Column 4 - Test case file name - Specify the corresponding test case file name. Each test can have multiple test case files if required, so that functionality can be distributed and is easy to maintain.
Test case - a group of test steps (a procedure) with specific test data to validate a requirement.
Test case file - a group of test cases placed in a file.
Column 5 - Response Time - Consolidated response time of all the test steps under the specific test case.
Column 6 - Execution Status - Once a test case id is executed, the system updates the status with Pass/Fail/Bad Inputs (the test data provided to the test steps is incorrect).
We can also use this flag to control test execution (Row 4 of the start-up spreadsheet).
Column 7 - Execution Place or System IP / Status Message - Displays the execution place, or error and checkpoint messages if there are any inconsistencies.
Column 8 - Time stamp - Test execution time stamp in local time.
Column 9 - Tester name - As specified in the start-up spreadsheet.
Column 10 - Release no / module name - As specified in the start-up spreadsheet.

Test case Spreadsheet
Before Execution
Headers highlighted in YELLOW - the user needs to fill in these columns.
Headers highlighted in WHITE - the system fills in these columns automatically during execution.

The test case spreadsheet consists of test steps similar to manual test cases.

Column A - Keyword - Performs a specific action on the screen, like entering data, retrieving data or a checkpoint. These keywords are selected from a drop-down; it is not required to type them. Once the test case creator selects a keyword, columns F to Q are highlighted based on the design, and the corresponding cells are annotated with comments; these are the input parameters P1, P2, P3...P20. "LogInSignon" (Row 118) contains 4 parameters: P1 - user name, P2 - password, P3 - whether the credentials are correct, P4 - language.

After Execution
The system starts filling in columns B, C, D and F during test execution.

Mapping of manual and automation test cases to check the coverage


We can create a new column, "Automation test case ID", in the manual test cases spreadsheet, so that there is a one-to-one mapping without missing any functionality.

Test summary log file

Generated and placed in the "ExecutionLog" folder once the test is complete.

It contains the following information:
1. Place of execution.
2. Test start and end time with time zone.
3. Total executed test cases (pass/fail/bad inputs) and ignored test cases (based on the start-up spreadsheet settings).
4. Start-up spreadsheet details.
5. Information relating to sending email notifications.
6. Average page response time (depends on the start-up spreadsheet settings).
7. Available PC physical memory before and after execution, with time stamps.
8. Test termination message: completed by system / by user / by exception exit counter.
9. Whether the email was sent successfully.
10. PC information.

It is a summary of Test results + Framework settings + PC info.

Real-Time status POP-UP message


It provides the live health of your test; this can be configured in the start-up spreadsheet.
The above image is self-explanatory.
If your test runs for 2 hours and doesn't have this feature, you will learn nothing by looking at your running test after one hour. It is worth implementing this feature as per your requirements, though not necessarily as exhaustively as above.

Note: Implementing these features will not create any performance bottleneck in terms of execution speed or memory usage.

Test case generator (Keyword sheet to create test cases)
Simple Sheet

The test case generator spreadsheet contains two sheets:
1. KeyList - where you design the keywords, as in the above screenshot.
2. Keyword - where you create automation test cases by selecting the keywords from the drop-down.
Macros are written to connect both sheets.

How to create keywords?
Let's create a keyword for the login screen, which has a user name, a password and a login button.
Flow - Enter the user name and password, and select the login button.
Keyword name - LoginSignOn
Input parameters - User name, Password
Assign two parameters, P1 and P2, under columns E and F.
Create cell comments for P1 and P2 as user name and password.
O - Optional parameter
M - Mandatory parameter (the system generates "BAD INPUTS" if users execute the script without entering values).
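The mandatory-parameter check can be sketched as follows. This is a Python illustration with invented names; the real framework does this in VBScript against the spreadsheet cells:

```python
def validate_inputs(param_spec, values):
    """Return "BAD INPUTS" when a mandatory ("M") parameter is empty,
    otherwise "OK".  Optional ("O") parameters may be left blank."""
    for name, flag in param_spec.items():
        if flag == "M" and not values.get(name):
            return "BAD INPUTS"
    return "OK"
```

For LoginSignOn, leaving the password blank would therefore mark the test case as "Bad Inputs" rather than "Fail".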

Points to be considered while creating keywords:
1. Standard CamelCase format.
2. Keyword granularity (never have two keywords doing the same or overlapping functionality).
3. When creating complex keywords, choose the following format: product name + page name + functionality.
4. Don't worry about the length of a keyword; you are not typing it, just selecting it from the drop-down. It should be self-explanatory.

Advantages of this design:
1. It is not required to enter test object properties in the test case; they are already implemented in the code.
2. We can create a document containing a screenshot of the page and the corresponding keywords. By looking at it, anyone can create test cases.
3. Generally, in a keyword-driven framework, we can enter one value or fire one method per line; in this design you can enter a maximum of 20 values in a single line.
4. It is not required to memorize all the inputs associated with a keyword. Just select the keyword; it automatically highlights the parameters and assigns cell comments to each of them, so you can easily understand what the P1, P2... parameters are for.
5. Each keyword is self-explanatory, designed using camel case; any person can understand it.

Complex Sheet with lengthy keywords

After creating the core framework, the next job is creating keywords as per the functionality and executing the test.

Continued... QTP - Hybrid FrameWork - Part II

Source: Bharath Marrivada


Zephyr v3.0 - Test Management tool

Zephyr v3.0 is an advanced and comprehensive test management system.

Zephyr v3.0 has the following features:
  • Custom reports generator: Zephyr's new custom report generator makes it simple for every team to run any report they need. The new functionality builds on the existing out-of-the-box functionality, which comes with more than 16 drill-down reports and metrics.
  • Extensive APIs: Zephyr's philosophy has always been to integrate with existing infrastructure and not force teams to rip one tool out to replace it with another. With the new extensive APIs, Zephyr further extends the capability of its platform to support the myriad needs of global software quality teams. The web services APIs are fully documented at http://developer.yourzephyr.com.

  • Enhanced productivity levels/Search technology: Zephyr's test management solution promotes a new level of organization at the release, project, department and global level. With the 3.0 release, Zephyr debuts a new search technology that searches the body of fields, not just the title or subject field. Teams will achieve new levels of productivity.

  • Trend reporting: With Zephyr's new, first-of-its-kind dynamic trend reports, teams now have the ability to look back at stages of former projects and quantify how long it took to complete a release. With this information in hand, the same teams will now be able to plan accordingly, not only for current releases but also for future projects.

  • Enhanced JIRA integration: Zephyr enhances its one-of-a-kind 2-way JIRA integration by supporting JIRA custom fields. Teams have always been able to enter, search, change, report, analyze and close any defect in JIRA from within Zephyr through optimized testing desktops. With the new release, teams who have customized or plan to customize JIRA will also be able to leverage this robust functionality and achieve higher productivity and efficiency.

    Online help is available at product_help/3.0/Zephyr_3.0_Help.htm


Testing News - Dec 8th

  • Announcing the First-Ever Selenium Conference in San Francisco - Sauce Labs, the company co-founded by Selenium creator Jason Huggins, today announced the inaugural Selenium Conference, which will take place on April 1-3, 2011. Source: News
  • Blueberry Launches Version 3.0 of Its Software Testing Tool for Developers. Source: News
  • HP Intros Application Lifecycle Management 11: a unified system for managing the lifecycle of a key business application when it needs to be built from a variety of services running in heterogeneous environments. Source: News
  • IBM and QAI Partner to Launch Dual Certification Program in Functional Testing: leading to dual certifications in Software Testing (CSTE) and Rational Functional Testing (RFT).
  • India currently has over 132,000 professionals working in the Independent Testing Services business, of which approximately 65% of the work done is in the field of Functional Testing. Source: News


Lean as a toolkit towards agility - Ranjan

Definitely Lean!!
Over the last few years, agile software development processes like XP and Scrum have gained a lot of prominence, and a majority of organizations have tried one of them at one time or another. I mention two here (Scrum, an agile management methodology; XP, more a set of agile engineering practices) because on various fora you will find debates about one over the other, and sometimes, hopefully, one mixed with the other. Both methodologies are practical extensions of the agile manifesto: concrete processes that supplement a philosophy. Unfortunately, a majority seems to assume that complete agility is achieved by practicing one of these methodologies or a mix of them. Blindly adopting a methodology is no different from following older, so-called non-agile practices. Granted, there might be immediate visible improvements from better communication and development techniques, but this ends up as just a linear function of the ability to change for the better. Given time, if self-organized teams do not retrospect and improve on past performance, they hit roadblocks as before; maybe just a little further along, but they do. The last line of the manifesto, "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly", is very important, yet ignored, or practiced weakly.

Indeed, what the final tenet of the manifesto encourages a team to do is make sure it is learning and improving its processes over time. Scrum and XP are just basic frameworks that facilitate an initial shift; after that, the team must make sure it chooses the right processes under the given circumstances to be effective and oriented towards delivery. During such a process, a team might decide to break one or more basic tenets of the framework it has chosen to follow, if doing so improves its effectiveness and delivery. For example, YAGNI is a fundamental tenet of XP, but a team might choose to tweak it if it finds it necessary to design upfront for certain tasks. Theorists and fanboys will cry foul, but realists and seasoned practitioners will understand the need to do this. (Having said this, the team must at all times be cognizant of the risks taken when making these decisions, and observe changes closely for a few iterations.) These tenets are not commandments to be followed staunchly; they are guidelines for following a philosophy of continuous improvement to deliver the best value to a customer. They provide a meta-plan, or a planning approach, which is no more important than responding to change. This realization itself facilitates better localized processes. The argument above is an example of improvement within a process. The shift towards learning from Lean manufacturing is an excellent example of how not just a team but the community as a whole looks around for synergies in different workplaces to improve business processes and deliver more value.

The commercialization and marketing of XP and Scrum as development practices has sadly not done justice to Agile as a philosophy. As more and more corporations join the bandwagon, I have noticed frequent claims of agility merely from following an XP or Scrum approach. Adopting XP and Scrum practices is not wrong - it does benefit a team - but if the team does not understand the underlying philosophy behind the practices, it misses a fundamental dimension of growth. By looking for improvement only at the practices within the methodology, and not outside it, the team ignores tools and practices that might help it improve on a larger scale. Any failure under such a premise then becomes a failure of the process followed, and management decisions are made to repeal processes. Such superficial adoption is harmful for the corporation, because it discards a good philosophy based on a fault in its implementation.

Lean software development is the new kid on the block. Lean manufacturing and agile development are closely aligned; for starters, both are people-centric and concentrate on reducing waste. As the hype around being Lean increases, similar superficial dabbling will increase; some will succeed and some will fail. Should such failures count as failures of the Lean model? Should teams stop striving to be Lean when they figure out that they cannot stop the line, or follow some other much-proclaimed Lean tenet? Shouldn't teams be looking at Lean practices as empowering tools, not laws? Wouldn't a team benefit more by educating itself towards the mindset rather than just a manifestation of it (read Lean)?

In fact, what a team should figure out is that XP, Scrum and now Lean are just toolkits that facilitate aligned and effective delivery of customer demands. Stand-ups, TDD, Scrum of Scrums and frequent releases are all part of a greater toolkit that facilitates agility, and a team should pick and choose what is most effective for itself. Lean, like XP, is a set of practices that teams have evolved over a period of time to become more effective. It evolved at Toyota out of prior practices like Taylorism and Fordism. The point to note is that Toyota did not invent or create a lean process out of nowhere. They optimized for a given set of circumstances and experimented with new approaches to manufacturing. That is an instance of pure agility. The current state of Lean is just a product of agility, and it thus provides a good toolkit, just as XP, Scrum and other agile methodologies do.

By treating these methodologies as toolkits, we open up our perspective and take more responsibility for better delivery; by following them blindly, we close our eyes and wait for good things to happen, just because we are following a process. Indeed, true agile teams improve continuously. Upon hitting roadblocks, they come up with solutions whose effectiveness is measurable. They break dogma and blind faith to increase effectiveness, and thus deliver more for the same dollar spent than in the last iteration. And when do they stop looking for improvements? Never.

Source: Ranjan Sakalley


Test Automation tools in the market!!

Here is a nearly complete list of functional test automation tools:

Open Source GUI testing tools:
  • Abbot
  • AutoHotkey
  • AutoIt
  • Canoo
  • CubicTest is a graphical Eclipse plug-in for writing Selenium and Watir tests.
  • Dogtail by Red Hat
  • FitNesse
  • Linux Desktop Testing Project (LDTP)
  • Maveryx is an automated functional testing, regression testing, GUI testing and data-driven testing tool.
  • QAliber by QAlibers, a free open source tool for testing desktop and web applications on Windows
  • Selenium for Web UI testing
  • SWTBot functional testing of SWT and Eclipse based applications
  • Tellurium Automated Testing Framework (runs on top of Selenium).
  • Watir browser driver for web UI testing
  • WatiN Web automated testing in .NET
Lots more open source tools here

Commercial GUI testing tools
  • Automation Anywhere
  • eggPlant by TestPlant Ltd
  • GUIdancer by Bredex, for Java (Swing, RCP/SWT, GEF) and HTML
  • HP QuickTest Professional (QTP) by Hewlett-Packard (formerly by Mercury Interactive)
  • IBM Rational Functional Tester by IBM
  • IcuTest GUI unit testing for WPF
  • iMacros
  • Network Automation
  • Phantom Automation Language Microsoft Windows GUI Testing
  • QF-Test by Quality First Software, for Java/Swing, Eclipse/SWT and HTML only
  • Ranorex
  • RIATest for Flex
  • SilkTest by Micro Focus International (formerly by Segue Software then Borland)
  • Soatest (absorbed WebKing starting in version 6.0) by Parasoft
  • Test Automation FX Windows UI testing with Visual Studio
  • TestComplete by SmartBear Software
  • TestPartner by Micro Focus
  • WinRunner by Hewlett-Packard (formerly by Mercury Interactive)
  • WindowTester by Instantiations for testing Swing, SWT, and Eclipse/RCP based applications


Selenium RC - User Extension.JS - Bharath

As a Selenium programmer you may need to create new methods or functions that don't exist yet.
One of the best ways to do this is with user-extensions.js. In future posts I will explain extending the existing Selenium class.

Add the following libraries to your project (screenshot attached).

Create a new Java class using the following code. This example is created on, so that anyone can execute the code without any issues.

What is this program about?
1. Creates a new method, "myClick".
2. Adds two new functions to calculate page response time (timerStart, timerStop).
3. Opens the Google page, searches for an item and displays the response time in milliseconds for the results page.
Initially I had a lot of issues while working with user extensions, so I want to bring clarity to my readers by providing this example. There are differences between IDE and RC extensions; read my timer extension post for a better understanding.

package package1;

import static org.testng.AssertJUnit.*;
import org.testng.annotations.*;
import com.thoughtworks.selenium.*;

public class Sample2 {
  private static final String Timeout = "30000";
  private static final String BASE_URL = "";
  private static final String BASE_URL_1 = "/";
  private Selenium selenium;
  private HttpCommandProcessor proc;

  @BeforeMethod
  protected void setUp() throws Exception {
    proc = new HttpCommandProcessor("localhost", 4444,
        "*iexplore", BASE_URL);
    selenium = new DefaultSelenium(proc);
    selenium.start();;
  }

  @AfterMethod
  protected void tearDown() throws Exception {
    selenium.stop();
  }

  @Test
  public void test_GoogleSearch() throws Exception {
    selenium.type("name=q", "selenium HQ");
    System.out.println(proc.doCommand("getTimerStart",
        new String[] {"GooglePage"}));
    //"btnG"); // selenium command
    proc.doCommand("myClick", new String[] {"btnG"});
    // user extension for click()
    selenium.waitForPageToLoad(Timeout);
    System.out.println(proc.doCommand("getTimerStop",
        new String[] {"GooglePage"}));
    Thread.sleep(5000); // show the page for a few seconds
  }

  @Test
  public void test_GoogleSearch1() throws Exception {
    selenium.type("name=q", "Bharath Marrivada");
    System.out.println(proc.doCommand("getTimerStart",
        new String[] {"GooglePage"}));
    //"btnG"); // selenium command
    proc.doCommand("myClick", new String[] {"btnG"});
    // user extension for click()
    selenium.waitForPageToLoad(Timeout);
    System.out.println(proc.doCommand("getTimerStop",
        new String[] {"GooglePage"}));
    Thread.sleep(5000); // show the page for a few seconds
  }
}


Selenium.prototype.doMyClick = function(inputParams) {
  var element =;;
  return null;

var globalTime = new Object();

Selenium.prototype.getTimerStart = function(target) {
  var dt1 = new Date();
  if (target == null || target == "") {
    return ("Target not present so timer was not started");
  } else {
    globalTime[target] = dt1;
    return null;

Selenium.prototype.getTimerStop = function(target) {
  var dt = new Date();
  if (target == null || target == "") {
    return ("Please specify a target");
  } else if (globalTime[target] == null) {
    return ("Start time was not called for " + target);
  } else {
    var elapsed = Math.floor(dt - globalTime[target]);
    delete globalTime[target]; // reset so the timer can be reused
    return ("Time Passed for " + target + ":" + elapsed + " msec");
Note: Make sure you create the .js file in the same folder as selenium-server.jar.

To execute JavaScript functions from the .js file, you need to use the HTTP command processor ("proc.doCommand").
If the function does not return a value, start the function name with "do"; if it returns a value, use "get".
Function names are case sensitive and always start with lower case. For a "do" function, remove the "do" prefix when calling the function from Java.

To start the Selenium server, use the following command:

java -jar selenium-server.jar -userExtensions user-extensions.js

To start the Selenium server quickly every time, instead of going into the command prompt I use the following text saved as a .vbs file. Double-click the file and your Selenium server will be up and running. (Modify the folder path.)

Dim oShell
Set oShell = WScript.CreateObject("WScript.Shell") "cmd /K CD C:\selenium-remote-control-1.0.3\selenium-server-1.0.3 & java -jar selenium-server.jar -userExtensions user-extensions.js"
Set oShell = Nothing


Source: All about QTP, LoadRunner, NeoLoad, Performance & Security Testing, VB Script, Selenium...


Skillset - SDET for web application

Here is what a company in Seattle is looking for in an automation engineer as an SDET (Software Development Engineer in Test):
We're looking for an experienced engineer who loves test automation - writing real code to test large-scale web systems. You'll join a talented, experienced, fun software team, automating complex and highly-scalable online systems.

  • 5+ years of professional experience as a software engineer (either in development or in test) on commercial-grade software in Java, Perl, and/or C++
  • You need the basics of test strategy and approach
  • Nice to have: web test and/or development experience - Selenium, Watir, etc. coding really valuable
  • Nice to have: SQL testing and/or development experience
  • Nice to have: CS degree or equivalent
  • Great team member, plays well with others, etc.

We're definitely looking for real development experience!


RefCardz - Do you want to learn faster?

RefCardz are one of the best ways to learn a process, technology or tool, and there are RefCardz for testing too. These are definite downloads and must-haves if you are using any of these:
  1. Selenium 2.0: Using the WebDriver API to Create Robust User Acceptance Tests
  2. PHPUnit: PHP Test-Driven Development - Automated Tools to Improve Your PHP Code Quality
  3. JUnit and EasyMock 
  4. Getting Started with Fitnesse
  5. Getting Started with Firebug 1.5
To download these refcardz follow the link below:
Five Great Refcardz For Software Testing


Make your tests agile too: wiki - Trish Khoo

This is a story about my experience in using a wiki to manage test cases.

Over the past few years, I have been evaluating different test case management approaches and tools. At first I was looking for a one-size-fits-all solution, but it quickly became apparent that such a dream was impossible to achieve. At Campaign Monitor, we are constantly adapting and improving our test approach to fit each release cycle. So I started focusing on finding a tool that supports our current test approach but is flexible enough to adapt when the test approach changes to suit a new context.
I have found that the way many test tools are designed can force testers into taking a certain test approach. A wiki is somewhat like a series of blank canvases all linked together, so it seemed like a very flexible solution. In practice this proved to be the case, but was flexibility enough?

We had more freedom in our test cases

We began with a repository of regression test cases in TestLink and a suite of 1000+ automated GUI tests that ran every night. When we decided to try out FitNesse, we stopped adding new test cases to TestLink and added them to a wiki page in FitNesse instead.
The format of our test cases changed as well. TestLink’s interface encourages the user to enter test cases in the format of “Title” “Steps” and “Expected results”. With a blank wiki page, we could write tests in whatever format we desired. So we used Given-When-Then, which is a very concise and easy-to-read format. For example:

As a paying customer
Given that I have added an item to my shopping cart
When I choose to checkout
Then I should be shown a summary of my order
And I should be prompted to pay for my purchase

One thing I like about this format is that it gives the tester a lot of freedom in the way they can run the test. In the example above, it doesn’t specify *how* the paying customer adds items to the cart, or even in what way they are prompted to pay for their purchase. Given the flexible nature of feature requirements on our projects, this suits us very well. In addition, it increases the likelihood that different testers will run the test in entirely different ways.

Everyone was on the same page

This was the first experience I had ever had where developers not only read the test cases without needing to be coaxed into it, but also edited and added to the scenarios. This was probably due to the easy-to-edit nature of the wiki, the easy-to-understand format of the test cases and our good relationship with our developers.
The first time we tried this approach, we wrote a page of test cases that were relevant to a particular developer’s assigned feature. We added a series of questions that we had about the feature to the same wiki page. Then we sent the URL for that page to the developer. The developer read the test cases, answered the questions and added a few more test cases of his own. We had some follow-up questions to some of his responses, so we went to his office to discuss them in person. While discussing, we were able to quickly update the test cases in the wiki page from his computer, and add additional notes that we could expand into new test cases later on.

In a later release, a different developer had not seen the Given-When-Then format before and was initially confused. I went to his office to explain it, and after about 30 seconds of explanation he easily figured it out. So we went through the test cases together, and he modified them and added new ones as we discussed them. Many of the test cases were based on assumptions and he was able to quickly validate them and correct them as necessary.

Using a wiki for test plans had unexpected benefits

I used to write detailed, 20 page test plan documents that nobody ever wanted to read. After realising this was a pointless exercise, these test plans were eventually whittled down to two-paragraph summaries on the company intranet noticeboard. When we started using FitNesse to store test cases, we needed a central document to tell us which tests we would be using for each release, and where they were located. So creating a test plan document in the wiki to hold this information seemed like a sensible thing to do. As it was easily editable, it encouraged me to update it with important information as the release progressed.
The test plans became a guide to the testers for what was testable, what work we had done and what work was remaining. We added a list of high-risk feature areas to the plan and it became a central battle plan for regression testing. The latest version is easily shared with the team just by sending the URL.

Scalability could be an issue

We made a manual test fixture to mark manually run test cases as passed or failed. However, each of these results has to be individually reset, so it wasn't really a scalable solution. It has worked okay so far due to the small number of tests and the fact that we only use a subset of the tests for regression, but I expect it could become a problem as more tests accumulate.

Automation integration was surprisingly disadvantageous

FitNesse is designed to be hooked into an automation tool, such as Selenium. The idea is that tests written in decision tables or even written in sentence form can be run as automated tests. At first it’s a bit tough to get your head around the concept that plain text written in a wiki can magically turn into an automated test. Basically what’s happening is that the automation is using keywords from the wiki text as method names and parameters in the code behind. So the wiki text is like a series of commands and data inputs that is fed into the automated test code. For more information, check out FitNesse’s two minute example.
We tried using it with Selenium and ran into a few issues. First of all, it was a real pain to set up. This was mostly due to lack of clear information about how to set up FitNesse and Selenium with a C# codebase. Second of all, writing tests to suit the FitNesse model turned out to be pretty time consuming. We did get tests working in the end, but I don’t think it suited our style of testing very well. But at least now we have the capability of running automated tests this way, and we can use that if we ever find a situation where it could be an advantage.

More plus than minus

Overall I’ve been pretty happy with this wiki experience and I’m going to stick with it and keep evaluating. Our current plan is to take test cases out of the wiki and add them to our automated test suite, which runs nightly (independent of FitNesse). For now, I think this may suit us better than running tests from FitNesse itself. This may help address the scalability issues too.

Source: Purple Box Testing


Testing News - Nov 30th

  • GUIdancer test tool to become Eclipse project and Open source : Long-time Eclipse member Bredex plans to release core components of its GUIdancer test automation tool as an open source project hosted by the Eclipse Foundation. Source: News
  • Microsoft is adding private beta group support to the Windows Phone 7 Marketplace, allowing developers to do limited testing of their applications before submitting for full app store inclusion. Source: News
  • National Institute of Standards and Technology (NIST) researchers recently released a new and improved bug catching system designed to more efficiently find software glitches during the development process. NIST reported in 2002 that software bugs cost the economy nearly $60 billion even though 50 percent of software development budgets are devoted to testing. Testing every possible variable is not practical. A system with 34 on and off switches, for example, would require 17 billion tests. Source: News


Software Testing! good one - Jagan

Software Testing - A good read….not intended against any group as such, but still entertaining ... read on ...

A university scholar, Mr. John Smith, approaches his friend, a software-testing guru, telling him that he has a Bachelor's in programming and would now like to learn software testing to complete his knowledge and find a job as a software tester. After sizing him up for a few minutes, the software-testing guru told him, "I seriously doubt that you are ready to study software testing. It is a serious topic. If you wish, however, I am willing to examine you in logic, and if you pass the test I will help teach you software testing."

The young man agrees.

The software-testing guru holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

The young man stares at the software-testing guru. "Is that a test in logic?" The software-testing guru nods.

"The one with the dirty face washes his face," he answers wearily.

"Wrong. The one with the clean face washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. So; the one with the clean face washes his face."

"Very clever," says Smith. "Give me another test."

The software-testing guru again holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"We have already established that. The one with the clean face washes his face"

"Wrong. Each one washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. So; the one with the clean face washes his face. When the one with the dirty face sees the one with the clean face washing his face, he also washes his face. So each one washes his face"

"I didn't think of that!" says Smith. "It's shocking to me that I could make an error in logic. Test me again!"

The software-testing guru holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"Each one washes his face"

"Wrong. Neither one washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. But when the one with the clean face sees that the one with the dirty face doesn't wash his face, he also doesn't wash his face. So neither one washes his face."

Smith is desperate. "I am qualified to study software testing. Please give me one more test."

He groans when the software-testing guru lifts his two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"Neither one washes his face"

"Wrong. Do you now see, John, why programming knowledge is an insufficient basis for studying software testing? Tell me, how is it possible for two men to come down the same chimney, and for one to come out with a clean face and the other with a dirty face? Don't you see?"

Source: Jagan's Eyes "What I See"


Automation maturity - what should a Test Manager focus on? - AnitaG

Automation is frequently a passionate debate, usually around how much to automate and whether it is effective. But are test managers prepared for the effects of automation as it grows? Instead of focusing on whether or not to automate, or by how much, let's focus on what having automation on a test team means for the manager, assuming the team has already decided the correct balance of what needs to be automated and what doesn't (and in what priority).

The infancy of automation: Initially, a team may say they have automation. I've learned that when I drill down on this, they don't necessarily have test cases automated; instead they have only written tools to help with parts of the testing process, like installation/setup/deployment tools or tools for emulating inputs due to a dependency on an unreliable source. There is a difference between writing tools and writing automation (although that line can blur when describing a test harness or execution engine).

Establishing the automation report: As teams get better at automation and their automation grows, managers can benefit by pulling reports from the automation. This is an extremely necessary result of having automation and one that a manager should focus on. At times, I have started generating the reports before the automation is written, just to help the team focus on what needs to be done. This could be as simple as listing the builds, the BVT (Build Verification Test) pass rate, and the percentage of BVTs that are automated. One can argue that BVTs should always pass 100%, but let's save that discussion for another time. As the team completes automation for BVTs, I start reporting on functional automation and code coverage numbers.

A significant location change:  As the team continues to write automation, the process of them running their growing suite of automation on their office machines starts becoming a bottleneck.  It is key that the Test Manager thinks ahead and plans for this with the beginnings of an automation lab.  Continuing to run automation in testers' offices takes up machine time and limits the amount of coverage that could be achieved.  Using a lab will allow for running your automation on different software and hardware environments to catch those bugs that couldn't be caught by just running on the same machine day-after-day in a tester's office.  The automation lab also makes reproducible results an achievable goal because every day the test automation can run on the same group of machines.

The overgrown automation suite: When I have a team that is mature in its processes around writing automation, there are a few different issues that need focus or the automation efficiency starts to suffer. The two biggest problems I have seen are legacy test automation and analysis paralysis.

Legacy automation is automation code that was written years ago by someone who is probably not on the team anymore. It tests key features in the product, or at least that's what everyone thinks. The team is usually afraid to change or affect this automation in any way, out of concern that coverage will diminish. But the automation may also make running the whole suite very long. If you are lucky it will always pass, because investigating a failure can become difficult if nobody on the team knows the code very well. Also, if it always passes, it is questionable whether the automation is truly still testing things correctly. Is it cost effective to investigate this automation, verify its correctness, and modernize it to current technologies? That depends on many factors within the team.

Analysis paralysis is when too much automation is run on too many machines too frequently.  Is that really possible?  Yes it is.  When that happens and the results come back as anything less than 100% passing (which is always the case), the test team will have to focus on why the automation failed and if it was a bug in the automation code or the product code.  Of course that's what they would do.  That's part of the expectations when having automation.  The key point here is that too much of a good thing can overload the test team to the point that they are blocked from doing anything else because all their time is spent investigating automation failures.  But if they don't investigate the failures to understand why the automation results aren't at 100%, is that ok?  If your automation passes at less than 100% or bounces around a lot, is it still beneficial to run it?  Are you missing key bugs?  Those are the questions I ask when in situations like this.  I have lots of opinions about this that I will save for later blogs.

I have experienced teams at these different levels of automation development. I've managed teams with no automation, teams with too much automation, and teams that ran automation labs and produced results daily. There's no single solution that works. But as a manager, I found that staying aware of how much automation my team has, and watching closely whether the automation is a benefit or a burden, is key to allowing the automation to be effective in improving product quality.