Tuesday

Software Testing! good one - Jagan



Software Testing - A good read….not intended against any group as such, but still entertaining ... read on ...

A university scholar, Mr. John Smith, approaches his friend, a software-testing guru, telling him that he has a bachelor's degree in programming and would now like to learn software testing to complete his knowledge and find a job as a software tester. After sizing him up for a few minutes, the software-testing guru tells him, "I seriously doubt that you are ready to study software testing. It's a serious topic. If you wish, however, I am willing to examine you in logic, and if you pass the test I will help teach you software testing."

The young man agrees.

The software-testing guru holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

The young man stares at the software-testing guru. "Is that a test in logic?" The software-testing guru nods.

"The one with the dirty face washes his face," He answers wearily.

"Wrong. The one with the clean face washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. So; the one with the clean face washes his face."

"Very clever" Says Smith.  "Give me another test"

The software-testing guru again holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"We have already established that. The one with the clean face washes his face"

"Wrong. Each one washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. So; the one with the clean face washes his face. When the one with the dirty face sees the one with the clean face washing his face, he also washes his face. So each one washes his face"

"I didn't think of that!" Says Smith. " It's shocking to me that I could make an error in logic. Test me again!."

The software-testing guru holds up two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"Each one washes his face"

"Wrong. Neither one washes his face. Examine the simple logic. The one with the dirty face looks at the one with the clean face and thinks his face is clean. The one with the clean face looks at the one with the dirty face and thinks his face is dirty. But when the one with clean face sees that the one with the dirty face doesn't wash his face, he also doesn't wash his face So neither one washes his face".

Smith is desperate. "I am qualified to study software testing. Please give me one more test."

He groans when the software-testing guru lifts his two fingers. "Two men come down a chimney. One comes out with a clean face and the other comes out with a dirty face. Which one washes his face?"

"Neither one washes his face"

"Wrong. Do you now see, John, why programming knowledge is an insufficient basis for studying the software testing? Tell me, how is it possible for two men to come down the same chimney, and for one to come out with a clean face and the other with a dirty face? Don’t you see?"

Source: Jagan's Eyes "What I See"

Monday

Automation maturity - what should a Test Manager focus on? - AnitaG


Automation is frequently the subject of passionate debate, usually around how much to automate and whether it is effective. But are test managers prepared for the effects of automation as it grows? Instead of focusing on whether or not to automate, or by how much, let's focus on what having automation on a test team means for the manager, assuming the team has already decided the correct balance of what needs to be automated and what doesn't (and in what priority).

The infancy of automation: Initially, a team may say they have automation. I've learned that when I drill down on this, they don't necessarily have test cases automated; instead, they have only written tools to help with parts of the testing process, like installation/setup/deployment tools or tools for emulating inputs due to a dependency on an unreliable source. There is a difference between writing tools and writing automation (although that line can blur when describing a test harness or execution engine).

Establishing the automation report: As teams get better at automation and their automation grows, managers can benefit by pulling reports from the automation. This is an essential result of having automation and one that a manager should focus on. At times, I have started generating the reports before the automation is written, just to help the team focus on what needs to be done. This could be as simple as listing the builds, the BVT (Build Verification Test) pass rate, and the percentage of BVTs that are automated. One can argue that BVTs should always pass 100%, but let's save that discussion for another time. As the team completes automation for BVTs, I start reporting on functional automation results and code coverage numbers.

A significant location change: As the team continues to write automation, running the growing suite on their office machines starts to become a bottleneck. It is key that the Test Manager thinks ahead and plans for this with the beginnings of an automation lab. Continuing to run automation in testers' offices takes up machine time and limits the amount of coverage that can be achieved. Using a lab allows you to run your automation on different software and hardware environments to catch the bugs that couldn't be caught by running on the same machine day after day in a tester's office. The automation lab also makes reproducible results an achievable goal, because the test automation can run every day on the same group of machines.

The overgrown automation suite: When I have a team that is mature in its processes around writing automation, there are a few issues that need focus or automation efficiency starts to suffer. The two biggest problems I have seen are legacy test automation and analysis paralysis.

Legacy automation is automation code that was written years ago by someone who is probably not on the team anymore. It tests key features in the product, or at least that's what everyone thinks. The team is usually afraid to change or affect this automation in any way out of concern that coverage will diminish. But the automation may also make running the whole suite very long. If you're lucky, it always passes, because investigating a failure can become difficult when nobody on the team knows the code very well. And if it always passes, it is questionable whether the automation is truly still testing things correctly. Is it cost effective to investigate this automation, verify its correctness, and modernize it to current technologies? That depends on many factors within the team.

Analysis paralysis is when too much automation is run on too many machines too frequently. Is that really possible? Yes, it is. When that happens and the results come back as anything less than 100% passing (which is always the case), the test team has to work out why the automation failed and whether it was a bug in the automation code or the product code. Of course, that's what they should do; it's part of the expectation when you have automation. The key point here is that too much of a good thing can overload the test team to the point that they are blocked from doing anything else, because all their time is spent investigating automation failures. But if they don't investigate the failures to understand why the automation results aren't at 100%, is that OK? If your automation passes at less than 100%, or bounces around a lot, is it still beneficial to run it? Are you missing key bugs? Those are the questions I ask in situations like this. I have lots of opinions about this that I will save for later blogs.

I have experienced teams at these different levels of automation development. I've managed teams with no automation, teams with too much automation, and teams that ran automation only in labs and produced results daily. There isn't one solution that works everywhere. But as a manager, I found that staying aware of how much automation my team has, and watching closely whether the automation is a benefit or a burden, is key to allowing the automation to be effective in improving product quality.

Friday

Testing News - 25th Nov


  • Growing Demand for Software Testing in the Banking & Finance and Insurance Industries in Indonesia: Maveric Systems Survey. Source: News
  • AppLabs, a city-based software testing and quality management company, today announced that it has partnered with TurnKey Solutions, a provider of testing solutions for ERP, CRM, and custom software application environments. The partnership will strengthen AppLabs' ERP testing (SAP, Oracle, Siebel, and PeopleSoft). Source: News
  • The independent TMMi testing methodology, which is an alternative to Capgemini Sogeti's Test Process Improvement (TPI) methodology, will have its level 5, known as optimisation, completed next week. Source: News
  • The five levels of TMMi are:
    1. Initial
    2. Managed
    3. Defined
    4. Measured
    5. Optimisation
  • Brian Osman, Knowledge Engineer at Software Education, has become the first person in Australasia to gain the full suite of International Software Testing Qualifications Board (ISTQB) advanced certifications.- Source: News

Wednesday

XML DOM and XPath Snippets - Asiq


In this article I am going to explain XML DOM methods and XPath queries.

The Example XML Document Used For This Article
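A minimal document consistent with the snippets below (the element names, the id values, and the gender attribute are inferred from the queries; the data itself is illustrative):

<Employees>
  <Employee id="001">
    <name gender="male">Sam</name>
  </Employee>
  <Employee id="002">
    <name gender="male">Peter</name>
  </Employee>
</Employees>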



#1: Finding Elements by Attributes
If you want to access the name of employee two, write an attribute filter condition in your XPath query.
/Employees/Employee[@id='002']

Code:
gStrOrPath = "C:\Test.xml" ' Path of your XML
Set gObjXmlOR = CreateObject("Microsoft.XMLDOM")
gObjXmlOR.Async = False
gObjXmlOR.Load(gStrOrPath)
StrXQuery = "/Employees/Employee[@id='002']" ' Attribute filter condition in the XPath query
Msgbox gObjXmlOR.documentElement.selectSingleNode(StrXQuery).Text


The msgbox displays the child element's text (i.e., Peter).

#2: Finding Elements with Text
Here I have written an XPath query to find the employee Peter's gender using his name.
/Employees/Employee/name[.='Peter']

Code:
StrXQuery = "/Employees/Employee/name[.='Peter']" ' Text filter condition
Msgbox gObjXmlOR.documentElement.selectSingleNode(StrXQuery).Attributes.getNamedItem("gender").Text


The msgbox displays Peter's gender as given in the XML.

#3: Checking Whether an Element Has Children
To find out whether an element has child nodes, use the XML DOM hasChildNodes method.
As per the example XML document, I check whether the Employee node has children.

StrXQuery = "/Employees/Employee[@id='002']"
Msgbox gObjXmlOR.documentElement.selectSingleNode(StrXQuery).hasChildNodes


#4: Checking Errors in XML

Set gObjXmlOR = CreateObject("Microsoft.XMLDOM")
gObjXmlOR.async = False
gObjXmlOR.load("Test.xml")
If gObjXmlOR.parseError.errorCode <> 0 Then
  MsgBox("Parse Error line " & gObjXmlOR.parseError.line & ", character " & _
    gObjXmlOR.parseError.linePos & vbCrLf & gObjXmlOR.parseError.srcText)
End If
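For completeness, selectSingleNode returns only the first match; the companion selectNodes method returns every match as a collection. A short sketch along the same lines (the variable names follow the snippets above):

Set gObjNodes = gObjXmlOR.documentElement.selectNodes("/Employees/Employee/name")
For Each gObjNode In gObjNodes
  Msgbox gObjNode.Text & " - " & gObjNode.Attributes.getNamedItem("gender").Text
Next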

Source: XML DOM and XPath Snippets - Asiq Ahamed

Sunday

Keyword driven framework architecture in QTP - Bibek



Framework: "A hypothetical description of a complex entity or process" - WordWeb.
Here, the word hypothetical makes it clear that a framework is based primarily on surmise rather than adequate evidence. There are no rules and standards for test framework development. It varies from organization to organization and from team to team. The project's internal architecture provides a great source of information for organizing the components of our framework. Testing components need to be well organized to accomplish the task effectively, efficiently, and in a systematic order.
We need to develop the framework in a way that accomplishes the task without much human intervention. The heart of a "keyword driven test automation framework" is the common functions that reside within the function library, and the keywords act as its brain. The framework must be flexible enough to run with the desired keyword from the pool of available keywords.
Well, it's been my passion to play with electronic chips since my childhood. I could even survive in a desert if I had electronic scraps and a screwdriver ;) Hence, I decided to develop a flexible framework based on the architecture of the RADIO that has been rusting in my bedroom for years.
The components used and their analogy with the RADIO:
QTP has its own limitation for loading resources: it does not allow loading a .QRS file at run time, and here it goes off the track from the radio tuning mechanism. Once a radio is turned on, we tune to different stations, adjust the volume, and rotate or pull the antenna up and down for clarity.
Here we do just the opposite: we first do the appropriate settings and then proceed with the run.
How the keyword driven framework works:
1. Plug in to the environment.
2. Do a little bit of setting in the test_environment settings file; e.g., paths of resources, email configuration, project URL, expected time, etc.
3. Search and select appropriate generic keywords from the keyword pool.
4. Go.
Radio analogy:
1. Plug in to the switch board.
2. Search for a station.
3. Set the volume and adjust the antenna.
4. Turn on.
(Normally step 4 comes before step 2.)
1... 2... 3... Go...! That's it.
Details of the component architecture are as follows: [Note: the architecture is based on my own analogy with the RADIO; if you find it uncomfortable, kindly scold me :) with your invaluable suggestions]
Keyword driven framework architecture in qtp
1. Start.Exe: Starts the tool using AOM (the QTP Automation Object Model) and passes control to the Driver (see the sketch after this list).
2. Application Scenarios: Test plan, test cases, and test steps.
3. Test Object Description: Pool of test object properties and descriptions.
4. Resources: Repository files for application dependent functions.
5. MyTest: Where the driver script resides.
6. Session Manager: Manages the session of the test run and helps in recovery if a run is unsuccessful due to some unforeseen event.
7. Test Environment_Config Settings: Test run settings (application specific).
8. Func Library: Common functions and application dependent functions, kept separately.
9. Recovery Files: .QRS or custom recovery functions.
10. Run Results: Application run results.
11. Help Files: Help files and framework documentation.
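For illustration, a minimal sketch of what a Start.Exe-style launcher might do through AOM (the test path is a placeholder, and error handling is omitted):

Set qtApp = CreateObject("QuickTest.Application") ' QTP Automation Object Model
qtApp.Launch                                      ' start the tool
qtApp.Visible = True
qtApp.Open "C:\Framework\MyTest"                  ' open the driver test (placeholder path)
qtApp.Test.Run                                    ' pass control to the Driver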
Following are my tweets so far (on framework development):
Automation framework rule #1: If it's not simple, it won't work! If it's simple, it's fun to use.
Automation framework rule #2: Achieving 100% automatic tests is an unreachable goal; never dream of it.
Automation framework rule #3: Make it easier to discover the errors that occur.
Automation framework rule #4: Focus on stability; the more stable the design, the lower the maintenance cost.
You must have realized by now that a framework is just a wrapper around a complex internal architecture, there to give a great drive with less interruption. Turn on your radio, because you need not know how it plays. Stakeholders should play with our flexible framework just as we tune and play our radio.
For more interesting stuff on QTP, see the source below.

Source: QTP Lab:- A touch of madness!

Thursday

Different ways to schedule tests from Quality Center

I had to research a lot to do this; I hope it helps others doing something similar.

Scenario: Schedule tests from Quality Center.
Solutions: There are different ways to achieve this, depending on our motive for scheduling the tests.

Solution 1: Schedule test sets directly from Quality Center

1. Log in to Quality Center.
2. Navigate to the Test Lab module.
3. Choose the test set that you want to schedule and click on the Execution Flow tab.
4. Right-click on the test that requires configuration of the Time Scheduler and click Test Run Schedule.
5. In the Run Schedule window, select the Time Dependency tab. The time and date of execution can be configured from this tab.
6. Once the above step is complete, you can see the time dependency added to the relevant test cases.
7. Once Run All is clicked from the Automatic Runner dialog, the test status will change to Waiting, and QC will fire the tests to be run at the scheduled date and time.

To be used when: running tests nightly (usually after work hours).
Challenges: This is a good solution only if scheduling is a one-time activity and not a repetitive process.


Solution 2: Schedule tests using the Quality Center "RunTestSet" utility.

Using the OTA (Open Test Architecture) API and the Windows scheduler to schedule tests:
1. Save a copy of the "RunTestSet" utility on the machine you want to run the tests from.
2. Go to the Windows Scheduler and click the Add Scheduled Task item.
3. The task wizard window is displayed.
4. Click the Browse button and select the path where the "RunTestSet" utility is saved.
5. Define the task name and specify the periodic range: daily or weekly.
6. Fill in the NT user authentication information.
7. Select the "Open advanced properties..." checkbox and press the Finish button.
8. In the Run edit box, fill in the RunTestSet parameters: server, project, user, password, test set, and optionally the host or the host group.
E.g.: "C:\RunTestSetScheduler\RunTestSet.exe" /s:http://QCURL/qcbin /n:Domain /d:Project /u:user /p:Password /f:"Folder structure of QC" /l /m:mail id
9. Create a batch file to make this activity a one-click process.

To be used when: running tests repeatedly, using the combination of the Windows scheduler and the "RunTestSet" utility.
Challenges: This is a good solution only if the machines are not locked and are in an active state.
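For those who prefer scripting the same thing directly against OTA rather than using the utility, here is a minimal VBScript sketch of kicking off a test set (the server URL, credentials, folder path, and test set name are placeholders, and error handling is omitted):

Set tdc = CreateObject("TDApiOle80.TDConnection")
tdc.InitConnectionEx "http://QCURL/qcbin"
tdc.ConnectProjectEx "Domain", "Project", "user", "password"
Set tsFolder = tdc.TestSetTreeManager.NodeByPath("Root\MyFolder") ' placeholder folder path
Set tsList = tsFolder.FindTestSets("MyTestSet")                   ' placeholder test set name
Set scheduler = tsList.Item(1).StartExecution("")                 ' "" = run on this host
scheduler.RunAllLocally = True
scheduler.Run ' asynchronous; poll scheduler.ExecutionStatus to wait for completion
tdc.Disconnect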


Solution 3: Schedule tests using the Quality Center "RunTestSet" utility and an unlocking utility.

1. Use a utility like the "Logon utility".
2. Create a script file that will run this software on the intended machines.
The contents of the script file will be something like this: c:\users\Logon <-u user> -p password
3. From the local machine, run this script, which will first unlock the system.
4. Then use the "RunTestSet" utility just as before, and the tests are scheduled successfully.

What is achieved by this:

• Able to schedule tests from any machine.
• Can run tests either on the local machine or on different remote machines.
• Can be used to schedule the tasks for a single instance or multiple instances.
• Can run tests regardless of whether the machine is in a locked state.
• Can send a mail to intended users once the execution is complete.

Challenges: The only challenge here is that the machine needs the correct credentials and must not be logged off or shut down.

Wednesday

Off the beaten path - Test Automation that can vary (and learn)... - John Hammink


The problem with most software test automation is essentially twofold: for the most part, it is only capable of following the same path, over and over again, and it is incapable of ever learning. It's as dumb today as it was yesterday. That said, it's just not as classy as it could be.

What, it can be asked, is such repetitive test automation actually useful for? Well, most importantly, for verifying that something doesn't get broken as new features are added to our software; we run our tests to give us a sense of confidence in what's already there.

Worth noting, though, is the fact that we nearly always need to pair our automated tests with hands-on exploratory testing. Except that it never really happens like that. The hands-on exploratory testing gradually gives way to the Manual Monkey Test as testers and managers lose confidence in those unique skills of perception and hunch that make us human, and instead seek something quantifiable, repeatable, and reproducible.


So how can we turn this around? Boredom with repetitive work begets automation; automation begets the need for exploratory testing, which somehow, through time pressures, turns into more of the same... In the meantime, the defects to be found live "somewhere" off the path of this tedious nightmare.
There are no complete solutions presented here, but I hope you might get some idea of the possibilities available.

Well, one thing to do is to make our test automation more intelligent. Exploratory testing essentially involves two skills previously thought to be uniquely the domain of Homo sapiens: 1) the ability to randomize inputs, and 2) the ability to assess and learn whether a given output is appropriate for a non-linear input. Let me elaborate.

One of the great benefits of open-source software for test automation (aside from not being bound to a proprietary language) is the ability to leverage the wealth of extension libraries that come with the open-source languages, while being supported for free by a massive user community of volunteers. Tools like Squish and Selenium use Python (among others); Nokia's TDriver uses Ruby.

Without going into too much low-level detail ;-), both Python and Ruby have randomizer functions. In Ruby, one can use the rand() function with an integer as an argument; for example, rand(7) returns a random integer between 0 and 6. Python offers the equivalent in random.randrange(7). Here's a recipe in Ruby for a simple sort and shuffle of a deck of cards, producing n log n variable swaps:
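A minimal version of such a shuffle, assuming the deck is simply an array of 52 values:

# Build a deck as an array of 52 card indices (the representation is assumed).
deck = (1..52).to_a
# Sorting by a random key gives an O(n log n) sort-and-shuffle.
shuffled = deck.sort_by { rand }
puts shuffled.inspect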









Now, this won't be the fastest way if you're dealing with a very large list, but for generating a small set to explore alternative paths, it could well suffice.

Now this randomizing can apply to just about any input that comes as an index, and that means just about anything: selected items in a list, buttons in toolbars, parts of a string or a numerical sequence. Once we are randomizing, we are varying inputs. But how do we know when to randomize?

Most users out there are going to use a software application in a given or fairly static way, with just enough steps to suit their purpose. Configure the music player, cue it up, and play the song. But every once in a while, they might deviate from that sequence. So knowing when to randomize means knowing how often, based on probability, to do something completely different.

By way of a practical example: Cucumber has become a popular way to express automated tests in plain language. A Cucumber test may read something like this:

Given the MusicPlayer application is started
When I press Options softkey
When I select Songs from menu
And I Select the Song number 1
Then Now playing view is opened correctly

Each line in that block maps to a function implemented in the core language below it. For example, an illustrative Ruby step definition might look like this (the select_from_menu helper is an assumed application wrapper):
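When /^I select Songs from menu$/ do
  # select_from_menu is a hypothetical helper wrapping the application's menu API.
  select_from_menu("Songs")
end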


Once we pass a certain threshold (e.g., the function has been accessed n times), we become two or three times as likely to deviate from our original number and choose a random index. Doing this effectively requires a trick, though: the ability to visualize a software system as a state machine in 4D. And if we randomize at the function level, our Cucumber test writer gets the benefit of that randomization without having to do anything differently.
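A sketch of that threshold idea in Ruby (the names, the threshold, and the deviation ratio are all assumptions):

# Return the scripted index until the step has run `threshold` times,
# then occasionally substitute a random index to explore alternative paths.
def choose_index(scripted, item_count, call_count, threshold = 10)
  return scripted if call_count < threshold
  rand(3).zero? ? rand(item_count) : scripted  # deviate roughly a third of the time
end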
Once our scripts start to randomize, however, fixed answer sets for our test runs will no longer suffice; our tests will need the ability to be trained, to learn, and to guess at the correct answer based on previous history. Fortunately, our open-source languages have the tools to allow us to do just that.

Bayesian classifiers are already in common use in email spam filtering. Python, for example, offers the PEBL and Classifier libraries; Ruby offers the Bishop and Classifier gems. Training the Classifier gem works along the following lines (note: NOT TESTED):
require 'rubygems'
require 'classifier'
classifier = Classifier::Bayes.new('Song', 'Not_Song')
classifier.train_Song('%r{.+\.(?i)mp3}')
classifier.train_Song('%r{.+\.(?i)3gp}')
...
classifier.train_Not_Song('%r{.+\.(?i)jpg}')
...
And the good stuff - where the real demonstration of learning happens - is here:
classifier.classify "bubba_chops.jpg"
#=>"Not_Song"
classifier.classify "song.3gp"
#=>"Song"
classifier.classify "song.m3u"
#=>"Song"
In cases where the script gets it wrong, you have to train it. But after more and more iterations, it will start to make the correct decision in more and more cases.
This is where your script can think, and make decisions as you do.

And that's no monkey business.



Source: John Hammink- on Quality

Monday

Skillset - Web Automation Engineer



Here is what a wireless company wants as a skillset in a Web Automation Engineer:


Job Description:


As an Automation Engineer, you will work within the Quality Assurance department and be responsible for developing automated functional test frameworks for Web, database, and handheld device applications.

Responsibilities of Automation Engineer:
  • Participate in the product life-cycle from design to commercial release, following the Agile development methodology.
  • Develop and implement automation strategies, test plans, standards and processes.
  • Develop frameworks and collaborate with QA and developers in the development of automated testing scripts for our products.
  • Work with technical support, enterprise customers and core development engineers to address product scalability, usability, reliability, functionality, and performance-related issues.
  • Write SQL queries to verify Web page and custom report accuracy.

Requirements of Automation Engineer:
  • 3+ years' experience in software development, with work across the entire product lifecycle.
  • A strong technical background (quantitative or engineering degree preferred).
  • Experience with SQL.
  • Some programming experience; Python or Java preferred.
  • Experience with test automation frameworks like Selenium RC or WebDriver.
  • Comprehensive understanding of QA methodologies and how to adapt them to the specific needs of our organization and products.
  • Excellent verbal and written communication skills.

Friday

What are companies looking for in Testers?



I always wanted a place where I could simply record the skillset companies are looking for today. This will help us perform the Capabilities Assessment explained in the earlier post Five-gems-for-tester-to-groom-in.

(This is from various job offerings, and I'll try to update similar posts regularly in a more categorical manner.)


RESPONSIBILITIES INCLUDING BUT NOT LIMITED TO

  • Hands-on experience in preparation of test scenarios, test cases, and test scripts
  • Excellent oral and written communication skills
  • Knowledge of the Software Development Lifecycle and the Software Testing Lifecycle
  • Knowledge of databases (MS SQL Server and Oracle)
  • Ability to write SQL statements
  • Knowledge of Selenium or similar open source testing tools an added plus
  • Helps define scope and project timelines for a testing cycle
  • Assists in reviewing/managing bug reports to ensure reproducibility and quality
  • Experience creating automation test scripts using tools like QTP
  • Proficient in using HP QTP for test automation (a must), with a minimum of 2+ years hands-on
  • Experience working with an onshore/offshore outsourcing model a big plus
  • Good analytical skills
  • Logs bugs into the bug tracking system
  • Develops test plans based on test strategy
  • Develops scripts, utilities, simulators, data sets, and other programmatic test tools as required to execute test plans
  • Competent in analyzing requirements and identifying test cases
  • Strong thought leadership/innovation and a passion for metrics-driven continuous improvement
  • Web services testing experience
  • Testing process and standards test tools, including version control
  • Framework experience
  • Good individual contributor as well as a team player, with the ability to build strong partner confidence
  • Experience on a large financial application a plus
  • Testing tools: Quality Center, Performance Center, WinRunner/VuGen, QTP, load generators
  • Knowledge of web page composition (static vs. dynamic elements, browser behavior, etc.)
  • An understanding of networking and web technologies such as HTML, JavaScript, JSP, ASP, Flash, etc.
  • Ability to learn and champion Automated Acceptance Test Driven Development (ATDD)
  • Experience executing load and performance tests against web applications, web services, and SQL Server
  • Understanding of web protocols including HTTP, HTTPS, TCP/IP, and DNS
  • Unix skills and the ability to perform secure file transfer functions

Thursday

Testing News - 10th Nov


  • Worksoft® and HBO Consulting Partner to Provide Test Automation Solutions in South Africa. Source: News
  • Selenium IDE v1.08 released three days ago. Source: Selenium.org
  • HP released a new patch for 64-bit application testing using QTP 11. Source: HP
  • QAI to Organize the 10th Annual STC, to Focus on Testing 3.0 - From Transaction to Transformation. Source: Businesswireindia

Monday

Test Execution and Defect Tracking - SQA Campus





This phase begins with the execution of the Test Design described in the previous phase of the STLC. It includes:
1. Execution of test cases manually.
2. Execution of automation scripts.
3. Updating the test case status (pass/fail) after comparing the expected result with the actual result.
4. Defect reporting and tracking.
5. Re-testing of defect fixes.
6. Regression testing of modules.
7. Sanity testing on the current build.
During this phase, we comply with the guidelines set for us. Here we:
1. Execute first the test cases that are meant to pass.
2. Execute the test cases priority-wise.
3. Execute interrelated test cases group-wise.
4. Execute the test cases that are written to break the functionality.
Once we go through the test case execution process, some test cases may fail and some may pass. It is of the utmost importance to track the failed test cases. This situation creates the need for a defect management tool to track those failures (bugs). To meet this need, we use defect tracking tools like Bugzilla, Mantis, or Jira, as available.
Now, you might wonder what happens once a bug is found. Well, once a bug is found in the system, it takes on its own life cycle. A bug goes through many stages before it gets fixed. Here, we elaborate on the Bug Life Cycle a bug goes through, depicted in the following diagram:
Now we explore the different stages of a bug from its inception to its closure. The life of a bug starts when it is logged and ends when it is fixed by the developer and verified and closed by the software tester. The following statuses are associated with a bug:

To continue reading, follow the link below:

Thursday

Only Testing News



This series of posts will be continuously updated with news that affects the testing world and testers around the globe.

3rd November 2010
• Revolution IT, Australia's leading application quality management company, has acquired Adelaide-based specialist software testing firm Independent Test Services Pty Ltd. Source: News
• U.S. software giant Microsoft Corp. will establish a research and development center in the Skolkovo innovation hub near Moscow. Microsoft is set to establish an information technology testing center that would help IT specialists test their software products in various conditions. Source: News
• AppLabs (http://www.applabs.com), the world's largest software testing and quality management company, today announced its plans to hire over 100 professionals in the UK during the next 6 months. The hiring plan is backed by the surge in demand for the firm's software testing services (http://www.applabs.com/html/test-center-of-excellence.html) in the region and encouraging business prospects for the next year. Source: News
-----------------------------------------------------------------------
4th November 2010
• The theme for EuroSTAR 2010 is "Sharing the Testing Passion". EuroSTAR 2010 is set to take place in Copenhagen from Nov 29 to Dec 02. Testers from across the globe will listen, learn, and engage with the leading minds in the industry through intensive tutorial sessions, interactive workshops, and keynote presentations on various test topics. Source: News
-----------------------------------------------------------------------

Wednesday

Test automation goes rogue - Bad Boys of Quality

Test automation tools are much like a hunting gun: there are legitimate uses for it, but you can also use it to do nasty things. This post is about Skype, but the same kinds of issues can exist in any other application.
Let's imagine that I'm an evil h4x0r and I want to find new ways to extend my bot network. Everyone has become more and more careful with e-mail attachments, so spamming is not an option anymore. Let's take a quick look at Skype. Could we use it to automate our malware distribution? If I tried to register 1 million new users from the web page, I'd have to give a correct Captcha for each one of them. But if I do the registration through the Skype application, the registration screen asks for no Captcha. When I registered, I also noticed there isn't any e-mail confirmation.



If I already have a small bot network, I can use those weaknesses to register plenty of Skype users. The easiest way to do that is to use automation. My proof of concept was done with AutoIt, which is a free automation tool. If the desktop application doesn't have the same bot-prevention systems as the web application, a small automation script can create new users. If you are able to create users, you are also able to do any other task. So my bot could start to call people, add contacts, send files, and so on. A chat could start with: "Hello. I am Jack Nicholson from the Skype security contact. We have noticed that you have a major security risk, which can be fixed by installing the patch I'm going to send you."
Skype has two major security-related failures at registration through the application:
1. No Captcha, which would have prevented automated registration.
2. No e-mail confirmation. Confirmation would have required exploiting the weaknesses of some free e-mail service.

Those two steps would increase the cost a lot, and a few hours of simple bot coding wouldn't be enough. I reported these as a security issue to Skype in mid-July.
And how short is the script that creates the new user? Well... here is my full proof of concept. It works only on my laptop and on machines with the same resolution and other visual settings. An attacker could make the script more generic with some work.
Run("C:\Program Files\Skype\Phone\skype.exe")
Sleep(5000)

MouseClick("left", 628, 437)
WinWaitActive("Skype™ - Luo tili") ; "Luo tili" is Finnish for "Create account"
Send("Evil Robot")
MouseClick("left", 500, 501)
Sleep(500)
Send("q1w2e3")
MouseClick("left", 794, 496)
Sleep(500)
Send("q1w2e3")
MouseClick("left", 485, 549)
Sleep(500)
Send("something@kiva-mesta.net")
MouseClick("left", 778, 549)
Sleep(500)
Send("something@kiva-mesta.net")
Sleep(500)
MouseClick("left", 1051, 678)
For more interesting articles on quality, see the source below:

Source: Test automation goes rogue - Teemu Vesala

Monday

Five Gems For A Tester To Groom In Professional Life - QAGuild


A testing career, like any other career in a software project, is a long journey toward a level of satisfaction. A tester needs to pursue his career objectives consistently during all stages of his career. Consistency in performance will help him sustain his current level, but to acquire a continuous growth plan he needs to think differently. A continuous improvement plan is required to maintain an upward graph in his career growth. An extra edge over his peers can be gained by adopting the following qualities, which not only help in career growth but also provide a strong belief in his own capabilities.

The qualities can be listed as below:

1. Capabilities Assessment: A tester needs to assess his current capabilities at regular intervals. Regular monitoring helps him analyze the trend in his capabilities growth chart. If his capabilities increase continuously, the trust his peers place in him will keep increasing. His seniors will also rely on him more for any new quality-related challenges that come up during different phases of a project. A regular trend also helps him project his future capabilities.
2. Optimization: How do you optimize your skills? Do you have a successful mandate for that? Do you realize its importance? Do you understand its benefits? All these questions will prompt you to understand the importance of optimizing your skills. It not only helps your professional growth but also enhances your knowledge stream, thereby providing internal satisfaction. The catalytic effect on assessing your potential is enormous once you start down this path. The best way to get a feel for it is through your peers, mentors, seniors, and friends.
3. Sustenance: Achieving a height is simple compared to sustaining it. The next height can be achieved only once you sustain your current potential; then you can strive for your next leap. Higher levels of capability can be tapped only by achieving them and, more importantly, sustaining them.
4. Strategize: There are two ways of growing. One is choosing a smooth road and driving on it without much effort or pain. The other is choosing a road with a lot of potholes, putting in extra effort, taking pains, and making your drive a smooth one. If you want to catch the eye of your peers and managers, choose the second path to demonstrate your hidden skills and talent. Managers will not hesitate to empower you once you win their trust once and for all.
5. Neutral: You need to maintain a neutral stride. Unbiased opinions and an unbiased working style pay off in the long run. "There is no shortcut to success" is a well-known saying; repeat it every day so that it becomes ingrained in your mind. Be open, neutral, and transparent in your work.
This is one of the best articles I have read; for more interesting articles, see the source below.

Source: Five Gems For A Tester To Groom In Professional Life - Jaideep Khanduja