Wednesday

Docker images for test automation

Here are some images that the test automation community would love:

Selenium
PhantomJS 2.0 / GhostDriver
Protractor
Cucumber
Appium
Chromium
Gatling
Serenity
Allure
NodeJS and Chimp
Jenkins + SonarQube + Nexus + Selenium Grid + GitLab
Microservices testing using RabbitMQ

For more repositories/images, see https://hub.docker.com/explore/

Thursday

Docker + Compose + Selenium Grid = Automation awesomeness!


I have been trying to get my hands dirty with Docker and Selenium for a while. What finally inspired me was a recent meetup where I saw some cool test automation reporting frameworks. No, I did not see Docker there, but when I researched Allure, the test reporting framework, I stumbled upon a cool video where the developer uses Docker, Selenium and Allure together:



Why Selenium Grid & Docker?

If you have been through the journey of CI (continuous integration), as an automation engineer you will know that building a reliable framework is a time-consuming challenge.
Docker has revolutionized our way of thinking about how you build a Selenium Grid: no more configuration management or provisioning machines. All you need is a VM that can run the Docker images as containers.

Contributors to this project who have made it a reality:
Matt Smith
Leo Galluci 


Selenium Grid has been around for a while and has matured with time. What it does really well is speed up your CI massively. How? By running tests in parallel. Anyone who has used the Grid before knows that maintaining the hub/node/OS/browser combinations is a challenge.
Bigger challenges still are:
  1. Managing the virtual machines / networked physical machines / VDI, etc.
  2. Keeping the Selenium server running correctly on them
  3. Debugging a node when something goes wrong
  4. Collating data and reporting
Solutions:
The answer to the above problems is what we are looking at, unless you want to spend some money on SauceLabs / BrowserStack / Rainforest (a comparison of the three is a topic for some other blog post).

Docker for the newbies:
If you haven't heard of Docker, you haven't been reading blogs or attending meetups and conferences. In simple words, Docker is like a scaled-down virtual machine that allows you to package all that you need - apps, databases, dependencies, configs, libraries, frameworks and so on - into a standardized, portable container.
For detailed info, read on at Docker's website - Build, Ship, Run

What happens when the two meet?
If you want speed, efficiency and something cool in the test automation world - enter Docker Compose. Once you have read about Compose on the official site you will know where we are heading: pre-configured Selenium clusters that run on freshly spun-up Docker containers.

The three amigos of test automation:
  1. Selenium Grid - manages the routing of tests in a hub/nodes format
  2. Docker - packages the browsers and apps into containers
  3. Compose - the hub of the Docker world, acting as the central point from which everything is spun up on the go!
Fun facts:
  • Docker containers run as user-space processes on a shared OS kernel
  • The images they start from are built from plain Dockerfiles
    • http://odewahn.github.io/docker-jumpstart/building-images-with-dockerfiles.html
  • Containers therefore share the host's resources
  • but are still isolated, and require far fewer resources to run than a VM
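To make the Dockerfile point concrete, here is a minimal hypothetical example: the base image tag and the npm scripts are illustrative assumptions, not taken from a real project.

```dockerfile
# Hypothetical example: a tiny image bundling Node.js plus a test suite.
# The base image tag and npm scripts are illustrative, not from a real project.
FROM node:4
WORKDIR /app
# Bake the test dependencies into the image
COPY package.json .
RUN npm install
COPY . .
# Running the container runs the tests
CMD ["npm", "test"]
```

Building this with docker build produces an image that any Docker host can run unchanged.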
Let's get to the point:

1. Install Docker Compose - https://docs.docker.com/compose/install/
2. Verify your installation
The installer places Docker Toolbox and VirtualBox in your Applications folder. In this step, you start Docker Toolbox and run a simple Docker command.
  1. On your Desktop, find the Docker Toolbox icon.
  2. Click the icon to launch a Docker Toolbox terminal.
    If the system displays a User Account Control prompt asking to allow VirtualBox to make changes to your computer, choose Yes.
    The terminal does several things to set up Docker Toolbox for you. When it is done, the terminal displays the $ prompt.
    The terminal runs a special bash environment instead of the standard Windows command prompt. The bash environment is required by Docker.
  3. Type $ docker run hello-world or $ docker info to verify the installation
  3. Selenium Grid Hub
Create the hub on localhost - download and run the Selenium hub container from the Docker repository:
   $ docker pull selenium/hub
   $ docker pull selenium/node-chrome
   $ docker pull selenium/node-firefox
$ docker run -d --name selenium-hub -p 4444:4444 selenium/hub
When the container is running, navigate to http://localhost:4444/grid/console and you should see an empty grid console

  4. Selenium Grid Nodes
Firefox node:
$ docker run -d -P --link selenium-hub:hub selenium/node-firefox

Chrome node:
$ docker run -d -P --link selenium-hub:hub selenium/node-chrome

You can also pin a specific version by tag, e.g. selenium/node-chrome:2.53.0

All the Selenium Docker images (there are 11 of them) are here:
https://hub.docker.com/r/selenium/

  5. Validate the containers
$ docker logs selenium-hub
$ docker logs <firefox-node-name>
$ docker logs <chrome-node-name>
$ docker ps

docker ps lists all the running containers (ps = processes): a hub and two nodes, one with Chrome and one with Firefox. Use the container names it shows in the docker logs commands above.



  6. Bring it all together with docker-compose
  1. Stop all running containers: $ docker stop $(docker ps -a -q)
  2. Create a docker-compose.yml file that decides how the images interact - the nodes need to be linked to the hub and the ports need to be defined:
  3. seleniumhub:
      image: selenium/hub
      ports:
        - 4444:4444
    
    firefoxnode:
      image: selenium/node-firefox
      ports:
        - 8000
      links:
        - seleniumhub:hub
    
    chromenode:
      image: selenium/node-chrome
      ports:
        - 8000
      links:
        - seleniumhub:hub
  4. Run it! $ docker-compose up -d
  5. Navigate to http://localhost:4444/grid/console and you should see everything as before, but now with the configuration in a file
  7. Scale it up!
$ docker-compose scale chromenode=20
$ docker-compose scale firefoxnode=30
Adds more nodes to the hub!

  8. Stop Docker
$ docker-compose stop

Happy Selenium Dockering!

Wednesday

Configure IntelliJ for a full stack JavaScript Automation


There are some crucial IntelliJ plugins to install:
  1. Base64 for IDEA and Storm
  2. BashSupport
  3. Bootstrap
  4. Bootstrap 3
  5. ddescriber for Jasmine
  6. JS Toolbox
  7. NUnitJS
  8. Markdown Support
As a peace offering to the mighty IntelliJ, use Java as the project SDK, to keep IntelliJ happy.
I prefer to configure four separate modules, to help separate back-end vs. front-end JavaScript dependencies.
Add the bower_components library to the client module, and the node_modules library to the server module.
And be sure to enable JavaScript libraries in the editor (right-click in the editor and choose the JavaScript libraries to use).
Per best practices, we do not commit the local IntelliJ IDEA configuration folder (/.idea/) to the repository, instead adding it to the .gitignore file like so:
# IntelliJ IDEA local workspace
.idea
However, for some developers' convenience (and others' dismay) we do commit the four IntelliJ module .iml files to the repository:
client.iml
server.iml
e2e.iml
doc.iml
Source:
http://stackoverflow.com/questions/25163410/how-do-i-configure-intellij-for-a-full-stack-javascript-web-app

https://www.jetbrains.com/help/idea/2016.1/javascript-specific-guidelines.html





Enable WebGL on Chrome

First, enable hardware acceleration:
  • Go to chrome://settings
  • Click the + Show advanced settings button
  • In the System section, ensure the Use hardware acceleration when available checkbox is checked (you'll need to relaunch Chrome for any changes to take effect)
Then enable WebGL:
  • Go to chrome://flags
  • Enable Override software rendering list, WebGL Draft Extensions and WebGL 2.0 Prototype
  • Ensure that Disable WebGL is not activated (you'll need to relaunch Chrome for any changes to take effect)
Then inspect the status of WebGL:
  • Go to chrome://gpu
  • Inspect the WebGL item in the Graphics Feature Status list. The status will be one of the following:
    • Hardware accelerated — WebGL is enabled and hardware-accelerated (running on the graphics card).
    • Software only, hardware acceleration unavailable — WebGL is enabled, but running in software. See here for more info: "For software rendering of WebGL, Chrome uses SwiftShader, a software GL rasterizer."
    • Unavailable — WebGL is not available in hardware or software.
If the status is not "Hardware accelerated", then the Problems Detected list (below the Graphics Feature Status list) may explain why hardware acceleration is unavailable.
If your graphics card/drivers are blacklisted, you can override the blacklist. Warning: this is not recommended! (see blacklists note below). To override the blacklist:
  • Go to chrome://flags
  • Activate the Override software rendering list setting (you'll need to relaunch Chrome for any changes to take effect)
For more information, see: Chrome Help: WebGL and 3D graphics.
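Alongside checking chrome://gpu by hand, a page can feature-detect WebGL at runtime. A minimal sketch (browser-only code: document and window come from the DOM, so it only does something useful inside a page):

```javascript
// Returns true if a WebGL rendering context can be created.
// Browser-only sketch: document and window come from the DOM.
function hasWebGL() {
  try {
    const canvas = document.createElement('canvas');
    return !!(window.WebGLRenderingContext &&
              (canvas.getContext('webgl') ||
               canvas.getContext('experimental-webgl')));
  } catch (e) {
    return false;
  }
}
```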

Thursday

Node NPM error: URIError: URI malformed at decodeURIComponent (native) at Url.parse


Problem: if you are using Node and a malformed proxy URL accidentally makes it into your settings, every time you run npm you get something like this:

NPM error: URIError: URI malformed at decodeURIComponent (native) at Url.parse

What happened there?
Once a bad URL makes it into the config file, npm is broken, but you as the operator are unaware until you run another command. This was confusing when I was setting both HTTP and HTTPS proxies, and it delayed me from finding the real problem. It would be great if npm validated the URLs before writing them to the file; that would reduce confusion.

How to solve this?

  1. You cannot reset the npm config from the command line, because the npm command doesn't work anymore
  2. Reinstalling Node will not help
  3. Locate where npm has its global settings stored, e.g. C:\Users\Ady\.npmrc
  4. Edit the file, fix or remove the malformed proxy line, and everything is back to normal!
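The root cause is easy to reproduce: the stack trace shows npm passing the configured URL through Node's decodeURIComponent, which throws on a stray % escape. A small sketch (the proxy values are made up for illustration):

```javascript
// A URL value survives npm's config parsing only if decodeURIComponent
// accepts it; a stray "%" that is not a valid escape makes it throw URIError.
function isValidConfigUri(value) {
  try {
    decodeURIComponent(value);
    return true;
  } catch (e) {
    return e.name !== 'URIError';
  }
}

console.log(isValidConfigUri('http://proxy.example.com:8080')); // true
console.log(isValidConfigUri('http://proxy.example.com:80%0')); // false - URI malformed
```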

This solves a similar but different problem: 

What is the Maven Reactor?





In simple words: the Reactor is what makes multi-module builds possible

The Reactor is the part of Maven that allows executing a goal on a set of modules.
It determines the correct build order from the dependencies stated by each project in their respective project descriptors, and then executes the stated set of goals.

It computes the directed graph of dependencies between modules, derives the build order from this graph, and then executes goals on the modules. In other words, a "multi-module build" is a "reactor build" and a "reactor build" is a "multi-module build" :)

Reactor does this:
  1. Collects all the available modules to build
  2. Sorts the projects into the correct build order
  3. Builds the selected projects in order
How does this fit in the test automation space?
Separate out your framework into modules:
  1. Test Data Seeding
  2. Test Execution
  3. Reporting Framework
  4. Test Coverage
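Those four concerns map naturally onto Maven modules. A hypothetical parent POM sketch (the groupId and module names are made up for illustration):

```xml
<!-- Hypothetical parent POM; groupId and module names are illustrative. -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.tests</groupId>
  <artifactId>test-automation-parent</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <!-- The Reactor collects these modules, sorts them by their
       declared dependencies, and builds them in the correct order. -->
  <modules>
    <module>test-data-seeding</module>
    <module>test-execution</module>
    <module>reporting-framework</module>
    <module>test-coverage</module>
  </modules>
</project>
```

Running mvn install at the parent triggers a reactor build across all four modules.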


Sources:
http://stackoverflow.com/questions/2050241/what-is-the-reactor-in-maven

Monday

Automate Android native applications by installing the apk file on your PC / without the need of a mobile phone






Problem: need for automating an Android application using Appium/Selenium, but without emulators and without a mobile device

Solution: install the Android app inside Chrome as an extension! How?

Prereq: Chrome 37+ is needed for the solution to work!

How does this work?
The App Runtime for Chrome (or ARC) is the piece of software that allows Android apps to run in Chrome, in the same way that ART (and the older Dalvik) currently runs Android apps in Android itself.

What will we need?
  1. ARChon Custom Runtime: ARC is officially only designed for Chrome OS at the moment. To get around this, developer vladikoff created the ARChon Custom Runtime, which not only allows Windows, OS X, and Linux to run Android apps, but also removes the limit on how many can be run.
  2. Unpacked Extension: Extensions normally come from the Chrome Web Store or prepackaged in a .CRX file. For the purposes of Android apps, we're going to use unpacked extensions. These are folders that contain all the files for an extension (or, in this case, Android APK). They function the same as extensions, but are not wrapped up in a single file.
Let's automate:
  1. Install ARChon Runtime
    • Download the ARChon runtime here.
    • Unzip the archive.
    • Open your extensions page in Chrome by going to Menu > More Tools > Extensions
    • Enable Developer mode in the top right corner, if it is not already enabled.
    • Select "Load unpacked extension."
    • Choose the folder containing the ARChon runtime you unzipped earlier.
  2. Install Existing Android Apps
  3. Once you have a .zip file containing one of these modified APKs, here's how to install it:
    • Unzip the file and place the folder (likely named something like "com.twitter.android") in a place you can easily find.
    • Open the Extensions page in Chrome.
    • Click "Load unpacked extension."
    • Select the folder with the modified APK you downloaded
  4. Automate using Appium/Selenium
You could also repackage your own Android apps for Chrome.
Source: http://lifehacker.com/how-to-run-android-apps-inside-chrome-on-any-desktop-op-1637564101