January 7, 2015
Selenium Grid Nodes to the Rescue for Automated Testing
In the never-ending game of application vs. automation, applications always seem to grow features that resist automated testing. The challenge that led me to create this solution - and write this blog post - arose out of one such feature.
Let the story begin with this case study:
Suppose you have a scenario that asks a tester to verify content that depends on the tester's current geographical location - for example, a test case like the one below:
Verify that the dropdown under the text “I want my complimentary fitness session in ____" shows the user’s nearest health club based on his/her geographical location.
One way (the easiest one) to overcome the problem of testing such a scenario is to mark it as “Manual,” but that would hand the proponents of manual testing an easy win. Let’s not give them an easy breakthrough. So we opted for a slightly tricky automated test in Chrome (the approach is not very flexible, but its flexibility is beyond the scope of this blog):
- Create a new Chrome profile.
- Add a location-mocking plug-in to it (I used Manual Geolocation).
- Save this profile and use it for your automation.
Even setting aside the flexibility concern, there was another challenge: implementing this solution on a distributed automation infrastructure like Selenium Grid. The solution only works if each node has that particular Chrome profile. Sure, we can arrange this - but after each execution the profile gets bulkier (obviously, because of accumulated cache, cookies, and other browser history). To overcome this problem, there has to be a mechanism that flushes the current copy of the profile and creates a clean one for each execution.
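The flush mechanism can be sketched as a small utility that deletes the bloated working copy of the profile and re-clones it from a pristine master before each run. This is a minimal sketch, assuming you keep a clean “master” profile directory next to the working one; the directory names below are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.stream.Stream;

// Resets a Chrome profile by deleting the (bloated) working copy and
// re-cloning it from a pristine master directory.
public class ProfileFlusher {

    // Delete a directory tree, deepest entries first.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        }
    }

    // Copy the master profile tree into a fresh working copy.
    static void cloneProfile(Path master, Path working) throws IOException {
        try (Stream<Path> walk = Files.walk(master)) {
            for (Path src : (Iterable<Path>) walk::iterator) {
                Path dst = working.resolve(master.relativize(src));
                Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    // Flush = delete the working copy, then re-clone it from the master.
    public static void flush(Path master, Path working) throws IOException {
        deleteRecursively(working);
        cloneProfile(master, working);
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical locations; pass real paths as arguments if you like.
        Path master = Paths.get(args.length > 0 ? args[0] : "profiles/chrome-master");
        Path working = Paths.get(args.length > 1 ? args[1] : "profiles/chrome-working");
        if (Files.exists(master)) flush(master, working);
    }
}
```

Run `flush` once before each execution and the node always starts from the same clean profile, with the geolocation plug-in configured and none of the previous run’s history.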
Adding SOA to this solution:
Services are unassociated, loosely coupled, self-contained units of functionality. Each service implements at least one action (in the above case, creating a fresh copy of the browser profile). The reason for this architecture is to avoid loading your Selenium framework with such custom logic.
Ingredients and their roles:
Node and client machines: In the current implementation, the services are deployed on the nodes; the client sends a request to the service on a node, which performs the action.
Hub: This is the normal Selenium hub which interacts with its nodes to run Selenium commands.
Server and services: In this implementation, a server runs on every node, hosting certain services. These are ordinary services made to do specific operations. They are called, and their responses handled, outside the conventional Selenium grid-hub interaction; this implementation wraps around it.
Service building framework: This will create the stub and skeleton around all the methods and help you to create a service around your method. I used Apache CXF. You can follow this link for more details on Apache CXF.
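In our setup, CXF generates the stubs and skeletons. To keep this post’s illustration dependency-free, here is a sketch of the same idea using the JDK’s built-in `HttpServer` instead - a hypothetical `/resetProfile` endpoint a node could expose. This is not the CXF-generated code, just the shape of a node-side service.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal node-side service exposing one action: resetting the browser
// profile. In the real setup this would be a CXF service on Tomcat; the
// JDK's built-in HttpServer stands in so the sketch has no dependencies.
public class NodeService {

    // Stand-in for the real action (e.g., ProfileFlusher.flush(...)).
    static String resetProfile() {
        return "profile-reset-ok";
    }

    // Start the service on the given port (0 picks a free port).
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/resetProfile", exchange -> {
            byte[] body = resetProfile().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The client simply issues an HTTP request to `http://<node-ip>:<port>/resetProfile` before kicking off the Selenium run; the CXF version adds proper stubs and typed requests on top of the same pattern.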
Considering all those above elements, the architecture will look like:
Client-server flow: This flow occurs between the client and the server deployed on the node. The client sends all commands to the server, and the service carries them out through the server-node flow.
Server-node flow: This is a local flow that performs all the operations needed to set up the node.
In our current implementation, the client and hub may be the same machine, so in the above diagram the client and hub can be merged into a single block. As you can see, this creates a communication path between client and node (via the server) that runs parallel to the hub-to-node path. Now, before starting the Selenium automation cases, you can set up your node to satisfy the prerequisites. The best trait of this approach is that it is a complete SOA: your existing Selenium framework needs minimal LOC changes to adopt it. All you have to do is call the service and let it do the work.
Making a call to service:
For a general client-server architecture, this is one of the simplest things to do. So why have I added this section? Well… there is a small hurdle here - getting the IPs of the registered nodes. Yes, Selenium’s APIs provide no direct mechanism to list all registered nodes. So, right now, we are fetching them from the grid console (the UI) while we work on a better solution.
What else can we achieve using this implementation?
This solution can be scaled up to have numerous benefits:
- Start or restart a Selenium node and register it with the hub just by creating and calling a service
- Perform CRUD operations on any file you want
- Install or remove a package before your tests
- The solution is platform independent, so you can use it on any cross-platform combination (Mac/Windows), which is an extremely important trait from the Selenium Grid perspective.
All you have to do is create, deploy, and call your services. Make sure you start Tomcat (or whichever server you prefer) at system startup.
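For instance, the “start or register a node” service from the list above boils down to launching the Selenium node process on the target machine. A sketch, assuming the standalone server jar sits at a known path - both the jar name and hub URL below are placeholders:

```java
import java.io.IOException;
import java.util.List;

// Builds and launches the command a node-side service would run to
// (re)start a Selenium node and register it with the hub. The jar path
// and hub URL are placeholders for whatever your grid actually uses.
public class NodeLauncher {

    public static List<String> buildNodeCommand(String jarPath, String hubUrl) {
        return List.of(
            "java", "-jar", jarPath,
            "-role", "node",
            "-hub", hubUrl
        );
    }

    public static Process launchNode(String jarPath, String hubUrl) throws IOException {
        return new ProcessBuilder(buildNodeCommand(jarPath, hubUrl))
            .inheritIO()   // surface the node's logs in the service's console
            .start();
    }
}
```

A service wrapping `launchNode` (optionally killing the old process first) gives you remote node restarts with a single HTTP call from the client.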
Like any other implementation, this also has certain limitations as of now:
- As explained above, there is currently no mechanism to fetch the IPs of registered nodes, so fetching from the UI has to be used for now.
- The hub and client are two separate entities, so the two flows are independent of each other.