March 9, 2017
Parallel Testing on Multiple Android Devices Using Appium and Cucumber
Introduction
Testing has become an important part of delivering a high-quality product. To obtain a reliable application, it's important to have a good mix of manual and automated testing. While manual testing can focus on exploratory or usability testing, automated testing can take care of regression or performance testing.
One of the biggest problems we face while testing mobile apps is device fragmentation, especially on Android, where there are many hardware manufacturers, several software versions, and different screen sizes. More information about the Android distribution can be found at: https://developer.android.com/about/dashboards/index.html
If your app has a slightly different layout for different screen sizes, it is imperative that you run your set of tests on different devices without duplicating the tests. Running the same set of tests on 2 or 3 devices can become time consuming; therefore, the solution is to run the same set of tests simultaneously on all devices.
Appium is a widely used open-source tool for cross-platform mobile automation. It already offers support for different software versions and platforms, and can easily be adapted to run a set of tests on several devices. In this article, I will present a way to run automated tests built with Appium in parallel on several Android devices.
Prerequisites
- Appium 1.5.3
- Cucumber
- Ruby 2.3.0
- Android SDK
- For the scope of this article, I used the sample app from: http://square.github.io/picasso/
Project Overview
The current project structure:
| Folder/File | Description |
| --- | --- |
| Gemfile | A list of all the libraries needed for the automation project |
| /android | Holds the APK files and an Appium configuration file (appium.txt) |
| /android/appium.txt | Capabilities description for Appium |
| /config | Contains configuration files for Cucumber |
| /config/cucumber.yml | Describes output formats for Cucumber. In our template, we set the output formats to HTML and JUnit; the JUnit output will be interpreted by Jenkins. |
| /features | Contains the main project files |
| /features/reports | Empty by default; this folder will contain the output files generated by Cucumber |
| /features/*.feature | Cucumber feature files, containing the test scenarios |
| /features/step_definitions/*.rb | Contains all the Ruby files needed to interpret the steps of the scenarios |
| /features/support/pages/*.rb | Page objects are stored here |
| /features/support/env.rb | A Cucumber-specific file containing setup and teardown methods, along with other configuration, such as selecting the platform for test execution (iOS/Android) |
The documentation on the Appium official page states that the same scenarios can be run in parallel on the same machine for different devices, provided that we start a different Appium server instance for each device. One way to achieve this is to create a script that will launch several Appium instances for each connected device and launch parallel Cucumber commands.
Steps For Running Our Test Suite on Several Devices
1. Identify all devices that will be used for parallel testing
The first step is to identify all devices that will be used for parallel testing and create different folders named after each UDID. To identify the UDID for each device, connect each device to your PC, open a terminal and run “adb devices.” Each folder will contain an “appium.txt” file.
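The output of "adb devices" can be parsed with a couple of lines of shell. The sketch below uses a captured sample instead of calling adb, so it is self-contained; the UDIDs in the sample are made up, and in practice you would pipe the real `adb devices` output instead:

```shell
# Sample `adb devices` output (UDIDs are hypothetical).
sample_output='List of devices attached
0123456789ABCDEF	device
emulator-5554	device'

# Skip the header line and keep only entries in the "device" state,
# printing the UDID (first column) of each connected device.
udids=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $2 == "device" { print $1 }')
echo "$udids"
```

In real use, replace the sample with `udids=$(adb devices | awk 'NR > 1 && $2 == "device" { print $1 }')`.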
For each appium.txt, you need to add these extra parameters:
- A “deviceName” that will be used for friendly identification of each test report
- The UDID that will be the same as the parent folder
- The port that will be used for starting Appium. Make sure that you configure a port that is not used by other processes and that each device has a different port configured
- noSign = true; this parameter was added because the Appium code was signing the application to a .tmp file for one of the connected devices and failed when we tried to do the same for the second device
Each time we need to add a new device for testing, it is required to create a new folder with its UDID.
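As a sketch, an appium.txt for one device might look like the following. The UDID, device name, and APK path are placeholders, not values from the project, and the exact keys may vary with your appium_lib version:

```toml
[caps]
platformName = "android"
deviceName = "Nexus 5"                 # friendly name used in test reports
udid = "0123456789ABCDEF"              # must match the parent folder name
app = "./android/sample-app.apk"       # placeholder APK path
noSign = true                          # avoid re-signing conflicts across devices

[appium_lib]
port = 4723                            # unique Appium port for this device
```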
2. Create the script to run the same tests on several devices. This has 3 steps:
a. Identify and create the list of connected devices
One of the first issues that we encountered was knowing which devices were connected on the CI machine. Because we used the same set of devices for manual and automated tests, we did not want to be forced to always use the same set of devices.
```shell
index=0
cd android
adb devices > devices.txt
for f in *; do
  if [ -d "${f}" ]; then
    if grep -q "$f" devices.txt; then
      echo "device found: $f"
      list_of_devices[$index]="$f"
      ((index++))
    fi
  fi
done
cd ..
```
This block is comparing a list of predefined devices with the connected devices and generates the list of known connected devices. This list will be used in the next steps of this script.
b. Start Appium server for each connected device
For each connected device, the Appium server is started with the corresponding Appium port (as defined in the appium.txt file), Appium bootstrap port, and device UDID (appium -p "appium port" -bp "appium bootstrap port" -U "device udid").
```shell
p_temp="$(grep -i "port =" android/"$i"/appium.txt)"  # retrieve the port for starting the Appium session
p=${p_temp#*port = }
bp=$(($p + 1000))
log_file="features/reports/appium_logs_"$i"_$RANDOM.txt"  # create an Appium log file for each device
echo "Appium logs saved to $log_file."
sleep 2
# start Appium on the desired port and save the logs in the corresponding log file
appium -p "$p" -bp "$bp" -U "$i" >> "$log_file" 2>&1 &
sleep 2
```
Depending on your server's capabilities, you can increase the sleep duration if you observe that Appium takes longer to start.
Make sure that any two different devices will always use different Appium ports and different bootstrap ports.
c. Run Cucumber commands for each device and create different reports files
Single-device execution tests are triggered with the Cucumber command:

```shell
cucumber platform=android --guess --tags @settings
```
In order to trigger the execution of the same set of tests on several devices, we need to run the same Cucumber command for all connected devices.
```shell
for i in "${list_of_devices[@]}"
do
  # run the same set of tests on each device, with a separate HTML report per device
  cucumber platform=android device_type="$device_type" devices="$i" -t @my_test -f html -o features/reports/"$i".html &
done
# jobs 1..$index are the Appium servers started earlier,
# so the Cucumber jobs start at job number $index + 1
wait_value=$(($index + 1))
for (( c=1; c<=$(($index)); c++ ))
do
  wait %$wait_value
  ((wait_value++))
done
```
The parameter devices="$i" is the UDID of each connected device. The "platform" and "devices" parameters are Cucumber environment variables that are used when the device capabilities are loaded for each scenario:
```ruby
Appium::Driver.new(Appium.load_appium_txt({file: "#{$platform}/#{ENV["devices"]}/appium.txt", verbose: true}))
```
The Cucumber reports are saved separately for each device through the option -o features/reports/"$i".html.
Changes needed in Jenkins
In the Execute shell block in Jenkins, replace the Cucumber command with the command that runs the script.
The script requires execution rights. The reports and Appium logs can then be accessed for each connected device.
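As a sketch, the Execute shell build step might contain the following; the script name run_parallel_tests.sh is a placeholder for whatever you named the script from the previous section:

```
# Jenkins "Execute shell" build step (script name is hypothetical)
chmod +x run_parallel_tests.sh
./run_parallel_tests.sh
```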
For a friendlier report, the "deviceName" that was defined in appium.txt can be printed in each scenario:
```ruby
Before do |scenario|
  puts "running scenario on #{device_name}"
end
```
The HTML Cucumber Test Report:
Short Demo
This short demo shows all of this in action:
Test Execution Timing
In order to have a clear view of the benefit of having parallel testing on several devices, I took a set of tests, ran them separately on two devices, and then ran the same set of tests using the parallel script.
| Device | Run 1 (min:sec) | Run 2 (min:sec) | Run 3 (min:sec) | Run 4 (min:sec) | Run 5 (min:sec) |
| --- | --- | --- | --- | --- | --- |
| Nexus 7 | 2:03.34 | 2:01.02 | 1:58.04 | 1:59.01 | 1:58.41 |
| Nexus 5 | 1:48.97 | 1:47.68 | 1:48.05 | 1:48.47 | 1:48.23 |
| Nexus 7 and Nexus 5 in parallel | 2:00.09 | 1:57.40 | 1:59.02 | 2:09.49 | 2:00.93 |
These results show that running the set of tests simultaneously speeds up test execution: the total time spent is almost the same as the time needed to execute the tests on the slowest connected device.
Conclusion
In conclusion, implementing parallel execution of your tests is a great improvement for your automated regression tests, and it can be achieved with an effort as minimal as a bash script, without major changes to your automated tests.