Manual testing can be a time-consuming and error-prone process, so automating the test cases comes as a natural solution. Running automated tests frees the QA team to focus on more productive tasks rather than executing the same steps over and over again.
I’m currently working on a mobile application that delivers daily news to users in a digital format. It is implemented as a native application on both iOS and Android. The application offers the same set of features on both platforms, with slight variations in UI/UX to provide a native experience on each. Automating the test cases for each platform separately would mean writing the same logic twice, with only small differences, so we decided to look for a tool that would let our team write cross-platform automated tests.
Out of a number of available options, we decided to go with Appium, an open-source mobile UI testing framework that can execute tests on mobile devices regardless of the OS. Appium supports automated tests on real devices as well as on Android emulators and iOS simulators, and it covers native, hybrid, and web application testing.
Just like in any other project, maintainability of the codebase is a primary concern in automation projects as well. Code duplication is just one of the many “code smells” in software projects that can draw attention to possible maintainability issues in the codebase. A “code smell” is a symptom in the source code of a project that usually indicates design issues. Once identified, you can eliminate code smells from your source code by applying good programming practices. Often design patterns offer such guidelines.
Although Appium helps run the same automated test cases on multiple platforms, at some point you still have to write a bit of platform-specific code. One example from our experience is identifying elements on the screen: without care, you end up performing platform checks throughout the code just to identify the same UI element correctly on each platform the tests run on.
After a thorough analysis of our codebase, we found code duplication to be the primary code smell indicating bad design in the automation project. UI elements are identified through the same API, but depending on the platform the test case is running on, you may have to use different properties to identify them correctly. For example, on iOS you might identify a button by its text, while on Android you might need its content description (the text used by the accessibility support). Moreover, the same functionality may be reached through a slightly different UX sequence: opening a screen could mean tapping a button on iOS, but opening a menu and choosing an option on Android. These slight differences led to a bad design in our codebase, which we managed to identify and improve using the Strategy and Factory design patterns.
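To illustrate the smell, here is a hypothetical sketch of what our page objects tended to look like before the refactoring; the `ENV['PLATFORM']` lookup and the xpaths are placeholders, not the project's real locators.

```ruby
# Hypothetical "before" state: every method repeats the same platform check.
class ArticlePage
  def article_title_locator
    if ENV['PLATFORM'] == 'ios'
      # iOS identifies the element by its visible text
      "//XCUIElementTypeStaticText[@name='Article Title']"
    else
      # Android identifies it by its content description
      "//android.widget.TextView[@content-desc='article_title']"
    end
  end
  # ...the same if/else repeats in every method that touches the UI
end
```

Multiply this conditional by every element on every screen and the duplication adds up quickly.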
Next, we present how each design pattern helped solve our problem. We start with the Strategy design pattern, which we use when we want to run tests on a specific platform: the platform implementation representing the algorithm is chosen based on certain conditions. The Factory design pattern, in turn, gathers all the platform-choosing conditions and page-object instance creation in one place; thus, when adding new conditions or new instances, we only need to modify that one place, and the change is picked up everywhere the factory is called.
In our project, we used the Page Object pattern intensively. The Page Object pattern represents the screens of our mobile app as a series of objects. Because the application is cross-platform, page objects tend to get very big, making it hard to add new logic and risky to modify existing functionality: in such a large amount of code, it is difficult to trace where a change will have an effect. This problem can be addressed with the Strategy design pattern.
In the Strategy pattern, we create objects that represent various strategies, and a context object whose behavior varies according to its strategy object. The strategy object determines the algorithm the context object executes.
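The general shape of the pattern can be sketched as follows; the class and method names here are illustrative, not taken from our project.

```ruby
# Context: holds a strategy and delegates to it, so its behavior
# varies with whichever strategy object it was constructed with.
class ArticleScreen
  def initialize(strategy)
    @strategy = strategy
  end

  def title_locator
    @strategy.title_locator
  end
end

# Two interchangeable strategies, one per platform.
class IosLocators
  def title_locator
    "//XCUIElementTypeStaticText[@name='Article Title']"
  end
end

class AndroidLocators
  def title_locator
    "//android.widget.TextView[@content-desc='article_title']"
  end
end
```

The test code talks only to `ArticleScreen`; swapping `IosLocators` for `AndroidLocators` changes the behavior without touching the context.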
Our automation tests are written in Ruby, and Ruby does not formally support interfaces the way Java or C# do, but that doesn’t mean it is impossible to maintain a set of interfaces in our code. As mentioned before, the project I work on is a mobile application where users can download the digital version of a newspaper daily and navigate through its articles. We will demonstrate the applicability of the Strategy pattern with the following example: we want to open an edition in our application, click on an article, and then check whether the article title is displayed.
We have declared the class ArticlePageObjects, which plays the interface role but doesn’t actually implement any logic. It contains only method definitions that raise exceptions if called. This guarantees that anyone extending the class must override the methods they want to call, or they will see an exception at runtime.
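A minimal sketch of such an interface-style class is shown below; the two method names are assumptions based on the example scenario, not the project's actual method list.

```ruby
# Interface-style base class: defines the contract, implements no logic.
class ArticlePageObjects
  def open_first_article
    raise NotImplementedError, "#{self.class} must override open_first_article"
  end

  def article_title_displayed?
    raise NotImplementedError, "#{self.class} must override article_title_displayed?"
  end
end
```

Calling either method on a subclass that forgot to override it fails loudly at runtime instead of silently doing the wrong thing.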
The next step is to create concrete classes that subclass ArticlePageObjects and provide implementations for the methods it defines. For our mobile project, we need a specific implementation for Android and one for iOS; therefore, we will have a class iOSArticlePageObject and a class AndroidArticlePageObject, each with its own platform-specific implementation.
The next screenshots present some mock implementations. The methods have Android-specific implementations, providing the xpaths needed to find elements on an Android device, together with logic such as clicking an element or returning whether it is displayed (typically true or false).
The same goes for an iOS platform.
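The mock implementations might look roughly like this sketch. The xpaths and the driver interface are placeholders, and the class names are adapted to valid Ruby constants (a Ruby constant cannot start with a lowercase letter, so "iOSArticlePageObject" appears here as IosArticlePageObjects); the base class is stubbed so the sketch runs standalone.

```ruby
# Stub of the ArticlePageObjects base class from the previous section.
class ArticlePageObjects
  def open_first_article
    raise NotImplementedError
  end

  def article_title_displayed?
    raise NotImplementedError
  end
end

# Android locates elements by content description (accessibility text).
class AndroidArticlePageObjects < ArticlePageObjects
  FIRST_ARTICLE = "//android.widget.TextView[@content-desc='article_cell']".freeze
  TITLE         = "//android.widget.TextView[@content-desc='article_title']".freeze

  def initialize(driver)
    @driver = driver
  end

  def open_first_article
    @driver.find_element(:xpath, FIRST_ARTICLE).click
  end

  def article_title_displayed?
    @driver.find_element(:xpath, TITLE).displayed?
  end
end

# iOS locates the same logical elements by their visible text instead.
class IosArticlePageObjects < ArticlePageObjects
  FIRST_ARTICLE = "//XCUIElementTypeCell[@name='Article Cell']".freeze
  TITLE         = "//XCUIElementTypeStaticText[@name='Article Title']".freeze

  def initialize(driver)
    @driver = driver
  end

  def open_first_article
    @driver.find_element(:xpath, FIRST_ARTICLE).click
  end

  def article_title_displayed?
    @driver.find_element(:xpath, TITLE).displayed?
  end
end
```

Both classes expose the same contract; only the locators and, where needed, the interaction sequence differ.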
Using the Factory design pattern, all the platform-choosing conditions and page-object instance creation are done in one place only; thus, when adding new conditions or new instances, we only need to modify that one place, and the change is visible everywhere the factory is called.
The factory encapsulates object creation and reduces the application's dependencies on concrete classes. The Factory Method pattern also respects the Open/Closed Principle: software entities like classes, modules, and functions should be open for extension but closed for modification. You can extend the code and use it in new ways while leaving the existing code intact.
The next step will be to create a factory that generates an object of the appropriate concrete class based on the platform our mobile application is running on. In our case, the class will be called PageObjectsFactory.
The role of the PageObjectsFactory class is to get the concrete implementations depending on some additional information, like the platform the tests are running on. One example scenario is opening the first article from an edition and checking that the title is displayed.
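A sketch of such a factory follows. The platform strings and the `ENV['PLATFORM']` lookup are assumptions about how the suite detects the platform, the page-object classes are reduced to stubs, and the names are again adapted to valid Ruby constants.

```ruby
# Stubbed concrete page objects, one per platform.
class IosArticlePageObjects
  def open_first_article; end

  def article_title_displayed?
    true
  end
end

class AndroidArticlePageObjects
  def open_first_article; end

  def article_title_displayed?
    true
  end
end

class PageObjectsFactory
  # The only place in the codebase that checks the platform.
  def self.article_page_objects(platform = ENV['PLATFORM'])
    case platform
    when 'ios'     then IosArticlePageObjects.new
    when 'android' then AndroidArticlePageObjects.new
    else raise ArgumentError, "unsupported platform: #{platform.inspect}"
    end
  end
end

# Example scenario: open the first article of an edition and check its title.
page = PageObjectsFactory.article_page_objects('ios')
page.open_first_article
page.article_title_displayed?
```

Note that the scenario itself never mentions a platform; it asks the factory for a page object and works against the common interface.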
If, for example, there is a need to support a new platform in our mobile application (e.g., Windows Phone), we would implement a class called WindowsArticlePage and register it in the PageObjectsFactory class so the factory can make use of the new implementation. The main advantage is that the scenarios and their implementations remain unchanged.
Additionally, I believe that using these two design patterns in our automation project makes our work as a QA team much easier, because we can distribute the work better among our members: once we agree on a common interface, one person can work on the platform implementations (iOS, Android) while another implements the scenarios against the common interface.
In conclusion, implementing the Strategy pattern together with the Factory Method pattern proved a useful way to keep our cross-platform automation code free of duplication and easy to maintain.