January 5, 2016

Avoiding Type I Errors in an Automated Testing Environment

Automated Testing

In software development, testing plays an important role in delivering a quality product to the client. Testing, by nature, involves repetitive work. Thus, it lends itself naturally to automation.

Automation is the use of special software (separate from the software being tested) to control the execution of tests and to compare the actual outcomes with predicted outcomes.

However, automation is a double-edged sword: used effectively, it helps deliver a quality product; used carelessly, it can lead to a disastrous one.

What are Type I Errors?

A Type I Error reports an effect that is not present. More simply stated, Type I Errors are tests that fail when they should have passed, because there is no breakage or defect in the product functionality behind these failures.

In the simplest terms, it can be called a false alarm.

Type I Errors show that our automation suites are unstable; therefore, we cannot be confident about the outcome of these tests.

Why do Type I Errors occur?

Writing automation scripts requires a deliberate approach to test data: if we do not structure our test data properly, the residual data each run creates in the system can lead to Type I Errors during the recursive/iterative execution of the same tests on multiple data sets.
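
As a minimal sketch of one way to structure data defensively (the class and method names here are hypothetical), each iteration can generate a unique value so it never collides with residue from an earlier run:

import java.util.UUID;

public class TestDataFactory {
	// Appends a unique token to a base name so a data-driven test, re-run
	// over multiple data sets, never clashes with residual records created
	// by an earlier execution.
	public static String uniqueUsername(String base) {
		return base + "_" + UUID.randomUUID().toString().substring(0, 8);
	}
}

Combined with proper teardown (discussed below), this keeps each iteration independent of the data created before it.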

Framework implementation can be a second reason for Type I Errors: if the framework is not implemented to fit the scenarios being tested, it can leave artifacts of failed tests in the system, which in turn produce further Type I Errors.

Not writing modular tests leads to duplicate lines of code across different tests. Type I Errors can then occur when an automation engineer does not make a change uniformly in all of the affected tests. This brings us to the conclusion that spending the right amount of time upfront on designing the approach and implementing our automation frameworks will help us avoid Type I Errors.
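
For example, here is a minimal sketch, with hypothetical element IDs, of pulling duplicated login steps into one shared helper, so a change to the login flow is made once rather than in every test:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginHelper {
	// Tests call this shared method instead of repeating the login steps.
	// The element IDs below are placeholders for the application's real ones.
	public static void loginAsAdmin(WebDriver driver, String username,
			String password) {
		driver.findElement(By.id("username")).sendKeys(username);
		driver.findElement(By.id("password")).sendKeys(password);
		driver.findElement(By.id("loginButton")).click();
	}
}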

Impact

Every time a test fails or produces an unexpected outcome, it requires investigation.

Repetitive failures can frustrate the engineers, and they eventually start ignoring these failures. Ignoring failing tests increases your risk of overlooking a potential bug, which can go on to production and prove fatal for your release.

To avoid this, automation engineers generally tend to babysit these tests that are producing Type I Errors, which in turn decreases the productivity and momentum of our automation as we continue to spend more time investigating failures.

With all of the uncertainty about application health and the time spent in trying to solve these Type I Errors, the automation maintenance costs tend to increase.

Ultimately, it boils down to where you think you should be spending more time: investigating Type I Errors or automating new tests?

Solution

One of the solutions to this problem is to deploy the application on optimal configurations. This decreases the risk of slow performance, which can cause Type I Errors, and also ensures good performance while accessing the application.

Another solution is to maintain a controlled environment. If the manual and automation teams share an application instance, there is a chance of unwanted script failures during suite execution, because a functional tester may change some settings of the application or machine during their testing effort. When that happens, the entire test execution effort goes to waste.

As a standard practice, only the automation team should have access to the environments where the application instance is deployed, so they can execute their comprehensive regression suites against each new build.

Keeping tests short is another point to remember: the longer the test, the more brittle it becomes. The main reason people write long tests is that they want to cover an entire use case, and while they are at it, they are tempted to validate a few other things along the way. Don’t fall for these temptations; instead, break your scripts into multiple tests with as few steps as possible in each of them. It is extremely important not only to keep tests short, but also to make sure that each test contains the important assertions as per the requirement.
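
As an illustration (the page object and its methods are hypothetical), a long end-to-end script can be split into short, focused tests, each carrying the assertion that matters for its requirement:

import org.testng.Assert;
import org.testng.annotations.Test;

public class AppointmentTests {

	private AppointmentsPage appointmentsPage; // hypothetical page object

	@Test
	public void createdAppointmentAppearsInList() {
		appointmentsPage.createAppointment("Dentist");
		Assert.assertTrue(appointmentsPage.isListed("Dentist"),
				"Newly created appointment should appear in the list");
	}

	@Test
	public void deletedAppointmentDisappearsFromList() {
		appointmentsPage.createAppointment("Dentist");
		appointmentsPage.deleteAppointment("Dentist");
		Assert.assertFalse(appointmentsPage.isListed("Dentist"),
				"Deleted appointment should no longer appear in the list");
	}
}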

Another key is to make sure that every test script can run in isolation, with no sequence dependencies between scripts. For example, suppose you have four test cases: creating a user, creating a group, adding the user to the group, and searching for the user within the group. It is bad practice to rely on the scripts’ execution sequence, calling the create-user script first and then the create-group script in the test suite; if a test batch is not arranged in exactly this order, your tests can fail.

It is always advisable to create scripts that can run in isolation; we can use TestNG’s @BeforeMethod or @BeforeClass annotations to call methods that set up prerequisite data before the test is run. In this example, you can implement creating the user and creating the group as reusable methods, and invoke them before the test class so the tests run independently. In this way, we can keep our tests independent and run them in isolation when needed.
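
A minimal sketch of this pattern (the helper and page-object names are hypothetical) creates the prerequisite user and group in @BeforeClass, so the test no longer depends on other scripts having run first:

import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class AddUserToGroupTest {

	private UserHelper userHelper;   // hypothetical reusable helpers
	private GroupHelper groupHelper;
	private GroupPage groupPage;     // hypothetical page object

	@BeforeClass
	public void createPrerequisiteData() {
		// Set up the data this test needs, independent of any other script.
		userHelper.createUser("jdoe");
		groupHelper.createGroup("Editors");
	}

	@Test
	public void addUserToGroupTest() {
		groupPage.addUserToGroup("jdoe", "Editors");
	}
}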

The next important key to build into the frameworks is to use the right locators for identifying objects. This involves identifying objects by their IDs, name properties, and CSS locators rather than by indexes, coordinates, or XPaths. This is because XPaths, coordinates, and indexes change every time the developers make changes to their UI, making our tests brittle. If these IDs don’t exist, then your automation team should work closely with the engineering team to get these hooks or developer IDs implemented to create stable tests.

backToAppointmentsLink = new Link(driver,
		By.linkText("Back to Appointments"));
printableViewButton = new Button(driver,
		By.xpath("//div[@id='chromemenu_buttonBar_4']/a/span[2]"));

newAppointmentButton = new Button(driver,
		By.id("chromemenu_buttonBar"));
saveAndCloseButton = new Button(driver,
		By.id("chromemenu_buttonClose"));

The code shown here defines the element locators for an appointments page.

In the code, you will find that the printable view button has been identified using XPath. This is not a recommended practice because your script is bound to fail whenever there is a change in the object’s location, sequence, or XPath.

We should always make sure to identify the object uniquely, in order of preference: ID -> Link Text -> CSS.
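
For instance, if the developers expose a stable ID (the ID below is hypothetical), the brittle XPath locator above can be replaced with one that survives layout changes:

printableViewButton = new Button(driver,
		By.id("printableViewButton"));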

Similarly, at the end of each script, we dispose of all active objects and remove test data. This is really important because it ensures that each and every script starts from a controlled, fresh state of the application. If we do not perform this teardown, then we might end up facing memory leaks or issues with residual objects/records in the system, causing Type I Errors.

@Test(dataProvider = "dataProvider")
public void addNewUserTest(String username, String password,
		String firstName, String lastName) {
	loginPage.loginAsAdminToApplication(username, password);
	clickTab.navigateToContactsTab();
	contactsPage.addNewUser(firstName, lastName);
	contactsPage.clickSaveButton();
}

// Teardown: delete every record the tests created so the next run starts
// from a fresh, controlled application state.
@AfterClass
public void clearData() {
	clickTab.navigateToAdminTab();
	usersPage.selectAllUsers();
	usersPage.deleteRecords();
	clickTab.navigateToGroupsTab();
	groupPage.selectAllGroups();
	groupPage.deleteAllGroups();
	clickTab.navigateToContactsTab();
	contactsPage.selectAllContacts();
	contactsPage.deleteContact();
	loginPage.clickLogoutButton();
}

The next key measure to build into the frameworks is dynamic object synchronization. The execution waits up to a defined amount of time for the application to reach a particular state before performing the next step, instead of waiting for an arbitrary amount of time. If the state is reached within the allotted time, the driver mechanism automatically executes the next line of code; otherwise, it logs the error and fails the test with a proper reason. This not only reduces Type I Errors, but also decreases page reloads, provides an edge in controlling Ajax elements, and produces better batch results.

private WebElement waitAndFindElement() {
	// Polls until the element located by the class field "by" is present,
	// up to maxWaitTimeToFindElement, instead of sleeping for a fixed time.
	element = new WebDriverWait(driver, maxWaitTimeToFindElement)
			.until(new ExpectedCondition<WebElement>() {
				@Override
				public WebElement apply(WebDriver driver) {
					return driver.findElement(by);
				}

				@Override
				public String toString() {
					return " - searched for element located by " + by;
				}
			});
	highlightElement();
	return element;
}

Conclusion

The most important aspect of all of this is how we can benefit from resolving Type I Errors:

  • We will not miss potential bugs, which is an indicator of the improved quality and reliability of our automated tests.
  • The Engineering team will always have the right measure of their build health, which will help them to make decisions faster.
  • There is an increase in productivity because automation engineers can focus on adding more tests instead of spending time on investigating Type I Errors.
  • The time saved can be utilized to further enhance automation framework and test coverage.
  • There is a definite reduction in cost because less/minimal manpower is required for the analysis of Type I Errors.