March 10, 2016

Challenges of Testing Financial Applications

Would financial applications be considered mission-critical? Would they be allowed to fail? And what, in this context, is a failure, anyway?

These are all simple questions with some obviously not-so-simple answers. Yet they can be tackled: first by understanding what is truly mission-critical, then by testing from there outward to other significant, albeit not mission-critical, features.

In short, these are the challenges of testing financial applications. But there’s more to it than meets the eye, even though it goes without saying that even small mistakes can produce substantial losses to companies or users of financial apps. That’s why it’s preferable – especially for one’s first contact with financial apps – to get a glimpse of what lies ahead.

First and foremost, the short path to quality software is really, really long. It starts with understanding the market, which is in itself a very complex task. The path continues with understanding the business, which is never easy and always very important, and then with understanding the business needs that the application is to solve. From there, it continues not with understanding the application, but with understanding the application requirements that are both market- and business-specific.

And yes, it ends with testing the application. Or, as one may visualize it, it ends with testing the practical convergence of market, business, and business needs.

So, would financial applications be considered mission-critical?

While there’s no generic answer to this question, it’s reasonable to assume financial applications have very little margin for error. In fact, in most cases, this margin is so narrow that it requires nothing short of perfection; but here is where it gets difficult: we are not built for perfection.

Let’s say that we have five people working on a financial application, and that each one has a 1% error margin. This means that our professionals perform perfectly 99% of the time. Do you know what their overall group “precision” would be? Somewhere around 95% – roughly five times the 1% margin. How do you even mitigate this in a mission-critical application?

Through collaborative processes – that is, by helping individuals narrow their error margins from 1% to 0.1% – the overall group precision rises to a far more reasonable 99.5%, cutting the product risk margin by almost five percentage points. Collaboration matters here because the error margins of five or more individuals rarely overlap; where they do, precision gaps open up, and those gaps are the main source of faults in mission-critical applications.
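The arithmetic above can be sketched in a few lines, under the simplifying assumption that each contributor’s errors are independent, so the group precision is the product of the individual precisions:

```python
# Combined "precision" of independent contributors, each with error margin p.
# Assumes errors are independent, so overall precision = (1 - p) ** n.

def team_precision(individual_error: float, team_size: int) -> float:
    """Probability that no one on the team introduces an error."""
    return (1 - individual_error) ** team_size

# Five people at 99% individual precision:
print(round(team_precision(0.01, 5), 3))   # → 0.951

# The same five people after narrowing each margin to 0.1%:
print(round(team_precision(0.001, 5), 3))  # → 0.995
```

The independence assumption is what makes this a back-of-the-envelope figure rather than a precise model, but it matches the 95% and 99.5% numbers in the text.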

It makes perfect sense and it is no secret that good processes help us work better, allowing us to build better products.

However, there is a catch: getting to this point of proximity to perfection requires not only knowing – or learning about – the business, but also a certain degree of operational independence. This, in turn, ensures that testers and quality assurance engineers can properly fulfill their duties, especially since even small errors can produce losses, financial or otherwise.

But knowing the business alone does not suffice: more often than not, testers must also understand policies specific to industry standards, to the business, and to the applications themselves, because these do not always overlap. Indeed, there is no single standard for financial app standards. As such, it is a company culture built on knowing the domain that holds together all, or most, of the best practices involved.

In short, one can visualize the overall testing experience as a chart with two axes. One axis represents industry-specific testing challenges – call it specificity (the inverse of generality) – with security testing toward the lower end, as it is more generic than the application itself. The other axis represents complexity, as in the graph below.


But there is more to the above complexity than just operational practices: there are also standards for security and data confidentiality. For instance, it’s rather unusual to test on real data, so, in general, data is created from scratch or anonymized as one of the first steps of data preparation prior to testing. Security testing, by comparison, is an ongoing process that starts as early as the first agreed-upon specifications for the financial app under test and ends only when support for the given application ends.
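As a minimal sketch of that anonymization step – with entirely hypothetical field names, and deliberately simpler than a production-grade pseudonymization pipeline – identifying fields can be replaced with stable pseudonyms while the numeric data the tests rely on is left intact:

```python
# Minimal sketch of anonymizing records before test use.
# Field names (account_holder, iban, balance) are hypothetical examples.
import hashlib

def anonymize(record: dict) -> dict:
    """Replace identifying fields with stable pseudonyms; keep numeric data."""
    out = dict(record)
    for field in ("account_holder", "iban"):
        if field in out:
            # A stable hash maps the same source value to the same pseudonym,
            # preserving referential integrity across test data sets.
            out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:12]
    return out

sample = {"account_holder": "Jane Doe",
          "iban": "RO49AAAA1B31007593840000",
          "balance": 1200.50}
print(anonymize(sample)["balance"])  # numeric data untouched: 1200.5
```

A real pipeline would also handle formats (e.g. producing syntactically valid IBANs) and would treat truncated hashes as pseudonymization, not irreversible anonymization; the point here is only the shape of the preparation step.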

Obviously, these are all real challenges of testing financial applications and, despite their supposed complexity, are actually manageable.

Given the best effort, would they be allowed to fail?

This is, indeed, a tricky question, because failure is inevitable. However, for financial applications, of utmost importance is how gracefully they fail, when they fail. So, the real question becomes: How do we make financial applications fail gracefully and recover swiftly from failures, regardless of their causes? And how do you test that?

It’s just as in the calculation above: the product of individual reliabilities gives us the overall reliability of the system. And, well, besides software crashes, which we do test, there are also hardware failures of the servers running the application and even service failures that we need to make sure don’t cascade into the application under test.
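The same product rule can be applied at the system level: the application is only as reliable as the chain of components it depends on. The figures below are purely illustrative:

```python
# Overall reliability as the product of component reliabilities.
# All figures are illustrative, not measurements.
from math import prod

component_reliability = {
    "application": 0.999,  # the software itself
    "server":      0.999,  # the hardware it runs on
    "services":    0.995,  # external services it depends on
}

system_reliability = prod(component_reliability.values())
print(round(system_reliability, 4))  # → 0.993
```

Note how the weakest dependency dominates: improving the application alone cannot lift the system above the reliability of the services it calls, which is why cascading failures get tested explicitly.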

So, while financial applications may fail, what matters is not only how deep they fall, but how fast they recover. Thanks to advanced high-reliability technologies, recovery times can be as little as a few seconds, which is mostly unnoticeable.
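To make “a few seconds of recovery” concrete, an availability percentage can be converted into expected downtime per year – a common back-of-the-envelope check when setting recovery targets:

```python
# Expected yearly downtime implied by an availability percentage.
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

def yearly_downtime_seconds(availability: float) -> float:
    """Seconds per year the system is expected to be unavailable."""
    return (1 - availability) * SECONDS_PER_YEAR

print(round(yearly_downtime_seconds(0.999)))    # "three nines": 31536 s, ~8.8 h/year
print(round(yearly_downtime_seconds(0.99999)))  # "five nines":  315 s, ~5 min/year
```

Seen this way, the difference between availability tiers is exactly the difference between outages that users notice and outages that pass as a hiccup.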

What, in this context, is a failure, anyway?

First of all, it’s a cost. Be it loss of business, loss of profit, or something else, this needs to be taken into account when defining the application parameters. One smart way of mitigating this is to migrate failures from loss of business to loss of profit, simply by using high-availability technologies. They have a cost and, as such, they reduce the profit of the company running the application. But the risk of lost business is greatly decreased.
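The trade-off described above is, at heart, an expected-cost comparison. A sketch with entirely hypothetical figures:

```python
# Trading a certain loss of profit (HA infrastructure) against a risked
# loss of business (an outage). All figures are hypothetical.

ha_cost_per_year = 50_000             # yearly cost of redundant infrastructure
outage_probability = 0.02             # chance of a serious outage in a year
business_loss_per_outage = 5_000_000  # revenue at risk during such an outage

expected_loss_without_ha = outage_probability * business_loss_per_outage
print(expected_loss_without_ha)                     # → 100000.0
print(expected_loss_without_ha > ha_cost_per_year)  # → True: HA is the cheaper risk
```

With different inputs the comparison can go the other way, which is precisely why the cost of failure belongs among the application parameters rather than being decided by default.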

Secondly, it’s still a cost, but it’s the cost of developing the application to fail in a graceful way, where recovery is possible and fast. It takes both time and money to build applications that, even if a failure occurs, still operate normally or, if this is not achievable, have only a few minutes of downtime.

Finally, a failure is the effect of unforeseeable factors on even the most foreseeable outcome of expertise, technology, process, and people.

So yes, testing financial applications is challenging – just as challenging as building them in the first place. Simply put, if it is challenging to build something, it is at least as challenging to build quality into it.