The Development of a Report Scheduler and Large Report Routing Tool
It was not so long ago that I had the exciting opportunity to build a software product for one of our clients in the financial space. Our team was driven to excel and followed agile best practices to bring this product to fruition. In this blog post, I will look at some of the key highlights from this project experience.
What we developed in the end was a robust reporting tool that allowed users to generate different kinds of reports on the performance of different asset classes by selecting various criteria. These queries were almost always run over historical data, i.e., data from before the close of the active business quarter. In some cases, however, users were interested in specific reports that related to the future and were expected to become available only at a specific point in time. This happens when a user's business decisions hinge on future results and they want to act the moment such information is made available.
In the previous version of this product, there was no way a user could get hold of the "early morning newspaper" with benchmark details tailored to their business requirements. They would have to wait and check every now and then to see if something interesting had come up. Often this meant a lost opportunity for the end user, and a client who was not wowed in the process. Every company wants to wow its clients by presenting something new and extraordinary that helps them make better business decisions. As the old saying goes: "Build a better mousetrap, and the world will beat a path to your door." Here, then, was an opportunity to introduce a feature for "Premium" users (so as to distinguish them from standard users) and encourage upgrades to their contracts/SLAs.
So for version 2.2, 3Pillar Global designed a new epic for the users of this tool: the Report Scheduler.
The Report Scheduler put a powerful feature in the hands of Premium users, giving them the ability to define business criteria and choose when they would receive a report. Earlier, they only had the option of saving a criteria set and running it on demand; with the introduction of this new feature, users could automate the process and have the report run on specific triggers.
In business terms, asset class performance details are made available at two checkpoints: once 60% and once 100% of the underlying funds are processed (called Preliminary and Published, respectively). The user can now have the report made available as soon as the underlying asset classes are updated to Preliminary or Published, or in whatever state they are in as of a particular date (in which case the last published data is returned). By receiving an alert at that moment, the user can go beyond the "early morning newspaper" and get a just-in-time update for immediate incorporation into business decisions.
Each report that needs to be scheduled has its criteria saved in the normal workflow, but carries additional properties identifying the notification mode and target email. Such reports are tagged as scheduled and are processed separately from ordinary saved reports.
A back-end engine continuously polls the list of scheduled reports and publishes each report whose criteria are met. The email engine then sends a preset notification email to the intended recipients, informing them that the report is available for consumption.
The scheduling engine uses the Windows Task Scheduler component to set up the job. Each time the job runs successfully, the next run is scheduled a few (configurable) hours later.
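One pass of this polling engine might look like the following sketch. The class fields, function names, and the four-hour gap are all illustrative assumptions, not the product's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledReport:
    """Hypothetical model of a scheduled report: saved criteria
    plus the extra notification properties described above."""
    report_id: int
    criteria: dict
    notification_type: str   # "preliminary", "published", or "specific_date"
    target_email: str
    generated: bool = False

NEXT_RUN_HOURS = 4  # configurable gap between successful runs (illustrative)

def run_scheduler(reports, criteria_met, publish, notify):
    """One pass of the polling engine: publish every not-yet-generated
    report whose criteria are met, notify the recipient, and return
    the time of the next scheduled run."""
    for report in reports:
        if not report.generated and criteria_met(report):
            publish(report)
            notify(report.target_email, report.report_id)
            report.generated = True
    # each successful run schedules the next one a few hours later
    return datetime.now() + timedelta(hours=NEXT_RUN_HOURS)
```

The `criteria_met`, `publish`, and `notify` callables stand in for the publish-status checks and email engine discussed in this post.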
The job itself is similar to other jobs for fund calculation and asset class publishing.
For this task to run, we must ensure that certain services are not running at the same time, in order to avoid conflicts with the state of asset classes. For example, the fund calculation service, which computes the IRR status of each fund, and the asset class publishing service, which aggregates different funds into a high-level performance benchmark, both block the scheduling service because their output is required before the schedule can run. Such blocker services, as defined in configuration, are checked for their running status, and the scheduler is put on hold until they finish.
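The blocker check itself is simple to express. This is a minimal sketch; the service names and the `is_running` probe are assumptions for illustration:

```python
# Blocker services as they would appear in configuration (names illustrative).
BLOCKER_SERVICES = ["FundCalculationService", "AssetClassPublishingService"]

def scheduler_may_run(is_running, blockers=BLOCKER_SERVICES):
    """Return True only when no configured blocker service is currently
    active; otherwise the scheduler is put on hold for this pass."""
    return not any(is_running(name) for name in blockers)
```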
Once the scheduler begins, it identifies the current working quarter based on the asset class publish calendar. It then searches for any newly scheduled reports, as well as any reports that could not be published earlier due to unavailability of data. The first search runs very frequently (typically every 15 minutes); the second typically runs on a half-day basis.
It checks the notification type of the report, then verifies whether any reports have already been generated for that notification. If none have, it checks whether all asset classes selected for the report carry the appropriate publish status (Preliminary or Published). All asset classes must have the same status; if the desired status is Preliminary and all asset classes are already Published, the report will not be generated.
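The status rule above can be sketched as a single predicate. The function name and status strings are illustrative:

```python
def report_can_generate(desired_status, asset_class_statuses):
    """All selected asset classes must carry exactly the desired publish
    status. Note the asymmetry described above: a 'preliminary' report is
    NOT generated once everything has already moved to 'published'."""
    return all(status == desired_status for status in asset_class_statuses)
```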
For the specific-date report, the asset classes are first validated against the current quarter's Published data; if that is not available, Preliminary is tried. If both fail, the last quarter's Published data is returned.
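That fallback chain can be sketched as an ordered lookup. The `quarter_data` mapping is a hypothetical stand-in for the actual data store:

```python
def specific_date_data(quarter_data):
    """Resolve data for a specific-date report using the fallback chain
    described above: current quarter Published, then current quarter
    Preliminary, then last quarter Published. `quarter_data` is assumed
    to map (quarter, status) pairs to report data, or None if absent."""
    for quarter, status in [("current", "published"),
                            ("current", "preliminary"),
                            ("last", "published")]:
        data = quarter_data.get((quarter, status))
        if data is not None:
            return data
    return None  # no data available at any checkpoint
```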
Large Report Routing
A large report involves group-by and nested child group-by clauses, requiring far more calculations than a simple report that merely aggregates fund data without grouping. Performing such calculations is time-consuming and does not fit the web architecture's requirement of an under-5-second result. Hence, if a report's criteria involve such grouping, a timing estimate is made for the calculation; if the number of calculations exceeds a configured threshold, the user is advised to remove the groupings from the criteria.
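The shape of that check is roughly as follows. The cost model and the threshold value here are invented purely for illustration; the real estimator is the product's own and is not described in this post:

```python
MAX_CALCULATIONS = 50_000  # configured threshold (illustrative value)

def estimate_calculations(num_funds, group_by_levels):
    """Toy cost model: assume each nesting level of group-by roughly
    doubles the aggregation work. An assumption for illustration only."""
    return num_funds * (2 ** group_by_levels)

def is_large_report(num_funds, group_by_levels):
    """A report without grouping is never routed as 'large'; a grouped
    report is large when its estimated cost exceeds the threshold."""
    if group_by_levels == 0:
        return False
    return estimate_calculations(num_funds, group_by_levels) > MAX_CALCULATIONS
```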
This was often disconcerting for the end user, who would wonder: if a criterion is offered, why can't it be applied? The user has the requisite permissions, and no business rule stops them. Only the huge performance cost prevented grouped reports from being generated.
To address this challenge, the large report detection is now followed by a report scheduling decision. With a reduced set of options, the user only needs to provide the report name and a notification email; the type is automatically set to specific date (today's) and the report format to Excel. The report is scheduled to run immediately (through the 5-minute job), and the user is informed when the report becomes available.
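The implicit scheduling request built from those defaults might look like this sketch; the field names are illustrative assumptions:

```python
from datetime import date

def schedule_large_report(criteria, report_name, target_email):
    """Build the scheduling request for a large report as described above:
    notification type fixed to specific date (today), format fixed to
    Excel, and picked up by the next pass of the 5-minute job."""
    return {
        "name": report_name,
        "criteria": criteria,
        "notification_type": "specific_date",
        "specific_date": date.today().isoformat(),
        "format": "xlsx",
        "target_email": target_email,
        "run_immediately": True,
    }
```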
This reuse of existing functionality to transform the user experience is what made the project truly fulfilling for me and the team. At 3Pillar, we know that getting everything right at the very start is impossible. However, we chip away at that limitation release by release, sprint by sprint. We first build something basic enough to get the user formulating thoughts. We engineer the technology simply, so that we can continue building not only on top of it, but all around it. And once the client partner realizes that doing something a particular way will deliver far more dollar value, we re-engineer the product to suit their needs. Doing so while reusing the existing product means less learning and more earning.
This is another example of how we consistently listen to our clients' thoughts and anticipate their future needs. We believe this is the fundamental strength that will eventually chart 3Pillar's course to becoming the "Most Respected Product Development Partner" in the industry.