Early AI Hype Makes It Seem Easy
It’s surprising that teams of people are still manually reading and error-checking financial reports at a time when AI can write detailed paragraphs or suggest edits as you type. It seems intuitive that applying AI to the burden of financial reporting should be straightforward. Amid the AI hype, many firms claim it is. But execution has shown us how hard it really is.
Financial reports contain varied sections, tables, text, charts, and notes. Each section must foot to other sections and to its source records. No two reports are the same, and even reports for the same fund can change format over time.
Our team at OnCorps AI has learned an important lesson, one that only comes from supporting multiple firms and hundreds of reports in production: it is a mistake to claim victory because you have used AI to read one report. The wheels fall off the wagon when you try to apply the same technology to different reports. The lesson is that we needed to teach the AI to recognize the differences between reports.
We are now in our fourth year of running financial reporting software and algorithms in production, covering several hundred reports and dozens of combinations of fund types and domiciles. Moreover, we have made several upgrades to our algorithms that have effectively reduced the human configuration and troubleshooting required when report formats change. The before and after has been dramatic. Our AI can read a 500-page report and run thousands of checks in about 30 minutes, and it has identified twice as many issues as human reviewers. At the same time, teams using the AI resolve issues 75 percent faster, boosting worker productivity by 150 percent.
Teaching an AI to Learn to Read a Report
There is a profound difference between instructing software to perform a specific task and training an AI to perform that task under different conditions. Instead of applying rules-based software to parse documents, we realized we needed to teach the AI to read reports. This required more sophisticated algorithms.
One of the problems we encountered was that our software was blind: if a table or section moves, the only thing software can tell you is that it can’t find it. To close this gap, our team adopted a visual AI technique from autonomous driving systems: object detection (see below). We then paired the object detection algorithm with an AI that learns to recognize variations in descriptions of the same thing.
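To make this concrete, here is a minimal sketch of what the detection step can look like. It assumes a torchvision Faster R-CNN fine-tuned on report pages labeled with layout regions; the label set, weights file, and confidence threshold are illustrative, not our production model.

```python
# Minimal sketch: locating tables on a report page with an object
# detection model. Assumes a Faster R-CNN fine-tuned on page images
# labeled with layout regions; labels, paths, and threshold are
# illustrative only.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

CLASSES = ["background", "table", "header", "footnote"]  # hypothetical label set

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=len(CLASSES))
model.load_state_dict(torch.load("report_layout_weights.pt"))  # hypothetical weights
model.eval()

page = to_tensor(Image.open("report_page_001.png").convert("RGB"))
with torch.no_grad():
    detections = model([page])[0]

# Keep confident detections; each box tells downstream checks where a
# table lives even if it moved relative to the previous report format.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(CLASSES[label], [round(v) for v in box.tolist()], round(score.item(), 2))
```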
These paired Named Entity Recognition (NER) algorithms let our system learn all the different ways a table header or security name can refer to the same thing. Critically, they can also learn when different descriptions mean different things. A number of firms have instead employed large workforces to do this manually after implementing simple non-visual, non-learning technology.
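As a simplified illustration of the normalization idea, the sketch below maps variant table headers onto canonical names. Plain fuzzy string matching stands in for a trained NER model, and the alias table and threshold are hypothetical.

```python
# Simplified illustration of entity normalization: mapping variant
# table headers to one canonical name. A production system would use a
# trained NER/entity-linking model; here fuzzy matching stands in.
from difflib import SequenceMatcher

CANONICAL = {
    "net asset value": ["nav", "net assets", "total net asset value"],
    "market value": ["fair value", "mkt value", "value (usd)"],
}

def normalize_header(header: str, threshold: float = 0.8) -> str | None:
    """Return the canonical name for a header variant, or None if no match."""
    text = header.strip().lower()
    best_name, best_score = None, 0.0
    for name, aliases in CANONICAL.items():
        for candidate in [name, *aliases]:
            score = SequenceMatcher(None, text, candidate).ratio()
            if score > best_score:
                best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(normalize_header("Total Net Asset Value"))  # -> net asset value
print(normalize_header("Fair Value"))             # -> market value
print(normalize_header("Coupon Rate"))            # -> None (unknown header)
```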
The Complete System
As illustrated in the diagram below, our system performs end-to-end reading and reconciliation of detailed financial reports. This includes: 1) ingesting PDF reports and source records, 2) running checks between report sections and source documents, and 3) presenting breaks to humans, both visually and with plain descriptions. You can see the contrast with the “before” diagram: we have minimized human support in both front-end configuration and back-end resolution of breaks.
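The sketch below outlines those three stages in miniature. All names and figures are hypothetical, and stage 1 is stubbed with in-memory data where the layout and NER models’ output would land.

```python
# Minimal sketch of the three-stage pipeline described above: ingest,
# check, present. All names are hypothetical; stage 1 is stubbed.
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    rows: list[dict]

@dataclass
class Break:
    section: str
    check: str
    detail: str

# Stage 1 (stub): in production, parsed from the PDF by the layout and NER models.
sections = [Section("schedule_of_investments",
                    [{"security": "ACME 4.5% 2031", "value": 1_250_000.00},
                     {"security": "GLOBEX FRN 2029", "value": 980_500.00}])]
source_totals = {"schedule_of_investments": 2_230_000.00}

def run_checks(sections: list[Section], source: dict[str, float]) -> list[Break]:
    """Stage 2: foot each section and reconcile it against source records."""
    breaks = []
    for s in sections:
        total = sum(row["value"] for row in s.rows)
        expected = source.get(s.name)
        if expected is not None and abs(total - expected) > 0.01:
            breaks.append(Break(s.name, "totals_foot",
                                f"report {total:,.2f} vs source {expected:,.2f}"))
    return breaks

# Stage 3: present breaks to reviewers with a plain description.
for b in run_checks(sections, source_totals):
    print(f"[{b.section}] {b.check}: {b.detail}")
# -> [schedule_of_investments] totals_foot: report 2,230,500.00 vs source 2,230,000.00
```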
As indicated in the chart below, our AI has been trained on a variety of fund types across six domiciles. Expanding the platform to accommodate the differences in each report has made our algorithms more robust, because it gives the AI more of the samples and variations it needs to perform well.
One of the advantages of working with the world's largest asset managers is that we can develop code and algorithms that meet their unique specifications. Over several years, we built a check library that runs checks by report section. The table below lists some of the nearly 200 checks our system can run.
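One common way to organize such a library is a registry keyed by report section; the sketch below shows the pattern. The check names and thresholds are illustrative, not our actual check definitions.

```python
# Hedged sketch of a check library organized by report section. The
# registry pattern and the checks here are illustrative only.
from collections import defaultdict
from typing import Callable

CHECKS: dict[str, list[Callable]] = defaultdict(list)

def check(section: str):
    """Register a check function under a report section."""
    def register(fn: Callable) -> Callable:
        CHECKS[section].append(fn)
        return fn
    return register

@check("schedule_of_investments")
def totals_foot(report: dict, source: dict) -> str | None:
    diff = report["total"] - source["total"]
    return f"totals differ by {diff:,.2f}" if abs(diff) > 0.01 else None

@check("statement_of_operations")
def expense_ratio_in_range(report: dict, source: dict) -> str | None:
    ratio = report["expenses"] / report["average_net_assets"]
    return f"expense ratio {ratio:.2%} out of range" if ratio > 0.05 else None

def run_section(section: str, report: dict, source: dict) -> list[str]:
    """Run every registered check for a section; return break descriptions."""
    return [msg for fn in CHECKS[section]
            if (msg := fn(report, source)) is not None]
```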
For many reports, the AI checks the entire document within minutes. The diagram below shows a typical distribution of breaks by section and check type. The chart on the right indicates that we have nearly eliminated the third round of checks that both managers and service providers spend time completing.
The Outcomes Are Dramatic
We have carefully tested and baselined our reporting AI system, which allows us to compare results once it is in production. As the chart below indicates, we have significantly reduced the time it takes to resolve an exception, and we expect the team’s productivity to rise by 150 percent by the end of this year. What is even more impressive is that our system typically finds more errors than manual review. This means we are cutting labor significantly even as we find more issues.
What’s Next?
We are quite excited about the potential for AI in financial reporting and reconciliation. Though it is early, AI will ultimately provide autonomous agents that learn to improve tasks and decisions in pursuit of a goal. Much of the data and learning we are building will likely help power these agents.
There are several new advances in AI that will change the way financial institutions prepare and read reports. These include:
- Rapid report comparisons and trending. By applying retrieval-augmented generation (RAG) tools, we will be able to give analysts and operations teams the ability to compare reports. RAG tools are simply private sources of information that an AI consults before it answers. As the system matures, the entire process can be improved through fine-tuning and more specific corpus definitions. (A minimal retrieval sketch appears after this list.)
- The continuous tracking and reconciliation of positions against the categories of a report. A report is simply a snapshot in time that describes financial performance, and portfolio managers and leaders are often unaware of the impact of specific trades and expenses on the end result. Because OnCorps AI provides both reporting and position-reconciliation offerings, we see a time when the two AI systems can work together to give managers a real-time view of changes in performance and key ratios. (See the second sketch after this list.)
- Auto-generation of reports. Rudimentary LLM systems are already capable of generating summaries and charts. Some of this work is still early and subject to hallucinations. However, a promising and production-ready capability is asking LLMs to generate simple code. Most of the labor in report generation arises when managers request format changes to reporting-system templates, and this kind of code editing and generation is something AI can handle more easily. (See the final sketch after this list.)
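As a first sketch, here is the retrieval half of a RAG workflow in miniature: finding the report passages most relevant to an analyst's question before handing them to a model. TF-IDF stands in for a production embedding index, and the sample sections are invented.

```python
# Minimal sketch of RAG retrieval: rank report sections by relevance
# to an analyst's question. TF-IDF stands in for an embedding index.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sections = [
    "Schedule of investments: corporate bonds 62%, sovereigns 28%, cash 10%.",
    "Statement of operations: total expenses rose 4% year over year.",
    "Notes: the fund changed its pricing vendor in Q3.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(sections)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k sections most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), index)[0]
    ranked = sorted(range(len(sections)), key=lambda i: scores[i], reverse=True)
    return [sections[i] for i in ranked[:k]]

# The retrieved context would then be prepended to the LLM prompt,
# grounding the comparison in the firm's own reports.
print(retrieve("How did expenses change versus last year?"))
```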
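The second sketch illustrates the continuous reconciliation idea: rolling live positions up by report category and flagging drift against the last published figures. Category names, values, and the tolerance are hypothetical.

```python
# Hedged sketch of continuous position-versus-report reconciliation:
# compare live position totals, rolled up by report category, with
# the figures last published. All data here is illustrative.
positions = [
    {"security": "ACME 4.5% 2031", "category": "corporate_bonds", "value": 1_250_000.0},
    {"security": "US TSY 2.0% 2030", "category": "sovereigns", "value": 740_000.0},
]
reported = {"corporate_bonds": 1_250_000.0, "sovereigns": 765_000.0}

def reconcile(positions: list[dict], reported: dict[str, float],
              tolerance: float = 0.01) -> dict[str, float]:
    """Return category -> drift for every category outside tolerance."""
    live: dict[str, float] = {}
    for p in positions:
        live[p["category"]] = live.get(p["category"], 0.0) + p["value"]
    return {cat: live.get(cat, 0.0) - ref
            for cat, ref in reported.items()
            if abs(live.get(cat, 0.0) - ref) > tolerance}

print(reconcile(positions, reported))
# -> {'sovereigns': -25000.0}: the live book has drifted from the report
```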
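Finally, a sketch of the template-editing idea: asking a model to apply a requested format change to a report template. `call_llm` is a placeholder for whatever model API a firm uses, and the template and prompt are invented for illustration.

```python
# Illustrative sketch of using an LLM to apply a requested format
# change to a report template. `call_llm` is a placeholder, not a
# real API; the template and prompt are invented.
TEMPLATE = "{fund_name} | NAV: {nav:.2f} | Date: {date}"

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its reply."""
    raise NotImplementedError("wire up your model provider here")

def request_template_change(template: str, change: str) -> str:
    """Ask the model to edit the template per the manager's request."""
    prompt = (
        "You edit report templates written as Python format strings.\n"
        f"Current template:\n{template}\n"
        f"Requested change: {change}\n"
        "Return only the revised template."
    )
    return call_llm(prompt)

# Example request a manager might make:
#   request_template_change(TEMPLATE, "show NAV to four decimal places")
# A well-behaved model would return something like:
#   "{fund_name} | NAV: {nav:.4f} | Date: {date}"
```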