Technology Waves Start as Solutions in Search of a Problem
Leaders navigating the early stages of the AI wave face a dilemma. If they launch the wrong AI solutions, they will face a backlash from employees and customers. If they wait too long to find the right solution, they will miss an opportunity to boost growth or productivity. The best AI solutions will depend on continuous fine-tuning and upgrading of algorithms.
Matching reliable AI algorithms with real business problems is hard. Getting the right data is the first obstacle. Defining the business problem in a way an algorithm can address is the next challenge. Finally, many firms don’t have baseline data on human performance, making it difficult to prove AI’s benefits. It is a marathon, not a sprint.
Generative AI has made it almost too easy to develop applications. As in many early waves, expectations are high and solutions are immature. Some demos ask LLMs to answer questions beyond their capabilities, either because the body of work they draw from is inadequate or because the questions require deeper computation. Generative AI demos appear to be magic, but they showcase only their best features. They are like an engaging movie preview that shows only the best scenes of a mediocre film. Making an algorithm work consistently is challenging.
The AI Pivot Technique
Leaders can, however, achieve high returns by using an AI pivot technique we’ve refined over years of working with large firms. This method assumes you will change algorithms and redefine target problems iteratively. Teams start by solving basic problems, then pivot to more sophisticated algorithms that, in turn, help them achieve bigger outcomes. This approach gives teams time to learn the nuances of solving complex problems with limited datasets, and it tempers the expectations of decision makers who might otherwise reject the AI.
AI Pivoting to Reduce Labor Costs in Investment Operations
OnCorps AI started developing algorithms for investment operations five years ago. During that time, we learned we could make more progress toward our goal of reducing labor and error rates if we continued to build new and complementary algorithms. At the same time, the AI tool landscape has changed considerably, making it far easier for us to build new data pipelines and test new algorithms.
We started working with one of the premier fixed-income investors, with over $2 trillion in assets under management. Its leaders were early believers in AI, seeing the potential for algorithms to reduce time, labor, and error rates. When we began working with them five years ago, our goal was to spot errors in financial and accounting transactions. Our early algorithms required feedback from human experts. We soon realized that any solution that generated too many exceptions would be rejected.
Here is a summary of our pivots:
- First Version. Once we ran our first algorithms, we realized we needed a more sophisticated way to reduce exceptions. We found success using anomaly detection algorithms, which allowed us to isolate outliers using many combinations of variables. The algorithm ran hundreds of thousands of samples to isolate outliers more intelligently. After some expert feedback and fine-tuning, we reduced the number of exceptions by 90 percent. While we succeeded in reducing exceptions, we believed there was a better way to spot errors. Conventional methods for catching errors are often simple top-down checks, like comparing values with benchmarks. Such checks might catch major errors, but they let small errors slip through the net. To address this gap, we developed incident detection algorithms, which identify the combinations of data patterns that existed during past errors. We currently run over 20 incident detection algorithms in daily production. (See the first sketch after this list.)
- Pivot 1. Once the anomaly and incident detection algorithms were running successfully, we realized there was more we could do. Even though we had reduced the number of exceptions, we noticed that many were redundant and recurring. Operations analysts essentially experience “Groundhog Day” as they see the same types of issues repeatedly. If an algorithm could match current exceptions with past resolved issues of the same type, we could eliminate the redundant work. We are currently testing a method of finding “nearest neighbors” for exceptions and applying similarity scores to match them. We expect the highest-scoring matches will save analysts a considerable amount of time. (See the second sketch after this list.)
- Pivot 2. Though still in R&D, our most exciting work is developing AI solutions that autonomously resolve exceptions. We have already developed and tested LLM-powered agents that are “chained” together to perform specific tasks. Our early work shows this team of agents can help analysts pinpoint root causes. As these agents become more powerful, we expect it will become possible to replace human analysts entirely for some of this work. If AI agents can draw on a large corpus of past resolved exceptions, it appears likely they can autonomously resolve certain types of exceptions. (See the third sketch after this list.)
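To make the first version concrete, here is a minimal sketch of the outlier-isolation idea using scikit-learn’s IsolationForest on synthetic transaction data. The feature names, sample sizes, and contamination threshold are illustrative assumptions, not our production configuration, and any multivariate anomaly detector could stand in.

```python
# Minimal anomaly-detection sketch. Features and thresholds are
# illustrative stand-ins, not production values.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical daily transaction feed: each row is one accounting record.
rng = np.random.default_rng(seed=42)
transactions = pd.DataFrame({
    "price_change_pct": rng.normal(0.0, 0.5, 10_000),
    "position_delta": rng.normal(0.0, 1.0, 10_000),
    "days_since_last_trade": rng.integers(0, 30, 10_000),
})

# Fit on combinations of variables rather than single-column benchmarks,
# so records that look normal on any one dimension can still surface.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
transactions["outlier"] = model.fit_predict(transactions) == -1

exceptions = transactions[transactions["outlier"]]
print(f"{len(exceptions)} of {len(transactions)} records flagged for review")
```

The point of the multivariate approach is visible in the fit step: the model scores each record against combinations of variables at once, rather than comparing one value at a time against a benchmark.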
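Pivot 1’s matching step can be sketched as nearest-neighbor retrieval over vectorized exception descriptions. TF-IDF is used here as a stand-in representation, and the sample exceptions and similarity cutoff are hypothetical.

```python
# Sketch of matching a new exception to past resolved exceptions via
# nearest neighbors. Representation and cutoff are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

past_exceptions = [
    "stale price on corporate bond, vendor feed delayed",
    "missing accrual on floating-rate note after reset",
    "duplicate trade booking in custody reconciliation",
]

# Vectorize past resolved cases and index them for similarity search.
vectorizer = TfidfVectorizer()
past_vectors = vectorizer.fit_transform(past_exceptions)
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(past_vectors)

new_exception = "price stale on corporate bond because pricing vendor was late"
distance, idx = index.kneighbors(vectorizer.transform([new_exception]))

similarity = 1.0 - distance[0][0]
if similarity > 0.3:  # illustrative threshold for a confident match
    print(f"Closest past case ({similarity:.2f}): {past_exceptions[idx[0][0]]}")
else:
    print("No sufficiently similar past case; route to an analyst")
```

In practice, the similarity score doubles as a routing signal: high-scoring matches can surface the prior resolution automatically, while low scores fall back to an analyst.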
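Pivot 2’s agent chaining can be illustrated schematically: each agent is a function that calls an LLM with a task-specific prompt, and the output of one agent feeds the next. The `call_llm` stub, prompts, and function names are hypothetical; any chat-completion API could be wired in.

```python
# Schematic sketch of chaining LLM agents for exception triage.
# call_llm is a hypothetical stub; substitute a real LLM provider.
def call_llm(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError("wire up your LLM provider here")

def classify_exception(exception_text: str) -> str:
    # Agent 1: label the exception type so downstream agents can specialize.
    return call_llm("Classify this investment-operations exception by type.",
                    exception_text)

def find_root_cause(exception_text: str, exception_type: str) -> str:
    # Agent 2: reason about the likely root cause, given the classification.
    return call_llm(f"Given an exception of type '{exception_type}', "
                    "suggest the most likely root cause.", exception_text)

def propose_resolution(exception_text: str, root_cause: str,
                       past_resolutions: list[str]) -> str:
    # Agent 3: draft a resolution grounded in past resolved cases.
    context = "\n".join(past_resolutions)
    return call_llm("Propose a resolution consistent with these past cases:\n"
                    f"{context}\nRoot cause: {root_cause}", exception_text)

def triage(exception_text: str, past_resolutions: list[str]) -> str:
    # Chain: classify -> root cause -> resolution, each step feeding the next.
    etype = classify_exception(exception_text)
    cause = find_root_cause(exception_text, etype)
    return propose_resolution(exception_text, cause, past_resolutions)
```

The chain narrows the task at each step, which is what lets the final agent draw on past resolved cases rather than answering from scratch.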
Implications for Managing AI
There is no question that the complexity of these problems, coupled with the rate of change in AI tools, makes AI pivots an important implementation method. But there is a challenge in the way AI is delivered today. Most AI teams are geared toward R&D. While they can develop prototypes, most aren’t funded or charged with maintaining the algorithms they build. To make matters worse, most AI is delivered by the software industry, whose culture is to release minimum viable products and then minimize support costs. This business model is not suited to managing and fine-tuning algorithms.
The most successful algorithms will be developed to solve specific industry problems, like ours in investment operations. But to make these algorithms perform, firms will need to chart multi-year time horizons, commit to dedicated teams, and expect those teams to continuously seek better algorithms, data sources, and methods.