The public debate over artificial intelligence, focused on its risks and benefits, tends to overhype both extremes. There is a public perception that AI algorithms can operate autonomously, making decisions independently of their human creators or at the same caliber, a capability more widely known as artificial general intelligence. Absent a mechanism built by a human, this perception is not accurate.
The use of advanced AI is expanding rapidly beyond the comfortable realms of convenience and daily utility, triggering valid fears of diminished human control. Responsible human oversight — paired with consistent, reliable AI models and administered through a lens of common sense, strong ethics and fairness — will convert that fear into trust. Quality and bias controls must be firmly established as part of human oversight from the start of implementation.
For investors to relinquish human control of their investment decisions and rely more on AI, trust must be earned and a track record must serve as proof. At Rosetta Analytics AI, our eight years of developing and deploying neural-network-based investment strategies shape our views on the rewards and pitfalls of harnessing the relationship-detection and risk-allocation power of deep reinforcement learning. We believe now is the moment to convert fear into trust.
First conceived in the late 1950s, neural networks are algorithms coded to search for sequences or relationships that exist in data. They have surpassed the human brain’s capability to solve certain problems. These algorithms are quite proficient at performing clearly defined tasks, such as predicting the next word in a sentence based on the preceding word sequence. They filter noise from data and calculate a prediction or an output more efficiently and more accurately than linear or cluster-based statistical frameworks.
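As a concrete illustration of that kind of sequence prediction, the following Python sketch (using PyTorch and a toy corpus invented for this example) trains a small neural network to predict the next word from the two preceding words. It is a minimal sketch of the technique, not a production model, and every name and hyperparameter in it is an assumption made for illustration.

```python
import torch
import torch.nn as nn

# Toy corpus: the model learns which word tends to follow a two-word context.
corpus = "markets rise markets fall markets rise traders buy traders sell".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Build (context, next-word) training pairs with a context window of 2 words.
ctx_len = 2
X = torch.tensor([[idx[corpus[i]], idx[corpus[i + 1]]]
                  for i in range(len(corpus) - ctx_len)])
y = torch.tensor([idx[corpus[i + ctx_len]] for i in range(len(corpus) - ctx_len)])

# A small feed-forward network: embed each context word, concatenate, predict.
class NextWord(nn.Module):
    def __init__(self, vocab_size, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.net = nn.Sequential(nn.Linear(ctx_len * emb, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, vocab_size))

    def forward(self, ctx):
        e = self.emb(ctx).flatten(1)   # (batch, ctx_len * emb)
        return self.net(e)             # unnormalized next-word scores

model = NextWord(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                # brief full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Predict the word most likely to follow "markets rise".
ctx = torch.tensor([[idx["markets"], idx["rise"]]])
print(vocab[model(ctx).argmax(dim=1).item()])
```

Modern large language models use far deeper architectures and vastly larger corpora, but the core task shown here, scoring every candidate next token given the preceding context, is the same.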
Computer science — democratized by the advent of cloud computing and the acceptance of open-source code and code repositories — has unleashed a universe of ideas to make mankind “better than he was before. Better. Stronger. Faster.” Unfortunately for investors, the investment industry has been slow to adopt this technology.
Popular portrayals of autonomous algorithms, in books such as Michael Lewis’s Flash Boys, are largely negative. Public mistrust of AI is a natural consequence of high-frequency trading firms locating their servers closer to exchanges to gain an information advantage, using autonomous predictive algorithms to act on data pulled from the pipes fractions of a second sooner. A recent example of rogue traders overriding models undermines perception and trust even more.
Now the market is excited by the rapid release of next-generation large language models (LLMs) like ChatGPT, Bard [now Gemini] and Claude, which are refined with deep reinforcement learning. These are just some of the advancements taking place in advanced artificial intelligence. Debates over copyright infringement are finally spurring meaningful conversations about who has rights to training data, and about data privacy more broadly. These developments are healthy and will increase public trust in the responsible use of this technology.
Market excitement about AI should go beyond natural language processing. Combining deep neural networks with optimization frameworks like reinforcement learning is revolutionizing the ability of a quantitative framework to extract information from data. These systems learn from both the sequential and cumulative behavior of the data and, at the same time, decide how to allocate portfolio risk.
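A minimal sketch of the idea follows, again in Python with PyTorch. Everything here (the synthetic return series, the network sizes, the learning rate) is an invented assumption for illustration, and the update rule is a deterministic one-step simplification: it performs gradient ascent on the next-period portfolio return, standing in for a full deep reinforcement learning method such as actor-critic, which would also estimate cumulative, multi-step reward.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_assets, window = 3, 10

# Policy network: maps a window of recent asset returns to portfolio weights.
policy = nn.Sequential(
    nn.Linear(n_assets * window, 32),
    nn.ReLU(),
    nn.Linear(32, n_assets),
    nn.Softmax(dim=-1),                # weights sum to 1 (long-only portfolio)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Synthetic market: random returns, with one asset given a slight positive drift.
returns = 0.01 * torch.randn(1000, n_assets)
returns[:, 0] += 0.002

for t in range(window, len(returns) - 1):
    state = returns[t - window:t].flatten()      # sequential behavior of the data
    weights = policy(state)                      # allocation decision
    reward = torch.dot(weights, returns[t + 1])  # next-period portfolio return

    # Simplified update: ascend the reward directly through the policy.
    # A full deep RL agent would instead optimize an estimate of cumulative reward.
    loss = -reward
    opt.zero_grad()
    loss.backward()
    opt.step()

print(policy(returns[-window:].flatten()).detach())  # learned allocation
```

On this synthetic series the policy drifts toward the asset with positive expected return; the point of the sketch is only that the same network both reads the recent sequence of market data and outputs the risk allocation, rather than splitting prediction and allocation into separate steps.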
We believe there is investment alpha to be earned, even in public market data. Since no two models are alike, investors have an opportunity to be rewarded by conducting careful due diligence on investment firms deploying AI directly into the investment process. Instead of pursuing investment alpha, many investment managers seek to maximize operational alpha, focusing their investment in AI on the back office and trade execution to squeeze more cost out of their books.
Creating, deploying and monitoring AI-driven allocation models requires a specification, compliance and governance process that should already be in place for all quantitative models. Allocators historically lacked the resources to evaluate model design and implementation, whether for simple common-factor models or for more opaque models like neural networks. The integration of investment teams with both computer science and financial expertise is closing this knowledge gap.
Dynamically extracting relevant information directly from data, adapting as the underlying market environment changes, and simultaneously learning to allocate risk is a game changer. The consistency of results, and the direct experience gained by these models’ human creators, are the catalysts that will drive acceptance by turning fear into trust.