Artificial intelligence is playing an increasingly important role in the financial sector, improving decision-making and efficiency. According to Insider Intelligence, AI applications will save banks and financial institutions an estimated $447 billion by 2023. Under competitive pressure, AI is being adopted rapidly in finance, even as a cloud of risk hangs over the technology.
Applications of AI in Finance
Central to many of these applications are large language models: neural networks trained on vast volumes of data, including text and documents. They power applications like chatbots and virtual assistants, which can provide financial guidance and wealth management support.
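As a purely illustrative sketch of how such an assistant might be wired up, and not any bank's actual system, the snippet below sends a customer question to a hosted language model through the OpenAI Python client; the model name, system prompt and safeguards are placeholder assumptions.

```python
# Minimal sketch of an LLM-backed financial assistant (illustrative only).
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and system prompt are placeholders, not any bank's setup.
from openai import OpenAI

client = OpenAI()

def ask_assistant(question: str) -> str:
    """Send a customer question to the model with a cautious, guidance-only prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a cautious financial assistant. Explain concepts "
                         "clearly and do not give personalized investment advice.")},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative and repeatable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_assistant("What is the difference between an ETF and a mutual fund?"))
```

In practice, production assistants layer retrieval over proprietary research, guardrails and human review on top of a call like this.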
Financial institutions are also increasingly susceptible to fraud, and defending against it is not straightforward: tightening current fraud prevention could obstruct legitimate consumers’ access, while raising the barrier to entry excludes millions of potential customers. By generating simulated instances of fraudulent transactions, AI can help models learn to distinguish legitimate from fraudulent patterns.
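To make the idea concrete, here is a minimal, self-contained sketch, not any institution's production system, that simulates extra fraud examples by sampling from a distribution fitted to a small set of observed fraud cases and then trains a simple classifier; every feature and distribution in it is an illustrative assumption.

```python
# Illustrative sketch: augmenting scarce fraud examples with simulated ones
# before training a classifier. Feature distributions are made up for the demo.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Legitimate transactions: (amount, hour of day, merchant risk score)
legit = np.column_stack([
    rng.lognormal(3.5, 0.8, 5000),      # typical amounts
    rng.integers(6, 23, 5000),          # daytime activity
    rng.beta(2, 8, 5000),               # mostly low-risk merchants
])

# A handful of observed fraud cases (class imbalance is the usual problem)
fraud = np.column_stack([
    rng.lognormal(5.5, 1.0, 50),        # larger amounts
    rng.integers(0, 6, 50),             # overnight activity
    rng.beta(6, 2, 50),                 # higher-risk merchants
])

# "Generative" step, kept deliberately simple: fit a Gaussian to the observed
# fraud features and sample additional synthetic fraud examples from it.
mean, cov = fraud.mean(axis=0), np.cov(fraud, rowvar=False)
synthetic_fraud = rng.multivariate_normal(mean, cov, size=2000)

X = np.vstack([legit, fraud, synthetic_fraud])
y = np.array([0] * len(legit) + [1] * (len(fraud) + len(synthetic_fraud)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A real pipeline would evaluate only on genuine held-out fraud cases and would audit the synthetic data for the quality and bias issues discussed later in this piece.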
Analysis and decision-making at financial institutions depend on reviewing myriad financial documents daily. Given their sheer volume and complexity, this can be tedious. Generative AI can process, summarize and extract valuable information from such documents.
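The snippet below sketches one common pattern for this kind of workload, splitting a long document into chunks, summarizing each and then condensing the partial summaries; it again assumes the OpenAI Python client, and the model name, chunk size, prompts and input file are hypothetical.

```python
# Illustrative sketch of summarizing a long financial document with an LLM by
# chunking it and then condensing the partial summaries. Assumes the OpenAI
# Python client; the model name, chunk size and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def summarize_document(text: str, chunk_chars: int = 8000) -> str:
    # Split the document into chunks small enough for the model's context window.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial = [
        complete("Summarize the key figures, risks and obligations in this "
                 f"excerpt of a financial document:\n\n{chunk}")
        for chunk in chunks
    ]
    # Condense the per-chunk summaries into one report.
    return complete("Combine these partial summaries into a single concise "
                    "summary for an analyst:\n\n" + "\n\n".join(partial))

if __name__ == "__main__":
    with open("annual_report.txt") as f:  # hypothetical input file
        print(summarize_document(f.read()))
```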
Adoption by Major Players
Morgan Stanley, a U.S. multinational investment bank, has partnered with OpenAI on chatbots that augment financial advisers with insights from proprietary data and research, according to InvestmentNews. Andy Saperstein, co-president of Morgan Stanley Wealth Management, believes this will drive efficiency and scale in new ways, allowing advisers to spend more time on their core competencies.
Capital One and JPMorgan Chase have leveraged generative AI to strengthen their fraud and suspicious activity detection systems. This has allowed them to reduce false positives by 40% and 75%, respectively, while improving detection rates by 50% and 8%, Finextra reported.
Wells Fargo is building capabilities to automate document processing and generate summary reports. “It’s no surprise that processing documents is an important internal use case,” VentureBeat stated. “So analyzing documents and streamlining processes was a prime candidate for implementing AI at scale.”
Risk Considerations
Concerns persist about the risks these technologies bring to the financial sector, including cybersecurity, robustness problems, bias, privacy flaws and a lack of transparency about how results are produced.
The launch of ChatGPT, an AI chatbot, stoked fears about the potential risks of generative AI. Goldman Sachs, Citigroup and Bank of America have reportedly barred employees from using ChatGPT. Chatbots that provide adviser insights can “hallucinate,” generating plausible but incorrect answers, which could lead to inappropriate advice and products being offered to customers.
In addition, synthetic fraud-detection data raises concerns about data quality and the reproduction of bias, potentially blinding institutions to risks embedded in their applications. Replicating real-world biases, for instance, may lead to discriminatory practices that disproportionately affect vulnerable communities.
Furthermore, automation technologies like document analysis and summarization raise privacy concerns by using sensitive data for training and fine-tuning. Financial and personal information could be leaked.
While promising, AI should be approached cautiously as inherent risks could seriously damage financial institutions. It is unclear if further developments will reduce these risks. For now, close human supervision appears necessary, perhaps through prudential oversight authorities closely monitoring implementation.