Wall Street is worried it can’t keep up with AI-powered fraudsters
- Consultancy Accenture surveyed bank executives on the effect of generative AI on security.
- Eighty percent of respondents said AI is enabling thieves faster than banks can keep up.
- Accenture's financial services security lead details why banks are hampered and what's at stake.
Generative AI could be one of the most promising technological advances on Wall Street — but it may also turn out to be one of the most disruptive.
Bank leaders feel ill-equipped to defend against what fraudsters can do with generative AI, according to new data from Accenture based on a survey of 600 bank cybersecurity executives. Eighty percent of respondents believe generative AI is empowering criminals faster than their organizations can respond.
Cybersecurity is an important part of consumer trust, Valerie Abend, Accenture's financial services security lead, told Business Insider.
"The businesses that really understand how important customer trust is, as their most valuable asset, and put cybersecurity right there as the basis of enabling that, those are going to be your winners," she added.
Lenders have touted generative AI's ability to make their employees more productive and efficient. The technology is being used for everything from helping software developers write code to enabling analysts to distill thousands of documents into research reports. But it's not just Wall Street staff who are using the technology to their advantage.
Armed with generative AI, bad actors can ingest more data than ever before and use the technology's ability to mimic humans to run more convincing and effective scams.
These attacks, which target customers, bank employees, and their technology providers, can have far-reaching consequences. When criminals gain access, they can create fraudulent transactions, wire money, and drain customer accounts of funds. They may also gain deeper access into an organization's tech stack, steal data, and install malicious software.
Bank leaders are not ignorant of what's at stake. One major bank has said it spends more than $600 million each year on cybersecurity, while another's annual cyber spend has surpassed $1 billion. Some key technology executives have also left the banking industry altogether to tackle the digital threat of AI more directly at technology companies.
But despite the millions of dollars banks spend to shore up their defenses, many IT execs believe generative AI is advancing too quickly to keep up with. Only about a third of survey respondents (36%) said they believe they have a solid grasp of the rapidly evolving cybersecurity landscape.
To be sure, banks are also using AI to detect vulnerabilities, produce more robust threat-intelligence reports, and try to get ahead of attacks by analyzing more real-time data, Abend said. They've also been using AI to identify so-called toxic combinations, such as employees who have access both to approve and to execute transactions, including wire requests. But those efforts, and the speed at which they can be deployed, are greatly hampered by the strict regulations banks must follow, Abend said.
Abend, who spent years working at regulators like the Office of the Comptroller of the Currency and the US Department of the Treasury, said that in order to use AI, banks need to demonstrate that they can maintain the controls and governance necessary to stay within their risk appetite. They have to be thoughtful about how they adopt AI, the large language models they use, how third parties provide those models, how they're protecting the data that feeds the models, and who has access to the models' output.
Cybercriminals are taking advantage of newer models, such as DeepSeek, to write malicious code and identify weaknesses, such as weak spots in the cloud security of a given IP address, Abend said. Providers of established generative AI tools, like ChatGPT and Google Cloud, have blocked such activity, but newer models remain susceptible.
Third-party provider risk
Fintechs and startups are developing AI-powered tools to help banks thwart cyberattacks. One such company, which works with M&T Bank and Navy Federal Credit Union, this week released a new product to detect attacks, flag suspicious volume spikes in applications, and reduce manual reviews during attacks.
But the vendors and technology providers that banks rely on could give bad actors another opening. Over 70% of breaches at banks come from their supply chain of vendors, Abend said. Cybercriminals use generative AI models to sift through data, find out which companies partner with banks, and exploit that vulnerability. Tech providers aren't held to the same regulatory standards as banks, and banks' third-party risk management is often manual and relies on limited, outdated data, Abend said.
"The reality is, you can outsource the capability as a bank, but you don't outsource the risk," Abend said. "Customer trust is basically dependent on the bank to protect that customer's data and their financial information across the end-to-end supply chain."
Accenture research has found that maintaining customer trust helps banks achieve 1.5 times higher customer retention rates and 2.3 times faster revenue growth.
"This is not a back-office issue; banking executives really need to stop treating this like a compliance problem," she said.