Lorica insights
To Address Cybersecurity Risks in the Financial Sector, Leverage AI and Public-Private Partnerships
May 7, 2025
Cybersecurity risks are a costly problem for the financial sector, and the problem is growing, as it is for other critical infrastructure sectors. According to the IBM Cost of a Data Breach 2024 report, the financial sector has some of the highest costs associated with security breaches of any industry. The average data breach now costs financial companies over $6 million.

The financial services sector is large and varied, including depository institutions such as banks, insurance companies, providers of investment products, other credit and financing organizations, and supporting financial utilities and service providers. Every type of company within the sector is vulnerable. Just within the last year, a ransomware attack on EquiLend, a little-known market utility, raised costs by leaving Wall Street traders briefly in the dark about trading risks. One of several high-profile cyberattacks related to mortgage servicing, the loanDepot attack affected nearly 17 million people and forced many home buyers to postpone their transactions. The list goes on.
AI as both threat and tool
To reduce the risk of data breaches in the financial sector, the most impactful IT investment is AI. According to the IBM report, firms that use AI and automation save an average of $1.9 million compared to those that don’t.
A recent U.S. Treasury Department report on cybersecurity risks in the financial services sector highlights both the opportunities and risks of AI, including generative AI, for cybersecurity. Cyberthreat actors may increasingly use AI tools, and may initially have an advantage. Yet firms that counter with AI tools are likely to improve the quality and cost efficiency of their cybersecurity, according to financial institutions interviewed for the Treasury report.
Having used AI for longer than many industries, financial firms mostly recognize that investments in technology and innovation are the way to protect customers and the financial system. The following are some of the most common applications of AI in cybersecurity at financial firms, according to the Treasury report:
- Advanced anomaly-detection and behavior-analysis AI methods can enhance existing endpoint protection, intrusion detection, data-loss prevention, and firewall tools.
- AI tools can help detect malicious activity that manifests without a specific, known signature, which is critical in the face of more sophisticated, dynamic cyberthreats.
- Automation for routine, time-intensive cybersecurity tasks can be augmented by AI that uses more sophisticated analytics on broader and deeper data sets.
- AI-powered training can educate employees and customers about cybersecurity and fraud detection.
- Prevention measures, such as analyzing internal policy documents and communications to identify and prioritize gaps, can be supported by AI.
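To make the anomaly-detection idea above concrete, here is a minimal sketch that flags statistical outliers in activity data by z-score. The feature (hourly login counts) and the threshold are hypothetical choices for illustration; real endpoint-protection and behavior-analysis tools use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return the indices of points whose z-score exceeds the threshold.

    A deliberately simple stand-in for the behavior-analysis models
    described in the Treasury report; threshold is a tuning choice.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical hourly login counts for a service account; the spike stands out.
logins = [12, 9, 11, 10, 13, 12, 300, 11, 10, 12]
print(flag_anomalies(logins))  # → [6]
```

The point of the sketch is the shape of the approach: a baseline of normal behavior plus a deviation test, rather than a fixed, known signature.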
AI systems, including those used for cybersecurity, are vulnerable to certain threats that do not affect traditional software systems. Data poisoning, data leakage, and data integrity attacks all take advantage of AI systems’ reliance on data. AI cybersecurity tools can also help address these specific vulnerabilities.
Public-private partnerships are key
Collaboration is key to effectively implementing AI cybersecurity. Firms interviewed in the Treasury report agreed that managing risks successfully requires collaboration across the sector. Public-private partnerships facilitate information sharing and can also promote consistent, comprehensive regulation.
Effective deployment of AI in cybersecurity depends on the quality and quantity of data, which can be improved through information sharing across the financial sector. Financial firms recognize this: for example, since firms often rely on third-party vendors to manage cybersecurity risks, they are increasingly sharing anonymized cybersecurity information with vendors to improve anomaly detection models that detect intruders across the vendor’s customer base. Domestic and international collaboration among governments, regulators, and the financial sector can further advance the sharing of information and best practices; we explore a few examples in the next section.
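One common building block for the anonymized sharing described above is keyed pseudonymization: replacing customer identifiers with keyed hashes before records leave the firm. The sketch below uses Python's standard `hmac` module; the key name and identifier format are hypothetical.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a customer identifier with a keyed SHA-256 hash before sharing.

    Using HMAC rather than a bare hash means parties without the key
    cannot brute-force the original values, while the same identifier
    still maps to the same token across submissions, so a vendor can
    correlate events without ever seeing raw account numbers.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"example-key-rotate-regularly"  # hypothetical key-management choice
token = pseudonymize("acct-0042", key)
print(len(token))  # 64 hex characters; no raw identifier leaves the firm
```

Pseudonymization alone is not full anonymization, but it illustrates how firms can feed shared anomaly-detection models without exposing customer data directly.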
Sharing large datasets raises serious concerns about privacy and security. The public and private sectors can work together to create greater clarity around data privacy and other regulations. In another recent Treasury Department report, this one on AI in the financial sector, financial firms expressed support for the government providing further clarification on standards for data privacy, security, and quality for firms developing and deploying AI.
Respondents also wanted to see the government develop consistent federal-level standards. Skyrocketing cybersecurity costs in the financial sector result not only from the growing number of breaches but also from the costs of compliance in this highly regulated industry. The IBM report notes that firms may encounter challenges with regional regulations such as the CCPA, GDPR, and LGPD, not to mention a variety of state-level data privacy laws.
Promising public-private partnerships
The importance of public-private partnerships (PPPs) to improve cybersecurity is increasingly recognized around the world. Various public-private partnerships currently exist to improve standards and share intelligence, although stakeholders generally agree that more such partnerships are needed. Still, current examples of PPPs showcase their potential. Below is a non-comprehensive list of PPPs, from finance-specific to national to regional.
- National Task Force on Fraud and Scam Prevention: This task force is a public-private collaboration that includes the U.S. Financial Crimes Enforcement Network (FinCEN) as well as key stakeholders in the financial services sector, technology companies, and more. The goals of the task force include developing a comprehensive national strategy to fight fraud and enabling secure and seamless knowledge sharing between government and industry.
- CISA and the JCDC: The Cybersecurity and Infrastructure Security Agency (CISA), part of the U.S. Department of Homeland Security, uses AI to improve the security of critical infrastructure. Through the Joint Cyber Defense Collaborative (JCDC), CISA also encourages the sharing of threat intelligence across the public and private sectors. The JCDC enables organizations across the AI ecosystem, including providers, developers, and adopters, to voluntarily share AI-related cybersecurity information.
- EU Agency for Cybersecurity: ENISA works with private-sector partners to develop and promote cybersecurity standards, policies, and best practices across Europe, including specific initiatives to integrate AI into cybersecurity tools, and actively promotes public-private partnerships to improve cybersecurity.
Global organizations such as the UN and NATO play a pivotal role in advancing international collaboration on cybersecurity. The UN recently published a report highlighting the need for more public-private partnerships across the Americas, Africa, and Asia to combat cybercrime.
Challenges to effective PPPs
As discussed, the efficacy of AI models, including those used for cybersecurity, depends on the scale and scope of the training data. Concerns about data privacy currently represent a major obstacle to more effective intelligence sharing. The 2024 Treasury report on cybersecurity risks in the financial sector highlights these challenges in the area of fraud.
Compared to other aspects of cybersecurity, financial firms share very little fraud information. Most financial firms interviewed in the Treasury report agreed on the need for better collaboration – especially as fraudsters are increasingly using AI. (The National Task Force on Fraud and Scam Prevention may help address this.) Public-private partnerships that involve sharing fraud data would support the development of AI-powered tools to detect fraud and identify emerging risks. This is especially crucial for smaller firms with less extensive internal historical data to draw on.
However, while analyzing and sharing fraud data can benefit the public and private sectors and their customers, significant privacy concerns arise. Additional complications relate to third-party providers and their software and data supply chains. The risks of collecting, storing, and processing sensitive financial information need to be managed through robust data protection and privacy practices.
The need to support data privacy
In the same way that financial firms are adopting new technologies that leverage AI for cybersecurity, they will need to explore innovative ways to secure data that can support effective AI tools for cybersecurity and fraud prevention. Effective data privacy tools could, for example, pave the way for a clearinghouse for fraud data, allowing swift sharing of data to protect customers and support financial institutions of all sizes.
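One privacy-preserving technique that could support a fraud-data clearinghouse of the kind suggested above is differential privacy: each firm publishes aggregate statistics with calibrated noise instead of raw records. The sketch below implements the standard Laplace mechanism for a counting query (sensitivity 1), using the fact that the difference of two exponential variables with rate ε is Laplace-distributed with scale 1/ε. The scenario and parameters are hypothetical; production use would need privacy budgeting and an audited library.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise for epsilon-differential privacy.

    The difference of two i.i.d. exponential samples with rate epsilon
    is Laplace(0, 1/epsilon) noise, which matches the sensitivity of a
    counting query. Smaller epsilon means more noise and more privacy.
    Sketch only, not a hardened implementation.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Each firm could publish a noised weekly fraud-incident count instead of
# raw case data, letting the sector pool signals without exposing customers.
print(round(dp_count(42, epsilon=0.5), 1))
```

Averaged across many firms or reporting periods, the noise cancels out, so sector-wide fraud trends remain visible even though no single submission reveals exact figures.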
Our work in AI adoption is focused on helping to realize these possibilities. While AI can be a powerful tool for cybercriminals, public-private partnerships that incorporate robust privacy-preserving technology can leverage AI to protect critical infrastructure and the people who rely on it.