Experian Uncovers Fraud Paradox in Financial Services’ AI Adoption
The technology that financial institutions are deploying is being weaponized against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and one the company is positioned to address from both sides of the issue. According to FTC data cited in the forecast, consumers lost over $12.5 billion to fraud in 2024, and Experian’s own data indicates that nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian’s fraud prevention solutions helped clients avoid an estimated $19 billion in fraud losses globally in 2025, figures that underscore both the scale of the problem and how much defense now depends on AI that can match the speed and autonomy of attacks.
The most pressing finding in Experian’s forecast is what the company calls “machine-to-machine mayhem”: agentic AI systems designed to transact autonomously on behalf of users become indistinguishable from the bots that fraudsters deploy for the same purpose. As organizations race to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to conduct high-volume digital fraud at a scale and speed that no human operation could sustain. The core challenge is that machine-to-machine interactions lack clear ownership of liability: when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible remains unsettled.
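To make the machine-to-machine problem concrete, the sketch below shows one way an institution could, in principle, distinguish a registered agent from anonymous automation: requiring each agent request to carry a signed attestation tied to a secret issued at registration. This is a minimal illustration under assumed conventions; the registry, field names, and signing scheme are hypothetical and are not part of any Experian product or industry standard.

```python
import hmac
import hashlib
import json
import time

# Hypothetical registry of agents and the secrets issued to them at
# onboarding. In practice this would live in a secure credential store.
AGENT_REGISTRY = {
    "agent-001": b"secret-issued-at-registration",
}

def sign_request(payload: dict, secret: bytes) -> str:
    """Agent side: sign a canonical serialization of the request payload."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: dict, signature: str) -> bool:
    """Institution side: reject requests from unregistered or mis-signed agents."""
    secret = AGENT_REGISTRY.get(agent_id)
    if secret is None:
        return False  # unknown agent: treat as untrusted automation
    expected = sign_request(payload, secret)
    return hmac.compare_digest(expected, signature)

# A registered agent signs its transaction request before submitting it.
payload = {"action": "purchase", "amount": 129.99, "ts": int(time.time())}
signature = sign_request(payload, AGENT_REGISTRY["agent-001"])
print(verify_request("agent-001", payload, signature))  # True: registered agent
print(verify_request("agent-999", payload, signature))  # False: unknown agent
```

Attestation of this kind only establishes who an agent is, not whether its transaction is legitimate, which is why the liability question the forecast raises remains open even with strong agent identity in place.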
Kathleen Peters, Chief Innovation Officer for Fraud and Identity at Experian North America, framed the issue: “Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defenses, safeguard consumers, and deliver secure, seamless experiences.” Experian predicts that this will reach a tipping point in 2026, prompting substantive industry conversations around liability and the governance of agentic AI in commerce.
Beyond the agentic AI issue, Experian’s forecast identifies four additional trends that financial institutions need to consider in 2026:
- Deepfake candidates infiltrating remote workforces: generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews.
- Website cloning overwhelming fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently.
- Emotionally intelligent scam bots: generative AI enables bots to conduct complex romance fraud and relative-in-need scams without human operators.
- Smart home vulnerabilities: connected devices create new entry points for fraudsters.
According to Experian’s Perceptions of AI Report, 84% of decision-makers at leading financial institutions view AI as a critical or high priority for their business strategy over the next two years. However, 73% of respondents express concerns about the regulatory environment surrounding AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, underscoring Experian’s data-first positioning.
Experian’s AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. A 2025 Experian study found that 67% of institutions struggle to meet their country’s regulatory requirements and that 60% still rely on manual compliance processes. The company indicates that at more than 70% of larger institutions, model documentation compliance involves over 50 people, highlighting the scale of the automation opportunity.