AI-Powered Financial Fraud Detection: A Shield and Sword
In the realm of financial fraud, Artificial Intelligence (AI) is emerging as a double-edged sword, serving as both a robust defense mechanism and a potential tool for malicious actors. Financial institutions are increasingly leveraging AI and real-time detection to proactively combat fraud before it inflicts damage.
During a recent Financial Services Summit, Anthony Scarfe, deputy CISO at Elastic, and Ludwig Adam, CTO at petaFuel, discussed the escalating impact of AI on fraud prevention. petaFuel is a prominent MasterCard processor and payment solutions provider.
On the defensive front, Scarfe highlights how “LLMs are going to enable a very fast summarization of those events into more of a story, more of a big picture, so that an analyst confronted with that event has the instructions of what to do.” However, Adam cautions that criminals are also adopting these advanced tools: “The same way we can use large language models to reduce our mean time to react, the fraudsters use the same technology to reduce time and cost while scaling their attacks.”

The Rising Tide of AI Adoption and the Escalating Threat
Expert consultants corroborate this sobering reality. Deloitte estimates that potential fraud losses for Financial Services Institutions (FSIs) in the United States could reach a staggering US$40 billion by 2027. This projection underscores the urgency for financial services to fortify their defenses. The response has been significant: 91% of US banks are currently utilizing AI for fraud detection, and 83% of anti-fraud professionals plan to integrate GenAI into their systems by 2025.
However, Gartner emphasizes that the successful deployment of AI hinges on robust governance and security management. Financial services that prioritize these aspects are expected to achieve higher customer trust ratings and improved regulatory compliance scores compared to their competitors.
AI implementations must also be designed carefully to avoid violating existing data privacy laws. Data privacy governance is vital in the banking industry: AI models require access to massive amounts of data, which must be obtained and processed ethically.
According to our latest fraud study, 90% of US companies reported being targeted by cyber fraud in 2024. Fraud can have significant consequences for organizations, and not only financial losses.
How does AI work to detect financial fraud?
AI in fraud detection typically refers to machine learning models able to spot patterns. It works by applying a set of rules (known as an algorithm) to a scenario to make a decision. The advantage is that these rules evolve over time as the system is fed more data.
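The idea of a "rule that evolves with more data" can be illustrated with a minimal sketch. The class below is purely illustrative (real systems use full machine learning models, not a single threshold): it flags transaction amounts far outside what it has observed so far, and its notion of "far" updates every time it sees new legitimate activity.

```python
from statistics import mean, stdev

class AdaptiveRule:
    """Toy example of a rule that evolves: flags amounts far above
    observed spending, and retrains as new data arrives.
    Illustrative only; production systems use richer ML models."""

    def __init__(self, k=3.0):
        self.k = k          # how many standard deviations counts as anomalous
        self.history = []   # observed legitimate transaction amounts

    def observe(self, amount):
        self.history.append(amount)

    def is_suspicious(self, amount):
        if len(self.history) < 2:
            return False    # not enough data to form a rule yet
        mu, sigma = mean(self.history), stdev(self.history)
        return amount > mu + self.k * sigma

rule = AdaptiveRule()
for amt in [20, 35, 25, 30, 40, 22, 28]:   # everyday card spend
    rule.observe(amt)

print(rule.is_suspicious(30))     # typical amount -> False
print(rule.is_suspicious(5000))   # sudden large withdrawal -> True
```

Unlike a static rule such as "block everything above $1,000", the threshold here shifts as the customer's behavior changes, which is the dynamic-rule property the article describes.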

The Imperative of Real-Time Detection
Adam emphasizes that the scale and speed of modern fraud necessitate a fundamental shift in detection strategies. The payment ecosystem is inherently complex, demanding that “We need to react in real time; we need to analyze new fraud patterns that pop up instantaneously, within minutes, in order to mitigate the risk.” Traditional batch processing and manual checks are no longer adequate in the face of escalating transaction volumes and increasingly sophisticated attacks.
This challenge was addressed by PSCU - a network of 1,500 credit unions in the United States - in collaboration with Elastic. The organization encountered significant obstacles with its legacy fraud detection system, including delayed data processing and limited data sources. The implementation of Elastic's AI-driven platform yielded remarkable results. “Over the first 18 months, [they] saved about $35 million in fraud across those 1,500 credit unions,” Scarfe reports. “They also reduced their mean time to respond to fraud by about 99%.” This improvement translated to enhanced customer protection, preventing fraud before customers were even aware of the risk. The success was attributed to the ability to process vast datasets in real time and leverage AI for anomaly detection.
AI brings powerful tools, but companies need solutions that are practical and reliable. Financial institutions are increasingly integrating AI solutions into new and existing workflows to improve decision-making, fraud prevention and risk management.
AI-powered machine learning models trained on historical data may use pattern recognition to automatically catch and block possible fraudulent transactions before they are executed. They may also require human agents to complete extra authentication steps to verify a suspicious transaction. AI fraud detection systems are not perfect, and some false positives may negatively impact the overall customer experience.
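The block/verify/allow decision described above can be sketched as a simple triage function. The thresholds here are made up for illustration; in practice they are tuned to balance missed fraud against the false positives that hurt customer experience.

```python
def triage(score, block_at=0.9, review_at=0.6):
    """Map a model's fraud score in [0, 1] to an action.
    Thresholds are illustrative, not recommendations."""
    if score >= block_at:
        return "block"          # stop the transaction outright
    if score >= review_at:
        return "step_up_auth"   # ask the customer to verify, e.g. via OTP
    return "allow"

print(triage(0.95))  # block
print(triage(0.70))  # step_up_auth
print(triage(0.10))  # allow
```

Routing mid-confidence scores to extra authentication, rather than blocking, is one way institutions soften the customer-experience cost of false positives.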
Key elements of AI fraud detection:
- Data collection: continuous data collection is at the core of fraud detection. It enables businesses to set their ‘normal’ range of data.
- Continuous accuracy improvements: since AI models keep learning from new data, they are less likely to repeat the same mistakes.
- Alerting and reporting: when fraudulent threats are suspected, it’s imperative to move to the next stage of fraud prevention: response.
PwC and Bank of England studies found that AI outperforms manual controls in fraud detection. With no controls in place, fraud risks increase sharply. Before AI, fraud detection systems relied on applying static rules to data in order to find anomalies. AI instead learns from the data itself, setting dynamic rules that change with the circumstances. By applying dynamic rather than pre-defined rules, companies using AI benefit from higher accuracy.
Prior to the adoption of AI for fraud detection, companies faced employing a full-time statistician to continuously analyze data for threats. Not only was this a costly venture, it relied on that individual avoiding manual errors and working quickly. With AI algorithms working in real time, organizations can benefit from scale without the same costs. This leads to further benefits when suspicious activities are detected, since AI algorithms can instantly block transactions, freeze or protect affected accounts, and report back to team members instantaneously.
One of the best use cases for AI in fraud detection is preventing payment fraud. For example, in a classic case of stolen identity, imagine that a fraudster heads to an ATM to withdraw their victim's funds. Once the perpetrator knows the PIN, they can access the entire account, withdrawing as much as they want. An AI system, by contrast, can flag the withdrawal as anomalous (an unusual location, time, or amount) and block it before the funds leave the account.
Technologies like Natural Language Processing (NLP), Captcha, and Graph Neural Networks (GNNs) are advancing AI-powered fraud detection.
Traditional vs. AI Fraud Detection:
| Feature | Traditional Fraud Detection | AI Fraud Detection |
|---|---|---|
| Pattern Recognition | Limited, relies on fixed rules | Improved, ingests massive data to recognize complex patterns |
| Scalability | Limited by human capacity | Massive, through automation |
| Adaptability | Fixed rules | Continuous learning |
| Error Rate | High, prone to false positives | Lower, but can still produce false positives |
Benefits of AI-Powered Fraud Detection:
- Improved pattern recognition
- Massive scalability
- Adaptability
Challenges of AI-Powered Fraud Detection:
- Data dependent
- Complex implementation
- Potential for inaccurate results (hallucinations)
- Bias in data analysis
- Governance of data privacy
AI systems are excellent at ingesting massive amounts of data to recognize complex and obscure patterns. Through automation, AI systems can monitor transaction volumes far greater than humans could ever manage. Once trained, AI algorithms don't stop learning.
AI models require extremely large amounts of data to train, learn and grow. This data must be either sourced or created (synthetic data), but also curated. AI systems can be challenging to integrate into existing systems.
Using advanced long short-term memory (LSTM) AI models, American Express was able to improve fraud detection by 6%. Meanwhile, cryptocurrency, decentralized and considered somewhat anonymous, is favored by fraudsters because it is difficult to trace.
The Future: Combining GenAI with Human Context
The industry is racing to harness AI's potential for real-time fraud detection while grappling with sophisticated criminals equally quick to adopt these technologies. Success, according to Adam, lies in “a mix of technologies: the classical machine learning-based approaches and the GenAI approach,” while always incorporating the human factor.
When integrated into online platforms, AI-powered chatbots can do more than customer service. Banks can use AI systems to protect their clients and prevent fraudulent ecommerce purchases by analyzing customer behavior, purchase history and device information (such as location), flagging any transactions that deviate from historical patterns. As a revolutionary technology, AI fraud detection is already having a dramatic impact on the banking industry, with even greater potential ahead.
AI technology allows computers to behave, learn, adapt, problem solve, and act with autonomy in ways similar to human cognition. AI systems used in banking fraud prevention are highly tuned for specific tasks. AI models are trained using large amounts of carefully curated data through a process called supervised learning.
In supervised learning scenarios, AI systems are trained on specific fraud tactics to guide pattern recognition. Unsupervised anomaly detection techniques are used to fill the gaps where supervised models might be lacking. These techniques empower AI models to recognize previously unanticipated, but still unusual, behavior patterns.
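How the two approaches complement each other can be shown with a minimal sketch. The merchant blocklist stands in for a supervised model trained on labeled fraud, and the country check stands in for an unsupervised signal about the customer's own history; both signals, the field names, and the blend weight are illustrative assumptions.

```python
def fraud_score(tx, known_fraud_merchants, usual_countries, w=0.6):
    """Blend a supervised signal (known fraud tactics) with an
    unsupervised one (deviation from this customer's own history).
    All inputs here are hypothetical stand-ins for real models."""
    supervised = 1.0 if tx["merchant"] in known_fraud_merchants else 0.0
    anomaly = 1.0 if tx["country"] not in usual_countries else 0.0
    return w * supervised + (1 - w) * anomaly

# A transaction matching no known tactic, but from an unusual country:
tx = {"merchant": "unknown-shop", "country": "NG"}
score = fraud_score(tx, known_fraud_merchants={"bad-store"},
                    usual_countries={"US"})
print(score)  # 0.4: no known tactic matched, yet the anomaly still registers
```

The point of the combination is visible in the output: a transaction that a purely supervised model would pass cleanly still earns a nonzero score from the anomaly side.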
One of the most common applications for AI technology is the social media chatbot, an automated program that can conduct conversations with customers. Beyond customer service, the banking industry uses many other types of programs and software incorporating AI to identify and prevent potential fraud.
Specific Applications of AI in Banking:
- Regulatory compliance: Banks are under major pressure to remain in regulatory compliance. AI programs can help banks implement Know Your Customer (KYC) policies with computer vision by analyzing identity verification documents for any inconsistencies or signs of fraud.
- Anomaly detection: AI systems are particularly useful for any application requiring pattern recognition. Specific types of AI, known as graph neural networks (GNN), are designed to process data that can be represented as a graph, such as the data very common to the banking industry.
- Risk scoring: AI and machine learning models are built on weighted data to assign probabilities to potential actions and assess their most accurate decision or action. As such, they can make assessments based on multiple factors, such as transaction amounts, frequency, location and past behavior, making them very well suited for determining risk.
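The risk-scoring bullet above can be sketched as a weighted sum over the factors it names (amount, frequency, location, past behavior). The weights and normalizations below are invented for illustration; in a real model they are learned from data rather than hand-set.

```python
def risk_score(tx, weights=None):
    """Weighted risk-scoring sketch: each factor contributes a value
    in [0, 1], combined with illustrative (not learned) weights."""
    weights = weights or {"amount": 0.4, "frequency": 0.2,
                          "location": 0.3, "history": 0.1}
    factors = {
        # large amounts are riskier; cap the contribution at 1.0
        "amount": min(tx["amount"] / 10_000, 1.0),
        # bursts of transactions are riskier
        "frequency": min(tx["txns_last_hour"] / 10, 1.0),
        # a country outside the customer's usual set is riskier
        "location": 0.0 if tx["country"] in tx["usual_countries"] else 1.0,
        # past chargebacks raise the score
        "history": 1.0 if tx["prior_chargebacks"] else 0.0,
    }
    return sum(weights[k] * factors[k] for k in weights)

tx = {"amount": 9_000, "txns_last_hour": 1, "country": "FR",
      "usual_countries": {"US"}, "prior_chargebacks": False}
score = risk_score(tx)
print(round(score, 2))
```

Because each factor is normalized to [0, 1] before weighting, the final score is also in [0, 1] and can feed directly into a block/verify/allow decision.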
AI systems are ushering in a new era of fraud detection and security in the banking industry, offering dramatic improvements over traditional methods of fraud detection.
AI systems are getting better every day, but they are not infallible. AI models can frequently generate inaccurate results, known as hallucinations. In banking, inaccurate results may be mitigated by creating hyper-specialized models designed for very specific tasks, but these types of models limit the potential value of AI.
Bias in data analysis has been an issue since the earliest days of science, long pre-dating computer technology. Unfortunately, the issue persists. In the sensitive field of financial services, much work has been done to eliminate bias and discrimination from lending practices and account protections.
It's no secret that employing AI to prevent fraud can be a costly investment. But with its heightened accuracy, speed, and ultimate success, AI may provide long-term savings. It can be compared to insurance: you never really know if you'll need it. But with 90% of US companies targeted at least once in 2024, it's a question of when, not if.