
Top 10 AI Phishing Scams: 2024–2025
The global cybersecurity landscape is currently undergoing a paradigm shift of historical magnitude, transitioning from an era defined by code-based exploitation to one dominated by AI-driven psychological manipulation. This report, commissioned to provide an exhaustive analysis of the threat landscape in 2024 and 2025, details the mechanisms, impacts, and future trajectories of the top ten AI-powered scams. The findings presented herein are based on a rigorous synthesis of current threat intelligence, forensic case studies, and dark web monitoring data.
The democratization of Generative Artificial Intelligence (GenAI) has fundamentally altered the economics of cybercrime. By lowering the technical barrier to entry and reducing the marginal cost of content creation to near zero, GenAI has enabled threat actors to industrialize social engineering. We are witnessing the rise of “hyper-personalized” fraud—attacks that are at once massively scalable and intimately tailored to the individual psychology of the victim.
Key indicators from the 2024-2025 period reveal a threat environment in rapid escalation:
- Proliferation of Synthetic Media: The volume of deepfake files circulating online has surged exponentially, from approximately 500,000 in 2023 to a projected 8 million by 2025. This represents a compound annual growth rate that far outstrips the adoption of defensive AI technologies.
- Explosion in Fraud Attempts: Deepfake-driven fraud attempts spiked by 3,000% in 2023, with North America experiencing a 1,740% increase. The financial impact is equally staggering, with global losses from generative AI-facilitated fraud projected to reach $40 billion by 2027.
- The “Zero Trust” Crisis: The efficacy of human detection has collapsed. Studies indicate that human detection rates for high-quality deepfake video are as low as 24.5%. Consequently, the visual and auditory senses—the primary mechanisms by which humans verify reality—can no longer be trusted in digital channels.
This report analyzes the ten most prevalent and pernicious scam typologies emerging from this new environment. From the $25 million corporate heist at Arup Engineering to the widespread devastation of “Pig Butchering” schemes and the emotional terror of “Virtual Kidnapping,” each vector is dissected to reveal its technical underpinnings and psychological triggers. Furthermore, we examine the “Underground Economy” of Fraud-as-a-Service (FaaS), where malicious Large Language Models (LLMs) like FraudGPT and WormGPT are sold as commodities, fueling a new generation of cybercriminals.
1. Introduction: The Industrialization of Deception
The integration of artificial intelligence into the cybercrime ecosystem marks the end of the “spray-and-pray” era of phishing. Historically, digital fraud relied on the law of large numbers: sending millions of low-quality, generic emails in the hope that a fraction of a percentage of recipients would make a mistake. The content was often riddled with grammatical errors, poor formatting, and implausible scenarios, making detection relatively easy for the vigilant.
Today, that paradigm has been inverted. GenAI allows attackers to practice “sniper phishing” at scale. An attacker can now feed a target’s LinkedIn profile, recent tweets, and corporate bio into a malicious LLM, which then generates a flawless, contextually relevant email referencing specific colleagues, recent projects, and shared interests. This email is indistinguishable from legitimate correspondence, bypassing both human skepticism and traditional signature-based email filters.
1.1 The Democratization of Sophistication
The defining characteristic of the current threat landscape is the democratization of sophistication. Capabilities that were once the exclusive domain of state-sponsored Advanced Persistent Threats (APTs)—such as real-time voice cloning, video manipulation, and polymorphic malware creation—are now available to entry-level cybercriminals via subscription models on the dark web.
This phenomenon is driven by the emergence of Malicious AI-as-a-Service. Platforms like FraudGPT and WormGPT provide intuitive interfaces for drafting spear-phishing campaigns, writing malicious code, and creating fraudulent landing pages. Similarly, “face swap” services for creating deepfakes are available for as little as $249 per month, allowing attackers to impersonate CEOs and government officials with frightening realism.
1.2 The Economic Impact of Synthetic Fraud
The financial ramifications of this shift are profound and growing. In 2024, the global cost of phishing attacks reached $17.4 billion, representing a 45% year-over-year increase. For individual businesses, the stakes are existential: the average cost of a deepfake-related incident for businesses in 2024 was nearly $500,000, with large enterprises reporting losses significantly higher.
However, the direct financial loss is only one dimension of the damage. The erosion of trust is perhaps more significant. With 63% of cybersecurity leaders citing deepfakes as a rising threat to digital trust, organizations are facing a crisis of confidence. When a CFO cannot trust that the CEO on a video call is real, or a bank cannot verify a customer’s identity through a video selfie, the friction introduced into the global economy will be substantial.
The following sections detail the specific mechanisms by which these technologies are being weaponized, ranked by their prevalence, sophistication, and impact in the 2024-2025 period.
2. Deepfake Executive Impersonation (The “BEC 2.0”)
The evolution of Business Email Compromise (BEC) into Deepfake Executive Impersonation represents the apex of corporate targeting. This attack vector targets high-value financial transfers by leveraging real-time video and audio deepfakes to impersonate C-suite executives, effectively dismantling the “human element” of security controls.
2.1 Case Study: The Arup Engineering Heist
In early 2024, the cybersecurity community witnessed a definitive “zero-day” event for deepfake fraud: the theft of $25 million (HK$200 million) from Arup, a multinational engineering firm in Hong Kong. This incident serves as the primary case study for the capabilities of modern threat actors.
The Narrative of Deception:
The attack began with a standard phishing lure—an email purportedly from the company’s UK-based Chief Financial Officer (CFO) requesting a confidential transaction. The targeted employee, a finance worker in the Hong Kong office, initially harbored suspicions. To alleviate these doubts, the attackers proposed a video conference call.
When the employee joined the video call, they were met not just by the CFO, but by a panel of senior colleagues and executives. The visual fidelity was high enough to be convincing, and the voices matched the known profiles of the executives. Crucially, the attackers used real-time face-swapping technology to overlay the likenesses of the executives onto actors who were present on the call.
The Psychological Trigger:
The genius of the Arup attack lay in its weaponization of social proof. Had the call featured only the CFO, the victim might have remained skeptical. However, seeing multiple familiar faces interacting in a group setting completely disarmed the victim’s critical faculties. The presence of a “quorum” of authority figures normalized the request, making the victim feel that they were the only one out of the loop. Under the instruction of these digital puppets, the employee executed 15 separate wire transfers to five different bank accounts.
2.2 Technical Mechanism: Real-Time Face Swapping
The technical execution of such an attack requires a sophisticated stack of GenAI tools.
- Data Harvesting: Attackers aggregate hours of public video footage (earnings calls, interviews, conference keynotes) of the target executives to train the model. The more data available, the higher the fidelity of the deepfake.
- Live Injection: Tools like DeepFaceLive or proprietary dark web variants allow for the real-time replacement of a face during a video stream. The software tracks the facial landmarks of the actor (the “puppet master”) and maps the target’s face onto them with low latency.
- Audio Synthesis: Parallel to the video, real-time voice changers (RVCs) modify the actor’s voice to match the target’s pitch, cadence, and timbre.
2.3 Strategic Implications
The Arup incident demonstrates that video evidence is no longer proof of identity. Traditional verification procedures that rely on “hopping on a quick call” to confirm a suspicious email are now vulnerabilities rather than controls. The “vulnerability gap” in defensive technology is significant; while detection tools exist, they are rarely integrated into standard enterprise communication platforms like Zoom, Teams, or Slack, leaving employees exposed.1
3. Virtual Kidnapping and AI Voice Cloning
While executive impersonation targets corporate coffers, Virtual Kidnapping targets the primal fears of families. This scam has evolved from a crude, lucky-guess game into a precision-guided psychological weapon powered by AI voice cloning.
3.1 The Mechanism of Terror
The modern virtual kidnapping scam is a masterclass in emotional manipulation, enabled by the democratization of Text-to-Speech (TTS) technology.
- The Harvesting Phase: Scammers scour social media platforms (Instagram, TikTok, Facebook) for short audio clips of the target’s loved ones—usually children or young adults traveling abroad. With current technology, as little as three seconds of clean audio is sufficient to train a voice cloning model with 85% accuracy.
- The Synthesis Phase: Using tools readily available online, the attacker generates a script. This is not a calm conversation; it is a script of terror. The cloned voice creates audio of the victim screaming, crying, and pleading for help (“Mom, help me, they have me!”).
- The Execution: The attacker calls the parent, often spoofing the child’s phone number. The call opens with the synthetic audio of the child in distress, instantly inducing panic. The scammer then takes the phone, claiming to be a kidnapper and demanding an immediate ransom via untraceable means like cryptocurrency or wire transfer.
3.2 Psychological Anatomy: The Amygdala Hijack
This scam exploits the “amygdala hijack”—a physiological response where the brain’s emotional center overrides the logical prefrontal cortex. Upon hearing the visceral sound of their child screaming, the parent enters a state of fight-or-flight. In this state, rational verification checks (like texting the child or checking their GPS location) are bypassed. The brain accepts the auditory evidence as absolute truth because the emotional cost of being wrong is too high.
3.3 Prevalence and Trends
Virtual kidnapping has become a top AI-related concern for families, particularly in the United States and regions with high tourism. The trend is moving towards automation; researchers predict the rise of AI-driven “robocalls” that can dial thousands of numbers, play the distress simulation, and only connect a human operator if the victim responds with sufficient panic. This scalability threatens to turn a targeted crime into a mass-market plague.
4. AI-Enhanced “Pig Butchering” (Hybrid Romance-Investment Fraud)
“Pig Butchering” (or Sha Zhu Pan) is a long-con investment fraud that originated in Southeast Asia but has gone global. It involves building a deep romantic or platonic relationship with a victim (fattening the pig) over months before convincing them to invest in a fraudulent crypto platform (the slaughter). AI has supercharged every stage of this labor-intensive crime.
4.1 The AI-Enabled Kill Chain
Traditionally, Pig Butchering required armies of trafficked workers held in “scam compounds” to manually type messages to thousands of victims. GenAI has streamlined this gruesome supply chain:
- Persona Generation: Scammers use AI image generators (like Midjourney or Stable Diffusion) to create consistent, hyper-attractive, and non-existent personas. These images are unique, meaning they cannot be debunked using reverse-image search tools like TinEye or Google Lens.14
- Automated Grooming: In the early “Lure” and “Bond” phases, chatbots powered by LLMs can engage with thousands of victims simultaneously. These bots are tireless, patient, and can be programmed to use specific psychological hooks (e.g., feigning loneliness, sharing “secrets”) to build intimacy rapidly.
- Real-Time Translation: A major barrier for Southeast Asian scam syndicates targeting Western victims was language. AI-powered real-time translation tools now allow a scammer in Myanmar to converse in fluent, colloquial English, French, or Spanish, erasing the linguistic markers of fraud.
4.2 The Video Frontier
The most dangerous evolution is the use of deepfake video in Pig Butchering. Previously, scammers would avoid video calls, claiming a broken camera or bad internet connection—a major red flag. Now, using the same real-time face-swapping tech seen in the Arup case, scammers can appear on video calls as the attractive AI-generated persona they have been playing. This visual confirmation cements the trust relationship, making the subsequent request for investment nearly irresistible.14
4.3 Financial Devastation
The losses in these schemes are often life-altering, involving the liquidation of retirement funds and the taking of second mortgages. The cryptocurrency sector, the primary vehicle for these scams, saw deepfake-related incidents rise by 654% from 2023 to 2024.17 The fraud creates a “double trauma”—the financial ruin coupled with the psychological devastation of a betrayed romantic connection.
5. Biometric Bypass and KYC Injection Attacks
As the financial sector fortifies its perimeter with biometric security (Know Your Customer – KYC), criminals are using AI to tunnel underneath the walls. Biometric Bypass attacks target the fundamental mechanism of digital identity verification.
5.1 Mechanism: The Injection Attack
Modern banking apps often require a “video selfie” or a “liveness check” (e.g., “turn your head to the left”) to verify identity. AI-powered attacks defeat this via Camera Injection:
- Instead of holding a picture in front of a webcam (which is easily detected by depth sensors), attackers use virtual camera software or hardware emulators to “inject” a pre-rendered deepfake video directly into the application’s data stream. The app “sees” the video feed coming from the camera driver and assumes it is live footage.
- Attackers use harvested biometric data (often stolen via malware like GoldPickaxe or Gigabud) to create these deepfake puppets. The malware intercepts the liveness challenge, the AI generates the appropriate head movement, and the injected feed passes the check.
5.2 Synthetic Identities and “OnlyFake”
Beyond liveness checks, AI is being used to manufacture entirely new identities. Services like “OnlyFake” utilize neural networks to generate hyper-realistic images of physical ID cards (passports, driver’s licenses) resting on real-world surfaces like carpets or tables.
- The “Fluffy Carpet” Technique: To fool fraud detection algorithms that look for digital artifacts, these AI generators add “noise” and context—shadows, texture, and imperfections—that make the ID look like a physical object photographed in a home environment.
- Scale: These generators can produce 20,000 unique documents per day, effectively flooding the financial system with synthetic “mule” accounts used for money laundering.
5.3 Systemic Risk
The rise of these attacks (up 704% in 2023) threatens the viability of remote onboarding. If banks cannot reliably distinguish between a new customer and a deepfake injection, they may be forced to return to high-friction, in-person verification, slowing the digital economy significantly.
6. The “Digital Arrest” Phenomenon
Predominant in India and expanding across the APAC region, the “Digital Arrest” scam is a sinister evolution of police impersonation that leverages AI to create “virtual police stations.”
6.1 The Mechanism of Virtual Custody
This scam is a complex theatre of intimidation designed to isolate the victim and force compliance through fear of state authority.
- The Setup: The victim receives a call, often automated, claiming a parcel addressed to them has been intercepted containing illegal goods (drugs, fake passports) or that their Aadhaar (national ID) has been linked to money laundering.
- The “Digital Arrest”: The victim is transferred to a “senior officer” (often posing as CBI, Narcotics Control Bureau, or Cyber Police) via a video call on Skype or WhatsApp. They are told they are under “digital arrest” and must keep their camera on and line open 24/7 for surveillance.
- AI Scenography: To maintain the illusion, scammers use AI-generated backgrounds to mimic the interior of a busy police station or a government office. They wear uniforms and may use deepfake filters to resemble high-ranking officials whose photos are publicly available.
6.2 The Extortion
Once the victim is psychologically broken—exhausted, terrified, and isolated—they are instructed to transfer their savings to a “secure verification account” (often an RBI escrow account, which is fake) to prove their innocence. They are promised the money will be returned after “verification,” which never happens.
6.3 Impact and Response
The scale of this fraud is massive, with losses in India estimated at ₹3,000 crore (approx. $360 million). The sophistication has reached such a level that the Supreme Court of India has intervened, mandating federal investigations and the deployment of AI-based detection systems by the Reserve Bank of India (RBI). This scam highlights how AI is used not just for impersonation, but for environmental simulation, creating a coercive reality that surrounds the victim.
7. Deepfake Investment Schemes and Market Manipulation
This vector involves the use of deepfake celebrities, politicians, or news anchors to promote fraudulent investment platforms or manipulate stock prices through disinformation.
7.1 The Celebrity Clone Army
The most visible manifestation of this threat is the flood of deepfake ads on social media platforms like Facebook, YouTube, and Instagram.
- Prominent Targets: Figures like Elon Musk, Mukesh Ambani, Narayana Murthy, and Tucker Carlson are perennial favorites. The scammers use AI lip-syncing tools (like Wav2Lip) to manipulate existing video footage of these figures. The new audio track features the celebrity endorsing a specific “AI trading bot” (e.g., “Quantum AI,” “InvestGPT”) that promises guaranteed returns.
- Mechanism: The videos are hosted on a vast, rotating network of domains (e.g., belmar-marketing[.]online) to evade takedowns. The campaigns are highly localized; a victim in Singapore sees a deepfake of Prime Minister Lee Hsien Loong, while a victim in Canada sees Elon Musk.
7.2 Market Manipulation via Fake News
Beyond consumer fraud, deepfakes pose a systemic risk to financial markets. In May 2023, an AI-generated image of an explosion at the Pentagon went viral on Twitter, shared by verified accounts. The S&P 500 dropped by 30 points in minutes, wiping out billions in market capitalization before the image was debunked. This incident proved that generative AI can move markets, creating a massive incentive for short-sellers and state actors to deploy synthetic disinformation campaigns.
8. AI-Driven Recruitment and Employment Fraud
The remote work revolution has created a massive attack surface for employment-related fraud, which AI is now exploiting from both sides of the hiring table.
8.1 The “Deepfake Candidate”
A growing threat involves “fake workers”—often associated with North Korean state-sponsored revenue generation schemes—using deepfake overlays during video interviews to secure remote IT jobs.
- Mechanism: The applicant uses stolen identities and a real-time face swap to match the ID provided. Once hired, they may engage in wage theft (doing no work), or worse, use their insider access to deploy ransomware or steal intellectual property.
- Indicators: These deepfakes often exhibit subtle artifacts: “glitching” when the head turns too quickly, unsynchronized lip movements, or lighting inconsistencies.
8.2 The “Deepfake Recruiter”
Conversely, scammers pose as recruiters from prestigious firms (Amazon, Google) using AI avatars to conduct “interviews.” The goal is twofold:
- Data Harvesting: Collecting sensitive PII (Social Security numbers, bank details) under the guise of “onboarding paperwork”.
- Financial Fraud: Demanding payments for “home office equipment” or “software licenses” that will supposedly be reimbursed.
9. Synthetic Sextortion and “Nudify” Exploitation
This vector represents the darkest intersection of AI capability and human cruelty. Synthetic Sextortion weaponizes Non-Consensual Intimate Imagery (NCII) created by AI.
9.1 The “Nudify” Ecosystem
The engine of this crime is a class of applications known as “Nudify” apps, often built on open-source diffusion models like Stable Diffusion.
- Mechanism: A user uploads a benign, clothed photo of a target (taken from social media). The AI identifies the clothing, removes it, and uses “inpainting” to generate realistic nude anatomy in its place.
- Accessibility: These apps are widely available, often marketed on Telegram or the dark web, and require zero technical skill to operate.
9.2 The Extortion Cycle
Attackers generate these images and send them to the victim—often minors or young adults—threatening to release them to friends, family, or employers unless a ransom is paid or real sexual content is provided.
- Impact: In 2025, 13% of sextortion victims reported being targeted by AI-generated content.28 The psychological impact is devastating; the victim suffers the shame and trauma of exposure, even though the images are fake, because the visual evidence is convincing to the casual observer.29
10. Automated Spear Phishing and Malicious LLMs
The final major category is the industrial engine powering many of the others: the rise of Malicious LLMs that automate the creation of phishing content.
10.1 The “Dark LLM” Market
While commercial models like ChatGPT have “safety rails” preventing them from generating scams, the dark web has produced its own variants:
- WormGPT: Marketed as the “blackhat alternative to ChatGPT,” it has no ethical restrictions. It excels at writing persuasive, emotionally manipulative emails for Business Email Compromise (BEC).
- FraudGPT: A subscription service (approx. $200/month) that acts as an all-in-one toolkit: writing malicious code, creating phishing landing pages, and drafting scam scripts.
- DarkBERT: A model trained specifically on dark web data, making it highly effective at understanding the language and context of cybercrime.
10.2 SpamGPT and Evasion
SpamGPT takes this a step further by focusing on delivery. It automates the management of email infrastructure, rotating IPs and tweaking email content slightly for each recipient to evade spam filters. This allows for “Polymorphic Phishing”—attacks that change their shape constantly to remain undetectable.
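Defenders counter polymorphic phishing by comparing messages to one another rather than to fixed signatures: variants that differ only by small rewrites remain highly similar at the campaign level. The sketch below is a minimal, illustrative Python example of that idea; the function names, the 0.85 threshold, and the sample messages are assumptions for demonstration, not drawn from any specific product.

```python
# Minimal defensive sketch: clustering near-duplicate email bodies to surface
# a polymorphic phishing campaign. Thresholds and sample data are illustrative.
from difflib import SequenceMatcher
from itertools import combinations


def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two message bodies (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()


def cluster_campaign(messages: list[str], threshold: float = 0.85) -> list[set[int]]:
    """Group messages whose bodies are near-duplicates of one another.

    Polymorphic phishing rewrites each message slightly; exact-match
    signatures miss the variants, but high pairwise similarity still links them.
    """
    clusters: list[set[int]] = []
    for i, j in combinations(range(len(messages)), 2):
        if similarity(messages[i], messages[j]) < threshold:
            continue
        for cluster in clusters:
            if i in cluster or j in cluster:
                cluster.update({i, j})
                break
        else:
            clusters.append({i, j})
    return clusters


if __name__ == "__main__":
    inbox = [
        "Hi Dana, invoice #4411 is overdue. Please pay here: hxxp://pay-portal.example",
        "Hi Dana, invoice #4411 is now overdue. Please pay here: hxxp://pay-portal.example",
        "Team lunch moved to 1pm on Friday.",
    ]
    print(cluster_campaign(inbox))  # the two invoice lures cluster together: [{0, 1}]
```

Production filters use more robust measures (fuzzy hashing, text embeddings), but the principle is the same: campaign-level clustering survives per-message mutation in a way that per-message signatures cannot.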
11. Autonomous Vishing and The Future of AI Agents
Looking toward the horizon of late 2025 and 2026, the threat is evolving from “AI-assisted” to “Fully Autonomous.”
11.1 The Rise of AI Vishing Bots
Voice phishing (vishing) is transitioning from human-operated call centers to autonomous AI agents.
- Capabilities: These bots can initiate calls, understand spoken responses via Speech-to-Text (STT), query a database, and respond via Text-to-Speech (TTS) in real-time.
- Application: They are currently used to intercept 2FA codes. A bot calls a victim posing as “Bank Fraud Prevention,” alerts them to a fake transaction, and asks them to read back the OTP code sent to their phone to “cancel” the charge. The bot then inputs this code into the real banking site to take over the account.
11.2 Future Outlook: Autonomous Kill Chains
By 2026, we anticipate Autonomous Attack Agents that can execute the entire fraud lifecycle without human intervention: identifying a target, generating a lure, engaging in conversation, and laundering the proceeds. This will lead to a volume of attacks limited only by compute power, not human labor.
12. Strategic Recommendations and Conclusion
12.1 The “Zero Trust” Media Paradigm
The central conclusion of this analysis is that media is no longer evidence. Organizations and individuals must adopt a “Zero Trust” mindset toward all digital communications.
- Cryptographic Provenance: The adoption of standards like C2PA (Coalition for Content Provenance and Authenticity) is critical. This technology cryptographically signs media at the point of capture, allowing users to verify if an image or video has been altered. A conceptual sketch of the sign-at-capture idea follows this list.
- Out-of-Band Verification: Procedural controls are the strongest defense. If a CEO requests a transfer via video call, verify it via a text message to their personal phone. If a child calls screaming, verify their location via a family tracking app or a call to their friend.
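The following is a deliberately simplified illustration of the provenance principle referenced above: a device key signs the hash of the media at capture, and any later alteration invalidates the signature. It is not the C2PA specification itself (the real standard embeds signed manifests and certificate chains in the file); it assumes the third-party Python cryptography package is installed.

```python
# Conceptual sketch of content provenance: sign a media file's hash at capture,
# verify it later. Illustrates the idea behind standards like C2PA only.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(device_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Camera-side: sign the SHA-256 digest of the captured media."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)


def verify_capture(device_pub: Ed25519PublicKey, media_bytes: bytes, signature: bytes) -> bool:
    """Verifier-side: any alteration of the media invalidates the signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        device_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"\x89PNG...captured frame bytes..."
    sig = sign_capture(key, original)

    print(verify_capture(key.public_key(), original, sig))            # True
    print(verify_capture(key.public_key(), original + b"edit", sig))  # False: tampered
```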
12.2 Defense in Depth
For organizations, defense requires a layered approach:
- Liveness Detection 2.0: Moving beyond simple “active liveness” to more complex, randomized challenges that injection attacks cannot easily solve (a minimal sketch follows this list).
- Adversarial Training: Security awareness programs must expose employees to deepfakes to inoculate them against the shock value and teach them to spot the subtle artifacts of synthetic media.
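As a minimal sketch of the randomized-challenge idea from the first bullet above: the verifier issues an unpredictable, per-session sequence of actions with a short deadline, so a pre-rendered injected video cannot anticipate it. The action names, sequence length, and 10-second window below are illustrative assumptions, not a specific vendor's protocol.

```python
# Minimal sketch of server-side randomized liveness challenges. Premise: a
# pre-rendered injected deepfake cannot anticipate a per-session challenge
# that must be completed within a short window. Values are illustrative.
import secrets
import time

ACTIONS = ["turn_left", "turn_right", "look_up", "blink_twice", "read_digits"]


def issue_challenge(length: int = 3) -> dict:
    """Generate an unpredictable per-session challenge with a deadline."""
    return {
        "sequence": [secrets.choice(ACTIONS) for _ in range(length)],
        "nonce": secrets.token_hex(8),   # binds the response to this session
        "deadline": time.time() + 10.0,  # tight window limits pre-rendering
    }


def verify_response(challenge: dict, observed_actions: list[str], nonce: str) -> bool:
    """Accept only if the observed actions match, in order, before the deadline."""
    if nonce != challenge["nonce"]:
        return False
    if time.time() > challenge["deadline"]:
        return False
    return observed_actions == challenge["sequence"]


if __name__ == "__main__":
    c = issue_challenge()
    print("Ask the user to:", c["sequence"])
    # In production, `observed_actions` would come from a vision model watching
    # the live feed; here we simulate a correct, timely response.
    print(verify_response(c, c["sequence"], c["nonce"]))  # True
```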
The era of AI-powered fraud is not a future threat; it is the current operating reality. The mechanisms detailed in this report—from the high-end deepfakes of the Arup heist to the mass-market terror of virtual kidnapping—demonstrate a sophisticated, agile adversary. Defense in this new age requires not just better technology, but a fundamental rethinking of how we establish and verify trust in the digital world.
Table 1: Comparative Analysis of Top 10 AI Scams (2024-2025)
| Scam Type | Primary Mechanism | Target | Key Tech Enabler | Financial Impact | Psychological Trigger |
| --- | --- | --- | --- | --- | --- |
| Deepfake BEC | Real-time Video Impersonation | Corporations | DeepFaceLive, RVC | Very High ($25M+) | Social Proof / Authority |
| Virtual Kidnapping | Audio Cloning / Spoofing | Families | TTS / Voice Cloning | Med-High | Amygdala Hijack (Fear) |
| AI Pig Butchering | Long-con Romance / Investment | Crypto Investors | GenAI Personas, Translation | Very High (Life Savings) | Intimacy / Greed |
| Biometric Bypass | Camera Injection / Synthetic ID | Banks / Fintech | Virtual Cameras, OnlyFake | High (Systemic) | Trust in Tech |
| Digital Arrest | Video Coercion / Env. Simulation | Individuals (APAC) | AI Backgrounds, Filters | High | Fear of Authority |
| Deepfake Investment | Celebrity Endorsement Videos | General Public | Wav2Lip, GANs | High (Aggregated) | Appeal to Authority |
| Recruitment Fraud | Deepfake Interviews | HR / Remote Work | Real-time Face Swap | Med | Desire for Employment |
| Synthetic Sextortion | Nudify Apps / Extortion | Individuals / Minors | Diffusion Models | Low ($) / High (Trauma) | Shame / Reputation |
| Malicious LLMs | Auto-Spear Phishing | General / Corp | FraudGPT, WormGPT | Variable | Personalization |
| AI Vishing | Autonomous Voice Bots | Banking Customers | Conversational AI | Low-Med | Urgency |



