“Artificial intelligence acts like a performance enhancer for cybercriminals. It effectively levels up mid-level bad guys.” [1]
In the last year, artificial intelligence (AI) has quickly evolved, powering everything from personalized search to advanced medical diagnostics. But while technology races forward, a darker reality looms: cybercriminals are weaponizing those same AI breakthroughs to refine scams, impersonate victims, and amplify illegal enterprises.
Government agencies—from the FBI’s Internet Crime Complaint Center (IC3) to the Department of Homeland Security (DHS)—have flagged AI-driven cybercrime as a fast-growing problem. [2][8]
We researched U.S. government findings and credible third-party analysis on how AI is being used for cybercrime in 2024. From terrifying deepfake scams to potent phishing campaigns, here’s what the data—and the fact checks—reveal.
Key Takeaways:
- AI is a contributing factor but not the sole cause of every cyberattack. Traditional hacking persists at scale.
- Deepfake impersonation is no longer science fiction; it’s happening now.
- Sextortion and CSAM generation using AI is an urgent priority for law enforcement and child-protection agencies.
- Disinformation is a national security threat, with AI fueling more believable fake personas and narratives.
- Stiffer enforcement is on the horizon, as DOJ and FBI vow to treat AI-assisted crimes with heightened seriousness.
- 56% of Americans surveyed said they were concerned about deepfakes, AI posing a threat to human life, or the unlawful use of their data. [9]
1. AI-Enhanced Phishing and Social Engineering
A New Era of Convincing Phishing
- What’s happening?
Criminals use generative AI to craft perfect, personalized phishing emails and texts. Gone are the days of laughable grammar errors. Now, these emails appear polished and hyper-targeted, duping even vigilant recipients. [2][3]
- Official Warnings
The FBI’s San Francisco division warns of an “increasing threat of cybercriminals using AI-powered tools to orchestrate highly targeted phishing campaigns.” [2] In 2024, the IC3 reported a continued rise in phishing-related losses, though they don’t provide an exact tally of how many are AI-driven. [3] Qualitatively, however, investigators see a notable uptick in more sophisticated phishing attempts that bypass typical red flags.
- Fact-Check Highlight
While phishing complaints are skyrocketing overall, the FBI has not published an exact percentage attributed solely to AI. Instead, they emphasize the growing realism and success rates of these AI-enhanced scams. It’s a prime example of a threat that is real—even if precise numbers are elusive.
Impacts on Businesses and Individuals
- Business Email Compromise (BEC):
AI helps scammers replicate executive emails or tailor messages in multiple languages, increasing success in BEC fraud. Losses from BEC scams exceeded $2.9 billion in 2023, according to the FBI, and 2024 is expected to outpace that, though the AI slice is still being monitored. [3]
- Personal Losses:
People face well-crafted phony bank alerts or “romance scam” letters that appear heartbreakingly real. This has led to stolen credentials, drained bank accounts, and identity theft [2]. Elderly people remain especially vulnerable, as criminals can clone voices or produce caretaker-like messages.
2. Deepfake Impersonation and Fraud
Fake Voices, Real Money
Imagine getting a frantic call in a loved one’s voice begging for ransom money. In reality, AI cloned that exact vocal signature from samples scraped from social media.
According to the FBI’s IC3 bulletins, voice cloning has emerged as a new tool for extortion and financial scams. [2][5] Incidents of “fake CEO” calls tricking employees into wiring funds have also been reported.
Case in Point: In 2023, an Arizona mother was nearly scammed when she believed she heard her teenage daughter’s sobbing voice in a fake “kidnapping” call. Fortunately, she verified her daughter’s safety before wiring money.
Deepfake Videos and Impersonation
Criminals also produce AI-generated video calls featuring realistic “CEOs” or “business partners,” instructing employees to authorize large transfers. The Department of Justice (DOJ) warns these deepfake video calls are unnervingly credible, especially in remote or hybrid work environments. [4][7]
Fact-Check Note:
Law enforcement has confirmed multiple incidents but has not published an official count of deepfake cases. The reported examples, however, align with consistent FBI/DOJ warnings about a “sharp uptick” in the use of deepfake technology for scams. [2][4][7]
3. AI-Generated Sextortion and Synthetic Media Abuse
Altering Innocent Images into Exploitation
- What’s happening?
Criminals take benign photos (from social media or otherwise) and manipulate them into explicit material. This so-called “deepfake sextortion” thrives on AI’s ability to generate disturbingly convincing images or videos. [5]
- Victims Include Minors
The FBI, in a 2023 PSA, noted a rise in sextortion cases where minors’ pictures were turned into fake pornographic material. Offenders then threaten to share or publish these fabricated images unless victims pay or provide real sexual content. [5]
- Child Sexual Abuse Material (CSAM)
DHS and DOJ confirm that AI tools are being used to create AI-generated child sexual abuse content. In August 2024, the DOJ arrested an individual for using AI to produce child pornography. [6] Law enforcement stresses that “CSAM generated by AI is still CSAM.”
Fact-Check Reminder:
Official sources unequivocally treat AI-manipulated CSAM as illegal. Precise prevalence data is scarce, but multiple enforcement actions confirm the threat is real and growing.
4. AI-Driven Disinformation and Political Interference
- Election Meddling 2.0
In July 2024, the DOJ disrupted a covert Russian bot farm that used AI-enhanced fake personas to spread disinformation. [4][7] These AI-generated social media accounts pushed divisive narratives with a sophistication that automated filters struggled to detect.
- Domestic and Foreign Influence
FBI Director Wray notes that “generative AI…lowers the barrier to entry” [1], making it easier for both foreign adversaries and domestic groups to craft plausible “fake news,” produce deepfake videos, or impersonate politicians.
- Not Just Spam
This content can undermine public trust, sow confusion during elections, or encourage malicious financial schemes. [4][8] DHS classifies AI-driven disinformation as a “cyber-enabled” crime—recognizing that while it might not involve direct hacking, it exploits digital platforms for illegal or deceptive ends.
Fact-Check Context:
The DOJ’s “bot farm” takedown is well documented. [4][7] Independent security researchers corroborate that AI-generated persona creation is on the rise. However, official agencies seldom release exact metrics (e.g., “30% of disinformation is now AI-based”). Instead, they focus on publicizing the seriousness and sophistication of these campaigns.
5. Emerging Threats: AI-Assisted Malware and Hacking
WormGPT and Beyond
On underground forums, criminals are touting AI models—like “WormGPT”—fine-tuned for writing malware. DHS warns these tools “fill knowledge gaps such as computer coding,” enabling relatively unskilled hackers to launch advanced attacks [8]. While large-scale AI-written malware is still limited, experts predict it will be a major concern moving forward.
Adaptive Malware?
Security researchers have demonstrated that AI-driven malware can morph its code (“polymorphism”) to evade antivirus detection. The DHS has also theorized about “self-learning” malware that autonomously tests new exploits, but real-world examples remain rare in 2024. [8]
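To see why polymorphism matters for defenders, consider signature matching, the oldest antivirus technique: a hash-based signature identifies a file by its exact bytes, so any mutation defeats it. Below is a minimal Python sketch, using harmless stand-in “payloads” invented purely for illustration, showing two functionally identical snippets that yield completely different SHA-256 fingerprints.

```python
import hashlib

# Two harmless stand-ins for a payload and its "polymorphic" mutant:
# a renamed variable plus an inserted no-op alter the bytes of the
# second snippet while leaving its behavior identical.
variant_a = "msg = 'hello'\nprint(msg)\n"
variant_b = "text = 'hello'\npass\nprint(text)\n"

# A hash-based signature matches exact file contents, so a single
# known-bad hash never catches the mutated copy.
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())

# Both variants behave identically despite their distinct hashes.
exec(variant_a)  # -> hello
exec(variant_b)  # -> hello
```

This is why modern defenses lean on behavioral and heuristic detection rather than exact-match signatures alone.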
Fact-Check Note:
Most high-profile attacks (like ransomware) still rely on conventional methods, but the FBI, NSA, and cybersecurity experts see signs that AI is increasingly part of cybercriminals’ arsenal. Expect more developments—and possibly more official stats—by 2025.
Impacts & Key Statistics (Fact-Checked)
- Overall Cybercrime Complaints
- The FBI’s IC3 received 880,000+ cybercrime complaints in 2023, with reported losses exceeding $12.5 billion, a 22% increase over the roughly $10.3 billion reported for 2022. [3]
- While the exact portion involving AI is not tracked, experts believe AI’s realism drives higher success rates in phishing, BEC, and impersonation scams. [2][3]
- Business Email Compromise (BEC)
- Remains one of the costliest forms of cybercrime, surpassing $2.9 billion in reported losses in 2023. [3] AI’s role? More polished emails, better localization, and occasionally deepfake calls or videos.
- Deepfake Sextortion
- The FBI does not publish a definitive count of deepfake-based sextortion incidents, but “multiple new reports each month” have surfaced. [5]
- Victim Impact: Personal humiliation, financial extortion, and emotional trauma.
- Disinformation Operations
- The DOJ disrupted an AI-powered Russian bot farm in July 2024 [4][7]. Federal agencies emphasize the threat of AI-propagated disinformation to U.S. elections and democratic processes.
- Fact-Check on “Third-Largest Economy” Myth
- Some media headlines compare global cybercrime costs to the GDP of major nations, but the roughly $12.5 billion in annual reported U.S. losses is obviously not on that scale. [3] Analysts do estimate the global economic impact of cybercrime could reach trillions of dollars, which would theoretically place it among the top economies if all indirect costs are included.
- “One in Five Crimes Online” Context
- In certain regions, about 20–25% of reported crimes are now cyber-related. This figure does not necessarily apply worldwide but highlights a rising share of total crime being digital.
Law Enforcement and Policy Responses
- DOJ’s Tougher Stance
- Deputy Attorney General Lisa Monaco pledged “stiffer sentences” for criminals who exploit AI to amplify offenses—treating AI as an aggravating factor in fraud, child exploitation, and more. [6]
- Homeland Security AI Task Force
- DHS is expanding efforts to detect AI-driven scams, manage deepfake detection, and protect minors from AI-generated exploitation. [8] It’s also exploring how to use AI for defense, scanning networks for anomalies.
- Corporate Compliance
- The DOJ now asks companies to show how they handle AI risks in their compliance programs. Failing to prevent employees from misusing AI, or ignoring AI security vulnerabilities, can raise corporate liability. [6]
- Public Awareness Campaigns
- The FBI, FCC, and FTC all launched consumer alerts on AI-based “grandparent” scams and deepfake kidnapping calls. The message? Always verify suspicious calls or messages—seeing (or hearing) is no longer believing in the AI era. [2][5]
56% Of Americans Say They’re Worried About Harmful AI
In November 2024, Undetectable AI surveyed 1,000 Americans aged 18 to 27. [9] Among respondents, 23% believed AI poses an existential threat to humanity, 18% expressed concern about the unlawful use of data or data privacy, and 15% worried about AI technology being used to create deepfakes. Together, these three groups account for the headline 56% figure (23 + 18 + 15 = 56).
Looking Ahead: 2025 and Beyond
Despite limited hard data on the exact prevalence of AI-driven attacks, the trend is clear: AI is lowering barriers for criminals, enabling more realistic scams and intensifying disinformation. Meanwhile, defenders—law enforcement, cybersecurity experts, and regulators—are ramping up. Expect:
- More AI in Defense: Tools for deepfake forensics, AI-based phishing filters, and anomaly detection in networks (a minimal rule-based sketch follows this list).
- Expanded Regulatory Oversight: DOJ, SEC, and others will likely crack down further on AI misuse, imposing heavier penalties on individuals or organizations that facilitate AI-related crimes.
- Broader International Cooperation: Cybercrime is borderless, and joint operations—like the DOJ’s July 2024 takedown of a Russian bot farm—will become more frequent.
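As a rough illustration of the phishing-filter idea above, here is a minimal rule-based Python sketch. The red-flag phrases, lookalike domains, and example messages are invented for this demonstration; production filters, AI-based or otherwise, combine far richer signals such as sender reputation, URL analysis, and learned language models.

```python
import re

# Crude lexical red flags. Real AI-based filters learn patterns like
# these from data instead of hard-coding them.
URGENCY = re.compile(r"\b(urgent|immediately|act now|within 24 hours)\b", re.I)
CREDENTIALS = re.compile(r"\b(verify your account|password|wire transfer|gift card)\b", re.I)
LOOKALIKE_SENDER = re.compile(r"@\S*(paypa1|micros0ft|g00gle)", re.I)

def phishing_score(sender: str, body: str) -> int:
    """Return a rough 0-3 risk score; higher means more suspicious."""
    score = 0
    if URGENCY.search(body):
        score += 1
    if CREDENTIALS.search(body):
        score += 1
    if LOOKALIKE_SENDER.search(sender):
        score += 1
    return score

# Hypothetical messages, invented for illustration only.
print(phishing_score("ceo@paypa1-support.com",
                     "Urgent: verify your account and approve a wire transfer."))  # 3
print(phishing_score("colleague@example.com", "Lunch on Thursday?"))               # 0
```

The design point is layering: cheap rules like these catch the sloppiest attempts, while the AI-generated phishing described in this report is precisely what pushes defenders up the stack toward model-based detection.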
Conclusion: A Force Multiplier for Cybercrime
The core theme emerging from official sources and expert analysis is that AI supercharges existing cybercriminal methods. From hyper-realistic phishing to “fake CEO” voice calls, criminals use generative AI to vastly improve their success rates—and to exploit victims who once relied on simple “gut checks” (misspellings, off accents, etc.) to spot scams.
Stay Vigilant and Informed
Public awareness and verification are critical in this rapidly evolving threat environment. Whether you’re a C-suite executive receiving a last-minute wire request, a retiree fielding calls from a “grandchild in crisis,” or a social media user confronted with a sensational video of a politician, remember: if something feels off, verify it. AI means criminals can mimic real people or create illusions with startling accuracy.
Methodology
We conducted this research by examining official government resources, such as published documents from the Department of Justice and the FBI’s annual IC3 Internet Crime Report. We also examined trusted third-party commentary and coverage of emerging cybercrime threats related to artificial intelligence, and surveyed over 1,000 respondents in the United States to understand their concerns about AI. Finally, we fact-checked this report to ensure all cited sources are accurate and up to date.
Fair Use
You are welcome and encouraged to feature this data in a story or article. Please be sure to provide attribution to Undetectable AI and include a link to this page so readers can view the full report.
References
- FBI: Prepare for an Election Year with Fast-Paced Threats, Powered by Bad Guys with AI – Defense One
- FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence (May 8, 2024)
- Internet Crime Complaint Center (IC3) – 2023 Internet Crime Report & Related PSAs
- Office of Public Affairs | Justice Department Leads Efforts… to Disrupt Covert Russian Government-Operated Social Media Bot Farm (July 9, 2024)
- Internet Crime Complaint Center (IC3) | PSA: Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes (June 5, 2023)
- Man Arrested for Producing, Distributing, and Possessing AI-Generated Images of Minors Engaged in Sexually Explicit Conduct
- DOJ & Partners Disrupt Russian Bot Farm – Enforcement Details (July 2024)
- DHS: “Impact of Artificial Intelligence on Criminal and Illicit Activities” (Sept. 27, 2024)
- 2024 Survey Conducted By Undetectable AI