Generative AI’s Impact on Cybercrime
Artificial Intelligence (AI) has become a double-edged sword in cybersecurity. On one hand, organisations can deploy AI for defence; on the other, cybercriminals are increasingly leveraging AI to amplify the scale and sophistication of attacks. In the last two years, threat actors worldwide (including in New Zealand) have embraced AI tools to launch more convincing phishing campaigns, create realistic deepfake impersonations, build malware, and even automate hacking tasks. What used to take time, effort, and technical skill is now faster, cheaper, and more convincing thanks to AI.
For New Zealand organisations, especially in professional services, this shift introduces new risks. You can no longer count on spotting an attack by its broken English or crude attempts at impersonation: AI can now generate flawless grammar, mimic your organisation's communication style, and clone a colleague's voice from a short audio sample.
How Generative AI is Being Used in Cybercrime
At this stage, AI is predominantly amplifying existing cybercrime methods rather than generating new ones. Phishing, fraud and malware attacks remain the core risks. However, cybercriminals are now using generative AI tools to enhance almost every stage of an attack and supercharge their effectiveness. Some key trends include:
Phishing and social engineering: Some reports indicate that more than 80% of phishing emails in 2024 used AI in some way. Research also suggests that AI-generated phishing emails are more successful (higher open and click rates) and can be deployed up to 40% faster than traditional campaigns.
Chatbots like ChatGPT have guardrails to prevent misuse, but dark web versions such as “WormGPT” and “FraudGPT” remove those restrictions. These tools are built specifically for criminals. They can produce tailored phishing emails, fake legal notices, and business email compromise (BEC) scams in seconds.
Code generation and malware: Generative AI can also write software code. Criminals have started using it to create malware, refine ransomware payloads, and scan for software vulnerabilities. One proof-of-concept known as “BlackMamba” showcased malware that rewrites itself in real time to avoid detection. AI-driven password-cracking tools are also proving highly effective.
Voice and video deepfakes: With just a few seconds of audio, criminals can now clone someone’s voice. There have been several high-profile cases where a CEO’s voice was faked to approve fraudulent payments. Deepfake video is also emerging: AI-generated avatars of real CFOs have been used in video calls to trick employees into transferring funds.
Synthetic documents and fake identities: AI image generators can now produce fake IDs, doctored invoices, or even realistic LinkedIn headshots. These are being used to bypass Know Your Customer (KYC) checks and give scam accounts a more professional appearance.
NZ context: AI showing up in scams
AI-enabled fraud is no longer hypothetical, and New Zealand has not been spared. One report has identified New Zealand as among the nations most affected by AI-enabled threats.
CERT NZ has warned that phishing emails are getting harder to spot due to polished language and personalisation. While not all are confirmed to be AI-generated, the trend is consistent with generative AI being used. NZ banks have also reported attempted fraud involving synthetic identities and forged voice calls.
Global Examples: Here are just a few real-world incidents:
- $25M deepfake CFO scam (2024): In Hong Kong, a finance worker at a multinational firm was tricked into transferring approximately US$25 million (about NZ$40 million) after attending a video call with what appeared to be the company's CFO and colleagues. The video and voices were AI-generated fakes, built from real footage and internal knowledge scraped online.
- $243k CEO voice clone (2019): An energy firm executive in the UK received a phone call from someone who sounded exactly like the CEO of his German parent company and followed instructions to wire funds to a supplier. The voice was AI-generated. Though the case is several years old, similar attacks are now far more common.
- Bypassing security filters (2023): AI used to defeat AI. Check Point Research uncovered a malware strain that included an embedded prompt intended to disable a security AI agent.
- FraudGPT subscription models (2023): Security researchers uncovered FraudGPT for sale on cybercrime forums. The tool is trained to assist in writing scams and malware and to produce guides for targeting businesses. It costs around $200 a month and was reportedly used in targeted phishing attacks against professional services firms.
Where This Is Heading
AI-enabled cybercrime is likely to evolve in the following ways:
- Hyper-personalised phishing: AI can analyse public LinkedIn profiles or leaked email threads to tailor scam messages. That “Hi Sarah, following up on Monday’s board papers…” might be entirely fake.
- Real-time deepfakes: With more computing power, deepfake video and audio will be able to run live. Expect to see more fake Zoom calls, fake verification interviews, or impersonated clients.
- Autonomous malware: Future malware may adapt its behaviour automatically using AI, making traditional signature-based detection tools even less effective (the short sketch after this list illustrates why).
- Cybercrime-as-a-service: Criminals no longer need deep technical skills; AI lowers the barrier to entry, and they can now subscribe to toolkits that handle everything from scam writing to malware deployment.
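To make the signature-evasion point concrete, here is a minimal, hypothetical Python sketch; the "payload" is harmless stand-in bytes, and the blocklist logic is a deliberately simplified illustration, not any vendor's actual detection method. A traditional signature matches one exact byte sequence, so even a trivial machine-generated mutation of the same payload produces a new hash and is missed:

    import hashlib

    def sha256(data: bytes) -> str:
        # Hex SHA-256 digest, used here as a simple file signature.
        return hashlib.sha256(data).hexdigest()

    # Harmless bytes standing in for a known-bad payload.
    original = b"pretend_malicious_routine();"

    # A traditional blocklist stores exact signatures of known samples.
    blocklist = {sha256(original)}

    def signature_scan(data: bytes) -> bool:
        # Flag a file only if its hash matches a known-bad signature.
        return sha256(data) in blocklist

    print(signature_scan(original))         # True: the known sample is caught
    print(signature_scan(original + b" "))  # False: a one-byte mutation slips past

A strain that rewrites itself on every run, as BlackMamba demonstrated, generates an endless stream of such "new" files, which is why behaviour-based detection matters more than exact-match signatures.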
All of these developments point to increasingly autonomous cyberattacks, allowing cybercriminals to scale up their attacks without needing an army of skilled operators. For professional services firms, this introduces reputational and client-trust risks. If a staff member falls for a scam that compromises sensitive documents or misdirects client funds, the impact can be immediate and severe.
How to Spot and Prevent AI-Assisted Scams
The best protection is a mix of scepticism, verification, and technology. Here’s what we recommend:
- Always verify requests out-of-band
If you get a financial request by email or phone – even if it sounds or looks right – confirm it using a different channel (e.g. a known phone number or in person).
- Watch for deepfake warning signs
Lip sync slightly off? Odd background noise? Unusual urgency? Teach staff to spot the signs of AI-generated audio or video. Training should now include deepfakes, not just email scams.
- Limit executive exposure
Consider how much video, audio and personal info your leaders are publishing online. Webinars, podcasts and team bios can be used to train deepfakes.
- Build fraud-resistant processes
Enforce multi-person approvals for payments and vendor changes. Even a perfectly faked voice shouldn't be able to override a proper process.
- Use anomaly detection and modern email filters
AI-enabled scams often break historical patterns. Use security tools that flag unexpected behaviour – such as finance staff receiving a high-risk invoice on a Sunday night from a personal Gmail address (a simple illustration of this kind of rule follows this list).
- Encourage fast reporting
The earlier you catch a fraud attempt, the better your recovery chances. Normalise reporting of anything suspicious, even after the fact.
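To illustrate the anomaly-detection idea above, here is a minimal, hypothetical Python sketch; the domains, keywords, and business-hours rule are invented for illustration and are far simpler than what a real email-security product would apply:

    from datetime import datetime

    PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}
    PAYMENT_KEYWORDS = ("invoice", "payment", "transfer", "bank account")

    def is_anomalous(sender: str, subject: str, received: datetime) -> bool:
        # Flag payment-themed email sent from a personal domain or
        # arriving outside ordinary business hours (Mon-Fri, 8am-6pm).
        domain = sender.rsplit("@", 1)[-1].lower()
        payment_related = any(k in subject.lower() for k in PAYMENT_KEYWORDS)
        out_of_hours = received.weekday() >= 5 or not 8 <= received.hour < 18
        return payment_related and (domain in PERSONAL_DOMAINS or out_of_hours)

    # The Sunday-night invoice from a personal Gmail address is flagged...
    print(is_anomalous("cfo.urgent@gmail.com", "URGENT invoice for payment",
                       datetime(2025, 3, 2, 22, 15)))  # True (Sunday, 10:15pm)
    # ...while an ordinary Tuesday-morning supplier invoice is not.
    print(is_anomalous("accounts@supplier.co.nz", "Invoice 1042",
                       datetime(2025, 3, 4, 10, 0)))   # False

Real tools learn these baselines from historical traffic rather than hard-coding them, but the principle is the same: flag whatever breaks the established pattern.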
Generative AI isn’t going away and it is changing the way cybercrime works. New Zealand firms can’t rely on outdated clues to spot scams. That means leadership teams must evolve their response too: rethink verification steps, tighten internal controls, and raise awareness that a video call, email, or voice message might not be what it seems. If AI is being used to trick us, we’ll need a smarter playbook to stay ahead.
About the Bulletin:
The NZ Incident Response Bulletin is a monthly, high-level executive summary of some of the most important news articles published on Forensic and Cyber Security matters during the last month, together with articles written by Incident Response Solutions on topical matters. Each article contains a brief summary and, where possible, a linked web reference with further detail. The purpose of this resource is to help Executives keep up to date, from a high-level perspective, with a sample of the latest Forensic and Cyber Security news.
To subscribe or to submit a contribution for an upcoming Bulletin, please either visit https://incidentresponse.co.nz/bulletin or send an email to bulletin@incidentresponse.co.nz with the subject line “Subscribe”, “Unsubscribe”, or – if you think there is something worth reporting – “Contribution”, along with the webpage or URL in the contents. Access our Privacy Policy.
This Bulletin is prepared for general guidance and does not constitute formal advice. This information should not be relied on without obtaining specific formal advice. We do not make any representation as to the accuracy or completeness of the information contained within this Bulletin. Incident Response Solutions Limited does not accept any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, when relying on the information contained in this Bulletin or for any decision based on it.
