Module 1: Why This Matters Now
In January 2024, a finance worker at Arup — the British engineering firm that designed the Sydney Opera House — joined a video call with the company’s CFO and several senior colleagues. The CFO explained a confidential transaction that required urgent wire transfers. The employee was initially sceptical. Then he looked at the screen and saw the familiar faces of people he knew.
Over the following week, he authorised 15 wire transfers totalling $25.6 million to five Hong Kong bank accounts.
Every person on that call was an AI-generated deepfake. The attackers had scraped publicly available footage from YouTube and LinkedIn and used it to fabricate a multi-person video conference. No nation-state resources were involved. No sophisticated technical infrastructure. Just tools that anyone with an internet connection and a modest budget can access today.
That is what changed.
The Old Warning Signs Are Gone
Security training used to tell you to look for spelling mistakes. Awkward phrasing. Generic greetings. The “Dear Sir/Madam” opening that signalled a mass-produced scam. Those signals worked because creating convincing, personalised attacks at scale used to require skilled writers, which cost money and time.
Since ChatGPT launched in November 2022, those economics have collapsed. Attackers can now generate 1,000 personalised phishing emails in under two hours for approximately $10. The grammar is perfect. The tone matches the sender. The context is specific to you. Research published in 2024 found that 60% of people fall for AI-generated phishing, the same rate as for messages crafted by expert human social engineers who previously spent 16 hours on each attempt.
The thing that previously protected you — the attacker’s cost and skill barrier — is gone.
Three Incidents That Shaped How I Think About This
Samsung, April 2023. Three employees in the semiconductor division independently uploaded sensitive materials to ChatGPT within weeks of each other. One pasted faulty source code to ask for debugging help. Another submitted code for optimising equipment processes. A third transcribed an internal meeting and asked ChatGPT to write the minutes. Proprietary source code, equipment testing sequences, and strategic discussions from internal meetings all went into OpenAI’s systems. Samsung banned generative AI tools company-wide soon afterwards. The intellectual property damage is impossible to quantify.
MGM Resorts, September 2023. Attackers called the MGM IT help desk and used social engineering techniques, possibly including AI-enhanced voice synthesis, to convince staff to grant administrator access to MGM’s systems. The attack triggered a 10-day operational shutdown. Slot machines went dark. ATMs stopped working. Digital room keys failed across properties including the Bellagio and Mandalay Bay. MGM refused to pay the ransom. The total loss: $100 million. The attack group, Scattered Spider, consisted of young people aged 19 to 22. They did not need nation-state resources. They needed AI tools and a phone.
Arup, January 2024. Described above. The finance worker authorised $25.6 million based on a fabricated video call. The most striking detail: he was initially suspicious. The deepfake video call resolved his doubts. The very thing that security training tells you to trust — seeing someone’s face, hearing their voice — had been weaponised.
Why “Just Be Careful” Is Not a Strategy
The standard advice — “don’t click suspicious links,” “verify before you send money” — was built for a threat landscape that no longer exists. It assumed that attacks would be detectable by the same humans they targeted.
That assumption is broken. Research from 2024 found that humans correctly identify deepfake voices only 54% of the time — essentially a coin flip. The best commercial video deepfake detectors achieve 78% accuracy on real-world examples. Your intuition, which was developed to detect human deception, is not calibrated for synthetic media produced by models trained on millions of examples.
Multi-factor authentication, which was supposed to be the definitive fix for account takeover, has the same problem: 84% of compromised accounts in documented 2024 incidents had MFA enabled. Attackers bypass it with adversary-in-the-middle phishing kits that proxy the real login page, wait for the victim to complete the MFA prompt, and then steal the resulting session cookie. One such tool, EvilProxy, generates over a million MFA bypass attempts per month, according to Proofpoint.
None of this means you are helpless. It means the tools need to change alongside the threat. Understanding how these attacks work is the first step, because you cannot defend against something you cannot recognise.
Who This Is For
This is written for people who already use AI tools — ChatGPT, Claude, Copilot, Gemini, or similar — and want to use them without creating avoidable risk for themselves or their organisations. Not for security professionals (this will cover ground you already know). Not for people avoiding AI entirely (different conversation). For the consultant who pastes client context into Claude to draft proposals. For the manager whose team is experimenting with AI tools without any clear policy. For the small business owner using ChatGPT to handle tasks that used to take hours.
The goal is not to make you paranoid. It is to give you the understanding that turns vague anxiety into specific, manageable decisions. You do not need to understand cryptography. You need to understand tradeoffs — which is a skill you already have.
What the Numbers Actually Mean
I want to give you a sense of scale before we get into specifics, because the statistics in this space are genuinely striking.
From 2022 to 2024:
- Voice phishing (vishing) attacks increased 442%
- Malicious phishing emails increased 1,265%
- Business email compromise attacks increased 1,760%
And the pace has not slowed: deepfake fraud losses in the first half of 2025 alone reached $410 million, more than in all of 2024.
These are not projections. They are drawn from documented incidents, breach reports, and law enforcement data. The acceleration is real and it is ongoing.
The good news: most of the protective measures that actually work are not technically complex. They are conceptual — understanding which threats are plausible, which signals still mean something, and what should never go into an AI tool regardless of how convenient it would be. That is what the remaining modules cover.
The next module gets specific: what each attack type actually looks like, and what makes them work.
Check Your Understanding
Answer all questions correctly to complete this module.
1. In the Arup deepfake incident, what convinced the initially sceptical employee to authorise the wire transfers?
2. Why does the module argue that traditional advice like “just be careful” is no longer adequate?
3. What happened to the economics of phishing attacks after ChatGPT launched?