Module 4: The Security Checklist
This module is different from the previous three. Less explanation, more action. Work through this with your actual tools in front of you — the goal is to leave with specific things changed, not just understood.
I have organised it into five parts: what stays out of AI tools, your accounts and credentials, evaluating new tools, recognising AI-generated content and verifying suspicious requests, and a short setup to complete before you close the tab.
Part 1: What Never Goes Into AI Tools
The simplest framework: if you would be uncomfortable seeing this information appear in a data breach report — or if a regulator, client, or employer would be uncomfortable seeing it there — it does not go into a public AI tool.
Never enter:
- Client names, contact details, or anything that identifies a specific client or engagement
- Source code from proprietary systems, especially authentication code or anything involving credentials
- Employee information — salaries, performance reviews, personal details, HR matters
- Financial data that is not public — internal forecasts, unreported results, acquisition discussions
- Health information about any individual
- Legal communications protected by privilege
- API keys, passwords, access tokens, or any credential
- The contents of contracts before they are signed and public
- Competitive intelligence and internal strategy documents
What is generally fine:
- Public information — anything already on your website, in press releases, in published filings
- Generic tasks using fictional or anonymised examples — “write an email about a delayed project” with no real names
- Your own writing and work that does not contain the above categories
- Questions about concepts, techniques, or general knowledge
The sanitisation test: Before pasting anything into an AI tool, ask whether you could read it aloud in a public place without concern. If the answer is no, either anonymise it — replace real names with placeholders, remove identifying details — or use a local model.
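If you sanitise the same kinds of documents often, the anonymisation step can be scripted. A minimal Python sketch follows; the names, placeholders, and email pattern are illustrative, not a complete redaction tool:

```python
import re

# Hypothetical mapping of real names to placeholders; build this per
# document. These names are fictional.
REPLACEMENTS = {
    "Acme Holdings": "CLIENT_A",
    "Jane Doe": "PERSON_1",
}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitise(text: str) -> str:
    """Swap known names for placeholders and mask email addresses."""
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(real, placeholder)
    return EMAIL_RE.sub("[EMAIL]", text)

draft = "Jane Doe (jane@acme.example) flagged the Acme Holdings delay."
print(sanitise(draft))  # PERSON_1 ([EMAIL]) flagged the CLIENT_A delay.
```

A script like this catches only the names you told it about, so it supplements the read-aloud test rather than replacing it.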
Part 2: Accounts and Credentials
Audit your AI tool tiers today. For each AI tool you use regularly:
- Find the privacy settings — specifically, whether your conversations are used for training
- Check your subscription tier and verify what the data handling terms actually say
- Opt out of training data collection if you are on a tier that uses conversations by default
- If you use AI for work with sensitive data, check whether your organisation has enterprise agreements with any of these tools
For OpenAI (ChatGPT): Settings → Data Controls → “Improve the model for everyone” — turn this off if it is on. Note this applies to your account; if you use a shared workspace account, check workspace settings separately.
For Anthropic (Claude): Consumer accounts do not use conversations for training by default on paid tiers. Check the privacy centre for your current tier’s specifics.
For Google Gemini: Review the data settings in your Google account — Gemini activity is stored in your Google account activity by default, and this can be turned off.
API key hygiene:
If you use any AI via API access (this includes tools like the Claude API, OpenAI API, or any tool that gives you an API key):
- Do not put API keys in code files you commit to any repository, even private ones
- Use environment variables or a secrets manager (1Password, AWS Secrets Manager, Doppler) instead
- Rotate API keys at least quarterly — monthly if you use them heavily
- Set usage limits on your API accounts so a compromised key has a spending ceiling
- Check your API provider’s dashboard for any usage spikes you did not cause
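The environment-variable approach from the list above can be as simple as a small helper that fails loudly when the key is missing. This is a sketch; `EXAMPLE_API_KEY` is a made-up variable name, so use whatever name your provider's SDK documents:

```python
import os

def load_api_key(var_name: str) -> str:
    """Fetch an API key from the environment, failing loudly if absent."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set. Export it in your shell or load it "
            "from a secrets manager; never commit it to the repository."
        )
    return key

# Demo only: set a throwaway variable so the call succeeds.
os.environ["EXAMPLE_API_KEY"] = "sk-not-a-real-key"
print(load_api_key("EXAMPLE_API_KEY"))  # sk-not-a-real-key
```

Because the key lives in the shell or a secrets manager rather than a source file, there is nothing for a repository scan to find.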
If you have committed an API key to a repository at any point: assume it is compromised, revoke it immediately from the provider’s dashboard, and generate a new one. Key-scanning tools can process public GitHub repositories in minutes, and private repositories that were ever briefly public have likely been scanned.
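To see how easily hardcoded keys are found, here is a toy scanner. The two patterns are rough approximations of well-known key formats; real scanners such as gitleaks and TruffleHog ship far larger rule sets:

```python
import re

# Rough approximations of two well-known key formats. Real scanners
# ship hundreds of rules covering many providers.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
]

def find_leaked_keys(text: str) -> list[str]:
    """Return substrings that look like hardcoded API keys."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

sample = 'OPENAI_KEY = "sk-abc123def456ghi789jkl012"'
print(find_leaked_keys(sample))  # ['sk-abc123def456ghi789jkl012']
```

If a one-screen script can flag a key this quickly, assume automated scanners found any key you ever pushed.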
Password and account security:
- Use a password manager. If you do not have one, this matters more than anything else on this checklist. 1Password and Bitwarden are both solid options. Browser-saved passwords are a specific target of the infostealer malware that swept through AI platform credentials in 2023 — 664,000 ChatGPT credentials were compromised that year alone because they were stored in browsers.
- Enable two-factor authentication on all AI platform accounts. Use an authenticator app rather than SMS where possible.
- Regularly review browser extensions. Malicious ChatGPT-related extensions proliferated rapidly in 2023, growing from 11 to over 200 in three months. Check what extensions you have installed, verify each one is from a legitimate source, and remove anything you do not actively use or recognise.
Part 3: Evaluating New AI Tools
Before adopting any new AI tool for work purposes, answer these questions:
1. Who made it and what is their business model? Free tools with no clear revenue source often monetise through data. A free AI tool with no enterprise offering and no obvious way to pay for it is probably selling something you are providing — your data, your queries, or both.
2. Where does the data go? Find the privacy policy. Specifically look for: what is stored, for how long, and whether it is used for training. If this information is not clearly stated, that is itself a red flag.
3. What permissions does it request? Browser extensions that request access to “all websites” or “your tabs and browsing history” have more access than they typically need to do their stated job. Any tool requesting access to your email, calendar, or files should have a clear, minimal-scope justification.
4. What is the data handling for your specific use case? If you are in healthcare, finance, legal, or any regulated sector, “data is stored securely” is insufficient. You need to know whether the tool has certifications relevant to your sector (SOC 2, HIPAA BAA, ISO 27001) and whether it can meet your organisation’s contractual requirements.
5. What happens if the company is acquired or goes under? Many AI startups have been acquired. Data handling commitments from the original company may not survive acquisition. If a tool is central to your workflow, consider what your exit plan looks like.
Part 4: Recognising AI-Generated Content
You will receive AI-generated content — emails, reports, proposals, messages. Some will be benign. Some will be phishing. The old tells, spelling mistakes and clumsy grammar, are gone. Here is what still works.
Check the context, not the text. Does this message make sense given the relationship and recent history? A perfectly written email from a contact who normally sends short, informal messages is a signal. The writing quality tells you little; the appropriateness to the relationship tells you more.
Look for the urgency pattern. Artificial urgency — “you must act within the hour,” “this cannot wait until Monday,” “I cannot be reached by phone right now” — is the most consistent red flag in social engineering attacks whether AI-generated or not. Urgency prevents verification.
Verify through a channel you initiated. If an email, message, or even a phone call contains an unusual request, verify by contacting the supposed sender through a channel you already have and trust. Call the phone number from your own contacts. Send a separate email. Visit in person. Do not reply to the original message and do not use contact details provided in it.
For financial requests specifically: Any financial request that arrives outside your normal process — regardless of how credible the sender seems, regardless of a video call you just had — requires verification through a second independent channel. This is the single most protective habit in this checklist. The Arup employee authorised $25.6 million through a process that skipped this step.
Part 5: Your 10-Minute Setup
If you do nothing else from this module, do these five things before closing this tab:
- Open your ChatGPT settings and check whether “improve the model” is enabled. If yes, turn it off.
- Open your most-used AI tool’s privacy settings and read the data retention section. Note what tier you are on.
- Check your installed browser extensions and remove any related to AI that you did not deliberately install or no longer use.
- Set a spending limit on any API key account you have. This caps damage from a compromised key.
- Write down the “separate channel verification” rule somewhere you will see it: for any unusual financial request or request for system access, you call back on a number you already know. Not a number in the message.
These five steps take less than 10 minutes and address the most common failure modes in the incidents documented in this course.
Module 5 addresses the team dimension: how to build these habits into a shared working environment without creating policies so restrictive that people route around them.
Check Your Understanding
1. What is the “sanitisation test” the chapter recommends before pasting content into an AI tool?
2. Why does the chapter recommend against storing passwords in browsers?
3. What is the single most protective habit identified for financial security?