Module 1: The New Prompting
Here’s the thing about prompt engineering: it was always solving the wrong problem.
The original insight — that you could get better output by structuring your request carefully — was correct. But the field that grew up around it mostly optimised for a chat interface that was never going to be the primary way serious people worked with AI. You got elaborate prompting frameworks, “the perfect system prompt”, guides about the exact words that would unlock better responses. Some of it was useful. Most of it was noise built for a paradigm that’s already shifting.
The shift is from conversation to context.
What Changed
In the old model, you opened a chat window and started from scratch. Every session was stateless. The AI didn’t know who you were, how you worked, what you cared about, or what you’d asked it yesterday. So prompting was about cramming as much of that into the beginning of each conversation as possible. Hence the elaborate templates, the PAST and SHAPE frameworks (both of which I wrote and stand behind — they were genuinely useful for their era), the saved prompts people would paste in before asking anything.
Claude Code works differently. It can read files. It can be given persistent instructions that load every time you start a session. It can reference your Obsidian vault, your code, your previous work. The context doesn’t have to live in the prompt — it can live in files that are always present.
This is not a minor convenience. It changes the whole approach.
CLAUDE.md: The File That Replaces Your Saved Prompts
When you run Claude Code in a directory, it looks for a CLAUDE.md file and reads it automatically. Every session starts with that context. You don’t have to paste anything in.
What goes in CLAUDE.md? The things you’d otherwise explain every single time:
- Who you are and how you work
- What tools are available and how to use them
- Rules and preferences you want consistently applied
- References to other files that contain deeper context
My own CLAUDE.md references several other files. There’s a SOUL.md that defines my values, voice, and decision-making principles. There are platform-specific files that document the services I use. There’s a file that lists all my active projects with their current status.
None of that goes in CLAUDE.md directly — that file would become unmanageable. Instead, CLAUDE.md acts as an index: “if you need to know about X, read this file.” Claude Code follows those references and builds up a picture of your context before you type a single word.
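As a sketch, an index-style CLAUDE.md might look like this. SOUL.md comes from the setup described above; the other file names and rules are illustrative, not prescriptive:

```markdown
# CLAUDE.md

## Identity
Read SOUL.md for my voice, values, and decision-making principles.

## Platforms
Read the files in platforms/ for the services I use and how to access them.

## Projects
Read PROJECTS.md for the list of active projects and their current status.

## Rules
- British spelling, always.
- Never publish anything without my explicit sign-off.
```

The point is that each section points outward rather than inlining detail, so the index stays short even as the underlying context grows.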
The Prompting That Still Matters
I want to be careful here: context engineering doesn’t mean individual prompts don’t matter. They do. A sloppy request still produces sloppy output.
What changes is what you’re optimising for in a prompt. You’re no longer trying to re-establish who you are and what you care about — that’s in your context files. The prompt itself can be shorter and more direct because it doesn’t have to carry all that weight.
The craft of prompting in 2025 is less about elaborate templates and more about:
Being specific about the task. Not “write something about automation” but “write a module introduction for PDA 2.0 that covers skills, hooks, and real examples from morning-brief and content-pipeline. Aim for 800 words.”
Being honest about constraints. What’s the deadline? What format does the output need to be in? What should it definitely not include? The more constraints you specify, the less work you do in the review cycle.
Knowing when to be vague on purpose. Sometimes you want Claude to fill in the gaps creatively. Sometimes you want it to follow a pattern exactly. The difference matters, and you have to signal it.
Iteration as the method. The best prompts aren’t the ones you get right on the first try — they’re the ones where you read the output, spot what’s wrong, and refine. Three rounds of “do this, but less X and more Y” often beats trying to specify everything upfront.
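As a sketch, a three-round refinement on the module-introduction request from earlier might read like this (the review notes are illustrative):

```
Round 1:  "Write a module introduction for PDA 2.0 covering skills, hooks,
           and the morning-brief example. Aim for 800 words."
Review:   structure is right, but the tone is too formal and the example
           arrives too late.
Round 2:  "Keep the structure. Make it more conversational, and open with
           the morning-brief example."
Review:   better. Now trim the throat-clearing in the opening paragraph.
Round 3:  "Cut the first two sentences and tighten the rest by about 10%."
```

Each round is short because it only has to name the delta, not restate the whole brief.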
The Context Ladder
Think of context as having layers, and think about which layer each piece of information belongs in.
Permanent context — things that are always true and never change much. Your name, your voice, your core preferences, your professional identity. This lives in CLAUDE.md or SOUL.md and loads every session.
Project context — things that are true for a specific body of work. The tech stack you’re using, the constraints of the project, decisions already made. This lives in a project-level CLAUDE.md in the relevant directory, or in a dedicated project notes file.
Session context — things that are true for this particular piece of work right now. The specific task, any recent decisions, relevant background for this session. This goes in the prompt or in a file you reference explicitly at the start of the session.
Task context — the immediate request. What you want, in what format, with what constraints. The prompt itself.
Most prompting advice conflates all four layers and tries to put everything in the task context. That’s why prompts get long and unwieldy. When your permanent context is properly established in files, your prompts can be short and precise.
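The ladder can be made explicit in code. This is a sketch of the layering idea only, not how Claude Code works internally; there, the permanent and project layers load automatically from CLAUDE.md files. All the example strings are illustrative:

```python
def build_context(task: str,
                  permanent: str = "",
                  project: str = "",
                  session: str = "") -> str:
    """Stack the four layers of the context ladder into one prompt,
    outermost (most stable) layer first. Empty layers are skipped."""
    layers = [permanent, project, session, task]
    return "\n\n---\n\n".join(layer for layer in layers if layer)

# The task layer alone stays short because the other layers carry the weight:
prompt = build_context(
    task="Write the Module 2 intro. Conversational tone, about 800 words.",
    permanent="Voice: direct, first person. British spelling.",
    project="Project: PDA 2.0 course. Audience: practitioners.",
)
```

Notice that only the `task` argument changes request to request; the rest is exactly what the context files hold for you, which is why well-layered prompts can be a single sentence.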
A Practical Example
Before context files, starting a writing session looked like this: open chat, paste in a block of text explaining my voice and style preferences, paste in the relevant project background, then ask for what I needed. Then the next day, repeat the whole thing because the session was gone.
Now it looks like this: open Claude Code in the project directory. The context files load automatically — voice, style, project background, all of it. Type the actual request. Get output that already sounds like me, already follows the project conventions, already applies the constraints I care about.
The difference isn’t just convenience. It’s that Claude Code is working from the same accumulated understanding that I am, not starting from zero.
That accumulated understanding — your context files, your vault, your documented preferences and decisions — is the actual asset you’re building when you work with AI properly. Not a library of clever prompts. A system that knows you.
The next module gets into what happens once you’ve got context established: what you can actually hand off to agents, and what that delegation looks like in practice.
Check Your Understanding
Answer all questions correctly to complete this module.
1. What is the key shift described in this module about how we work with AI?
2. What is CLAUDE.md’s primary function in the context engineering approach?
3. Which layer of the “Context Ladder” contains your name, voice, and core preferences?