Chapter 3: The SHAPE Method — How to Actually Do It
What you’ll walk away with:
- An honest map of your current workflow (Situation) — what you’re paying for vs. what you actually use
- A testable hypothesis with a number attached for one AI project
- A clear Taker/Shaper/Maker decision for your first (or next) implementation
From Strategy to Step-by-Step Action
You know why you’re using AI, who it’s for, and what boundaries apply. And then nothing happens — because you never planned how to actually do it.
Most AI projects fail because of poor execution, not bad strategy. You get overwhelmed by possibilities, paralysed by options, or distracted by new tools appearing every week. PAST tells you what you want. SHAPE gets you doing it, step by step.
SHAPE moves you through five phases:
- Situation: Assess where you are now
- Hypothesis: Define what success looks like
- Action: Test it on real work
- Process: Apply what works to more of your work
- Evaluation: Measure, adjust, repeat
SHAPE works at every level — one workflow, a small team, or a whole organisation. The logic stays the same: be honest about where you are, define what success looks like, test it, spread what works, and keep measuring.
SHAPE in Practice: Individual Workflows
Before we get into the detail of each phase, here’s what SHAPE looks like applied to the kind of work most people actually do.
Example: A Freelance Copywriter’s Research Workflow
Situation: Spending 45 minutes per client research summary. Quality is hit-or-miss. Structuring the findings into a usable brief takes longer than the research itself.
Hypothesis: AI-assisted summarisation can cut this to 15 minutes while producing more consistent briefs.
Action: Test three prompt approaches over two weeks. One summarises raw sources. One pulls key points into a template. One generates a draft brief directly. Track which gives the best usable output.
Process: Once the best approach is identified, save it as a reusable template. Refine based on actual client projects over the next month.
Evaluation: After ten briefs, compare time spent, consistency of output, and how much editing each brief needed. Adjust or try a different approach if the numbers don’t hold up.
Example: Meeting Notes for a Small Agency
Situation: The team spends 30 minutes after every meeting organising notes, and action items regularly fall through the cracks.
Hypothesis: AI transcript analysis can spot action items automatically and cut post-meeting admin to 10 minutes.
Action: Try it on five meetings. One person takes AI-assisted notes using a transcription tool and a structured prompt.
Process: If it works, roll it out to all regular meetings with a standard prompt template. If your VA or operations person runs the meetings, teach them the workflow.
Evaluation: Monthly check — are action items getting captured? Is the team actually using the outputs? Has meeting follow-through improved?
Now let’s work through each phase in detail.
Situation: Where Are You Now?
Before you change anything, get honest about where you are right now.
This is simpler than it sounds. What tools are you actually using? What’s working? What’s wasting your time?
The Reality Gap:
There’s almost always a gap between how you think you work and how you actually work. For a solopreneur or small team, that gap usually shows up between what you’re paying for and what you actually use — and between your intended workflow and the shortcuts you’ve quietly adopted.
Reality Gap Questions:
- What tools do you use daily vs. what you’re subscribed to and ignoring?
- Where have you built workarounds because the “proper” tool was too clunky?
- Which parts of your workflow do you dread — and why?
- What are you doing manually that feels like it should be faster?
Workflow Questions:
- What’s my current process for this task, honestly?
- What’s working and what’s frustrating?
- Which repetitive tasks eat the most time each week?
- What quality standard do I need to keep?
- How much setup hassle am I willing to deal with?
Quick Readiness Check:
Rate yourself 1-10 on each:
- Comfort with AI tools (have you used ChatGPT, Claude, or similar?)
- Clarity on what you’d automate first
- Willingness to spend a week testing before expecting results
- Tolerance for imperfect first drafts that need editing
If you scored below 5 on most of these, start with the Takers approach in the Action phase — use the tools as they come, no customisation. If you scored above 7, you might be ready to shape your own workflows.
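If it helps to make the scoring concrete, here is a minimal sketch of the readiness check as a small helper. The four dimensions and the below-5 / above-7 thresholds come straight from the check above; the function name and return strings are illustrative, not a prescribed tool.

```python
def readiness_recommendation(scores):
    """Take four 1-10 self-ratings and suggest a starting approach.

    Order of ratings: comfort with AI tools, clarity on what to
    automate, willingness to test for a week, tolerance for
    imperfect first drafts.
    """
    if len(scores) != 4 or not all(1 <= s <= 10 for s in scores):
        raise ValueError("expected four ratings between 1 and 10")
    below_5 = sum(1 for s in scores if s < 5)
    above_7 = sum(1 for s in scores if s > 7)
    if below_5 > len(scores) // 2:   # below 5 on most dimensions
        return "Taker: use tools out of the box, no customisation"
    if above_7 > len(scores) // 2:   # above 7 on most dimensions
        return "Shaper: ready to shape your own workflows"
    return "Taker first; revisit after a month of honest use"

print(readiness_recommendation([4, 3, 6, 4]))
print(readiness_recommendation([8, 9, 9, 8]))
```

The point of scripting it is not automation for its own sake; it is that writing the thresholds down stops you from grading yourself generously after the fact.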
For teams: If you’re running this at an organisational level, expand the Situation assessment to include your current tools, data availability, team skill gaps, change readiness across departments, and leadership support. Map workflows across functions, not just your own.
Without an honest Situation assessment, your Hypothesis will be based on guesswork — and you’ll fix the wrong thing.
Pause and apply: Answer the four Reality Gap questions for your main workflow — honestly, not aspirationally. Write down the gap between what you’re paying for and what you actually use.
Hypothesis: What Does Success Look Like?
Most AI projects fail because nobody defines what “working” looks like before they start.
A hypothesis isn’t a wish. It’s a testable statement with a number attached. Before you start using a tool, write down what you expect it to do for you. Be specific.
Examples at the personal/prompt level:
- “This prompt will generate usable first drafts 80% of the time”
- “AI output will need less than 5 minutes of editing per piece”
- “Using AI for client research will cut my prep time from 2 hours to 30 minutes”
- “This automation will save 2 hours per week with less than 30 minutes of initial setup”
Examples at the small business level:
- “AI-assisted proposals will reduce turnaround from 3 days to 1 day”
- “Using AI for bookkeeping categorisation will save my VA 4 hours per week”
- “AI content drafts will maintain my voice well enough that editing takes 15 minutes, not an hour”
The point isn’t precision — it’s commitment. Writing “I think this will save time” is useless. Writing “I think this will cut my research time from 90 minutes to 30 minutes per client” gives you something to actually check against in a month.
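The monthly check can be as simple as averaging your logged times against the numbers you committed to. A minimal sketch, using the chapter's "90 minutes to 30 minutes" example as the hypothesis; the field names are illustrative assumptions:

```python
def hypothesis_check(baseline_minutes, target_minutes, logged_minutes):
    """Compare actual per-task times against a written hypothesis."""
    avg = sum(logged_minutes) / len(logged_minutes)
    return {
        "average": round(avg, 1),                        # actual time per task
        "saved_per_task": round(baseline_minutes - avg, 1),
        "hypothesis_met": avg <= target_minutes,         # did you hit the number?
    }

# One month of client research tasks, times in minutes
result = hypothesis_check(90, 30, [40, 25, 35, 28, 32])
print(result)  # average 32.0, 58.0 minutes saved, hypothesis not quite met
```

Notice what the numbers give you that "I think it's faster" doesn't: here you saved nearly an hour per task yet still missed the target by two minutes, which is a prompt to adjust the hypothesis or the workflow, not a pass/fail verdict.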
How you’ll know it worked:
Speed: How much faster are specific tasks? Where did the freed-up time go — did you reinvest it or just fill it with more busywork?
Quality: Are outputs more accurate? More consistent? Are you catching fewer errors in your review?
Adoption: Are you actually using the tool daily, or did you try it twice and forget? If you have a team, are they using it after the first week?
Business Impact: Can you trace a revenue change, cost saving, or client satisfaction shift back to the AI tool?
Action: Test It on Real Work
A hypothesis without a test is just an opinion. This is where you find out if your idea actually works.
The Takers vs. Shapers vs. Makers Decision
How much you customise your AI tools has a huge impact on whether they work:
Takers: 67% Success Rate
Use ChatGPT, Claude, Copilot, or similar tools straight out of the box. No custom setup, no coding — just the tool and your prompts. This is where most solopreneurs and freelancers should start, and where most should stay.
Example: Using ChatGPT to draft proposals, Claude to summarise research, or Copilot to clean up spreadsheets.
Shapers: 45% Success Rate
Customise the tools for your specific workflow. Build custom GPTs with instructions tailored to your business, set up Zapier automations, or create prompt templates for recurring tasks.
Example: A custom GPT that knows your brand voice and client intake process, or connecting Claude to your CRM through a workflow tool.
Makers: 33% Success Rate
Build your own tools from scratch — writing code, training models, or building custom integrations. Unless you’re technical and have a real competitive reason to build, this is almost always the wrong choice for a small business.
Example: A custom Python script that processes client data through an AI API, or a bespoke chatbot for your website.
The pattern is clear: simple tools that work reliably beat complex setups that need constant maintenance. Most of us overestimate how unique our needs are.
Pause and apply: Which category are you in right now — Taker, Shaper, or Maker? Which category should you be in? If they’re different, that mismatch is costing you either money (too complex) or results (too simple).
Decision Matrix:
| Criteria | Takers | Shapers | Makers |
|---|---|---|---|
| Success Rate | 67% | 45% | 33% |
| Time to Value | Days | Weeks | Months |
| Resource Needs | Low | Medium | High |
| Technical Risk | Low | Medium | High |
| Customisation | Minimal | Moderate | Maximum |
| Best For | Most workflows | Recurring processes with specific needs | Competitive differentiation |
How to Test It:
Try it for a week on one real task. Not a test project, not a hypothetical — pick something you actually need to do this week and use the AI tool to do it.
If it saves you time and the output is usable, keep going for a month. Track your numbers against the hypothesis you wrote down. If it’s not saving time, or the output needs so much editing that you’re not gaining anything, try a different tool or a different approach for the same task.
The goal is one week of honest use before you decide whether to continue.
For teams running a formal pilot: Structure the test in three phases. Week 1-2: configure the tool, train the pilot team, measure baseline metrics. Week 3-8: active daily use with weekly feedback collection and iterative adjustments. Week 9-10: comprehensive evaluation against your success metrics, cost-benefit analysis, and scaling recommendations.
Process: Apply What Works to More of Your Work
Your test worked. Now what?
For a solopreneur, “scaling” doesn’t mean rolling out to departments — it means applying the workflow that worked to more of your tasks, or teaching your VA, contractor, or small team to use it.
Scaling for Solo Operators:
If AI-assisted client research saved you time, ask: what other research-heavy tasks could use the same approach? Could you apply it to competitor analysis, market research, or content planning?
If a prompt template worked well for proposals, could you adapt it for case studies, email sequences, or project briefs?
The principle is the same at every scale: take what worked and apply it adjacently before adding complexity.
The Simplicity Principle:
McKinsey research shows that 67% of Taker implementations succeed versus only 33% of Maker implementations — precisely because simplicity scales better than complexity. Your workflow succeeded partly because it was simple. Preserve that simplicity as you expand it.
Resist the temptation to add integrations, automations, or custom tooling until the basic workflow is running smoothly across multiple use cases. Every additional layer reduces the chance it keeps working.
If You Have a Team:
- Teach one other person the workflow before building documentation
- Start with the team members who are most comfortable with AI tools
- Keep the same tool and approach — don’t let each person pick their own
- Add complexity only when the business case clearly justifies it
For organisations scaling formally: Progress through three phases. Phase 1: expand to similar use cases within the same function. Phase 2: connect AI workflows across functions with shared success metrics. Phase 3: integrate AI into core business processes with strategic customisation only where it creates genuine competitive advantage.
Evaluation: Is This Actually Working?
Without measuring, you’ll drift from “this is working” to “I think this is working” to “I’m not sure anymore.” Evaluation closes the loop — and opens the next one.
AI implementation isn’t a project with an end date. It’s a capability you keep developing.
Monthly Check-In Questions:
Set a recurring reminder — first of every month, spend 15 minutes answering these:
- Am I still using this tool regularly, or has it drifted into the “subscriptions I’m ignoring” category?
- What’s my time-per-task compared to where I started? (Check your hypothesis.)
- Has the quality of output improved, stayed flat, or declined?
- Am I adding unnecessary complexity — extra steps, integrations, or tools that aren’t pulling their weight?
- Could I get similar results with a simpler approach?
- What’s the next workflow I should apply this to?
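To make the check-in repeatable, it helps to answer the same questions in the same shape every month. A minimal sketch of a structured log entry that flags the drift the questions above are probing for; the field names and thresholds are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class MonthlyCheckIn:
    still_using_daily: bool
    minutes_per_task: float
    hypothesis_minutes: float   # the target you wrote down in the Hypothesis phase
    quality_trend: str          # "improved" / "flat" / "declined"
    extra_steps_added: int      # integrations or steps added this month

    def flags(self):
        """Return anything that needs attention this month."""
        warnings = []
        if not self.still_using_daily:
            warnings.append("tool drifting into the ignored-subscriptions pile")
        if self.minutes_per_task > self.hypothesis_minutes:
            warnings.append("time-per-task above the hypothesis target")
        if self.extra_steps_added > 0:
            warnings.append("complexity creeping in - run the simplicity check")
        return warnings

march = MonthlyCheckIn(True, 35.0, 30.0, "flat", 2)
print(march.flags())
```

Fifteen minutes on the first of the month filling in five fields is enough; the value is in the trend across entries, not any single month.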
The Simplicity Check:
This one matters. Every month, ask yourself: is this getting more complicated than it needs to be? The shadow AI data tells us that simple tools people actually use beat sophisticated tools they abandon. If your AI workflow is turning into a Rube Goldberg machine, strip it back.
For organisations: Add quarterly strategic reviews covering capability development, competitive positioning, and build-vs-buy re-evaluation. Are your custom solutions still delivering differentiated value, or could vendor tools now handle what you built? Should you simplify by migrating custom builds to standard platforms?
SHAPE Worked Example: A Freelance Copywriter
Applying SHAPE to a real workflow problem:
Sarah is a freelance copywriter who spends too much time on client research and first drafts. She has four regular clients and picks up project work. Her bottleneck is the front end of every project: understanding the client’s industry, competitors, and audience well enough to write something credible.
Situation:
- Spending 2-3 hours per client on background research before writing begins
- Research quality is inconsistent — sometimes thorough, sometimes rushed depending on deadlines
- Using Google, industry publications, and client-provided materials manually
- Already paying for ChatGPT Plus but mostly using it for brainstorming, not structured research
- Readiness Check: Comfortable with AI tools (8/10), clear on what to automate (7/10), willing to test (9/10)
Hypothesis:
- “AI-assisted research will cut my per-client research time from 2-3 hours to 45 minutes”
- “Research briefs will be more consistent across clients because I’m using a structured prompt”
- “First drafts based on AI research briefs will need less revision than my current process”
Action (Takers Approach):
- Use ChatGPT with a structured research prompt — no custom GPT, no integrations, just a well-crafted prompt
- Test on next three client projects over two weeks
- Track time spent on research and number of revisions per draft
- Compare to her baseline from the last three projects
Process (Scaling):
- Research prompt worked well — adapt it for competitor analysis briefs and content strategy summaries
- Create a saved prompt template for each recurring research type
- Teach her part-time VA to run the research prompts and prepare the briefs before Sarah starts writing
Evaluation (After One Month):
- Research time dropped from 2.5 hours average to 50 minutes
- Research briefs are more consistent — she’s catching industry details she used to miss under time pressure
- First drafts need slightly less revision, but the bigger win is she’s starting each project with better context
- Next step: apply the same approach to her content strategy deliverables
What made this work: Starting with the tools she already had (Takers approach), testing on real client work instead of hypothetical projects, and measuring against a specific baseline. No custom GPTs, no automation tools, no integrations — just a better prompt and a system for using it.
For organisations: The same SHAPE logic applies at larger scale. A financial services firm implementing AI for loan processing would follow the same phases — assess current state honestly, define specific hypotheses (“reduce processing time by 30% while maintaining approval accuracy”), pilot with a Takers approach using an established vendor solution, scale systematically, and measure against baseline metrics. The framework is identical; the scope is different.
Next: You have a method. Chapter 4 puts it on a timeline — a 90-day cycle that forces specificity, prevents drift, and builds the kind of momentum that compounds.
Go Deeper
- Put it into practice: SHAPE Assessment — Turn the method into your implementation blueprint
- Glossary: SHAPE Framework, Stack Audit
- Further reading: SHAPE Framework on GitHub — open-source under CC BY-SA 4.0