Steve Martin and the Logic of Comedy
Module 4 · Section 2 of 6
In the 1960s, a young Steve Martin was sitting in a college logic class wondering what any of this had to do with comedy. He opened his textbook expecting dry academic exercises. Instead he found Lewis Carroll — the author of Alice in Wonderland — writing formal logical arguments about crocodiles and soap bubble poetry.
That discovery changed how Martin thought about humour. And it contains something directly useful for working with AI.
What Carroll Actually Did
Carroll was a mathematics professor at Oxford. He brought the same systematic thinking he used for logic to his comedy writing. His arguments were not random nonsense — they were precisely structured:
- Babies are illogical.
- Nobody is despised who can manage a crocodile.
- Illogical persons are despised.
- Therefore, babies cannot manage crocodiles.
The conclusion follows perfectly from the premises. The logic is valid. And the result is absurd — because the premises are absurd.
Carroll understood that rigid logical structure could coexist with ridiculous content. The form was sound. The inputs were nonsense. The output was inevitably, correctly nonsensical.
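Carroll's chain can be sketched as simple forward rule-chaining. This is a minimal Python sketch, not anything Carroll wrote: the predicate names are invented labels, and the second premise is applied via its contrapositive (anyone despised cannot manage a crocodile). The point it demonstrates is the one above: the derivation machinery is sound even though the premises are nonsense.

```python
def derive(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Forward-chain: repeatedly apply rules of the form 'A implies B'."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in rules:
            if cond in derived and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

# Carroll's premises, recast as implications about one subject:
# 1. Babies are illogical.
# 2. Illogical persons are despised.
# 3. Nobody who can manage a crocodile is despised
#    (contrapositive: the despised cannot manage crocodiles).
rules = [
    ("is_baby", "illogical"),
    ("illogical", "despised"),
    ("despised", "cannot_manage_crocodile"),
]

conclusions = derive({"is_baby"}, rules)
print("cannot_manage_crocodile" in conclusions)  # True: the chain is valid
```

The function never asks whether any premise is true of the world; it only checks whether each rule's condition is already in the fact set. Validity and truth are handled by entirely separate machinery, which is exactly Carroll's trick.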
Martin later closed his shows with: “I’m not going home tonight; I’m going to Bananaland, a place where only two things are true: One, all chairs are green; and two, no chairs are green.”
He was not being random. He was constructing a logical contradiction and presenting it as if it made perfect sense. The humour came from the precision of the framework, not despite it.
The Insight for Working With AI
Carroll’s syllogisms are a near-perfect model for how AI reasoning works.
AI does not generate conclusions by thinking. It generates conclusions by completing patterns learned from training data. Those patterns can produce structurally valid logical chains. If you press it on a topic, it will give you premises, and those premises will lead to a conclusion, and the connection between them will look sound.
The problem is the premises. AI has no way to verify whether its starting points are true. It only knows whether they are consistent with what it has seen before. A premise that appears frequently in training data feels as solid to the model as a premise that is actually correct.
This means AI is, in Carroll’s terms, a syllogism machine. It can produce valid chains from wrong premises and deliver the result with the same confidence it uses for correct ones. The logic holds. The conclusion is wrong. And nothing in the output tells you which situation you are in.
How to Apply This
When an AI gives you a recommendation, a diagnosis, or an explanation, do not start by evaluating the conclusion. Start by finding the premises.
Ask yourself: what premises would have to be true for this conclusion to follow? Then check those premises directly — not by asking the AI to confirm them, but by going to a primary source or applying domain knowledge you already have.
If a premise is wrong, the conclusion loses its support regardless of how clean the logical chain looks. A valid argument from a false premise is unsound, and an unsound argument gives you no reason to trust its conclusion. Carroll proved you could make that funny. In professional settings, it tends not to be.
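The premise-first habit can be sketched as a checklist. Everything here is hypothetical: `verified_facts` stands in for a primary source or your own domain knowledge, and the sample premise about the GIL is a deliberately false claim of the kind an AI might assert confidently.

```python
# Hypothetical record of premises you have independently verified.
verified_facts = {
    "Python dicts preserve insertion order (3.7+)": True,
    "the GIL was removed in Python 3.10": False,  # false premise
}

def audit(premises: list[str], conclusion: str) -> str:
    """Check the premises before ever looking at the conclusion."""
    unverified = [p for p in premises if not verified_facts.get(p, False)]
    if unverified:
        return f"Do not trust the conclusion yet; verify: {unverified}"
    return f"Premises hold; now evaluate: {conclusion}"

print(audit(
    ["the GIL was removed in Python 3.10"],
    "threads will give a linear speedup",
))
```

The order of operations is the whole idea: the conclusion is never inspected until every premise has been checked against something outside the chain itself.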
The skill is not scepticism about AI outputs. The skill is knowing which part of the chain to inspect. Constraints do not block good reasoning — they guide it. Carroll used logical form to produce comedy. You can use the same structure to find where AI reasoning breaks.
The bridge: AI outputs are Carroll syllogisms. The chain from premise to conclusion is often valid. The question is always whether the premises are true. In the next section, we look at a problem where the premises are hidden — and where missing them leads to the wrong answer even when you feel certain you have it right.