Hubble's Flaw and Corrective Thinking
In 1990, NASA launched the Hubble Space Telescope — the product of years of development and more than a billion dollars, designed to be the most powerful eye humanity had ever pointed at the sky. The first images came back blurry. Not slightly off. Visibly, embarrassingly wrong.
The cause turned out to be a flaw in the primary mirror: its surface had been ground to the wrong shape by about 2.2 micrometres — roughly 1/50th the width of a human hair. An almost invisible error producing a very visible failure.
The engineers faced a problem with no obvious fix. You cannot bring a telescope down from 340 miles up for a do-over. You cannot grind the mirror again. The flawed component was unreachable and irreplaceable.
Their solution was to add corrective optics: a set of instruments that compensated for the mirror’s flaw rather than eliminating it. The engineering community called it “contact lenses for Hubble.” It worked — and the repaired telescope eventually produced images far beyond what the original specification promised.
What this teaches about debugging
When a system has a flaw you cannot remove, you build around it. The debugging mindset is not always about eliminating the root cause. Sometimes it is about understanding the flaw precisely enough to design a correction.
This matters when you are diagnosing AI failures. Most professionals cannot retrain the model. They cannot change how it was built or what it was trained on. What they can do is understand the failure precisely enough to compensate for it.
If a model consistently misinterprets a particular type of request — strips context from long documents, confuses similar product names, formats outputs in ways your workflow cannot use — the fix is rarely “try again.” It is diagnosing what is actually going wrong and building a corrective layer into how you work with it.
Corrective prompting is the contact lens approach. You cannot fix the mirror. But once you understand the specific distortion it produces, you can design an input that accounts for it.
Applying this to AI
When an AI output disappoints you, resist the instinct to re-run the same prompt and hope for different results. Instead:
- Describe precisely what the output did that you did not want. Not “it was wrong” — but how it was wrong. Too generic? Wrong format? Missed the point of the request? Made up a specific detail?
- Ask what in your input might have caused that. Was context missing? Was the request ambiguous? Was there a conflicting instruction?
- Design a correction that targets the specific failure — not a complete rewrite of the prompt, but a targeted adjustment.
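The three steps above can be sketched as a small helper that turns a diagnosis into a corrected prompt. This is a minimal illustration, not a real API: every name here (`build_corrective_prompt`, its parameters, the example text) is hypothetical, and the point is the shape of the correction — name the specific failure, name the suspected cause, add one targeted adjustment — rather than any particular wording.

```python
# Illustrative sketch: the corrective-prompting loop as a reusable helper.
# All names and example strings are hypothetical, not part of any real API.

def build_corrective_prompt(original_prompt: str,
                            observed_failure: str,
                            suspected_cause: str,
                            targeted_fix: str) -> str:
    """Compose a corrected prompt that names the specific failure and adds
    one targeted adjustment, instead of rewriting the prompt from scratch."""
    return (
        f"{original_prompt}\n\n"
        f"A previous attempt failed in this specific way: {observed_failure}. "
        f"The likely cause in the request: {suspected_cause}. "
        f"This time, {targeted_fix}"
    )

# Example: the model summarised a contract but dropped all clause numbers.
corrected = build_corrective_prompt(
    original_prompt="Summarise the attached contract for a non-lawyer.",
    observed_failure=("the summary dropped all clause numbers, so readers "
                      "could not trace points back to the original text"),
    suspected_cause="the request never said clause references were needed",
    targeted_fix="keep each clause number next to the point it summarises.",
)
print(corrected)
```

Notice that the original prompt survives intact; the correction is appended as a targeted layer — the contact lens, not a new mirror.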
The goal is to move from “this didn’t work” to “this specific thing produced this specific failure, and I am adjusting for it.” That is corrective thinking. That is what NASA did in 1993, and it is exactly what you need when AI sends back a blurry picture.