Rain Man, AI, and the Honesty We Keep Avoiding
Today I realized I’ve been relating to AI all wrong.
From now on I’m going to call my AI Rain Man, after the movie.
In Rain Man, Dustin Hoffman plays Raymond—an autistic savant. He can do things that look like magic, especially with numbers. His brother Charlie (Tom Cruise) starts out frustrated, then realizes Raymond has a skill that can “win” in Vegas—counting cards, recalling patterns, never missing a beat. Charlie tries to leverage that. But the movie doesn’t end with Charlie becoming rich. It ends with a different kind of turning point: Charlie stops seeing Raymond primarily as a “win”—a way to beat Vegas—and starts reckoning with the full reality of who Raymond is, including limits that don’t disappear just because the gift is impressive.
That’s the most useful analogy I’ve found for AI—with one crucial difference: AI isn’t a person. So the shift isn’t from “tool” to “person.” The shift is from “jackpot fantasy” to honest appraisal. You stop seeing only the win side—the card-counting moments, the slick outputs, the quick fixes—and you start seeing the whole reality at once: powerful in narrow lanes, fragile outside them, and always requiring supervision.
AI is excellent at narrow things. It can “count cards.” It can generate a clever snippet of code, produce a paragraph, summarize text, suggest a regex, outline an idea, or draft a clean email. Those moments feel like a superpower. And when you’re doing short tasks—small, bounded problems—it can genuinely help.
But if you go into AI believing it’s a mature collaborator that will reliably carry a long project, a long document, or a complex system from start to finish, you’re going to get frustrated fast. Because it has savant strengths paired with major limitations.
The part nobody wants to admit
On long-form work—like a document you keep expanding—AI often breaks down in the exact places that matter most: consistency, memory, and respect for constraints. You tell it, “Do not rewrite what I already wrote—only add.” It rewrites anyway. You establish preferences—how references should look, what tone you want—and it forgets. Or it follows them for a while, then randomly stops. You end up spending your time re-explaining your rules, reconstructing what it changed, and patching the damage.
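One practical guardian habit here, sketched minimally below in Python, is to stop trusting “only add” and start checking it: diff the model’s output against the text you wrote and flag anything that was rewritten or dropped. The file names are hypothetical, and this is one way to run the check, not the only one.

```python
import difflib

# Hypothetical file names: "original.md" is the text you wrote,
# "ai_output.md" is what the model returned after being told to "only add."
with open("original.md", encoding="utf-8") as f:
    original = f.readlines()
with open("ai_output.md", encoding="utf-8") as f:
    revised = f.readlines()

# In a unified diff, lines that start with "-" (other than the "---" header)
# are lines of your original that were dropped or rewritten.
# Lines the model merely moved will show up too, which is fine for a sanity check.
changed = [
    line for line in difflib.unified_diff(original, revised)
    if line.startswith("-") and not line.startswith("---")
]

if changed:
    print("The model rewrote or dropped these lines you wrote:")
    for line in changed:
        print(line, end="")  # each line already carries its newline
else:
    print("Every original line survived; the output only added text.")
```

Even a crude check like this turns “did it respect my constraint?” from a feeling into a fact you can see before you start patching anything.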
This is where the Rain Man comparison becomes painfully accurate. It’s not that AI is “dumb.” It’s that it’s uneven. Brilliant at a few things, unreliable at the things you need for long-term collaboration.
What I mean by “Rain Man”
When I call AI “Rain Man,” I don’t mean autistic people are “less intelligent.” Autism isn’t a synonym for “can’t comprehend.” It’s a spectrum: many autistic people are highly intelligent, and some have very uneven skill profiles—brilliant in one domain, challenged in another—especially around communication, flexibility, or “executive function” under real-world complexity.
What I mean is this: AI has a similar “spiky” profile. It can be astonishing in narrow lanes—pattern recognition, rapid drafting, summarizing, generating examples, producing code fragments, brainstorming structures—like card-counting brilliance. But it can also fail hard at the ordinary, human parts of collaboration that make long projects work.
If AI were a person, its limitations would look like this: weak executive function for long tasks. It loses the thread. It doesn’t reliably track constraints over time. It can be inconsistent from one moment to the next. It can sound confident even when it’s wrong. It can’t take responsibility for consequences, and it can’t truly “own” a plan the way a mature human does.
What it does well is different: it can generate lots of possibilities quickly. It can help you see options you didn’t consider. It can turn a vague idea into a structured outline. It can draft, compress, expand, rephrase, and prototype. It can search for patterns in text, give you checklists, create test cases, and offer alternate ways to frame an argument. In short: it’s powerful at producing material, but unreliable as the final judge of what’s correct, what’s consistent, and what should be trusted without verification.
A child with power needs a guardian
That’s when it hit me: AI is, in practice, like a powerful, uneven “child” that needs supervision. A child can recite facts, imitate speech, even do impressive tricks. But you don’t hand a child the steering wheel and call it “innovation.” You supervise. You set boundaries. You test. You correct. You take responsibility.
AI is not yet ready to replace people wholesale, and pretending otherwise is dangerous. Not because it can’t do impressive things, but because it can do impressive things without the maturity humans associate with judgment. It will generate confident nonsense. It will miss key details. It will “sound right” while being wrong. And when you scale that into healthcare, infrastructure, law, finance, national security, or even basic software systems, the failure mode isn’t a typo—it’s systemic error with real consequences.
National security is even more sobering. In that world, a confident mistake isn’t just a bug—it can shape intelligence, targeting, escalation, and command decisions. That’s why it caught my attention to see recent reporting that even the Pentagon is wrestling with how immature parts of current AI deployment still are: the capability is real, but so is the risk of over-trust. In domains like that, “looks right” can be the most dangerous failure mode.
That’s why the “guardian” analogy matters. AI needs a human guardian the way a child does: someone who can set boundaries, test outputs, correct errors, and carry responsibility. And it also reminds you where the moral weight actually lies: AI isn’t “good” or “bad” by itself. It reflects the people who build it, tune it, deploy it, and profit from it. The ethical direction comes from the programmers, product designers, executives, and users who decide whether the system will be trained and used toward truth, safety, and accountability—or toward manipulation, shortcuts, and hype.
“Hallucinations” in a human mental framework
An AI “hallucination” is when a generative model (such as an LLM) produces false, nonsensical, or unverified information and presents it with high confidence and coherence. Human error looks different: misremembering a fact, misunderstanding a concept, letting bias steer a conclusion. AI hallucinations can be more dangerous because they often sound authoritative, arrive with steady confidence, and fabricate plausible details without any internal alarm that says, “I might be wrong.” Humans, at least, can step back and self-check—we can doubt, verify, ask for evidence, and revise—because we have metacognition, the ability to examine our own thinking.
The fragmented relationship trap
In the movie, Charlie initially tries to use Raymond for money. That’s the wrong relationship. And with AI, there’s a similar temptation: “If it can do this impressive thing, I’ll scale it up and let it do everything.”
But that sours quickly, because the cost doesn’t disappear—it shifts.
Instead of writing, you become a manager of outputs.
Instead of designing, you become a tester of suggestions.
Instead of building understanding, you become a bystander while something assembles a system you didn’t fully think through.
And when it fails, you own the failure—because you still have to ship the result.
“It makes you productive”… or it makes you do more work
This is where the marketing gets slippery. Yes, AI can make you more productive in the narrow sense: you can produce more drafts, more iterations, more output, more “done.” But doing more is not the same as doing less work. In many cases it’s the opposite: AI lowers the friction to start tasks, so you start more of them. You revise more. You run more parallel threads. You take on more scope—often without a manager explicitly telling you to. The day gets fuller, not lighter.
I read a recent article, “AI Doesn’t Reduce Work—It Intensifies It,” making this point plainly: productivity gains don’t necessarily reduce workload—they can intensify it. The promise is “time saved.” The reality is “expectations expand.” The more output you can generate, the more output you’re expected to generate. The result is what feels like workload creep: more tasks, more revisions, more context switching, more mental overhead—without the relief people were promised.
And the long-term impacts haven’t been fully seen yet. When the workday intensifies, the first thing you notice is speed. What you don’t see immediately is what gets quietly taxed: attention, judgment, endurance, and quality. AI can compress the effort required to produce something that looks finished, but that can also mean more time verifying, testing, auditing, and undoing errors that arrive with confidence. So the question isn’t just “Can I do more?” It’s “What does doing more do to the human doing it—and what does it do to the quality and safety of the work over time?”
Why the push feels premature
And hype is part of the problem. So much money has been poured into AI that the pressure to monetize it is enormous. In other words: the industry wants a return before the child has grown up. They want to win big at the card table right now. They want the Vegas moment—mass adoption, automation headlines, “replace workers,” “10x output,” “one person can do the job of ten.” But that story assumes the AI has the kind of stable reasoning, memory, and integrity that would make that bet safe. It doesn’t. So we get a weird contradiction: a system marketed like a finished product, used like an employee, but behaving like a child: a savant tool that still needs constant oversight.
The lesson of Rain Man—and of life
That’s where Rain Man becomes more than a movie reference—it becomes a parable about technology and even life. Life is full of people who are strong in some areas and weak in others. Some are brilliant but socially limited. Some are dependable but not flashy. Some are gifted and also difficult.
People, in other words, are mixtures of strengths and weaknesses—good and bad, gifted and limited. The mistake is demanding that any one person be everything, or that any one tool be a subject-matter expert in every domain—and the same mistake shows up with AI. The trick is discernment: identify what something does best, use it there, and refuse to force it into roles it cannot responsibly carry.
So the honest relationship with AI is realism. Let it “count cards”—draft small chunks, accelerate brainstorming, suggest patterns, generate test scaffolds, summarize logs—then verify, constrain, and take responsibility the way a guardian would. Because the moment you forget what it is, you’ll start treating it like magic, a miracle worker… and you’ll end up managing the damage from believing your own marketing.