Week 4 · Lesson

When AI Is Wrong: Hallucinations, Bias, and Trust
In 2023, a New York attorney used ChatGPT to write a legal brief. The AI cited six court cases to support his argument. Sounded great. Very convincing.

None of the cases existed. ChatGPT invented them. Complete with fake judges, fake rulings, and fake citations.

The lawyer filed the brief without checking. The judge was not amused. The attorney was sanctioned and fined.

This is the most important email in the course. Everything we've covered only works if you understand where AI breaks down. And it will break down. Regularly.

Why AI makes stuff up

Remember Day 2? AI predicts the next most likely word. It's not looking up facts. It's not reasoning from a database. It's generating text that sounds right based on patterns.

When it doesn't know something, it doesn't say "I don't know." It generates the most plausible-sounding answer. Sometimes that answer is completely fabricated.

This is called a hallucination. Not because the AI is confused. Because it's doing exactly what it was built to do: produce fluent, confident text. The confidence is the dangerous part.

The models are getting better. But this problem will never fully go away. If you use AI long enough, it will lie to you with a straight face. Count on it.

How to catch it

Build verification into your workflow. Not sometimes. Every time.

Ask for sources. When AI makes a factual claim, ask where it came from. If it can't point to something specific and real, treat the claim as unverified.

Cross-check the important stuff. Any number, date, name, or quote that matters should get a quick search. Takes 30 seconds. Saves you from looking like the lawyer in New York.

Use the "explain your reasoning" trick. Ask the AI to walk through its logic step by step. Hallucinations are easier to spot when you can see the chain of reasoning. Bad logic shows up faster than bad conclusions.

Try Perplexity for research. Unlike ChatGPT and Claude, Perplexity shows its sources inline. You can click through and verify. It's not perfect, but the sourcing makes fact-checking much faster.

What about privacy?

Everything you paste into ChatGPT or Claude gets processed on their servers. For most business use, this is fine. Both companies have data policies and enterprise tiers with stronger protections.

But use common sense. Don't paste sensitive customer data, passwords, proprietary source code, or anything covered by NDA into a free-tier AI tool.

If your company has an enterprise AI setup, use that. If not, stick to information you'd be comfortable putting in an email to a consultant. That's roughly the right threshold.

The trust framework

Here's how to think about it:

Trust AI for speed. First drafts, brainstorming, summarization, structure, idea generation. Let it go fast.

Trust yourself for accuracy. Facts, figures, final decisions, anything public-facing, anything with consequences. That's your job.

The best workflow: AI generates quickly, you verify and refine. That combination is faster than doing everything yourself and more accurate than trusting AI blindly.

The people who get burned by AI are the ones who skip the second step. Don't be the lawyer.

AI is a power tool. And power tools are dangerous if you don't respect their limits.