Week 2 · Lesson

When NOT to Use AI

We've spent this whole week making you better at using AI. Today I'm going to tell you when to stop.

Because the people who get the most out of AI aren't the ones who use it for everything. They're the ones who have great instincts about when it helps and when it doesn't.

And those instincts? They're simpler than you think.

The 80/20 rule of AI

Here's the mental model I use every day: AI gets you 80% of the way there, fast. You do the last 20%.

That last 20% is taste. Judgment. Accuracy. The things that make work actually good instead of just "fine."

A first draft that takes you 10 minutes instead of an hour? That's the 80%. Reading it, tightening the argument, fixing the parts that sound off, adding the specific detail only you know? That's the 20%.

This is not a shortcut to skip the thinking. It's a shortcut to skip the blank page.

Where AI is high-leverage

Some tasks are obvious wins. These are the places where AI consistently saves time without much risk:

First drafts. Emails, memos, presentations, proposals. Starting from something is always faster than starting from nothing.

Brainstorming. "Give me 15 ideas for X" followed by "now rank them." AI doesn't get creative block. It'll throw out more ideas in 30 seconds than your team will in a 45-minute meeting.

Summarizing. Long reports, meeting transcripts, research papers. Paste it in, get the highlights. Then read the parts that matter.

Research. "What are the pros and cons of entering this market?" or "Summarize the latest thinking on X." Good starting point. Always verify.

Structuring your thinking. This is the sleeper hit. "Here's what I'm thinking about X. Help me organize this into a clear framework." AI is shockingly good at taking your messy thoughts and giving them structure.

Where AI is dangerous

Now the other side. These are the moments where AI can actively hurt you if you're not careful:

Anything you'll publish or send without reviewing. AI slop is real and people can smell it. If you send an AI-generated email without editing it, the recipient will know. It has a certain blandness, a certain "too polished" quality that reads as impersonal.

Facts and figures. We covered this in Week 1. AI hallucinates. It will confidently cite statistics that don't exist and reference studies that were never published. If accuracy matters, you verify. Every time. No exceptions.

Legal, medical, financial advice. AI is not a lawyer, a doctor, or an accountant. It can help you understand concepts. It can help you draft questions for your actual lawyer. But don't make real decisions based on AI's interpretation of regulations or contracts.

Emotionally sensitive conversations. Letting someone go. Navigating a conflict with a colleague. Delivering bad news to a client. These require human judgment, empathy, and nuance that AI simply cannot provide. Use AI to think through your approach. Don't use it to write the actual words.

Building the habit

Here's what I want you to walk away with: a default, not a rule.

The default: when you sit down to do a task, ask yourself "could AI get me 80% of the way on this?" If yes, start there. If the answer is "this requires my judgment, my relationships, or my emotional intelligence," then it's all you.

Over time, this becomes instinct. You stop thinking about whether to use AI. You just know.

It's like any tool. A carpenter doesn't debate whether to use a power saw for every cut. Some cuts need precision. Some need speed. You just know.

This week you learned the core skills: how to prompt, how to set roles, how to think in steps, how to provide context. Those are the 80/20 of AI usage: the handful of techniques that deliver most of the value.

Tomorrow in the lab, you're going to put all of it together on a real task. It's the most important exercise of the course so far.

The goal isn't to use AI for everything. It's to know instinctively when it'll help and when you're better off on your own.
