Week 1 · Lesson
Your First Real AI Conversation
I want to give you a mental model that will save you months of trial and error.
Think of AI as the smartest intern you've ever met. This intern read every book, memo, article, and report ever written. They can synthesize information at superhuman speed. They write clean first drafts. They never get tired, never complain, and work at 3am without overtime.
But they've never had a real job. They've never sat in a room and felt the tension when a deal is falling apart. They don't know your company's culture, your boss's unspoken priorities, or why that one client always pays late. They have zero life experience and zero common sense about your specific world.
That mental model will serve you well. Here's how it breaks down in practice.
What AI is great at:
Drafting. Give it a rough idea and it'll hand you a polished first draft of an email, a report, a presentation outline, a proposal. You'll need to edit it, but you're editing instead of staring at a blank page. That's a massive time saver.
Summarizing. Paste in a 30-page document and ask for the key points. It'll pull them out in seconds. This alone is worth the price of admission if you deal with long reports, contracts, or research.
Brainstorming. Ask for 15 ideas for your next team offsite, or 10 ways to approach a pricing problem, or 20 subject lines for a campaign. AI doesn't have creative blocks. It'll give you volume, and you pick the gems.
Analysis. Give it data and ask it to find patterns, compare options, or build frameworks. It won't replace a real analyst on complex stuff, but for quick-and-dirty analysis, it's shockingly capable.
Translation and reformatting. Take technical content and make it accessible. Take bullet points and make them a narrative. Take a long email and make it short. This stuff is bread and butter for AI.
What AI is terrible at:
Facts it hasn't seen. If it happened after the model's training cutoff, or if it's niche enough that it wasn't well-represented in the training data, AI will either get it wrong or make something up. Always verify specific facts, dates, statistics, and claims.
Math. This surprises people, but language models are not calculators. They usually get basic arithmetic right, but be skeptical of anything more complex. Use AI to set up the analysis, then check the math yourself or use an actual spreadsheet.
Nuance in sensitive situations. Layoffs. Legal disputes. Anything involving real human emotion or high-stakes consequences. AI will give you something that sounds reasonable, but it lacks the judgment to navigate these situations well. Use your own brain for the hard calls.
Anything requiring real-world verification. "Is this restaurant still open?" "Does this law apply in my state?" "Is this vendor reliable?" AI can't check. It can only go on its training data, which might be outdated or incomplete.
Knowing when it's wrong. This is the big one. AI doesn't flag its own uncertainty. It presents everything with the same confident tone, whether it's drawing on solid patterns or making things up entirely. That means the burden of verification is always on you.
Here's the practical rule: use AI for speed and volume, then apply your own judgment for accuracy and taste. Let the intern do the heavy lifting. You're still the boss.
The people who get burned by AI are the ones who treat it like an oracle. The people who get the most value treat it like a tool with clear strengths and known limitations.
Knowing what AI can and can't do is the difference between it making you faster and it making you wrong.