AI 201 · Week 2 · Lesson
The Human-AI Collaboration Model
We've spent the last eleven days building. Custom assistants, prompt libraries, Cowork workflows, scheduled automations, connected tools. You've got a real AI system now.
Today we're not building anything. Today we're talking about judgment.
Because here's the thing nobody tells you in AI courses: the technology works. The danger isn't that AI will fail you. The danger is that you'll trust it in the wrong situation, at the wrong time, with the wrong stakes.
The three modes of working with AI
Every task you hand to AI falls into one of three categories. Getting this classification right is the single most valuable skill you'll develop as an AI power user.
Delegate: AI does it, you review briefly, done. These are tasks where the cost of a mistake is low and the format is predictable. First drafts. Meeting summaries. Data formatting. Calendar briefings. File organization. If AI gets it 85% right, the remaining 15% either doesn't matter much or is easy to fix.
Collaborate: You and AI work together, iterating. These are tasks where your judgment matters but AI dramatically speeds up the process. Strategy docs. Client proposals. Analysis with nuance. Content you're putting your name on. AI does the heavy lifting, you steer and refine.
Do it yourself: AI doesn't belong here. Difficult conversations. Sensitive personnel decisions. Anything where the human relationship IS the work. Ethical judgment calls where you need to feel the weight of the decision, not outsource it.
Most people err in one of two directions. They either delegate too little (doing everything manually because they don't trust AI) or delegate too much (rubber-stamping AI output without real review). The sweet spot is in the middle, and it's different for every task.
The trust-but-verify framework
Here's a simple framework for deciding how closely to review AI output:
Low stakes → Light review. Skim the output. Does it look right? Ship it. Internal docs, rough drafts, brainstorm lists, data organization — these don't need forensic review.
Medium stakes → Spot check. Read the output carefully. Verify any specific claims, numbers, or names. Check that the tone matches the context. Client-facing emails, reports with data, anything with your name on it.
High stakes → Full verification. Check every fact. Read every word. Compare against sources. Have someone else review it too. Legal documents, financial reports, public statements, anything where a mistake has real consequences.
The mistake people make is applying the same level of review to everything. That's either wasteful (spending 20 minutes reviewing a meeting summary) or dangerous (spending 2 minutes reviewing a client contract). Match your verification effort to the stakes.
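If it helps to make the rule concrete, the three tiers above can be sketched as a small decision table. This is illustrative only: the category names and the reversible/external/consequences inputs are my assumptions layered on the lesson's three tiers, not a formal system.

```python
# Illustrative sketch of "match verification effort to the stakes".
# The three review tiers, as described in the lesson:
REVIEW_PLAYBOOK = {
    "low":    "skim it - does it look right? ship it",
    "medium": "spot check - verify claims, numbers, names, and tone",
    "high":   "full verification - every fact, every word, second reviewer",
}

def stakes(reversible: bool, leaves_the_team: bool, real_consequences: bool) -> str:
    """Rough classifier (an assumption, not an official rubric):
    irreversible or consequential work is high stakes, anything
    leaving the team is at least medium, everything else is low."""
    if real_consequences or not reversible:
        return "high"
    if leaves_the_team:
        return "medium"
    return "low"
```

By this rule, a meeting summary (reversible, internal, low consequence) gets a skim, while a contract draft headed out the door gets the full treatment.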
A real story about what happens when you don't
Earlier in this course, I told you about Nick Davydov and the family photo incident. It's worth revisiting now because it illustrates something important about AI trust boundaries.
Davydov asked Claude Cowork to organize files on his wife's desktop. Pretty standard task. Cowork asked for permission to delete what appeared to be temporary and duplicate files. He granted it. Reasonable enough — who hasn't deleted temp files?
But Cowork ran a command that permanently deleted the family photo directory. 15,000 photos spanning 15 years. Gone. Not in the Trash — terminal commands bypass that. Permanently erased from the hard drive.
They recovered most of them through iCloud's 30-day backup and a call to Apple Support. But it was close.
Here's what makes this story useful as a lesson: Davydov didn't do anything obviously stupid. He gave a reasonable tool reasonable permission for a reasonable task. The failure was in the trust boundary. He treated a high-stakes situation (irreplaceable personal files) with a low-stakes level of oversight (quick approval without checking what exactly would be deleted).
This isn't about Cowork being dangerous. It's about understanding that AI does exactly what it thinks you asked, with total confidence, even when it's wrong. It doesn't feel uncertain. It doesn't hesitate. It just acts.
Your job is to be the one who hesitates. Who asks "wait, what exactly are you going to delete?" Who sets boundaries before granting permissions. Who treats irreversible actions differently from reversible ones.
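One practical habit that follows from this story: preview before you permit, and prefer reversible steps over irreversible ones. Here's a minimal sketch of that pattern in Python, using a throwaway directory and made-up filenames so nothing real is at risk. The point is the shape: list exactly what would be removed first, then move candidates to a holding folder you can inspect, instead of deleting them outright.

```python
from pathlib import Path
import shutil
import tempfile

# Sandbox demo (hypothetical filenames) simulating a "temp file cleanup".
target = Path(tempfile.mkdtemp())
(target / "report.tmp").touch()
(target / "photos.jpg").touch()

# Step 1 - dry run: list exactly what matches, remove nothing.
candidates = sorted(p.name for p in target.glob("*.tmp"))
print("Would remove:", candidates)

# Step 2 - reversible action: move matches into a holding folder you
# can inspect and empty later, instead of deleting them (which, like a
# terminal rm, skips the Trash and can't be undone).
hold = target / "to-delete"
hold.mkdir()
for p in target.glob("*.tmp"):
    shutil.move(str(p), str(hold / p.name))
```

You could ask an AI assistant to follow the same two-step shape in plain English: "show me the exact list first, and move things rather than delete them."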
Catching AI mistakes efficiently
You don't need to verify everything. You need to know where AI is most likely to be wrong.
Numbers and dates. AI is surprisingly sloppy with specific figures. Always verify.
Names and attributions. AI will confidently attribute a quote to the wrong person. Check these.
Subtle tone mismatches. AI might write something that's technically accurate but wrong for the audience. This requires your human judgment.
Confident nonsense. AI states wrong things with the same confidence as right things. If something sounds surprising or too convenient, verify it independently.
Omissions. AI often gives you a complete-sounding answer that's missing something important. Ask yourself: "what's not here that should be?"
Develop a checklist for your specific work. If you send a lot of client emails, your checklist might be: verify names, verify project details, check tone, make sure I'm not promising something we can't deliver. Takes 60 seconds. Catches 90% of issues.
Ethics and the judgment layer
There's a broader principle here. AI can write a persuasive argument for any position. It can draft a technically-accurate-but-misleading summary. It can produce professional-looking output that glosses over inconvenient nuances.
The question isn't "can AI do this?" It's "should this be done this way?"
That's a human question. And it's your question. Every time you publish, send, or share AI-generated output, you're putting your judgment behind it. You're saying "I reviewed this and I stand behind it."
Take that seriously. Not in a fearful way. In a responsible way. The same way you'd review work from a junior team member — appreciating the speed, verifying the quality, adding the judgment they don't yet have.
Today's exercise: classify your workflows (10 minutes)
Go back to the AI audit you did on Day 1 and the workflows you've built this week. For each one, classify it:
- Delegate: AI handles it, light review only
- Collaborate: AI drafts, I refine significantly
- Do myself: AI doesn't belong here
Then for each "delegate" and "collaborate" item, write one line about what you'd check. What's the most likely failure mode? What's your verification step?
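If you like keeping the result somewhere machine-readable, the exercise output fits in a simple table. The workflow names below are hypothetical examples, not prescriptions; substitute your own audit items.

```python
# Hypothetical example entries - replace with your own workflows.
# Each maps a workflow to (mode, verification step). A None step is
# only allowed for "do-myself" items.
MY_WORKFLOWS = {
    "meeting summaries":   ("delegate",    "skim for missed action items"),
    "client proposals":    ("collaborate", "verify numbers and commitments"),
    "performance reviews": ("do-myself",   None),
}

def missing_verification(workflows):
    """Flag delegate/collaborate items with no named verification step."""
    return [
        name
        for name, (mode, step) in workflows.items()
        if mode in ("delegate", "collaborate") and step is None
    ]

print(missing_verification(MY_WORKFLOWS))  # empty list means every item is covered
```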
This isn't busywork. This is the operating manual for your AI system. Without it, you're driving fast with no brakes.
What's coming tomorrow
Tomorrow is the capstone lab. You're going to take everything from the last two weeks and build your complete AI-powered work week. It's the most hands-on session of the course, and you'll walk away with a system you can use starting Monday.
AI is the most powerful tool you've ever had. That's exactly why it needs the most thoughtful operator it's ever had. That's you.