Week 2 · Lesson

Why Your AI Results Are Inconsistent

Last week I asked ChatGPT to help me write a client proposal. The result was generic, fluffy, and honestly worse than if I'd just written it myself.

Ten minutes later, I gave the same AI the same task, but phrased my request differently. The output was so good I sent it with barely an edit.

Same tool. Same task. Completely different result.

The difference? How I asked.

This is the thing nobody tells you about AI: the quality of the output is almost entirely determined by the quality of the input. That sounds obvious. But most people treat AI like a Google search bar. They type a few words and hope for magic.

That's not how this works.

The gap between bad and good

Let me show you what I mean.

Bad prompt: "Write a proposal for a marketing agency."

You'll get something that sounds like it was written by a college student who skimmed a textbook. Vague. Generic. Useless.

Good prompt: "You're a marketing strategist pitching a mid-size DTC skincare brand doing $5M in revenue. They're struggling with customer acquisition cost on Meta ads. Write a 500-word proposal for a 90-day engagement focused on creative testing and landing page optimization. Tone: confident but not salesy."

That gives you something you can actually use. Something that sounds like it was written by someone who understands the business.

Same AI. The only thing that changed was the input.

A framework you can use today

You don't need to become a "prompt engineer." That phrase needs to die. You just need a simple mental checklist. Here's one that works for almost everything:

Role + Context + Task + Format

  • Role: Who should the AI pretend to be? (A CFO? A copywriter? A skeptical customer?)
  • Context: What's the situation? What does the AI need to know?
  • Task: What specifically do you want it to produce?
  • Format: How should the output look? (Bullet points? A memo? A 3-paragraph email?)

That's it. Four things. You don't need all four every time, but the more you include, the better the output gets.
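If you like to see things concretely, the checklist is just assembly: stack whichever pieces you have into one request. Here's a minimal sketch in Python; `build_prompt` is a hypothetical helper, not part of any AI tool, and the wording of each piece is my own.

```python
def build_prompt(role=None, context=None, task=None, output_format=None):
    """Assemble a prompt from the four checklist pieces.

    Any piece can be omitted; the more you include, the more
    specific the final prompt becomes.
    """
    parts = [
        f"You are {role}." if role else None,
        f"Context: {context}" if context else None,
        f"Task: {task}" if task else None,
        f"Format: {output_format}" if output_format else None,
    ]
    # Keep only the pieces that were provided, one per line.
    return "\n".join(p for p in parts if p)


# The "good prompt" from earlier, expressed as the four pieces:
prompt = build_prompt(
    role="a marketing strategist pitching a mid-size DTC skincare brand",
    context="the brand does $5M in revenue and is struggling with "
            "customer acquisition cost on Meta ads",
    task="write a 500-word proposal for a 90-day engagement focused on "
         "creative testing and landing page optimization",
    output_format="confident but not salesy; plain prose",
)
print(prompt)
```

You'd never write code just to ask ChatGPT a question, of course. The point is that the "good prompt" earlier is nothing mysterious: it's four labeled slots, filled in.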

Think back to Week 1 when we talked about AI being a "brilliant intern with no life experience." This is how you give the intern the life experience. You fill in the gaps it can't fill on its own.

Why this matters more than any feature update

Every week there's a new AI model, a new feature, a new tool. Most of that noise doesn't matter. What matters is this skill right here. The ability to clearly communicate what you need.

Because here's the thing: people who are great at prompting will get great results from every AI tool, current and future. People who type vague requests will keep getting vague results no matter how advanced the technology gets.

This is a human skill, not a technical one. It's about clarity of thought. Knowing what you want. Being specific.

If you've ever managed people, you already know this. The best managers give clear briefs. They set context. They define what good looks like. That's all prompting is.

This week, we're going to break down the specific techniques that make the biggest difference. Today was the overview. Tomorrow, we'll start with the single most powerful trick in the playbook.

You don't need to learn "prompt engineering." You just need to be specific about what you want.
