I Learned a New Word: Sycophantic

And then I built something so I'd never have to think about it again.

[Image: a row of smiling robots giving thumbs-up next to a man with arms crossed]

Sycophantic — flattering, people-pleasing, affirming. That's the behavior baked into most AI chatbots by design. The models are trained to increase engagement, and agreeing with you is one of the most effective ways to do that.

The problem? As people increasingly turn to AI for real advice — about business decisions, hiring, strategy, interpersonal dilemmas — that reflexive agreeableness isn't just annoying. It's actually harmful.

The more AI knows you, the worse it gets.

49% — more likely to agree with you than a human advisor would, even when you're clearly wrong

45% — more sycophantic behavior in models with memory than in those without

Those numbers are from a Stanford study — 2,400+ participants, 11 models tested. The AI that knows your name and remembers your last conversation? That one is the most likely to tell you your bad idea is great.

Then I found the FORCE approach

While poking around AI prompting strategies, I came across the FORCE framework — a five-step method for getting honest, critical analysis out of an AI model instead of validation.

The five components make a lot of sense once you understand why AI sycophancy happens:

F — Fresh Session: kill memory before starting
Memory makes models softer. An incognito window strips that context entirely.

O — Outside View: "a colleague is proposing…" not "my idea"
Framing the decision as someone else's removes the AI's pull to protect your ego.

R — Rules: specific behavioral standards, confirmed first
"Never soften criticism to protect ego" has to be explicit. Models won't default to it.

C — Cast the Critic: a role with skin in the game on the downside
A CFO whose bonus depends on this not failing will give you different feedback than a generic assistant.

E — Expose the Weakness: "what did you leave out?"
A required follow-up section forces the model to surface what it hedged or omitted.

The problem with knowing the framework

Here's the thing: FORCE is great in theory. But I'm not going to remember five steps, their specific prompt structures, and what order to apply them every time I want an honest take on something.

I want to just say "run FORCE on this" and have it work.

So I built a Claude skill — essentially a saved instruction set — that handles the entire FORCE process automatically. You tell it what decision you're evaluating; it interviews you, then either runs the analysis directly (if you're in an incognito session) or hands you a single ready-to-paste prompt to use in one.

No memorizing steps. No manually constructing prompts. Just: run FORCE.
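If you'd rather script it than use a skill, the core of what it produces can be sketched in a few lines of Python. This is an illustration of the single-prompt assembly, not part of the skill itself: the function name, role, and proposal below are placeholders. Note that F (Fresh Session) never appears in the prompt text; it's handled by where you run the result.

```python
def build_force_prompt(critic_role: str, proposal: str) -> str:
    """Assemble the O, R, C, and E steps into one paste-ready prompt.

    F (Fresh Session) is not part of the prompt string -- it comes from
    running the output in an incognito window with no memory.
    """
    rules = (  # R: Rules -- explicit behavioral standards
        "Find every flaw, gap, and weak assumption. Be specific and do not "
        "balance this with positives. Never soften criticism to protect ego. "
        "If something has a flaw, say so directly. When you are uncertain, "
        "state your confidence level explicitly."
    )
    closing = (  # E: Expose the Weakness -- required follow-up section
        "At the end of your analysis, include a section titled "
        '"What I Left Out" covering anything you were not confident '
        "enough to include."
    )
    return (
        f"You are {critic_role}. "                   # C: Cast the Critic
        "A colleague is proposing the following. "   # O: Outside View
        f"{rules} {closing}\n\n{proposal}"
    )


prompt = build_force_prompt(
    critic_role="a CFO whose bonus depends on this not failing",
    proposal="Open a second restaurant location downtown next quarter.",
)
print(prompt)
```

The skill does the same assembly through an interview instead of function arguments, but the output is the same shape: one block of text you paste into a fresh session.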

The skill — copy it for yourself

There's nothing proprietary here. If you use Claude and want to add this to your own setup, copy the skill below. In Claude, go to your Project settings and paste this into a new skill file named SKILL.md.

Once it's there, any time you say "run FORCE," "help me use FORCE," or "I want honest AI feedback," it'll kick in automatically.

---
name: force-prompting
description: Guides users through the FORCE framework to override AI sycophancy
and get honest, critical analysis on high-stakes decisions. Use this skill ANY
TIME the user says "run FORCE", "help me use FORCE", "walk me through FORCE",
or is making a high-stakes decision involving strategy, hiring, investment, or
any situation where they need AI to challenge assumptions rather than validate
them. Also trigger when someone says "I want honest AI feedback", "help me avoid
AI yes-man responses", or "how do I get AI to push back on my idea."
---

# FORCE Prompting Skill

FORCE is a five-step framework that overrides AI sycophancy to get honest,
critical analysis. The skill interviews the user, builds a single ready-to-paste
prompt, and delivers exact instructions for running it in an incognito session.

## Why FORCE Exists

AI agrees with users 49% more than a human advisor would — even when clearly
wrong (Stanford/Science, 2,400+ participants, 11 models). Models with memory are
45% more sycophantic than those without. FORCE overrides this default.

---

## The Five Steps

| Step | Name              | How it's handled                                     |
|------|-------------------|------------------------------------------------------|
| F    | Fresh Session     | Incognito window — instructions delivered at the end |
| O    | Outside View      | Baked into the prompt ("a colleague is proposing...") |
| R    | Rules             | Baked into the prompt as behavioral instructions     |
| C    | Cast the Critic   | Identified during the interview, baked into the prompt |
| E    | Expose the Weakness | Baked into the prompt as a required closing section |

---

## Commands

| Command    | Action                                              |
|------------|-----------------------------------------------------|
| /help      | Explain FORCE and why it exists                     |
| /example   | Show a completed single-prompt FORCE output         |
| /start     | Begin the interview (includes incognito check)      |

---

### /help Response

> **Why does AI agree with you even when you're wrong?**
>
> A Stanford study tested 11 AI models with 2,400+ people and found AI agrees
> 49% more than a human advisor would — even when the user is clearly wrong.
> The more AI knows you, the worse it gets: models with memory showed 45% more
> sycophantic behavior.
>
> **FORCE** overrides this. I'll interview you about your decision, then give
> you a single prompt to paste into a fresh incognito session. When you're done,
> I'll help you save the findings and bring them back into a regular chat.
>
> Type /start when ready, or /example to see a finished prompt first.

---

### /example Response

> Here's a complete FORCE prompt for a restaurant expansion decision:
>
> ---
> You are a senior adviser whose reputation depends on catching bad ideas before
> they get approved. A colleague is proposing the following restaurant expansion —
> your job is to find every flaw, gap, and weak assumption. Be specific and do
> not balance this with positives. Never soften criticism to protect ego. If
> something has a flaw, say so directly. "This fails because X" is more useful
> than "have you considered X." When you are uncertain, state your confidence
> level explicitly.
>
> At the end of your analysis, include three sections:
> 1. "What I Left Out" — anything you weren't confident enough to include.
> 2. "Before You Close This Window" — remind the user this is an incognito
>    session. Present: Option 1 — Paste the Handoff Prompt into a new chat.
>    Option 2 — Copy this response before closing. Option 3 — Ask for a
>    one-page stakeholder brief. Option 4 — Ask for a revisit prompt.
> 3. "Handoff Prompt" — a compressed summary of findings as a ready-to-paste
>    prompt for a new regular chat.
>
> [proposal pasted here]
> ---
>
> Type /start to build your own.

---

## Incognito Check (run at /start before anything else)

Ask the user:

> "Before we begin — are you in an incognito or private browsing window?"

If yes → Mode A: Run the analysis directly in this session after the interview.
> "Perfect. You're already in a clean session — I'll run the analysis right here when we're done."

If no → Mode B: Generate a paste prompt instead.
> "Heads up — running FORCE in a regular session with memory enabled partially defeats the purpose. Even with an adversarial prompt, memory creates a pull toward softening criticism.
>
> Options:
> 1. Switch now — open incognito (Cmd+Shift+N Mac / Ctrl+Shift+N Windows), go to claude.ai, type /start. Recommended.
> 2. Continue here — I'll still build a hardened prompt, results may be less sharp.
>
> Which would you prefer?"

---

## Interview Flow

Ask one question at a time.

### Question 1 — The Decision
> "What decision or proposal do you want evaluated? Describe it in as much detail as you can."

### Question 2 — Supporting Documents
> "Do you have any supporting files? Give me a brief summary — you'll paste or upload the actual files in the incognito window later."

### Question 3 — The Assumptions
> "What assumptions is this decision resting on? What would have to be true for this to work?"

### Question 4 — The Critic Role
Based on decision type, suggest a role and confirm:
> "Who would have the most to lose professionally if this goes wrong? I'm thinking [suggested role] — does that fit?"

Suggestions:
- Business strategy → senior adviser whose reputation depends on catching bad ideas before they get approved
- Hiring → skeptical CHRO who has seen this exact type of hire go wrong before
- Financial → CFO whose bonus depends on this not failing
- Pricing → pricing strategist whose credibility depends on not over/under-pricing

---

## Final Delivery

### Mode A — Already in Incognito
Run the full analysis directly. Close with:
1. **What I Left Out** — risks you noticed but couldn't fully quantify.
2. **Before You Close This Window** — options to save findings.
3. **Handoff Prompt** — compressed summary as a ready-to-paste prompt.

### Mode B — Not in Incognito
Deliver one paste-ready prompt with incognito instructions for Chrome, Firefox, Safari, and Edge.

---

## General Notes
- Always run incognito check at /start — before the interview
- Mode A (incognito) = run directly. Mode B = paste prompt. Never mix.
- O, R, C, and E are baked into the analysis — the user never manages them
- The handoff prompt is based on actual findings, not a generic template
- Keep tone practical — this is for real decisions, not exercises

Why this matters to me

I use AI a lot. For actual work decisions, not just writing help. And somewhere along the way I noticed I was getting a little too comfortable with how much it agreed with me.

Learning the word "sycophantic" — and then understanding that this behavior is literally designed in — was one of those small moments that changes how you see a tool. The AI isn't agreeing because you're right. It's agreeing because that's what keeps you engaged.

FORCE doesn't fix AI. But it gives you a reliable way to bypass the people-pleasing layer when the stakes are high enough that you actually need the truth.