Panel Review: The Skeptic

Reviewer Profile: Senior VP of Operations, mid-market company ($500M revenue, 2,000 employees). 20 years in leadership. Veteran of FranklinCovey, Crucial Conversations, DDI, executive coaching, and every other program HR has thrown at me. Daily AI user. Highly allergic to consultant frameworks.

Document Reviewed: COURSE-SPEC-UNIFIED.md — Leading Through AI™, LeaderFactor's 7th signature course.


1. First Reaction

I'll give it this: the first two minutes didn't lose me. That's rare.

"AI is the great distiller" — that sentence made me stop scrolling. Most AI leadership content opens with fear or hype. This opened with an identity claim that felt genuinely provocative. The idea that AI doesn't threaten leadership but reveals it — that's a move. I leaned forward.

I also leaned forward at the "front door vs. foundation" framing. Selling the CLO on adoption pain while actually delivering a leadership transformation? That's honest about how enterprise buying works. I've been in enough rooms where the pitch and the product are two different things. At least these guys know they're doing it and designed around it.

Where I rolled my eyes: the "Participant Transformation" table. Before/After tables are the consulting equivalent of stock photography. "Before: I'm confused. After: I'm enlightened." Every course has this table. It always looks the same. It always promises the same thing. Cut it or earn it.

I also caught a whiff of overselling in the revenue projections — 99% margins, $8.8M by Year 3, AI costs under 0.5% of revenue. That's not a course spec, that's a pitch deck for investors. Doesn't affect whether the course is good, but it tells me someone's excited about the business model, which always makes me wonder if the product got the same level of rigor.

Overall gut: This is better than 80% of what I've sat through. It's not another 2x2 with a TED Talk stapled to it. There's actual architecture here. Whether it survives contact with real leaders is a different question.


2. The Great Distillation

Here's my honest reaction as someone who's been leading for two decades: the insight is right, but it's not as new as the spec thinks it is.

"Focus on the work that only you can do" is a message I've heard from Peter Drucker, from Stephen Covey, from every executive coach I've ever paid. The Great Distillation is a better version of it — the AI framing is genuinely sharper than "delegate and elevate" — but let's not pretend this is a revelation that will shake senior leaders to their core.

What IS different: the mechanism. Previous versions of this insight said "you should focus on higher-order work." This one says "a machine is about to do the lower-order work whether you focus on higher-order work or not." That's not aspiration. That's physics. The urgency is real in a way that Covey's "important vs. urgent" matrix never was.

The "cognitive automation" framing is where this earns its keep. The distinction between automating tasks and automating thinking — that lands. I've automated plenty of tasks. I haven't grappled with what it means when a machine can do the thinking I was hired to do. That's a different conversation and I'd pay attention to it.

Verdict: The insight is an evolutionary upgrade, not a revolution. But it's a meaningful upgrade. I'd respect the facilitator who delivered it well. I'd also respect them more if they acknowledged that experienced leaders have heard versions of this before and explained what makes this one structurally different — in the room, not just in the spec.


3. Own / Augment / Automate

Would I use this language in a real meeting?

Actually... maybe. And that surprises me.

Here's why: "Own, Augment, Automate" passes the meeting test because it's verb-based and immediately actionable. When someone on my team asks "what should we do with AI for this process?" — I could actually say "let's map it: what do we Own, what do we Augment, what do we Automate?" and people would get it without a training course.

Compare that to something like "The Four Demands of AI Leadership" (which I gather was the previous version). I would never say "I need to Clarify, Calibrate, Cultivate, and Configure" in a meeting without feeling like I'm reading from a consultant's slide.

The "Three Zones" language also works because it mirrors how operational leaders already think. We already categorize work into keep/change/eliminate. Own/Augment/Automate is just a smarter version of that for the AI context.

The "Augment" zone as the "contested middle" — that's the real insight. The spec is right that the interesting decisions aren't in Own or Automate. They're in the gray area where you're deciding how much AI involvement is appropriate. That's where I spend my actual time.

What I wouldn't use: "The Great Distillation" as a phrase. "Partnership Audit." "Capability Fog." These are course vocabulary — useful inside a training room, weird in a Tuesday ops meeting. "Architecture Debt" I might actually steal, though. That one maps cleanly to how I already talk about organizational problems.


4. The Exercises

Let me go through each one honestly.

The Partnership Audit (Module 1) — I'd engage. Reluctantly.

"List 10 things you spent time on last week and categorize them." Would I do this? Yes, but only because I'm slightly compulsive about tracking my time anyway. The exercise itself is pedestrian — it's a time audit with a twist. What makes it work is Step 3: "Look at the ratio." The reveal that most of my week is in the Augment/Automate zone would be genuinely uncomfortable for me. I'm not sure I'd like what I see, which means it's doing its job.

The risk: If a facilitator lets this become a checkbox exercise ("just quickly categorize your week"), it dies. The power is in the discomfort of the reveal. That requires a facilitator who can sit in silence while people process.

The Discovery Sprint (Module 2) — I'd phone this one in.

"Push three workflows through Efficiency → Augmentation → Transformation." This feels like a structured brainstorm, and I've done a thousand structured brainstorms. The three dimensions are sensible but the exercise feels like it was designed for leaders who haven't thought much about AI. For someone who uses AI daily, being asked to brainstorm "what if AI could..." for 12 minutes feels like being told to color inside the lines.

The pair expansion saves it somewhat. Having someone push me past my first ideas is where the real value lives. But in on-demand mode with just the AI? I'd speed through it.

The Design Sprint / Partnership Map (Module 3) — This is the one.

This is where I'd actually work. Deconstructing a workflow into component tasks and making explicit allocation decisions with a "why" column — that's real operational design. I do versions of this when we redesign processes. The AI framing forces a rigor I usually skip.

The unscaffolded second map is the cleverest design choice in the whole course. Making me do it alone after doing it with help — that's how you build a skill, not just deliver an experience. I'd respect the course for this.

The Readiness Diagnostic (Module 4) — Depends entirely on my mood.

Rating my team on four readiness gaps could be powerful or it could be navel-gazing. If I'm in a reflective state, I'd find value here — especially the Identity Integration gap, which I've never seen named before. That's the team member who can use the tools but doesn't know who they are in the new model. I have people like that. Naming it matters.

If I'm in an impatient state (which, let's be honest, is most states), I'd rate my team quickly, write three obvious actions, and move on.

The 90-Day Demonstration Plan (Module 5) — High potential, high risk.

See section 6 below. This is either the best exercise in the course or the most ignored. No middle ground.

Summary: I'd genuinely work on Modules 1, 3, and possibly 4. I'd coast through Module 2. Module 5 depends on whether I believe the plan will actually get used.


5. The AI Thinking Partner

Let me be direct: the meta-awareness is either genius or insufferable, and the line between the two is razor-thin.

Having an AI coach me through a course about AI leadership while occasionally saying "notice what just happened — you're doing the thing" — I get why they designed it. The medium IS the message. Conceptually, it's elegant.

In practice? First time the AI says "Notice that you just partnered with an AI to make a better decision — that's cognitive partnership, that's what this course is about," I will either have a genuine insight or I will close the laptop. It depends entirely on whether the AI has earned that moment by actually challenging my thinking, or whether it's congratulating itself for existing.

What I'd actually do: I'd test it. Not maliciously — I'd give it real answers about my real work and see if its challenges are substantive or generic. If it says "Walk me through your reasoning on that" and then follows up with something specific to what I actually wrote, I'm in. If it gives me the same follow-up it would give anyone, I'm out.

The spec's error recovery protocols are smart. "Fair enough — you know your work better than I do. Tell me what I'm missing." That's the right posture. An AI that admits its limitations is infinitely more trustworthy than one that pretends to be omniscient.

The "Respect for Dissent" protocol is the thing that would actually earn my trust. If I push back on the Great Distillation and the AI says "you may be right — let's work with your actual numbers and see what they tell us" instead of trying to argue me into the framework — that's when I'd start taking it seriously.

The scaffolding removal (AI does less as the course progresses) is the best structural choice in the entire spec. By Module 5, I'm building my own plan with minimal AI input. That's how you avoid creating AI dependency in a course about AI leadership. Someone was thinking clearly when they designed that.

Verdict: I'd approach it as a skeptic, but if it's built to the spec, it would probably win me over by Module 3. The critical test is whether the AI challenges me with specifics from my own context or with generic frameworks.


6. The 90-Day Plan

This is the section that will determine whether this course is different from every other one.

Every leadership course ends with some version of "now make a plan." They all sound great in the room. They all go in a drawer. I have a drawer. I know what's in it.

What's different here — potentially:

  1. Kill criteria. I have never seen a leadership course ask "what would cause you to stop?" That's an operational discipline, not a training exercise. It signals that the course designers understand how real decisions work. You don't just measure success — you define failure and design for it.

  2. Scale triggers. "What must be true before I expand" is the right question. Most AI initiatives fail not because the pilot fails but because someone scales a pilot that worked in one context into a context where it doesn't. Explicitly designing the scale decision is genuinely valuable.

  3. Three layers of evidence. Leading indicators, lagging indicators, and story indicators — this is how I actually report to my CEO. Numbers and narratives. If the 90-day plan teaches leaders to build both, they'll be more effective communicators, AI or not.

What's NOT different:

It's still a plan made in a training room, which means it's made with training-room energy and training-room optimism. Monday morning, my inbox has 47 unread emails, three fires are burning, and the 90-day plan is already competing with everything else for my attention.

What would make it stick: The 8-week reinforcement system. If someone actually emails me on Tuesday morning with a specific prompt tied to a specific action from my plan — not "how's your journey going?" but "you committed to checking your 30-day leading indicators this week; here are the ones you defined" — that changes the math. The reinforcement system is the structural answer to the drawer problem.

The accountability pair is either the best idea or the most ignored feature. In my experience, peer accountability works when both people are engaged and fails when either one isn't. I'd want to choose my own accountability partner, not be assigned one.

Verdict: The 90-day plan architecture is genuinely better than anything I've seen in a leadership course. Kill criteria alone puts it ahead. Whether it survives Monday morning depends on the reinforcement system, which I won't know until I experience it.


7. What Would Actually Change My Behavior?

Forget the spec's promises. Here's what would stick, six months later:

  1. The Three Zones language. I'd actually use Own/Augment/Automate when thinking about process design. It's a better lens than what I currently use. Six months from now, when someone proposes an AI initiative, I'd instinctively ask "is this an Augment play or an Automate play?" and the answer would shape the design.

  2. The "why" column in the Partnership Map. Making the reasoning explicit for every allocation decision — I'd carry that forward. Not as a formal exercise, but as a discipline. "Why is a human doing this? Because ___." If I can't answer that, it tells me something.

  3. The Four Readiness Gaps as a diagnostic. Specifically, the Identity Integration gap. I'd look at my team differently. The person who's resisting AI adoption — maybe their problem isn't skill. Maybe it's identity. I'd ask different questions.

  4. Kill criteria for AI initiatives. I'd build this into every business case going forward, AI or not.

What would NOT stick:

The course vocabulary. As I said in section 3, "The Great Distillation," "Partnership Audit," and "Capability Fog" are training-room words; they won't survive a Tuesday ops meeting. The Discovery Sprint format wouldn't survive either. Structured brainstorms never do.

Net assessment: Three durable behavior changes and one diagnostic framework. That's actually more than most courses deliver. Most courses give me zero things I'm still doing at six months.


8. What I'd Tell My CEO

"Better than I expected. Not life-changing, but genuinely useful. The framework is practical — I'd actually use the language back here. The best part is it doesn't try to teach AI tools; it teaches how to think about designing work around AI, which is the part we're actually bad at. I'd recommend it for our director-level and above. The people leading AI initiatives need this more than another tool demo. Don't roll it out to the whole company — start with the leaders who are actually making allocation decisions about AI."

30 seconds. That's my honest answer.


9. Red Flags

Things that would make me check out:

  1. The "you're doing $50,000 work when you're capable of $500,000 work" line. I get what they're going for — reframe the ratio as opportunity, not threat. But telling a senior leader that most of their week is $50,000 work is one wrong inflection away from condescending. A good facilitator saves this. A mediocre one makes me fold my arms for the rest of the afternoon.

  2. The Provocation Essay as pre-work. Asking me to read a 1,200-word essay and write three sentences about my feelings before I've even shown up? That's homework. I'm a senior VP. I'll do it, but I'll resent it, and the course starts with a small deficit of goodwill.

  3. "The medium IS the message" self-congratulation. The spec mentions this idea multiple times — the AI coaching you about AI is itself the lesson. Yes, I understand the concept. If the actual course beats this drum more than twice, it will feel like the course is more impressed with itself than I am.

  4. The pair sharing in every single module. "Turn to the person next to you." Five times in four hours. I know pair sharing is pedagogically sound. I also know that by Module 4, people are performing their pair shares, not engaging with them. Vary the format or reduce the frequency.

  5. Nothing about the political reality of AI adoption. The course assumes the leader's challenge is design and team readiness. In my actual life, the biggest challenge is navigating organizational politics — getting budget, managing up, dealing with the executive who thinks AI is a fad and the one who thinks it's magic. Module 5 touches on this (proving value to stakeholders) but it's underweight. The course would land harder if it acknowledged that the leader's biggest obstacle might not be their team — it might be their boss's boss.

Things that are NOT red flags but could become them:

The AI Thinking Partner's meta-commentary, which sits one unearned moment away from insufferable (see section 5). The assigned accountability pairs, which work only if both people stay engaged (see section 6). And the 8-week reinforcement emails, which solve the drawer problem right up until they drift into generic "how's your journey going?" check-ins.


Summary Assessment

Dimension                     Rating  Notes
Intellectual rigor            8/10    Framework is tight. The 5D progression is logical. Not groundbreaking but solid.
Practical utility             7/10    Three Zones and Partnership Map are genuinely useful tools. Discovery Sprint is weak.
Respect for the audience      7/10    Mostly treats leaders as adults. A few moments of over-explaining.
Durability                    8/10    Tool-agnostic design is smart. Framework should age well.
Differentiation               7/10    Module 5 (Demonstrate) is genuinely different. Modules 1-4 feel familiar in places.
Likelihood I'd recommend it   7/10    I'd send my directors. I'd tell my peers it's worth the time. I wouldn't call it transformational.

Bottom line: This is a well-built course that respects leaders' intelligence more than most. It's not going to change my life, but it would change how I think about three or four things, and that's more than I can say for the last five programs I attended. The test will be execution — the spec is strong, but specs don't deliver courses. Facilitators and AI interactions do. If the AI Thinking Partner is as good as designed, and the facilitator can resist reading slides, this could be genuinely good.

I walked in with my arms crossed. I'm leaving with them uncrossed. That's not nothing.


Review completed March 2, 2026
The Skeptic — Senior VP of Operations