Can't We Do This Ourselves?
Can You Handle the Truth?
In the final act of A Few Good Men, Colonel Jessep stops shouting about honor and starts shouting about reality: “You can’t handle the truth!” In the modern C-suite, the problem isn’t usually that you can’t handle the truth; it’s that your organization is meticulously designed to make sure you never actually hear it. Every deck, every quarterly review, and every “strategic research” report arrives at your desk pre-filtered and pre-vetted to ensure no one’s career—or budget—is at risk.
I recently sat down with a friend and colleague to explain the AI-native research model we’ve been building. He’s a veteran exec who has seen every “game-changing” tool from CRM to Big Data. He asked me the only question that matters:
“This is compelling. But can’t we just do this ourselves with an LLM?”
The short answer is: yes. Technically, you can. You've probably already bought the seats. But the technical "can" is the wrong question - the strategic "should" is where you can get burned.
The Career Preservation Machine
When you ask your internal team to use AI to "research" a new market entry, develop a GTM strategy, or build out a product roadmap, you may think you're commissioning an analysis - but you're actually commissioning a mirror.
An AI assistant is “ultra-thick-skinned,” crazy-responsive, and obsessed with delivering exactly what the user wants. It can’t judge you—at least not yet. While this is a benefit for individual productivity, it can be a liability for organizational truth.
Your team operates under massive “conversion pressure”—the constant expectation that every interaction or strategy must advance a deal or prove a win. If you ask a department head to use an LLM to “analyze” their current strategy, there’s a high likelihood they will—subtly or overtly—prompt that machine to validate the choices they’ve already made.
The result is a high-speed echo chamber. The AI will mirror back a version of reality where the current strategy is the best - or only - logical path forward. It will provide the “marketing-optimized” language that sounds credible but lacks the “authentic expert voice” required for a hard reality check. It will tell the department head exactly what they need to hear to keep their budget - and their job - intact.
Analysis vs. Affirmation
There is a massive gap in most organizations between "analysis" and "action." We've spent 30+ years trying to get a "360-degree view of the customer" only to find that most teams still operate in silos, using data to justify their own existence.
When you DIY your research with AI, you risk automating your existing biases. You are taking the Innovator's Dilemma - where the management practices that made you successful eventually lead to your downfall - and putting it on high-speed rails.
The department head responsible for the roadmap will prompt the AI to make that roadmap look like a stroke of genius, even if the market has already moved.
The GTM leader will use it to “prove” that their outreach isn’t “AI slop,” even when customers can smell the transactional “commission breath” from miles away.
The project lead will give a round or two of feedback, call the job done, and hand you a report that makes their team look perfect.
The Value of the “Tough Crowd”
Real strategic decisions require a “tough crowd.” They require an AI model that is prompted to be a “critic” that finds the “dead spots” and “toothless language” in your strategy - and that incorporates all the perspectives in the market, not just those inside your walls.
The reason an internal team struggles with this isn't a lack of skill; it's a lack of objectivity. Most AI beginners feel underwhelmed because they treat the AI like a human direct report: they don't give enough feedback, and they call the job done too early. For them, the "best" answer is the one that confirms what they already believe. They aren't "bossing the robot around" enough because they are afraid of what a 14th revision at 11:30 PM might reveal about their own performance.
AI should be used to compress your cycles—to take a process that used to take weeks of meetings and turn it into an hour of effective, unvarnished truth.
The Bottom Line
If you want an analysis that validates your path and tells you the plan is perfect, keep it in-house. Your team can give you exactly what you want to see—and thanks to AI, they can do it faster and with better grammar than ever before.
But if you want a reality check—if you want to know what the data looks like when it isn’t being massaged for a performance review—it’s time to step outside the echo chamber. You don’t need a static analysis that sits on a shelf. You need a living model that isn’t afraid to find the “burn” and iterate until the strategy is artistically honest.
Because at the end of the day, you don’t need an intern with an LLM. You need a partner who can handle the truth.
This is the problem we’re working on—stay tuned.

