The AI Preparation Pattern That Changes High-Stakes Meetings

Published on 11/14/2025

When the Patient Builds Better AI Than the Hospital

TLDR: Steve Brown spent two hours with AI before each ten-minute oncologist appointment, rehearsing conversations and testing hypotheses. This preparation pattern caught a misdiagnosis, surfaced a treatment alternative that led to complete remission, and became CureWise—a system now used by other cancer patients. The same pattern works for any high-stakes meeting where time is short and decisions matter.

Summary:

The setup is stark: you walk into important meetings unprepared because you had seventeen other things to handle first. Ten minutes with your VP to pitch a new process. Fifteen with a client on scope changes. Twenty with your boss for a quarterly check-in. You're winging it. Steve Brown faced a worse version: ten minutes per month with his oncologist to make cancer treatment decisions. No do-overs. His solution was spending two hours with AI before each appointment, rehearsing until he knew exactly which questions mattered.

The core insight is that shallow engagement is the default when decision-makers are overbooked. Quick check-ins, deferred hard questions, nothing resolved. Brown couldn't afford that—cancer grows exponentially, and delaying the right decision by three months changes survival odds. His pattern: don't show up asking vague questions like "Do you think this treatment will work?" Instead, test specific hypotheses: "My genomic report shows these three mutations. Literature suggests these drugs target them better than standard protocol. What am I missing?" This shifts the conversation from explanation to validation.

Brown's AI prep involved parsing his 15-page genomic report, identifying drug alternatives based on tumor mutations, and surfacing questions doctors didn't have time to research. Before one appointment, his prep uncovered a drug alternative that Mayo Clinic agreed with. A protocol switch followed. Complete remission. That doesn't happen if you show up asking "what should we do next?"

The five-step adaptation for business meetings is practical: (1) Dump full context into ChatGPT—project background, past decisions, constraints, failures, stakes. (2) Ask for three conflicting recommendations to see the decision space. (3) Flip perspective—"I'm leaning toward option two, now argue against it" to catch blind spots. (4) Identify knowledge gaps—"What information am I missing? What should I ask?" (5) Rehearse the conversation—"I have ten minutes, here's what I know and recommend, how should I structure this?" Brown did this in two hours; you can do it in thirty minutes.
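To make the five steps concrete, here is a minimal sketch of the same pattern as a scripted prep session. It assumes the OpenAI Python SDK (openai >= 1.0), an OPENAI_API_KEY in the environment, and a placeholder project context; the model name and prompt wording are illustrative stand-ins, not Brown's actual tooling or the only way to run the pattern.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(history):
        """Send the running conversation to the model and return the reply text."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable chat model works
            messages=history,
        )
        return response.choices[0].message.content

    # Step 1: dump full context once so every later prompt builds on it.
    context = (
        "Project background, past decisions, constraints, failures, and the "
        "stakes of the upcoming meeting go here, pasted in verbatim."
    )
    history = [{"role": "user", "content": f"Here is my full context:\n{context}"}]
    history.append({"role": "assistant", "content": ask(history)})

    # Steps 2-5: conflicting recommendations, flipped perspective, gaps, rehearsal.
    prompts = [
        "Give me three recommendations that genuinely conflict with each other.",
        "I'm leaning toward option two. Now argue against it as hard as you can.",
        "What information am I missing? What should I ask in the meeting?",
        "I have ten minutes. Here's what I know and what I recommend. "
        "How should I structure the conversation?",
    ]
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        print(f"\n### {prompt}\n{reply}")

Pasting the same five prompts into a chat window works just as well; the script only makes explicit that each step reuses the accumulated context rather than starting from a blank prompt.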

For architects and teams, this reveals a leverage pattern: AI doesn't replace judgment; it changes where preparation time is invested. Six hours of manually researching vendors becomes forty minutes of AI synthesis plus validation with your network. The compounding effect is critical: better questions lead to better answers, better decisions, and eventually invitations to more important meetings. The person who asks the right questions becomes the one people want in the room, not because they have all the answers but because they've framed the decision clearly and identified what's uncertain.

Key takeaways:

  • High-stakes meetings with limited time default to shallow engagement unless one party does deep preparation beforehand
  • Testing specific hypotheses ("My data shows X, literature suggests Y, what am I missing?") is far more effective than asking vague questions
  • The five-step pattern: dump context, get conflicting recommendations, flip perspective to find blind spots, identify gaps, rehearse structure
  • AI preparation provides leverage that previously required a team of analysts, turning six hours of manual research into forty minutes of synthesis
  • Companies scaling AI aren't doing it by buying enterprise tools; it starts with individuals who prepared better, moved faster, and asked smarter questions, and then that behavior spread

Tradeoffs:

  • Investing 30 minutes in AI-assisted preparation before meetings improves decision quality but requires upfront discipline that most people skip for immediate tasks
  • Testing multiple conflicting recommendations reveals blind spots but takes longer than jumping to the first plausible answer
  • Showing up with prepared hypotheses makes meetings more productive but exposes your reasoning to scrutiny, which feels riskier than vague questions

Link: When the Patient Builds Better AI Than the Hospital