Published on 11.01.2026
TLDR: Most organizations have two AI camps - secret users and nervous avoiders - and both are failing. The path forward requires a small task force, an amnesty audit of current usage, clear one-page guidelines, and pilot projects chosen by frustration, not features.
Let's be honest about what's actually happening inside most companies right now. There's a group using AI tools secretly, hoping IT doesn't notice. There's another group avoiding AI entirely, convinced they'll break something or look stupid. Neither approach is working, and the gap between AI-confident and AI-anxious teams has become visible in promotions, budget allocations, and measurable results.
The shift that matters in 2026 isn't another breakthrough model or dramatic demo. It's that the tools got boring - and that's actually the point. Boring means reliable. Boring means your finance team can use it without a computer science degree. Boring means it's ready for actual work, not just impressive demos.
This framing cuts through the hype fatigue that's set in after three years of "this is the year of AI." The question isn't whether AI is transformative. The question is whether your organization has figured out how to capture that value or whether you're still "exploring options" while competitors ship.
TLDR: Committees produce documents. You need 3-5 people who produce experiments. Staff it with practitioners who are already tinkering, not just strategists who theorize.
The instinct to form a committee is strong but wrong. Committees optimize for consensus and documentation. What you actually need is a small group of people who can run experiments, learn quickly, and report back.
The recommended composition is pragmatic: find the person already using Claude or ChatGPT on their lunch break (they exist and they're hiding it), the ops lead who's been complaining about manual data entry for two years, someone from legal or compliance who's curious rather than paralyzed, and an executive sponsor with enough authority to remove blockers and celebrate wins publicly.
The key insight here is about commitment level. Thirty minutes a week is the starting expectation. This isn't a full-time initiative - it's a learning loop with minimal overhead. The goal is experiments, not comprehensive strategy.
For architects and technical leaders, this framing is useful when socializing AI initiatives internally. Don't pitch a major transformation program. Pitch a small team running quick experiments with clear reporting cycles.
TLDR: Before you can move forward, you need to know what's already happening. Frame your survey as amnesty, not investigation, and people will tell you everything.
Here's the uncomfortable truth: AI is already being used in your organization in ways you don't know about. People aren't disclosing it because they're not sure if they're allowed. Running a quick survey to surface this usage is essential, but the framing matters enormously.
Three questions are enough: What AI tools are you using right now? What tasks are you using them for? What's working, and what's frustrating?
The critical element is positioning. If this feels like a compliance investigation, people will lie or stay silent. If it feels like contributing to organizational learning, they'll tell you everything. The word "amnesty" does real work here - it signals that the goal is discovery, not punishment.
The survey surfaces two kinds of insight: use cases you can formalize and potentially scale, and security gaps you need to close. Both are valuable, but you'll get neither without creating psychological safety around disclosure.
TLDR: You don't need a 50-page policy. You need clear guardrails covering what not to upload, what needs human checking, and who to ask when uncertain.
The temptation is to produce exhaustive documentation. Resist it. What people need is clarity, not comprehensiveness. A one-page guideline that covers the essentials will get read. A 50-page policy will gather dust.
Three categories matter: What not to upload (client names, deal terms, financials, PII - assume anything you type could leak unless you're using enterprise-grade tools with proper data handling). What needs checking (AI outputs must be human fact-checked before external use, no exceptions). Who to ask (name a real person who can answer questions in 24 hours, not 24 days).
There's a crucial principle embedded here: "You can't automate chaos. Document before you delegate." Before any workflow becomes an AI pilot, it needs to be written down. Not for bureaucracy's sake, but because you can't teach AI to do something that three people do three different ways. Standardization is a prerequisite for automation.
TLDR: Don't ask "How can we use ChatGPT?" Ask "What takes too long? What's tedious? What do we keep getting wrong?" The best AI projects start with frustration, not technology.
This is the most practical piece of advice in the entire article. The question "How can we use AI?" leads to solutions looking for problems. The question "What frustrates people?" leads to problems with built-in adoption motivation.
The department-by-department breakdown is useful. Finance isn't looking for AI to be smarter - they want to stop copying numbers between spreadsheets. HR doesn't need a robot recruiter - they want first drafts of job descriptions that don't take 45 minutes. Marketing wants AI-assisted speed on the parts that don't require human creativity. Sales wants to spend more time selling and less time on admin. Ops wants the better system they've been begging for.
Pick one workflow per department - specifically the one everyone hates - and that's your pilot. The pre-existing frustration becomes the adoption engine. People will actually use the solution because it solves a problem they genuinely care about.
TLDR: If you can't prove it worked, you're not running a pilot - you're playing with software. Establish baselines before starting, even if they're ballpark estimates.
Most teams skip measurement because it feels like homework, then wonder why leadership won't fund the next phase. The solution isn't precision - it's having something rather than nothing.
Before starting, answer three questions: How long does this task take right now? How many people touch it? How many revision cycles does it go through? Ballpark is fine. If you think lease abstracts take 2-4 hours, start with 3 hours as your baseline.
After three weeks, measure again. Time saved? Errors reduced? Fewer revision cycles? That's your story for leadership.
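The before-and-after arithmetic is simple enough to sketch. A minimal example, where every number below is an illustrative assumption (the 2-4 hour range is the article's lease-abstract example, midpointed to 3 hours; the task frequency is invented for demonstration):

```python
# Back-of-the-envelope pilot math. All inputs are ballpark estimates,
# which is exactly the level of precision the pilot needs.
baseline_hours = 3.0    # assumed: time per task before the pilot (midpoint of 2-4h)
pilot_hours = 1.0       # assumed: time per task measured after three weeks
tasks_per_month = 20    # assumed: how often the task actually occurs

# Monthly and annualized savings - the "story for leadership".
hours_saved_per_month = (baseline_hours - pilot_hours) * tasks_per_month
hours_saved_per_year = hours_saved_per_month * 12

print(f"~{hours_saved_per_month:.0f} hours/month, ~{hours_saved_per_year:.0f} hours/year")
```

Even with rough inputs, the output gives leadership a concrete number to fund against, which beats "it feels faster" every time.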
There's wisdom in measuring qualitative factors too. Ask how frustrating the old process was. Ask how confident people feel about the new one. Numbers convince executives. Feelings drive adoption. You need both.
TLDR: The efficiency gains are nice, but the real win is organizational confidence - when trying something new stops feeling risky and starts feeling normal.
This section elevates the discussion from tactical adoption to cultural transformation. The goal isn't AI fluency or tool mastery. The goal is creating an environment where "I tested this and it didn't work" becomes an acceptable sentence in a team meeting.
The recommendation to celebrate experiments rather than only successes is essential. Run "show and tell" sessions where people share what they tried, even if it flopped. Make visible what used to be invisible: the curiosity, the tinkering, the willingness to look stupid for five minutes in exchange for learning something.
For leaders, this reframes AI adoption as a leadership problem dressed up as a technology problem. The organizations that succeed won't be the ones with the biggest AI budgets. They'll be the ones where people feel safe to experiment, fail, learn, and try again.