When Leadership Says "Go" But Means "Figure It Out Yourself"
Published on 21.01.2026
TLDR: A construction firm's AI initiative stalled not because employees resisted change, but because leadership gave enthusiasm without structure — no approved tools, no clear budget, no real ownership. The result: shadow AI, fragmented adoption, and an initiative nobody officially killed but nobody was driving.
Kamil from AI Adopters Club shares a case study that illustrates a pattern far more common than dramatic AI failures: the quiet suffocation of initiatives that started with genuine executive enthusiasm.
The construction firm wanted AI. Leadership was convinced the company needed it; competitors were moving fast. Leadership asked for training. What it didn't provide: tools to train on. The budget wasn't clearly approved. Different leaders gave contradictory answers about what employees could use. When people asked which AI tools they were allowed to access, the answer depended on who they asked.
So employees did what employees always do when official paths are unclear: they turned to shadow AI. Personal ChatGPT accounts, free trials, whatever they could access without asking permission. Two months in, a hands-on AI clinic was scheduled for 25 people. One person showed up. That single attendee told the whole story: AI had become something people avoided, another unclear initiative that nobody was sure leadership actually supported.
The result wasn't zero adoption — it was worse. Scattered adoption across silos. Small wins nobody else could see. No shared learning. No visibility into what was working. The person in operations speeding up reporting had no idea procurement had built something similar. Duplication everywhere, momentum nowhere.
The diagnosis is uncomfortable but clear: saying "we need AI" is not approving a budget. Approving a budget is not provisioning tools. Provisioning tools is not establishing data governance. And none of it matters without assigning real ownership to people with actual authority. The statistic that 63% of organizations cite "human factors" as a primary challenge in AI implementation sounds like employee resistance — it's not. It's leadership sending mixed signals and expecting adoption anyway.
Two things would have changed everything. First, a clear budget and implementation plan before training: specific platforms approved, monthly cost per seat defined, data processing boundaries established, sign-off authority clarified. Second, real ownership assigned to internal champions — not enthusiastic volunteers doing it on top of existing workloads, but people whose job descriptions include AI adoption and who have authority to answer questions and surface blockers.
The diagnostic questions for executives are pointed: Have you approved specific tools with specific budgets, or just "AI" in the abstract? If an employee asked three different leaders what AI tools they could use, would they get the same answer? Who owns AI adoption as part of their actual job rather than as a side project?
For internal champions, a few questions to leadership reveal the real state of the initiative: What specific tools are approved? What data can and cannot be processed? Who has authority to make decisions when questions arise? What budget exists for team accounts, and when will it be available? If you can't get clear answers, you've found the problem. It's not adoption. It's alignment.
Key takeaways:
- Enthusiasm from leadership is not the same as commitment with structure
- Shadow AI emerges when official paths are unclear, creating fragmentation and security risks
- Small wins locked in silos produce duplication and zero organizational momentum
- Clear ownership means people whose actual job includes AI adoption, not volunteers
- The diagnostic: would three different leaders give the same answer about approved tools?
Tradeoffs:
- Waiting for perfect governance delays value capture but prevents fragmented adoption
- Assigning dedicated AI ownership creates accountability but requires actual budget and authority
- Allowing shadow AI accelerates individual productivity but creates security and consistency risks
Link: When leadership says "go" but means "figure it out yourself"
This article was compiled from the Substack newsletter. The opinions and summaries presented are interpretations of the original sources — always read the linked articles for complete context.