Why Luck in Business is Overrated: Solopreneur Reality Check
Published on 17.11.2025
Why the Role of Luck in Business is Overrated
TLDR: A consultant's year-long experiment with "increasing surface area for luck" through 13 different initiatives revealed that solopreneurship success depends less on lucky breaks and more on sustained focus, managing limiting beliefs, and recognizing that data-driven experimentation requires scale most solopreneurs lack.
Summary:
The author's confession is brutally honest: they spent 2025 hopping between 13 major initiatives—from a 70-page ebook (made EUR 35, cost EUR 200) to consulting engagements, conference talks, and coaching programs nobody signed up for. The strategy was deliberate: increase surface area for luck by trying many things. The result was predictable: exhaustion, scattered attention, and little to show for it.
This challenges the Silicon Valley narrative that lucky breaks determine success. The author argues the opposite—building a successful business is "like running a marathon," and the failure mode is treating it like "a series of sprints." That metaphor captures something crucial about solopreneurship that corporate consulting experience doesn't prepare you for: time is your only real resource, and fragmenting it across many initiatives guarantees mediocre results everywhere.
The most interesting admission is about applying corporate strategy to solo ventures. The author tried to replicate data-driven experimentation—run experiments, gather data, evaluate, iterate. This is standard advice in product development and works beautifully at enterprise scale. But it catastrophically fails for solopreneurs because they lack the fundamental ingredient: enough data volume to learn from. Running 13 small experiments doesn't generate statistical significance. It generates noise.
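A quick back-of-the-envelope sketch (not from the article; the visitor and sale counts are hypothetical) shows why: at solopreneur traffic levels, the uncertainty around a conversion rate is wider than any effect you are trying to measure.

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """95% confidence interval (normal approximation) for a conversion rate."""
    p = conversions / visitors
    half_width = z * math.sqrt(p * (1 - p) / visitors)
    return p, max(0.0, p - half_width), p + half_width

# Hypothetical solopreneur-scale experiment: 2 sales from 40 visitors.
print(conversion_ci(2, 40))           # ~5%, half-width ~6.8 points: the interval swamps the signal
# The same observed rate at enterprise scale: 5,000 sales from 100,000 visitors.
print(conversion_ci(5_000, 100_000))  # ~5%, half-width ~0.14 points: differences become detectable
```

Thirteen experiments of the first kind tell you almost nothing about which one deserved your year.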
What the article doesn't fully explore is why experienced consultants fall into this trap. The answer is probably uncomfortable: consulting trains you to solve other people's problems with their resources. You're optimizing systems at scale with budgets and teams. When you become a solopreneur, you're suddenly resource-constrained in ways that invalidate your entire mental model. The instinct to diversify and experiment—which mitigates risk in corporate contexts—becomes the risk itself when you're the only resource.
The pivot the author describes is fascinating: "without the capacity to collect enough data on your products or services to learn through trial and error, the only thing you can do is listen and intuit." This is essentially advocating for qualitative research over quantitative—paying attention to "random people on the internet" through Reddit, X, Substack, and YouTube. Greg Isenberg and Pieter Levels get mentioned as exemplars who "tilt the odds" by curating the right sources and systems.
But this substitutes one hard problem for another. Curating signal from internet noise requires taste, judgment, and pattern recognition that only develops over time. You can't shortcut it with systems, though the author is trying with "content intelligence systems for Substack and YouTube." The deeper issue is that listening to internet strangers is how you identify potential demand, but it doesn't solve the execution problem. You still need sustained focus to build something valuable enough that people pay for it.
For technical founders and architects, there's a parallel trap: building tools because you can, not because there's real demand. Engineers are pattern-matching machines. We see repetitive tasks and immediately think "I could automate that" or "there should be a framework for this." Sometimes we're right, but often we're solving problems that don't actually hurt enough for anyone to pay to solve them. The author's checklist is useful: Are you building because you can, or because there's real demand? Will service delivery scale to revenue levels needed for profitability? How's your timing—early or late?
The limiting beliefs section touches on something crucial but doesn't push far enough. The author identifies "fail fast" as a strategy that only works for solopreneurs "if you have the right priors: when your initial offer is close enough to real demand that you can get there in one or two iterations." This is a polite way of saying: you need taste and judgment before you start, or fail-fast just means burning time and money rapidly.
The uncomfortable truth is that most successful solopreneurs had years of grinding before their "overnight success." Greg Isenberg and Pieter Levels both spent years building in public before achieving financial success. There's no shortcut to developing judgment about what people want and how to deliver it. The promise of data-driven experimentation is that you can bypass judgment with systematic testing. But that only works at scale.
Key takeaways:
- Increasing "surface area for luck" through multiple simultaneous initiatives fragments the only resource solopreneurs have—time—guaranteeing mediocre results across all efforts rather than success in any
- Data-driven experimentation requires scale to generate statistically significant results; solopreneurs lack sufficient volume and must instead develop judgment through qualitative research and pattern recognition
- Corporate consulting experience can be misleading for solopreneurs because it trains optimization of existing systems at scale with resources, not building from scratch under severe resource constraints
- Successful solopreneurs like Greg Isenberg and Pieter Levels spent years developing judgment and building in public before achieving financial success—there's no shortcut to developing taste for real market demand
Tradeoffs:
- Diversifying across multiple initiatives reduces risk of complete failure but guarantees insufficient focus to succeed at any single venture
- Qualitative research through internet signals is faster than quantitative testing at small scale but requires taste and judgment that only develops through extended practice
- "Fail fast" iteration accelerates learning when starting close to product-market fit but wastes time and capital when initial positioning is far from real demand
Link: Why the role of luck in business is overrated
AI Market Signals: Bubble Anxiety and Model Upgrades
TLDR: Major market anxiety emerged as SoftBank sold $5.8B in Nvidia stock to fund OpenAI investment, Michael Burry warned about inflated AI profits through accounting manipulation, and OpenAI shipped GPT-5.1 with improved conversational capabilities while Anthropic committed $50B to American AI infrastructure.
Summary:
The confluence of financial signals and technical progress this week reveals deep tension in AI markets. SoftBank's sale of its entire $5.8 billion Nvidia stake to fund a $30+ billion OpenAI bet spooked markets despite assurances this showed confidence in AI's future. Two days later, Michael Burry—who famously predicted the 2008 housing collapse—broke two years of silence to warn that Big Tech's AI profits rely on accounting manipulation.
Burry's specific claim is striking: Big Tech will understate depreciation by $176 billion between 2026 and 2028, artificially inflating reported profits by up to 26.9%. This isn't abstract criticism. He's arguing that the infrastructure spending is being depreciated over unrealistically long periods, making current profitability look much better than economic reality. Then there's the math that's "getting harder to ignore": Big Tech is on track to spend nearly $400 billion on AI infrastructure in 2025 alone, yet for that spending to make economic sense, AI revenues must grow from $20 billion to $2 trillion annually by 2030.
That's a 100x revenue increase in five years. For context, cloud computing—which did transform enterprise IT—took roughly 15 years to reach similar scale. The bull case says AI will be bigger and faster. The bear case says this is delusional extrapolation from early adopter enthusiasm to mass market inevitability.
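Both halves of that bear case reduce to simple arithmetic. The sketch below uses the article's $20 billion and $2 trillion figures; the capex and useful-life numbers in the depreciation example are invented purely to show the mechanism, not Burry's actual estimates.

```python
# 1) Burry's mechanism, with invented figures: the same capex, depreciated over
#    a longer assumed useful life, produces a smaller annual expense and a
#    higher reported profit, while the cash out the door is identical.
capex = 60e9             # hypothetical GPU/data-center spend, straight-line depreciation
operating_income = 40e9  # hypothetical profit before depreciation
for useful_life in (3, 6):
    reported = operating_income - capex / useful_life
    print(f"{useful_life}-year life -> reported profit ${reported / 1e9:.0f}B")
# 3-year life -> $20B; 6-year life -> $30B: 50% higher profit on identical economics.

# 2) The revenue ramp the article cites: $20B today to $2T annually by 2030.
start, end, years = 20e9, 2e12, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.0%} per year")  # ~151%, i.e. revenue must roughly 2.5x every year
```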
What's missing from this analysis is the unit economics story. Are current AI products getting more profitable per user over time, or are margins compressing as compute costs and competition increase? OpenAI's reported $5 billion loss in 2024 on $3.7 billion revenue suggests the latter: roughly $8.7 billion in costs, or about $2.35 spent for every dollar earned. If you're losing money on every customer but planning to make it up in volume, you don't have a business—you have a subsidy that eventually ends.
Meanwhile, technical progress continues. OpenAI shipped GPT-5.1 with two variants: Instant and Thinking. The Instant model "can decide when to think before responding, making it faster on simple queries while maintaining thoroughness on complex ones." This is actually sophisticated—meta-cognitive awareness about when deep reasoning is needed versus when pattern matching suffices. They also launched a generally available Realtime API with the gpt-realtime model that processes and generates audio directly, reducing latency and preserving nuance.
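That routing decision happens inside the model, but the pattern itself is easy to picture. Here is a deliberately crude sketch of adaptive-effort routing at the application level; the heuristics and function names are invented for illustration and say nothing about OpenAI's actual implementation.

```python
REASONING_HINTS = ("prove", "step by step", "trade-off", "debug", "why does")

def needs_deep_reasoning(query: str) -> bool:
    """Crude complexity check standing in for the model's own judgment."""
    q = query.lower()
    return len(q.split()) > 40 or any(hint in q for hint in REASONING_HINTS)

def call_fast_model(query: str) -> str:
    # Placeholder for a low-latency, pattern-matching model call.
    return f"[fast path] {query}"

def call_reasoning_model(query: str) -> str:
    # Placeholder for a slower call that allows extended reasoning.
    return f"[reasoning path] {query}"

def answer(query: str) -> str:
    # Spend expensive reasoning only where the query seems to warrant it.
    if needs_deep_reasoning(query):
        return call_reasoning_model(query)
    return call_fast_model(query)

print(answer("What's the capital of France?"))
print(answer("Debug why this retry loop deadlocks under load, step by step."))
```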
Anthropic announced $50 billion in American AI infrastructure investment, partnering with Fluidstack for custom data centers in Texas and New York. They also open-sourced a political bias evaluation framework on GitHub, encouraging other developers to adopt shared standards for measuring neutrality. This is interesting positioning—while OpenAI races toward AGI, Anthropic is building infrastructure and establishing norms around AI safety and bias.
The market tension is between technical capability—which is genuinely improving—and economic viability—which remains unproven at current spending levels. xAI founder Elon Musk announced that Grok 5 is delayed to early 2026, citing resource constraints. When it ships, the model will have 6 trillion parameters—double the size of Grok 3 and 4. The parameter-count arms race continues, but it remains unclear whether larger models translate into proportionally better commercial outcomes.
For architects and technical leaders, the strategic question is: where in this stack do you place your bets? Building on foundation models means dependency on providers whose unit economics may not work long-term. Building infrastructure means competing with companies spending hundreds of billions. The pragmatic middle ground is probably building application-layer value that can switch providers if necessary—but that also means you're in the most competitive part of the market with the least defensibility.
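A minimal sketch of that middle ground, assuming nothing about any particular SDK (the class and function names below are illustrative, not real provider APIs): keep every model call behind a thin internal interface so that switching providers is a configuration change rather than a rewrite.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the rest of the application depends on."""
    def complete(self, prompt: str) -> str: ...

class OpenAIChat:
    def complete(self, prompt: str) -> str:
        # Placeholder: in practice, a call through the OpenAI SDK goes here.
        return f"(openai placeholder) {prompt}"

class AnthropicChat:
    def complete(self, prompt: str) -> str:
        # Placeholder: in practice, a call through the Anthropic SDK goes here.
        return f"(anthropic placeholder) {prompt}"

def build_model(provider: str) -> ChatModel:
    """Provider choice lives in config, not scattered through application code."""
    registry = {"openai": OpenAIChat, "anthropic": AnthropicChat}
    return registry[provider]()

model = build_model("openai")  # swap to "anthropic" without touching callers
print(model.complete("Summarize this week's AI market signals."))
```

The abstraction doesn't fix the defensibility problem this paragraph raises; it only caps the cost of a forced migration if a provider's economics stop working.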
Key takeaways:
- Michael Burry warns Big Tech will understate AI infrastructure depreciation by $176B through 2028, artificially inflating profits by up to 26.9% while actual economic returns remain unclear
- AI revenues must grow from $20B to $2T annually by 2030 to justify current infrastructure spending—a 100x increase in five years, far steeper than cloud computing's roughly 15-year path to similar scale
- Technical progress continues with GPT-5.1's meta-cognitive awareness (deciding when to think deeply vs. pattern match) and Realtime API for direct audio processing
- Market tension exists between improving technical capabilities and unproven economic viability at current spending levels, with OpenAI losing $5B on $3.7B revenue in 2024
Tradeoffs:
- Building on foundation models provides immediate capability but creates dependency on providers with uncertain long-term unit economics
- Larger parameter counts (Grok 5's 6 trillion) increase model capabilities, but it is unclear whether the gains translate into proportionally better commercial outcomes that justify the training costs
Link: AI market signals and updates
This summary was generated from the Metacircuits newsletter. The content reflects editorial analysis and interpretation of the original articles. Opinions expressed are those of the summarizer, not necessarily the original authors.