Why “The Sputnik Moment” for AI is Actually Excellent News for All (Even the Losers)
A brighter future for AI: more progress, likely sooner, likely better, and with more distributed gains - driving adaptation among big players and growth for startups.

The DeepSeek R1 release caused a decline of more than a trillion dollars in market value last week (since mostly recovered). If you just emerged from a silent retreat, please read here, or a more nuanced analysis here, as I’m attempting to do my part to reduce the overabundance of unnecessary information in the universe (sadly not in the slightest due to AI, which will make the volume infinitely worse, but rather due to people’s incessant need to share mostly unoriginal takes) → one sentence: the release of a very, very good open-source foundation model out of China, made at much lower cost and with clever innovations that also reduce inference cost.
Why did people think it was bad news?
Argument 1: Western world is losing, China is winning
Argument 2: We overestimated energy demand; overspent on AI capex
Argument 3: Leading AI model companies have no moat
Let’s take each one quickly, then explain our take and why we (1) DO think it was a watershed moment, but (2) are very bullish rather than bearish on its implications.
Argument 1: Chinese domination
Although it’s hard to know the facts and the accuracy of the figures bandied around, it is indisputably true that R1 is almost as good as the best models we have today, that it shows true innovation leading to lower training and inference costs, and that it was trained on many non-top-of-the-line chips. To those who mistakenly believed we were in a dominant lead and could easily decouple and win an AI war, it was indeed negative news. However, we do not believe the smart money was pricing in such assumptions. China has become a scientific juggernaut and a leader, or extraordinarily fast follower, in most fields. Given the importance of the tech arms race to all and the global dissemination of information in this day and age, it would be unreasonable to think they couldn’t catch up. To be clear, no one thinks they have surpassed the US. And the fact that DeepSeek was trained on less compute is not totally surprising, as many in the AI community had advocated for the methods it employed. There is simply a smaller gap, and hopefully, an understanding that an insurmountable gap is unlikely to develop, even if we do everything right with respect to AI R&D and policy (but yes, we should do everything right to avoid falling behind).
DeepSeek’s model is open source. Instances are already being served on Azure and other platforms, thus nixing any fear that Chinese content or censorship will dominate.
It is a fantasy that we (the US and China) are not co-dependent and can fully decouple quickly. The large pools of capital are fully aware of this. The poster child of AI, NVIDIA, would cease to exist with immediate total decoupling. It cannot survive without its Taiwan-made semiconductors (yes, Taiwan is not China, but the geopolitics are complex and outside the scope of this post) nor without its Western-world design, integration, and tech stack. Breaking this apart will take many years, if it is possible at all. We will need to find ways to gingerly work around each other, eyes wide open and treading carefully, but there is no “full” decoupling nor “winning it all” (two great books on this topic for a balanced perspective: Chip War by Chris Miller and The Wires of War by newly appointed Undersecretary of State Jacob Helberg).
As to mainland China itself, NVIDIA has strategically (and to the outgoing Administration’s chagrin) maintained its foothold in China by finding workarounds to increasing regulatory restrictions. It remains an open debate what the right strategy for America’s interests is: increase the pressure and cause China to build its own NVIDIA, or tread carefully and allow the US leader to remain the global leader.
Argument 2: Overestimation of demand
It’s impossible to predict the future, and estimating the value of AI is very hard. But the intellectual exercise is easy to decompose into its component parts:
Factor 1: the probability that powerful AI arrives.
Factor 2: the probability of (and timeline for) broad adoption.
Factor 3: the extent of usage (how many users and use cases at a given cost).
The DeepSeek news dramatically altered the expected value of each of these factors. With the ability to innovate now disseminated (from just a few big players to many teams leveraging excellent open-source tools, meaning more shots on goal), quicker ROI on investment (unlocking more capital), and precipitously falling cost to serve (more adoption, more valid use cases), the probability of powerful AI arriving (Factor 1) and the probability of sooner adoption (Factor 2) both increase dramatically. The increase in these two factors alone could justify maintaining pre-announcement valuations even if the market sizing or competitive dynamics shifted.
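As a purely illustrative sketch (the probabilities and market size below are hypothetical placeholders, not figures from this post), the decomposition can be written as a simple product of factors, showing how raising Factors 1 and 2 can offset even a much smaller slice of the pie:

```python
# Hypothetical expected-value decomposition (all numbers are illustrative).
def expected_value(p_powerful_ai, p_adoption, market_size):
    """EV = P(powerful AI arrives) * P(broad adoption) * market size."""
    return p_powerful_ai * p_adoption * market_size

before = expected_value(0.5, 0.4, 1000)  # pre-announcement assumptions
after = expected_value(0.7, 0.6, 1000)   # Factors 1 and 2 raised post-R1

print(round(before), round(after))  # 200 420: more than a 2x increase in EV

# Even a player keeping only half its prior share of the bigger pie comes
# out ahead: 0.5 * 420 = 210 > 200.
assert 0.5 * after > before
```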
In addition, as endlessly parroted this week, Jevons Paradox points to an increase in usage: if something is cheap to use, there will be many more uses and users (Factor 3). The paradox is that with declining costs we end up using more, not less (the typical example studied in school is adding a car lane to reduce traffic, which only brings more cars).
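The same logic can be put in toy numbers (hypothetical, for illustration only): if a 10x drop in unit cost unlocks 30x more usage, total consumption, and hence total spend, goes up rather than down:

```python
# Toy Jevons-style arithmetic (all numbers are hypothetical).
def total_spend(unit_cost, usage):
    return unit_cost * usage

# Assume cutting inference cost 10x unlocks 30x more usage.
old_spend = total_spend(unit_cost=1.00, usage=1_000_000)
new_spend = total_spend(unit_cost=0.10, usage=30_000_000)

print(round(new_spend / old_spend, 1))  # 3.0: cheaper per unit, 3x total spend
```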
Argument 3: Companies betting on owning the best, most expensive models have no moat
This one is the most real and worthy of dissection. Yes, value is migrating from training to inference, and at the extreme, perhaps foundation models have negligible value. In that case, everyone investing heavily in foundation models is a relative loser. OpenAI, a startup most recently valued at $340B on the strength of its closed-source, best-in-class foundation models, and boasting of successively more expensive training runs, seems to be a big loser. (It does give many of us schadenfreude to see the open-source-turned-closed-source scraper of IP across the Internet get some of its comeuppance, and to see Sam Altman forced to acknowledge some strategic mistakes and do some quick pivoting.)
However, two points here: (1) as a private company, the public market meltdown was not directly connected to its value, and (2) what OpenAI retains is pole position to focus its efforts on the inference layer. My children haven’t heard of Perplexity or DeepSeek or Gemini or anybody else; they use ChatGPT for everything (much more than I realized!). OpenAI can still win big given its brand position and rapid consumer adoption. And as mentioned above, given the enormous increase in expected market value, a smaller share of a massively bigger pie is still a net positive for its value creation (if not for its aspirations to world domination).
In the case of the hyperscalers, the demand for more compute, as efficient as possible, is undeniable. Just as they made sizable profits in the cloud, even more sizable profits are likely to come as more economic value transitions to data centers, irrespective of whose models, or how many companies, serve end users.
What we took as the most important (and very positive) takeaways from R1 are the following:
1) Truly useful AI is much more likely, and much sooner, than previously expected. Many more sources of innovation, decoupled from massive upfront investment, make the evolution and adoption of AI much faster, as many more can contribute and the race remains competitive longer.
2) Beneficial AI is much more likely than dark scenarios, mainly because distributed innovation reduces the risk that a few bad actors dominate, and reduces the risk of further concentration of wealth and power. We think society benefits as this does not point to a “winner takes most” or “incumbents win it all” scenario.
3) The opportunity set for new startups is bigger than ever. The biggest technological, geopolitical, and structural/infrastructural shifts of our lifetime, arriving at a time when small teams can build transformative businesses with reasonable sums of capital, add up to an extraordinary opening for innovation.
4) Cheaper AI → more AI → more energy. DeepSeek’s R1 release set off a flywheel effect that results in a need for abundant, clean energy. Its resource-efficient development proves that powerful AI can be built with less, sparking higher demand and reinforcing the urgency to build an abundant-energy economy to sustain this growth (we were bullish on this in 2023 and are even more so now).
In short, we are more excited about AI, slightly less scared of AI, and thrilled to support the next generation of founders building the foundational companies of a better tomorrow.
If you have any thoughts (especially contrary ones) on the above, we’d love to hear!
Patty @Avila_VC
PS: FWIW, while more bullish on AI, we remain skeptical that it will overtake humans anytime soon. Case in point: I once again tried to write this with the help of various AIs. The output was generic corporate-speak, imprecise enough to lose its power. It “seemed” good (in Venezuela we say “buen lejos”), but didn’t stand up to scrutiny. It took me longer than it should have to admit I should just write it myself. It’s a great intern, for now.
To discuss or learn more about Avila VC please reach out to us.
Follow us for sporadic postings on LinkedIn and X, and Patty Wexler on Medium
I loved that, even helped by several AIs, you can totally hear Patty telling the story ;-)
I think history sadly repeats itself: just as in the early 2000s browsers were diverse and there was no clear winner, the same happened with social, then with ridesharing, then with neobanks, etc. In the end, the best singer is not the one who makes the most money. So, like you, I wish that AIs were able to differentiate themselves in some way to avoid ending up with only one voice again. And maybe, given how fast AI models are created this time, we are closer to diversification than to a monopolistic winner. What is clear is that I, like you, am bullish on what this technology already brings to the table!!!