DeepSeek Triggers Plunge in Tech Stocks
In a day that set the global financial community abuzz, the stock market witnessed dramatic shifts as the notion that "only reckless financial expenditure can drive AI development" was put to the test. The computational-hardware industry, particularly the "pick-and-shovel" players who profit from supplying the tools for the AI gold rush, was caught in a violent sell-off.
By the close, two powerhouses of high-performance computing, Nvidia and Broadcom, had each lost more than 15%. Other players in the supply chain, such as TSMC, ASML, and Tokyo Electron, faced similar downturns. The year had already brought promising gains in the "AI + power/nuclear" theme, yet stocks of well-known names like Constellation Energy, Vistra, and GE Vernova, along with nuclear plays such as Oklo and NuScale, were also hit, each falling nearly 20% at their intraday lows.
The catalyst for the upheaval was a revelation from one of the AI industry's standouts: DeepSeek unveiled a method for training large models that can rival those of industry giants like OpenAI, at a fraction of the cost and with the potential to be replicated by engineering teams around the world. The breakthrough raised eyebrows on Wall Street, igniting fierce skepticism about whether the lofty valuations of the technology giants were justified.
Adding to the tension, much of the US stock market's appreciation over the past two years had been driven by just a handful of tech behemoths. Analysts had reluctantly accepted that these companies' profit growth might not keep pace with their share prices, with high valuations sustained largely by the sheer momentum of hype. Any blow to the underlying logic of those inflated positions therefore made the valuations increasingly hard to defend.
Despite the significant sell-off among AI stocks on Monday, some bullish analysts stood defiant, arguing that DeepSeek's achievements shouldn’t be viewed as purely detrimental to the sector at large.
Cantor Fitzgerald, the investment bank long led by Howard Lutnick, the incoming US Commerce Secretary, shared its latest analysis with clients the same day.
The firm contended that the emergence of less compute-hungry large models from China could well benefit high-end GPU manufacturers and data center builders in the long run.
In the report, C.J. Muse, Cantor's semiconductor sector analyst, noted the collective anxiety over computational demand that followed DeepSeek's launch of its V3 model: many feared GPU requirements had peaked. Muse challenged this view as fundamentally misaligned with reality. On the contrary, he argued, the advance is highly "bullish," signaling that artificial general intelligence (AGI) is closer than ever. Jevons Paradox, the observation that as efficiency improves, total consumption of a resource can actually increase, further supports the argument that the AI sector's demand for computing power will only keep rising.
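The Jevons Paradox argument can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative, invented for this sketch and not taken from the article: if a 10x efficiency gain makes inference cheap enough to unlock, say, 50x more usage, total compute consumed goes up, not down.

```python
# Hypothetical illustration of Jevons Paradox applied to AI compute.
# All figures are invented for the example; none come from DeepSeek,
# Cantor Fitzgerald, or any real market data.

def total_compute(queries: int, compute_per_query: float) -> float:
    """Total compute consumed = number of queries x compute per query."""
    return queries * compute_per_query

# Before: expensive inference limits how widely AI gets deployed.
before = total_compute(queries=1_000_000, compute_per_query=100.0)

# After: a 10x efficiency gain cuts per-query compute, but cheaper
# inference unlocks far more use cases, so demand grows 50x.
after = total_compute(queries=50_000_000, compute_per_query=10.0)

# Despite each query being 10x cheaper, total compute rises 5x.
print(after > before)
```

The sketch only shows that the paradox hinges on demand elasticity: if cheaper inference expands usage faster than efficiency cuts per-unit cost, aggregate demand for chips still climbs.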
The timing of Muse's commentary coincided with weekend remarks from Microsoft CEO Satya Nadella, who also invoked Jevons Paradox. Nadella posited that as AI becomes more efficient and accessible, demand for it is likely to surge, turning it into a commodity the world cannot get enough of.
Muse went on to argue that the industry will continue to invest in pre-training, post-training, and test-time reasoning, and that the build-out of large-scale chip clusters is bound to accelerate further. He reiterated that this development is a positive signal for rising compute demand, not a declining one.
UBS's head of semiconductor research, Timothy Arcuri, echoed similar sentiments in his Monday report. While there is some speculation about the resources used to train the newly released R1 model, he noted, that does not undermine its efficiency at inference: its cost per token is reportedly more than 95% lower than that of OpenAI's o1 model. Developers will likely look to integrate techniques from R1 into their own models, improving performance in the process.
Arcuri concluded that although the trend might appear to weigh on computational demand, the reality is that even as models grow more efficient, extensive computing power is still needed to push model performance further.
Additionally, Bernstein analysts offered a distinct perspective on the current market climate from a specialized viewpoint.