Key Points
Memory stocks like Micron and Sandisk fell amid concerns about Google’s TurboQuant algorithm.
Google’s TurboQuant algorithm results in more efficient memory usage.
Analysts believe TurboQuant’s improved efficiency may increase overall hardware demand, not reduce it, citing Jevons’ Paradox.
Highflying memory stocks like Micron and Sandisk have been dented, and it might have something to do with TurboQuant, a compression algorithm Google detailed in a research paper this week.
Micron Technology and Sandisk both fell more than 3% on Wednesday and were headed for bigger falls in early trading Thursday. Memory stocks have surged in recent months, so volatility isn’t unusual. But analysts pointed to a Google blog post published Tuesday detailing its TurboQuant algorithm as a potential trigger for concerns.
TurboQuant is an algorithm designed to address bottlenecks in the key-value cache, which Google describes as a “digital cheat sheet” that effectively acts as the short-term memory of an artificial-intelligence model. There are ways to compress information to fit more of it onto this cheat sheet, but it still fills up quickly. The compression, however, can cause the model to hallucinate, or make mistakes.
TurboQuant is designed to fix some of those issues. Google researchers said TurboQuant reduced “key value memory size” in AI models by at least six times and increased speeds by up to eight times, without compromising model accuracy.
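To see where those memory savings come from, here is a minimal sketch of generic key-value-cache quantization: storing cache entries as low-precision integers plus a per-token scale instead of full-precision floats. This is an illustration of the general technique only, not Google’s TurboQuant algorithm, whose actual method is described in the research paper.

```python
import numpy as np

def quantize_kv(cache: np.ndarray):
    """Per-token absmax quantization of a float32 KV cache to int8.

    Generic illustration of KV-cache compression -- not TurboQuant.
    Each row (token) is scaled so its largest value maps to 127.
    """
    scale = np.abs(cache).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(cache / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_kv(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximation of the original float32 cache."""
    return q.astype(np.float32) * scale

# A toy "cache": 4 tokens, each with 8 float32 key values.
rng = np.random.default_rng(0)
cache = rng.standard_normal((4, 8)).astype(np.float32)

q, scale = quantize_kv(cache)

# int8 storage is 4x smaller than float32 (plus a tiny scale overhead);
# more aggressive schemes push toward the larger reductions claimed.
ratio = cache.nbytes / q.nbytes
error = np.abs(cache - dequantize_kv(q, scale)).max()
```

The trade-off the article describes is visible here: shrinking the cache introduces rounding error, and the harder the compression, the larger that error, which is why naive schemes can degrade model accuracy.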
Hearing about a sixfold reduction in anything related to AI’s use of memory could be alarming for investors who have ridden the boom in Micron and Sandisk, a rally built on AI’s ever-increasing demand for memory chips and storage capacity.
However, TurboQuant isn’t as scary as it sounds. First, it isn’t new: A technical draft of research on the algorithm appeared online in April 2025. So it likely doesn’t represent the cutting edge of what AI companies are currently doing, and the claimed results may not be as dramatic with today’s models.
“I am sure every cloud hyperscaler and LLM [large language model] builder has a lot of engineers and smart scientists working on similar technology as we speak,” wrote Jordan Klein, a technology, media, and telecommunications sector specialist at Mizuho. “But if it was so good and in wide scale use at Google, trust me, they would not come out and publish a paper on it.”
Second, and more important, better memory efficiency is likely to increase overall demand for hardware, not reduce it.
The analogy is with January 2025, when China’s DeepSeek sparked a $1 trillion market panic by apparently showing an advanced AI model could be trained relatively cheaply, fueling concerns about overinvestment in chips and other components. Instead, what followed was a demonstration of Jevons’ Paradox: More-capable and efficient models greatly increased demand for AI services, justifying more investment.
Sandisk Chief Financial Officer Luis Visoso pointed to that same dynamic in a conversation with BofA Securities analysts.
“Mr. Visoso pointed out that [TurboQuant] can improve return on investment of hyperscale capex, and this increased efficiency could, in turn, cause demand to rise,” BofA analyst Wamsi Mohan wrote in a research note.
Mohan kept a Buy rating and $900 target price on Sandisk stock.
The huge run-up in memory stocks and increasing investment mean investors are sensibly wary of anything that might indicate the cycle will turn and lead to a glut. But TurboQuant doesn’t look like it will be the thing that kills the momentum.