Cadi Poon
03-16
NVIDIA has publicly acknowledged that the main cost pressure on its ~75% gross margin comes from HBM (High Bandwidth Memory).

Meanwhile, Amazon has reportedly raised its 2026 capital expenditure target to $200 billion, signaling massive investment by hyperscalers. Their hunger for memory is reshaping the entire bill of materials (BOM) for AI infrastructure.

Micron, SNDK Selloff on TurboQuant: Overreaction or Time to Cool Down?
$Micron Technology (MU), $SanDisk (SNDK), $Western Digital (WDC), and $Seagate Technology (STX) fell 3–6% even as the $Nasdaq-100 rose. The debate: does TurboQuant reduce memory demand, or unlock more usage? Morgan Stanley argues it only optimizes the inference KV cache, not HBM or training. Others warn that efficiency gains could pressure total capacity needs. With capex surging and expectations high, is this an overreaction, or the first crack in AI memory demand?
