$NVIDIA(NVDA)$ $Advanced Micro Devices(AMD)$
Both Nvidia and AMD now trade at similar forward earnings multiples -- 28x for Nvidia and 26x for AMD -- a convergence that would have seemed absurd during the AI melt-up in early 2024, when Nvidia’s multiple topped 60 and AMD’s hovered in the low 30s. At face value, the market is signaling parity. But the multiple is a mirage -- because these companies aren’t converging. They’re diverging into opposite ends of the AI stack.
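The forward-multiple comparison above is simple arithmetic: forward P/E is the current share price divided by consensus earnings per share expected over the next twelve months. The sketch below illustrates that calculation; the price and EPS inputs are hypothetical placeholders chosen only to reproduce the 28x and 26x figures cited above, not the actual quotes or estimates at the time of writing.

```python
def forward_pe(price: float, forward_eps: float) -> float:
    """Forward P/E = current share price / expected next-12-month EPS."""
    return price / forward_eps

# Hypothetical inputs, picked purely to match the multiples cited in the text.
nvda_multiple = forward_pe(price=140.0, forward_eps=5.0)  # 28.0
amd_multiple = forward_pe(price=130.0, forward_eps=5.0)   # 26.0

print(f"NVDA forward P/E: {nvda_multiple:.0f}x")
print(f"AMD  forward P/E: {amd_multiple:.0f}x")
```

The point of the arithmetic is that two very different businesses can land on nearly identical multiples when both price and expected earnings shift, which is why the headline convergence can mislead.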
We were told there could only be one. That the AI revolution would consolidate around a singular force. And for a time, it did. Nvidia wasn’t just dominant in training -- it was the training cycle. CUDA became the operating system of intelligence. Hopper became the standard of scale. $Meta Platforms, Inc.(META)$ , $Alphabet(GOOGL)$ , $Microsoft(MSFT)$ , and $Amazon.com(AMZN)$ bent their infrastructure to fit the DGX mold. Nvidia didn’t just sell hardware. It defined the software layer that made intelligence usable. In the first chapter of the AI boom, there was no competition -- only allocation.
And that dominance hasn’t faded. Nvidia is still the core infrastructure layer of centralized AI. It owns the two most important footholds in compute: the clusters where foundational models are born, and the software where they’re refined. Every expansion -- into NVLink, into InfiniBand, into quantum -- loops back into that same gravity well. When you choose Nvidia, you’re not buying a chip. You’re inheriting an ecosystem.
But the map is expanding.
We’ve entered the era of agentic AI -- a world where thousands of specialized models, not a few mega-models, define utility. Where intelligence isn’t confined to hyperscale clusters, but pushed out to endpoints, into call centers, manufacturing lines, vehicles, and sovereign clouds. Inference is no longer a tailpipe workload. It’s the dominant surface area. It’s what gives AI presence in the real world.
That’s not a threat to Nvidia. But it is a gap. And AMD is building for that gap deliberately.
AMD’s MI300X isn’t a direct swing at Blackwell. It’s an architecture designed for a different reality -- one where deployment matters more than training scale. It’s for inference that needs to happen close to the user, inside proprietary stacks, in latency-sensitive environments. It’s what enterprises and governments reach for when CUDA becomes a constraint -- when flexibility, openness, and ownership outweigh ecosystem lock-in.
This is the second chapter of AI infrastructure. And AMD isn’t playing catch-up. It’s showing up in a market that didn’t fully exist a year ago.
The ZT Systems acquisition wasn’t about winning a chip spec war. It was about owning deployment velocity. The Humain partnership wasn’t a media headline -- it was an alignment with a future where AI is local, secure, and adaptable. These aren’t distractions. They’re directional moves in a world shifting from AI infrastructure to AI distribution.
Because in the agentic economy, where every device, node, and app layer is infused with model logic, the question isn’t who owns the biggest data center. It’s who powers the next hundred million deployments.
Nvidia remains the gravity center for training. But AMD is increasingly becoming the wedge for edge. For inference. For governments and enterprises that can’t or won’t centralize their AI strategy. These moves don’t register as threats to Nvidia because they’re not meant to. They’re moves that exist orthogonally -- not to shrink Nvidia’s pie, but to grow a different one.
That’s why AMD matters now -- not because it will dethrone Nvidia, but because it’s building a framework Nvidia didn’t set out to own. Not every AI agent needs a DGX. Not every deployment needs CUDA. In a world of AI mesh networks, sovereign LLMs, edge applications, and real-time robotics -- AMD has the flexibility, the cost structure, and increasingly the architecture to be the go-to provider for everything outside the centralized core.
If Nvidia is the AI mainframe, AMD is becoming the modem -- the plug, the edge, the accessible layer that carries intelligence across the rest of the economy.
That doesn’t mean AMD is executing perfectly. It’s not. Data center revenue missed. Adoption is still lagging. ROCm is still niche. Perception is sticky. And Nvidia’s flywheel -- reinforced by real revenue, real deployment, and real developer entrenchment -- is still spinning too fast for any near-term reversal. If AMD were truly breaking through, we’d see it in the numbers, not just the press releases. The harsh truth is that enterprise procurement doesn’t reward technical parity -- it rewards continuity, ease, and ecosystem depth. And in that domain, Nvidia is years ahead.
But zoom out.
If the future of AI is 10 million endpoints, not 10 superclusters -- if the economics shift toward inference-first architectures, not just training centers -- if enterprises demand model sovereignty and edge control, not just access to centralized APIs -- then the game changes. And in that game, AMD doesn’t have to win the crown to become essential. It just has to build the connective tissue. The infrastructure layer for a world that wants to opt out -- or can’t afford to opt in.
That’s the real debate. Not whether AMD can beat Nvidia at Nvidia’s game. But whether AMD can build a game Nvidia isn’t optimized for. Whether the second-best training chip can become the first-choice deployment solution for everyone outside the walled garden. Whether the market eventually values modularity and sovereignty as much as raw throughput and scale.
That outcome isn’t guaranteed. AMD has to ship. It has to scale ROCm. It has to convert architecture into momentum. It has to shift perception from backup plan to core strategy. But if it does -- if Lisa Su can sell that vision clearly, if MI300X can anchor a broader interoperable stack, if the edge turns into a battleground and not just an afterthought -- then AMD becomes more than a competitor.
It becomes the indispensable second pole of AI infrastructure -- not an alternative to Nvidia, but an answer to the question Nvidia never set out to solve.