Google has introduced TPU 8t (training) and TPU 8i (inference), splitting its TPU line into dedicated training and inference chips for the first time.

TPU 8t: 2.8x the performance at the same cost and a 124% gain in performance per watt; supports supercomputing clusters of up to 9,600 chips

TPU 8i: 80% higher performance; 384 MB of on-chip SRAM (3x the previous generation); optimized for low-latency multi-agent inference
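As a quick sanity check, the headline numbers above imply a couple of derived figures. These are back-of-the-envelope arithmetic from the quoted claims, not official specs:

```python
# Back-of-the-envelope check of the quoted TPU 8t / 8i figures.
# The input constants come from the announcement; the derived values
# (implied power growth, prior-generation SRAM) are simple arithmetic,
# not official specifications.

TPU8T_PERF_GAIN = 2.8        # 2.8x performance at the same cost
TPU8T_PERF_PER_WATT = 2.24   # +124% performance per watt -> 2.24x

# If performance rose 2.8x while performance per watt rose 2.24x,
# per-chip power draw grew by the ratio of the two:
implied_power_growth = TPU8T_PERF_GAIN / TPU8T_PERF_PER_WATT
print(f"implied per-chip power growth: {implied_power_growth:.2f}x")  # 1.25x

TPU8I_SRAM_MB = 384          # on-chip SRAM, stated as 3x previous gen
prior_gen_sram_mb = TPU8I_SRAM_MB / 3
print(f"implied previous-gen SRAM: {prior_gen_sram_mb:.0f} MB")  # 128 MB
```

In other words, even with the efficiency gain, the stated numbers suggest each TPU 8t chip draws roughly 25% more power than its predecessor.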


