Artificial Intelligence & The End Of Silicon Valley's Money Factory

This week, Sam Altman gave away the economics of the AI world. And they may not be all that attractive.

The fact that OpenAI is losing money isn’t surprising. What’s new to Silicon Valley is WHY OpenAI is losing money even on a $200/month subscription.

AI Scaling 1.0 - Pre-Training

To explain how the economics of AI have been upended in the last few months, we need to go through some history.

Most of the scaling that took place through the end of 2023 was related to pre-training.

Simply put, developers were able to use more data and more compute to make AI models smarter. The result is often called a foundation model, and it’s the core of the models we see today.

This type of scaling is also why you heard talk of models costing $1 billion to train, with $100 billion models supposedly just a few years away.

But the idea of an all-knowing $100 billion model has subsided as the gains from more data and more compute have leveled off. Now, almost every model performs similarly after pre-training, because they have all trained on essentially all the data the internet has to offer.

AI Scaling 2.0 - Post-Training

The second way to scale models was post-training with reinforcement learning. This involves human feedback telling the model whether it got something right or not. Post-training now also uses AI feedback and synthetic data generation, which makes it more scalable.

But like pre-training, there are limits to the scaling of post-training.

AI Scaling 3.0 - Reasoning

So, developers have found that test-time scaling, or “reasoning”, is the next scaling law for AI. NVIDIA (NVDA) CEO Jensen Huang showed what this looks like in his CES keynote. (Notice, he doesn’t acknowledge the diminishing returns of scaling pre-training or post-training; the lines just go faint.)

“Reasoning” is somewhat akin to an AI system thinking the way humans do. It goes down different paths and determines which one is best. I think of it a little like this graphic, where the “prompt” is on the far left and “Today” is the single output you see from the AI.


What you don’t see in this image is the compute going into creating these alternate branches that aren’t used. That compute is EXPENSIVE.

Think of it like this:

  • A pre- or post-trained model will take a prompt and go down a single path to give you an output.

  • A reasoning model will run a prompt hundreds of times, choosing the best answer.

    • There may even be multiple models called and different steps could be re-run multiple times, checking parts of the answer as it goes.

This means the model doesn’t need to be smarter; the prompt simply uses GPU power for longer than older techniques.
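The bullets above can be sketched as best-of-N sampling. This is a minimal illustration, not OpenAI’s actual method; the generator and scorer are stubs standing in for real model calls:

```python
import random


def generate_candidate(prompt: str, rng: random.Random) -> tuple[str, float]:
    """Stand-in for one model completion, returning (answer, score).

    A real system would call an LLM here and score the answer with a
    verifier or reward model; this stub just simulates candidates of
    varying quality.
    """
    score = rng.random()
    return f"candidate answer (quality={score:.2f})", score


def best_of_n(prompt: str, n: int = 100, seed: int = 0) -> tuple[str, int]:
    """Run the prompt n times and keep only the highest-scoring answer.

    All n generations burn GPU time, but the user sees a single output --
    which is why reasoning is so much more expensive per query.
    """
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    best_answer, _ = max(candidates, key=lambda c: c[1])
    return best_answer, n  # n = compute paid for one visible answer


answer, generations_paid_for = best_of_n("What is 17 * 24?", n=100)
```

Real reasoning pipelines are messier (multiple models, step-by-step verification, re-runs), but the cost structure is the same: many hidden generations per visible answer.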

OpenAI doesn’t yet release the number of reasoning tokens (which we can use as a proxy for cost and GPU time) the o3 model uses, but data from Artificial Analysis on the earlier reasoning model o1 shows reasoning tokens are a growing share of the tokens used in AI.

Prices for reasoning models have also reflected this higher compute cost associated with reasoning.
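To see why reasoning tokens dominate the bill, here is a back-of-the-envelope cost sketch. The prices and token counts below are illustrative assumptions, not actual OpenAI rates; the key point is that hidden reasoning tokens are typically billed as output tokens:

```python
def query_cost(input_tokens: int, visible_output_tokens: int,
               reasoning_tokens: int,
               price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one query, with per-million-token prices.

    Reasoning tokens are billed like output tokens even though the
    user never sees them.
    """
    billed_output = visible_output_tokens + reasoning_tokens
    return (input_tokens * price_in_per_m
            + billed_output * price_out_per_m) / 1_000_000


# Hypothetical prices: $2.50/M input tokens, $10/M output tokens.
standard = query_cost(1_000, 500, reasoning_tokens=0,
                      price_in_per_m=2.50, price_out_per_m=10.00)
reasoning = query_cost(1_000, 500, reasoning_tokens=20_000,
                       price_in_per_m=2.50, price_out_per_m=10.00)
# standard  -> $0.0075 per query
# reasoning -> $0.2075 per query, ~28x more for the same visible answer
```

Under these made-up numbers, the same question with the same visible answer costs roughly 28 times more once 20,000 reasoning tokens are burned behind the scenes.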

This will have a profound impact on the economics of artificial intelligence tools and businesses.

Being smarter will be more costly. And companies are already pricing services based on how much intelligence you want to pay for.

That may lead to pricing that looks more like electricity (pay for what you use) than today’s SaaS model (per seat). And the impact on Silicon Valley’s economy could be profound.
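The difference between the two pricing models is easy to see in a toy calculation. Every number here is hypothetical (seat price, per-query cost, margin), purely to show how metered billing decouples revenue from headcount:

```python
def per_seat_monthly(seats: int, seat_price: float) -> float:
    """SaaS-style bill: flat per seat, regardless of usage."""
    return seats * seat_price


def metered_monthly(queries: int, cost_per_query: float,
                    margin: float = 1.5) -> float:
    """Electricity-style bill: pay for what you use, plus a markup."""
    return queries * cost_per_query * margin


# Hypothetical: a 10-person team at $200/seat vs. metered usage
# at an assumed $0.05 compute cost per query with a 1.5x markup.
seat_bill = per_seat_monthly(10, 200.0)       # $2,000 flat
light_use = metered_monthly(1_000, 0.05)      # $75  -- metered is cheaper
heavy_use = metered_monthly(200_000, 0.05)    # $15,000 -- metered is far pricier
```

Under flat per-seat pricing, the heavy user above is heavily subsidized, which is exactly the $200/month problem; metered pricing pushes that cost back onto whoever consumes the compute.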
