python273 11 hours ago

A much better article on token prices: https://www.tensoreconomics.com/p/llm-inference-economics-fr...

There's not much incentive for OpenRouter providers, for example, to subsidize prices, and their prices are much lower than the $6.37/M estimate from the article.

https://openrouter.ai/meta-llama/llama-3.3-70b-instruct

avg $0.37/M input tokens, $0.73/M output tokens (21 providers)

Llama is not even a good example, as more recent models are better optimized, using Mixture of Experts and KV cache compression.
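
Rough arithmetic on the gap, taking the OpenRouter averages above at face value (and reading the article's $6.37/M as an output-token estimate, which is my assumption):

    article_est = 6.37                    # article's estimated $/M tokens
    market_in, market_out = 0.37, 0.73    # OpenRouter averages above
    print(article_est / market_out)       # ~8.7x the market output price
    print(article_est / market_in)        # ~17.2x the market input price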

apsec112 14 hours ago

This ignores batching - token generation is much more efficient in batches - and I strongly suspect the article is itself written by AI, given the heavy use of bullet points.
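
For a concrete picture, here's a minimal sketch of batched generation with Hugging Face transformers (the model name and batch size are illustrative, and this assumes you have the weights and enough GPU memory):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-3.3-70B-Instruct"
    tok = AutoTokenizer.from_pretrained(name, padding_side="left")
    tok.pad_token = tok.pad_token or tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16, device_map="auto")

    prompts = ["Explain KV caching.", "Why does batching help?"] * 16
    inputs = tok(prompts, return_tensors="pt", padding=True).to(model.device)

    # One forward pass per decode step serves all 32 sequences, so the
    # expensive part (streaming weights from HBM) is amortized 32 ways.
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.batch_decode(out, skip_special_tokens=True)[0])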

  • twoodfin 11 hours ago

    The “X—not Y” pattern is also a dead giveaway.

  • biophysboy 13 hours ago

    Is it common for adjacent tokens to reuse the same weights from a memory cache?

PaulHoule 14 hours ago

When the AI hype train left the station I said "we don't understand how these things work at all and they're going to get much cheaper to run" and that turned out to be... true.

Already vendors of legacy models like GPT-4 have to subsidize inference to keep up with new entrants built on a better foundation. It's likely that inference costs can be brought down by another factor of ten or so, so of course you have to subsidize at 90% today to price where the industry will be in 2-3 years.

daft_pink 9 hours ago

Also, it ignores the fact that inference will keep being optimized and getting more efficient, Moore's-law style, so everyone is basically assuming that the price will come down over time.

impure 11 hours ago

I’ve been playing around with Gemma 3n E4B and have gotten really good results, and that’s a model you can run on a phone. So although prices have been going up recently, I suspect they will start to fall again soon.

GaggiX 14 hours ago

This calculation doesn't account for batching, so it makes no sense.

  • BriggyDwiggs42 13 hours ago

    On average how much does batching bring costs down?

    • GaggiX 12 hours ago

      It balances the compute and memory-bandwidth bottlenecks, so by a lot: with continuous batching you can easily see 10x, 20x or more.
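
      A toy roofline model makes that concrete. The numbers below are illustrative (a 70B dense model in bf16 on a single H100-class GPU) and ignore KV-cache reads and multi-GPU sharding:

        # Each decode step must stream all weights from HBM once,
        # and that read is shared by every sequence in the batch.
        WEIGHT_BYTES = 70e9 * 2       # ~140 GB of bf16 weights
        MEM_BW = 3.3e12               # ~3.3 TB/s HBM bandwidth
        FLOPS = 1e15                  # ~1 PFLOP/s bf16 compute
        FLOPS_PER_TOKEN = 2 * 70e9    # ~2 * params FLOPs per token

        def tokens_per_sec(batch):
            step = max(WEIGHT_BYTES / MEM_BW, batch * FLOPS_PER_TOKEN / FLOPS)
            return batch / step

        base = tokens_per_sec(1)
        for b in (1, 8, 32, 128):
            print(b, round(tokens_per_sec(b) / base))  # 1, 8, 32, 128

      Under these assumptions decode stays memory-bound up to a batch of roughly 300, which is why the speedup tracks the batch size almost linearly.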

mrtksn 14 hours ago

"Subsidized" is probably not the correct word here; it's more like a loss leader in the land-grab race.

It's like the early days of the internet, when everything was amazing and all the people who put money into it were "losing" their money.

It's going to be like this until the monopoly and moat become defensible, and then they will enshittify the crap out of it and make their money back 10x, 100x, etc.

revskill 14 hours ago

No lol. The quality is mostly bad. Basically you need to prompt in so much detail it's like writing a novel for the LLM to understand. At that price, we want real AI that actually has common sense, not just an autocompletion tool.

Stop advertising LLMs as AI; instead, sell them as a superior copy & paste engine.

What's worst about LLMs is that the longer you talk with one, the worse it gets, to the point of being broken.