I'm waiting for the other shoe to drop: someone coming out with an FPGA optimized for reconfigurable computing and cutting the cost of LLM compute by 90% or better.
This is where I do wish we had more people working on the theoretical CS side of things in this space.
Once you recognize that all ML techniques, including LLMs, are fundamentally compression techniques, you should be able to come up with estimates of the minimum feasible size of an LLM based on: (1) the information that can be encoded in a given parameter count, (2) the relationship between information loss and model performance, and (3) the information contained in the original dataset.
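To make the compression framing concrete, here's a back-of-envelope sketch. Every number in it is an assumption I'm pulling out of the air for illustration (corpus size, residual entropy per token, and the oft-cited "~2 bits of information per parameter" figure), not a measurement:

```python
# Back-of-envelope lower bound on LLM size from a compression view.
# All three constants are illustrative assumptions, not measured values.

corpus_tokens = 15e12         # assumed training-corpus size, in tokens
entropy_bits_per_token = 1.5  # assumed information content per token
bits_per_param = 2.0          # assumed storable information per parameter

corpus_bits = corpus_tokens * entropy_bits_per_token
min_params = corpus_bits / bits_per_param

print(f"corpus information: ~{corpus_bits / 8e12:.1f} TB")
print(f"naive lossless lower bound: ~{min_params / 1e12:.1f}T params")
```

The naive lossless bound comes out absurdly large, which is exactly the point: real models sit far below it, so the gap has to be closed by redundancy in the data plus lossy compression, and quantifying that gap is where the theory is missing.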
I simultaneously believe LLMs are bigger than they need to be, but suspect they need to be larger than most people think, given that you are trying to store a fantastically large amount of information. Even with lossy compression (which ironically is what makes LLMs "generalize"), we're still talking about an enormous corpus of data we're trying to represent.
At least it could add a theoretical bound on the expected hallucinations for a particular model/quant at hand? Although I'm very skeptical that companies would disclose their training corpus, and derivative models trained on top of foundation models are another level of indirection, it would still be interesting to have these numbers, even if just as rough estimates. The compression angle in this thread is spot-on, but yeah, operationalizing this is hard.
We're already seeing it with DeepSeek's optimizations and others. It's like induced demand with highways: the wider the highway, the more traffic it attracts. Dropping costs by 90% would open up even more use cases.
For white-collar job replacement, we can always evolve up the knowledge/skills/value chain. It's the blue-collar jobs where the bloodbath is coming, with all the robotics on the way.
> For white-collar job replacement, we can always evolve up the knowledge/skills/value chain.
I'm not so sure about this one. I partially agree with the statement, but less-abled colleagues might have trouble with this :( Ultimately there will be less for a plain human being to do.
Raw GEMM computation was never the real bottleneck. Feeding the matmuls, i.e. memory bandwidth, is where it's at, especially on the newer GPUs.
I wouldn't wait. FPGAs weren't designed to serve this model architecture. Yes, they are very power efficient, but the layout/place-and-route overhead, the memory requirements (very few on-the-market FPGAs have HBM), slower clock speeds, and just an unpleasant developer experience make it a hard sell.