I’m so sick of the positivity people have for LLMs. Sure, there are plenty of examples of great answers, but people don’t understand how they work, or that they should be amazed whenever one says anything correct at all. It’s completely crazy that we treat them as if they think, and blame ‘bad prompting’ when they go haywire.
These things are word correlation; there’s no semantics or reasoning. Frankly it’s horrifying to me that such a simple thing can give answers that fool so many people. They are pretty swell at stroking people’s egos. My fear is that this LLM stuff is actually how a lot of our own brains function: it’s what we call intuition.
These things can’t add unless they’ve memorized the answer from somewhere. They literally can’t do something as simple as adding two numbers, let alone long division. Ah, you say, but what if the new ones can offload math requests… if it can’t add and doesn’t know what “add” means, then how does it know what to add?
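For anyone who hasn’t seen it, the “offloading” pattern being argued about looks roughly like this. This is a toy sketch with made-up names, not any particular vendor’s API: the model only emits a structured request, and ordinary code does the arithmetic.

```python
import ast
import operator

# Toy calculator "tool". The model never computes anything itself;
# it only has to emit an expression string like "1234 + 5678".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Safely evaluate a simple arithmetic expression."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Hypothetical model output: the "offload" is just the model choosing
# to produce this tool call instead of guessing the answer token by token.
model_tool_call = {"tool": "calculator", "expression": "1234 + 5678"}

if model_tool_call["tool"] == "calculator":
    result = evaluate(model_tool_call["expression"])
    print(result)  # 6912 -- the arithmetic happens here, not in the model
```

Which is exactly the point: the model’s only job is producing the right string to hand off, and that just restates the “how does it know what to add” question.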
Since they’ve seen it before, they can tell you how to add, how to compute pi, how to try to prove the Riemann hypothesis, but they can’t actually do any of it, because it’s just lists of words. All you get is what’s been memorized into the coefficients.
Turtles all the way down, at least for now. This approach can’t scale; it’s a dead end.