[continued]

Implementing this feature made for an instant 5% increase in sales, minimum, on day one. It also probably saved a bunch of time for people who would otherwise have had to go back and scroll through the catalog to find related parts. Win-win-win for everyone.

Okay, so what does this have to do with AI? Well, our very primitive “Customers who bought X also bought Y” feature is a very, very similar mechanism to how modern LLMs (Large Language Models like ChatGPT and Grok) work. Indeed, comparing this feature from our 2000s shopping cart to a modern LLM is like comparing a Texas Instruments digital calculator from the late 1970s to a modern iPhone – they both run on solid-state transistor technology, but the advances since then are almost incomprehensible.

Getting back to AI – the modern AI models are confusing to a lot of people. They misspell words, they “lie,” and they have “hallucinations.” If you ask one to draw twelve kittens in a Ferrari, it will give you 14 kittens in something that looks a little bit like a Ferrari, but not quite. How does all of this work? It works just like our “Customers who bought X also bought Y” feature, only with decades more data and technology that is light-years more advanced.

With our “Customers who bought X also bought Y” feature, we used to set the server up to run / train itself every Friday night, from about 2AM until Saturday morning. It took a long time to process and run through all of the data. The trained data for our feature does not store individual results, nor any indication or history of where the associations came from – I cannot go back and trace the path; I have to just trust the data and the original learning method. Fun fact – if some customers had a weird coupon code that caused them to buy a sweatshirt or model car that was on sale, *and* they were buying other parts, then it might give off some weird associations – just like AI will hallucinate from time to time.
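To make the mechanism concrete, here’s a rough sketch of how a feature like ours could work – to be clear, this is hypothetical illustration code, not our actual shopping cart code, and the part names are made up. Notice that the finished table is just pair counts: there is no record of which orders produced them, which is exactly why you can’t trace the path back.

```python
from collections import defaultdict
from itertools import combinations

def train(orders):
    """Count how often each pair of parts appears in the same order.
    The resulting table keeps no history of which orders produced
    the counts -- the provenance is thrown away during training."""
    pair_counts = defaultdict(int)
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def also_bought(pair_counts, part, top_n=3):
    """Rank the parts most often bought alongside `part`."""
    scores = defaultdict(int)
    for (a, b), n in pair_counts.items():
        if a == part:
            scores[b] += n
        elif b == part:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A few made-up orders, including one odd coupon-code order
orders = [
    ["rod bolt", "rod nut", "assembly lube"],
    ["rod bolt", "rod nut", "rod bearings"],
    ["rod bolt", "assembly lube"],
    ["sweatshirt", "rod bolt"],
]
print(also_bought(train(orders), "rod bolt"))
```

Note that the one sweatshirt order still leaves a faint “rod bolt goes with sweatshirts” association in the table – that’s the weird-association problem above, in miniature.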

AI trains itself on patterns and then figures out what the “next thing” should be when processing a task. This is why the results come in slowly from AI – it’s not like the old days where you had a 2400 baud modem and it was just sending stuff slowly – it’s building the answer one word at a time based upon probabilities and the training data it has. Just like if you were on our shopping cart and added a rod bolt, then added a rod nut, then added some assembly lube, then added some rod bearings, one-at-a-time based upon the shopping cart’s recommendations – this is how AI is building its response to you.
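You can see the one-word-at-a-time idea in a toy next-word predictor. This is a drastically simplified sketch – a simple word-pair counter, nowhere near a real LLM – but the loop has the same shape: learn which word tends to follow which, then build the reply one word at a time from those counts. The training text here is made up for illustration.

```python
from collections import defaultdict

def learn(text):
    """Tally which word follows which in the training text."""
    follows = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, start, max_words=8):
    """Build a reply one word at a time, always taking the most
    frequent follower -- no thinking, just pattern lookup."""
    out = [start]
    while len(out) < max_words and follows[out[-1]]:
        counts = follows[out[-1]]
        out.append(max(counts, key=counts.get))
    return " ".join(out)

training = ("the rod bolt needs a rod nut "
            "the rod nut needs assembly lube "
            "the rod bearings need assembly lube")
model = learn(training)
# The toy model happily gets stuck repeating its strongest
# pattern -- a miniature version of an LLM going off the rails.
print(generate(model, "the"))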

This is why, when you ask it to draw twelve kittens in a Ferrari, you see the image come in top-down – just like an old 2400 baud modem loading images in the 1990s. This is not because of bandwidth, though – the AI platform is figuring out in real time, pixel-by-pixel, what is most likely to come next based upon an analysis of your prompt. If a human were drawing a picture of twelve kittens in a Ferrari, they would probably start by drawing the Ferrari and then draw the kittens in and around it. AI doesn’t work that way – it goes top-down, pixel-by-pixel. This is how hallucinations happen, how spelling errors happen, and so on. At this point in time, all LLMs have this problem.

Which brings me to my point – it *is* just a magic trick at this moment in time. Don’t get me wrong, it’s a really, really, really – David Copperfield making the Statue of Liberty disappear – type of great, legendary trick. But it’s still a trick. There’s no brain, there’s no logic, there’s no thought pattern. It’s just predicting patterns and spitting everything back to you in a very, very, very advanced manner. No different than our “Customers who bought X also bought Y” feature (note how I don’t call that an “algorithm” because it’s not – it’s just spitting back predictive patterns based upon pre-learned / pre-analyzed data.)

The LLM models themselves don’t take up trillions of gigabytes of storage. They are quite manageable in size – the current OpenAI model, I think, is estimated to be about 750GB. I think the new iPhone actually has more storage space than that! The LLM’s “database” / “training data” doesn’t contain a “copy” of the Internet, just a distilled summary of the training data – just like our “Customers who bought X also bought Y” feature, whose trained data takes up a tiny fraction of the space of the original order data.

So, back to the magic trick metaphor – there are people out there who are fooled by the trick and don’t quite understand how the LLMs work (I was fooled at first too). They are convinced that LLMs and other AI tech will take over the world, launch WWIII, achieve sentience, etc. – i.e., they will come alive. If one understands the underlying technology and how it’s just repeating back patterns in response to prompts, it becomes obvious that the current technology is still fairly primitive – even though it seems insanely powerful and dynamic. That’s the trick part. Heck, when I first saw it work, I was like, “no way, there has got to be an army of people in India typing this stuff back to me.” But after I dived into it some more and began to see the parallels between the LLMs and our “Customers who bought X also bought Y” feature, it became a bit more apparent to me what we were seeing.

Conclusion? The technology is amazing. The feeling I had when I ran the first search in our “Customers who bought X also bought Y” feature – that *is* exactly the same feeling I had when I first used ChatGPT more than a year ago. But, at the same time, since I understand how it works, I understand its limitations, and more importantly, I understand why it makes errors and why it’s not likely to “take over the world” – at least any time in the semi-near future. Neat trick indeed, but at the end of the day, it’s still more like an advanced Texas Instruments calculator from the late 1970s than it is like a human being.

Okay, so the words "magic trick" may be a little harsh and "click bait", but it's designed to make my point that it's more of an illusion than it is a thinking, learning being like some people believe it to be...

Thoughts?

-Wayne


Last edited by Wayne 962; 11-19-2025 at 07:32 PM..
Old 11-16-2025, 02:58 PM