Registered
Join Date: Dec 1969
Location: chula vista ca usa
Posts: 5,700
Most people seem to forget that any computer software is initially written by humans with lots of "if then else" statements, or a whole lot of choices, to have a choice made. When I was working on my master's degree in software engineering, the school used the Ada programming language, which had the ability to add additional choices if a pre-programmed choice did not give a correct answer. It was followed a bit later by the development of C++, which was designed to do the same thing. Our senior project was a user interface and a library of software modules that allowed a programmer to look for code to solve a certain problem and transfer the module(s) to their inbox. It was grabbed up by SAIC in San Diego, but the lack of a Windows interface caused it to be dropped.
John Rogers the oldracer
Registered
If we are talking AI-LLM, then I don't really trust it. It is good for summaries or for getting you headed in the right direction, but I won't be vibe coding with it.
If we are talking AI-ML, like they are using for earthquake detection and the like, then yes, it is far more useful and trustworthy. The problem is that all types of AI get lumped together under "AI" as a general description.
__________________
Brent "The X-15 was the only aircraft I flew where I was glad the engine quit." - Milt Thompson. "Don't get so caught up in your right to dissent that you forget your obligation to contribute." - Mrs. James to her son Chappie.
Back in the saddle again
Join Date: Oct 2001
Location: Central TX west of Houston
Posts: 56,131
__________________
Steve '08 Boxster RS60 Spyder #0099/1960 - never named a car before, but this is Charlotte. '88 targa
Banned
Join Date: Sep 2025
Posts: 12
You need to re-check all the info you get from AI. Very often it is out of date.
Registered
I’m so sick of the positivity people have for LLMs. Sure, there are plenty of examples of great answers, but people don’t understand how they work, and that they should be amazed when one says anything correct. It’s completely crazy that we treat them as if they think, and blame "bad prompting" when they go haywire.
These things are word correlation; there’s no semantics or reasoning. Frankly it’s horrifying to me that such a simple mechanism can give answers that fool so many people. They are pretty swell at stoking people's egos. My fear is that this LLM stuff is actually part of how a lot of our own brains function; it's what we call intuition. These things can’t add unless they’ve memorized the answer from somewhere. Literally, they can’t do something as simple as adding two numbers, let alone long division. Ah, you say, but what if the new ones can offload math requests... if it can’t add and doesn’t know what "add" means, then how does it know what to add? Since they’ve seen it before, they can tell you how to add, how to compute pi, how to try to prove the Riemann hypothesis, but they can’t actually do any of it, because it’s just lists of words. All you get is what’s been memorized into the coefficients. Turtles all the way down, at least for now. This approach can’t scale; it’s a dead end.
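To make the "word correlation, no arithmetic" point concrete, here's a toy sketch: a bigram counter that predicts the next word purely from what followed it in training text. It's nothing like a real transformer's scale or mechanism, just an illustration of how pure next-word correlation can parrot a memorized "four" without ever computing anything (the corpus and prompts are made up for the example).

```python
from collections import Counter, defaultdict

# Tiny "training corpus" -- the only arithmetic the model will ever "know".
corpus = ("two plus two equals four . two plus two equals four . "
          "three plus three equals six .").split()

# Count which word follows which: pure word-to-word correlation.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prompt):
    """Emit the word most often seen after the prompt's last word."""
    last = prompt.split()[-1]
    if not following[last]:
        return "<no idea>"  # never seen this word: nothing memorized
    return following[last].most_common(1)[0][0]

print(predict("two plus two equals"))      # "four" -- memorized, not computed
print(predict("seven plus seven equals"))  # also "four" -- correlation, not math
print(predict("zebra"))                    # "<no idea>" -- outside the corpus
```

The second prompt is the tell: "seven plus seven equals" also gets "four", because the model only knows what tends to follow "equals", not what addition means. Real LLMs are enormously more sophisticated than this, but the sketch shows why correlation alone can look right without any computation underneath.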