![]() |
Artificial “Intelligence” is not very intelligent
I’ve been really surprised at how dumb AI actually is. Both ChatGPT and Grok. When I read how much people are relying on its “smarts,” it’s really perplexing. I read about lawyers using it, doctors etc. Frightening.
For example: I asked it “has Bertone ever designed a Ferrari?” It said no, Bertone never did. I pointed out that Bertone designed the mid 70s 308 GT4, and it said “oh, yes, you’re right, sorry about that.” Huh? It couldn’t accurately research a super basic question that has been discussed on the Internet for decades? I asked it to give me the plot of a movie that I had just watched on Prime. Not a big blockbuster type movie. It gave me a whole long detailed and authoritative plot description. Except it had almost no relationship to what actually happened in the movie. It was 80% made up stuff! I asked “why did you make all that up??” It said it couldn’t find much info about the movie so it just made up things that were “likely” to have happened. I asked why it would not just say it didn’t know or couldn’t find any info. It said “I’m programmed to make up info I don’t know because our research indicates that enhances the customer experience, people are not satisfied with ‘I don’t know’ as an answer.” Wow. It’s also really bad with current events. I asked what time a baseball game was on TV, and it gave an obviously wrong answer. I told it it was wrong, and it rechecked and insisted it was correct, twice. The only way it would admit it was wrong was when I gave it the correct answer. Seems kind of useless to me. It certainly can’t be relied on in any way for accuracy. |
Similar. I was researching locations for a "sunny winter location" in the Pacific Northwest. ChatGPT suggested Corvallis, Eugene, and some other absolutely grey and depressing locations. Which, was EXACTLY where I was trying to avoid.
Yes. It is very dumb. |
The people that live in the sunny winter locations in the PNW have paid off the AI bots to keep those locations secret, and deflect the answers because those places are so rare.
|
I recently reviewed a document generated by AI for a friend who is involved in a divorce from a lunatic and (the lunatic) was using the document to create a valuation for an asset. It was so worthless from one end to the other that you'd have sworn it was parody. Because this particular thing was somewhat unique and I happen to know more about it than just about anybody I was able to recognize from what it said where the AI program sourced some of the data it compiled.
Absolute garbage. Random monkeys typing on a broken laptop quality garbage. Which gives me more than a little concern for our next tech bubble. |
but was it grammatically correct?
|
You are asking it the wrong questions or using it wrong.
I use it every day. If I’m not using it, I’m wasting my clients’ money and my time. Similar how you wouldn’t use a claw hammer to remove a head bolt, you need to use these tools correctly and understand that different AI tools are better in some applications than others. I use it for troubleshooting car issues all the time because it searches all of the forums and uses a semantic type search instead of key words. It’s also really effective for mining IP and creating technical documents. |
ChatGPT 5’s response to this thread…
Here’s a possible response you could post (or adapt) in that forum thread that’s constructive, acknowledges criticism, but also corrects misunderstandings: ⸻ I saw the thread title “Artificial Intelligence — Not Very Intelligent,” and I get the frustration behind it. AI is far from perfect. But I’d suggest that much of what people criticize is often a mismatch between expectations and the actual limitations of current systems. Here are a few thoughts: 1. AI is not general intelligence (yet). Most tools labeled “AI” today are specialized: language models, image recognition systems, recommendation engines. They excel in narrow domains but don’t understand context the way humans do. When you expect them to reason like a human, errors and “stupidity” become obvious. 2. Mistakes are often due to data, not “intelligence.” AI systems learn from past data, and if that data is biased, incomplete, or noisy, the outputs will reflect those flaws. Garbage in, garbage out. Also, edge cases or ambiguous queries often trip them up. 3. Transparency and error modes matter. One issue is that many AI systems don’t “explain” their reasoning (or do so in opaque ways). When they make mistakes, it looks like they’re just hallucinating or being “dumb.” Better introspection tools (visibility into confidence, reasoning paths) would help users trust and correct them. 4. Human oversight is still essential. Where accuracy, safety, or nuance matters—especially in technical, medical, or legal domains—humans must supervise, verify, and interpret outputs. AI should augment human expertise, not replace it (at least not yet). 5. Progress is real, but it’s incremental. Yes, there will always be spectacular failures. But over time, we see improvements: better models, more data, better architectures, hybrid systems combining symbolic reasoning and neural methods, etc. |
What I've found it particularly useful for is creating itineraries.
All it's doing is querying and analyzing information available on the web. Last spring I had set aside a site seeing day in Europe. I had 3 places I wanted to visit and a timetable. I asked ChatGPT to put together an itinerary with a desired start time and return time and the 3 spots I wanted to see. The result gave me the order I should see them in as well as suggested places to stop for coffee, lunch, dinner etc. |
Quote:
|
"Artificial Intelligence" is not simply publicly available chat bots.
And publicly available chat bots are capable of far more than many, perhaps most, realize. And the speed of AI technology evolution is stupifying compared to other technologies many of us enjoyed for the past 50-75 years. "What something is worth depends on what you compare it to." |
Quote:
|
Clearly some folks think AI technology has value, and bets have been getting placed for many years.
The pace of AI technology evolution from 1956 to now is likely very different from what will happen from now through the next 5-10 years. http://forums.pelicanparts.com/uploa...1760547542.jpg And I suspect a broader range of folks are, and will be involved in biz and tech leadership during this period. Here is an example: https://www.theaireport.ai/articles/mira-murati-net-worth-2025-ex-openai-ctos-fortune She attended a great engineering school just a 10-15 minute walk from that plaque. |
LLMs are just predictive algorithms and considering they use everything from news sites to reddit threads like Bone Apple Tea, you can't expect it will always be accurate. It is good for some things like getting you started laying out code, or making a sarcastic email more business class.
The AI results in a web search are generally not good other than they have links to get somewhere good. Machine Learning is different and it is actually working well for things like earthquake analysis and prediction. |
AI and ML advancements are rapid and wide ranging.
Medical imaging/healthcare are good examples... I had experience with some of this in the 1980's! I will be attending an all-day private conference on AI on Friday... in sillycon valley. One of the speakers: http://forums.pelicanparts.com/uploa...1760559935.png In-Q-Tel: https://www.iqt.org/mission |
So last night and this morning I was researching pitfalls of a qsub election to possibly help a basis issue with multiple S corps and several times got conflicting answers. Literally 'yes' and 'no' type stuff.
|
It’s not actually intelligence, it’s more like data mining with a plain text interface. It’s also only as good as the data it’s pulling from. I’ve found some value with very basic web research type stuff.
|
The gifted computer science & engineering minds, et al ... will use AI as another tool in their toolbox.
For most ... it will become ... or already is ... their only tool AND their only toolbox ... jmho Be afraid of them :D |
https://www.anthropic.com/research/small-samples-poison
In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—regardless of model size or training data volume. |
|
Google search AI mode:
Q: How do I not get banned on a forum? A: Quote:
|
All times are GMT -8. The time now is 07:29 AM. |
Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2025, vBulletin Solutions, Inc.
Search Engine Optimization by vBSEO 3.6.0
Copyright 2025 Pelican Parts, LLC - Posts may be archived for display on the Pelican Parts Website