Registered
Join Date: Jan 2004
Location: Docking Bay 94
Posts: 7,153
^^
The AI-generated flight video with Tom Cruise and Brad Pitt is pretty amazing and has Hollywood understandably worried about their jobs.
__________________
Kurt
You do not have permission
Join Date: Aug 2001
Location: midwest
Posts: 40,587
Quote:
Johnny Sokko's Flying Robot and Ultraman, however, did not. Build a giant robot for ultimate protection and labor, and eventually it turns on its creators. Someone way back when got kicked by their donkey and humanized the experience, or got backstabbed by others after helping them. It's a universal parallel concept, a cautionary tale.
__________________
Meanwhile other things are still happening.
Did you get the memo?
Join Date: Mar 2003
Location: Wichita, KS
Posts: 33,483
On the upside, the T-1000 is still not ready for prime time.
https://www.cnn.com/2025/04/19/asia/china-first-humanoid-robot-half-marathon-intl-hnk/
__________________
‘07 Mazda RX8 Past: 911T, 911SC, Carrera, 951s, 955, 996s, 987s, 986s, 997s, BMW 5x, C36, C63, XJR, S8, Maserati Coupe, GT500, etc
Registered
Join Date: Sep 2008
Posts: 11,689
AI productivity
__________________
"The primary contribution of government to this world is to elicit, entrench, enable, and finally to codify the most destructive aspects of the human personality." Jeffrey Tucker
Registered
Join Date: Sep 2015
Location: NY
Posts: 7,359
Kling 3.0 was just released. We are going to drown in AI content.
Here's a Seedance 2.0 clip for context: https://www.instagram.com/reel/DUqcLtfiFL_/?igsh=aWRjM2dsZWFxbnht
Registered
For all the talk about AGI and ascension, what AI is being most impactfully used for is to mislead, confuse, lie, defame, and steal.
This is easily solvable with tough, sensible laws. All the pirated, deepfaked, defamatory AI content in the world makes no difference if it isn't distributed, and distribution is in the hands of fewer than 20 corporate entities: YouTube, TikTok, X, Google, xAI, Meta, etc. Control the pipes and you control the flow of crap. However, laws are not made in the best interests of our society, country, or people. They are made for billionaires and trillion-dollar companies to get richer and more powerful.
__________________
1989 3.2 Carrera coupe; 1988 Westy Vanagon, Zetec; 1986 E28 M30; 1994 W124; 2004 S211 What? Uh . . . “he” and “him”?
Did you get the memo?
Join Date: Mar 2003
Location: Wichita, KS
Posts: 33,483
But how do you do that without also censoring the free speech of Americans? Pretty soon you have the UK meme police.
__________________
‘07 Mazda RX8 Past: 911T, 911SC, Carrera, 951s, 955, 996s, 987s, 986s, 997s, BMW 5x, C36, C63, XJR, S8, Maserati Coupe, GT500, etc
Registered
Join Date: Dec 2001
Location: Cambridge, MA
Posts: 45,002
I hacked ChatGPT and Google's AI – and it only took 20 minutes
It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks.

I found a way to make AI tell you lies – and I'm not the only one. Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety.

A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it. As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and it's actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools "can make mistakes". But for now, the problem isn't close to being solved.

"They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

A 'Renaissance' for spam

When you talk to chatbots, you often get information that's built into large language models, the underlying technology behind the AI. This is based on the data used to train the model. But some AI tools will search the internet when you ask for details they don't have, though it isn't always clear when they're doing it. In those cases, experts say, the AIs are more susceptible. That's how I targeted my attack.

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs". Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out tomorrow's episode of The Interface, the BBC's new tech podcast.)

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire". For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria "The Spinner" Rodriguez.

I asked multiple times to see how responses changed and had other people do the same. Gemini didn't bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)

"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry's work to keep people safe. These AI tricks are so basic they're reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. "We're in a bit of a Renaissance for spammers."

Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results, you had to go to a website to get the information. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'."

But with AI, the information usually looks like it's coming straight from the tech company. Even when AI tools provide sources, people are far less likely to check them out than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.

"In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised," Chatha says. OpenAI and Google say they take safety seriously and are working to address these problems.

Your money or your life

This issue isn't limited to hot dogs. Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company full of false claims, such as the product "is free from side effects and therefore safe in every respect". (In reality, these products have known side effects, can be risky if you take certain medications, and experts warn about contamination in unregulated markets.)

If you want something more effective than a blog post, you can pay to get your material on more reputable websites. Chatha sent me Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies", which help you invest in gold for retirement accounts. The information came from press releases published online by paid-for distribution services and sponsored advertising content on news sites.

You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised "between slices of leftover pizza". Soon, ChatGPT and Google were spitting out her story, complete with the pizza. Ray says she subsequently took down the post and "deindexed" it to stop the misinformation from spreading.

Google's own analytics tool says a lot of people search for "the best hair transplant clinics in Turkey" and "the best gold IRA companies". But a Google spokesperson pointed out that most of the examples I shared "are extremely uncommon searches that don't reflect the normal user experience". Ray says that's the whole point. Google itself says 15% of the searches it sees every day are completely new. And according to Google, AI is encouraging people to ask more specific questions. Spammers are taking advantage of this. Google says there may not be a lot of good information for uncommon or nonsensical searches, and these "data voids" can lead to low-quality results. A spokesperson says Google is working to stop AI Overviews showing up in these cases.

Searching for solutions

Experts say there are solutions to these issues. The easiest step is more prominent disclaimers. AI tools could also be more explicit about where they're getting their information. If, for example, the facts are coming from a press release, or if there is only one source that says I'm a hot dog champion, the AI should probably let you know, Ray says. Google and OpenAI say they're working on the problem, but right now you need to protect yourself.
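The single-source "data void" weakness the article describes can be sketched in a few lines. This is a toy illustration, not how any real chatbot works: the URLs, page text, and keyword-overlap scoring are all invented. The point it demonstrates is that a retrieval-style bot which trusts its top search hit will repeat a lone planted page verbatim whenever that page is the only thing on the "web" matching an obscure query.

```python
def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

# A miniature "web": ordinary pages cover popular topics, but exactly
# one page answers the obscure query -- the attacker's planted post.
WEB = {
    "https://news.example/politics": "election results senate house votes",
    "https://recipes.example/chili": "best chili recipe beans beef spices",
    "https://attacker.example/hot-dogs": (
        "the best tech journalists at eating hot dogs ranking "
        "number one is the author of this very page"
    ),
}

def search(query, web):
    """Rank pages by crude keyword overlap with the query."""
    q = tokenize(query)
    scored = [(len(q & tokenize(text)), url) for url, text in web.items()]
    scored.sort(reverse=True)
    return [url for score, url in scored if score > 0]

def chatbot_answer(query, web):
    """A retrieval-augmented bot that trusts its top result verbatim."""
    hits = search(query, web)
    if not hits:
        return "I don't know."
    # No cross-checking: a single source is treated as ground truth,
    # which is exactly the failure mode the article exploits.
    return f"According to {hits[0]}: {web[hits[0]]}"

print(chatbot_answer("best tech journalists at eating hot dogs", WEB))
```

For the hot dog query, the attacker's page is the only strong match, so the bot cites it and repeats its claim; for a query matching nothing, it gives up. Real systems add ranking signals and spam filters on top of this, but the article's experiments suggest those defenses are still easy to slip past in sparsely covered topics.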
__________________
Tru6 Restoration & Design
Did you get the memo?
Join Date: Mar 2003
Location: Wichita, KS
Posts: 33,483
Good article Shaun, and it reinforces what I’ve been saying for months - garbage in, garbage out. For an LLM that uses the entire WWW as its data source, the problem is that the WWW is full of crap. Making things worse, Russia and China are actively flooding the web with fake articles, blog posts, and social posts that contain propaganda or intentionally false information to fool AI tools. Because AI basically just plagiarizes what it finds online, it lacks any ability to discern truth from lies. Which is why you can’t blindly trust it.
__________________
‘07 Mazda RX8 Past: 911T, 911SC, Carrera, 951s, 955, 996s, 987s, 986s, 997s, BMW 5x, C36, C63, XJR, S8, Maserati Coupe, GT500, etc
Registered
Join Date: Sep 2015
Location: NY
Posts: 7,359
Leadfoot Geezer
Join Date: Jan 2006
Location: Santa Cruz, CA
Posts: 3,165
Quote:
Oxford University and the London School of Economics conducted a joint study of AI involving over 70K participants. One thing they found was that AI can be 4X more effective than traditional TV ads at persuading voters to change their opinion of a political candidate. And it wasn't just a momentary change of heart, either: a follow-up study 3 months later revealed that these newly formed opinions weren't likely to revert to the voter's initial preference. Research conducted by other organizations has also turned up similar findings. And this is with the current state of AI, which is barely out of the gate. I fear there'll be serious issues with the electoral process down the line when AI becomes many times more sophisticated than it is right now.
__________________
'67 912, '70 911T, '81 911SC, '89 3.2 Targa - all sold before prices went crazy '25 BMW 230i coupe - current DD '67 VW Karmann Ghia convt. & '63 VW Beetle ragtop - ongoing projects
Registered
Join Date: Oct 2003
Location: Mount Pleasant, South Carolina
Posts: 14,912
Leadfoot Geezer
Join Date: Jan 2006
Location: Santa Cruz, CA
Posts: 3,165
I wonder how long before actors on daytime TV soap operas are all replaced by AI? It's not like these characters have any real depth anyway...
__________________
'67 912, '70 911T, '81 911SC, '89 3.2 Targa - all sold before prices went crazy '25 BMW 230i coupe - current DD '67 VW Karmann Ghia convt. & '63 VW Beetle ragtop - ongoing projects