Pelican Parts Forums

Pelican Parts Forums (http://forums.pelicanparts.com/)
-   Off Topic Discussions (http://forums.pelicanparts.com/off-topic-discussions/)
-   -   AI projected cost build out (http://forums.pelicanparts.com/off-topic-discussions/1183070-ai-projected-cost-build-out.html)

masraum 08-28-2025 12:32 PM

Quote:

Originally Posted by rcooled (Post 12523548)
One use I'd like to see is AI applied to traffic signals, so that I don't constantly find myself sitting at red lights when there's no cross traffic in sight :mad:

Gov't network security has historically been a joke. I'm sure it's better these days. I'm also sure it's probably best at the top and worst at the bottom, which means that wherever the lights are controlled from (I assume municipal systems) is going to have the worst security, which in turn means it'll be vulnerable to attack and misuse.

But yes, synchronization of traffic lights for efficiency would be great. 30 years ago, I moved to Houston and worked in a bar downtown. The lights were synchronized so that if you drove 20-21 mph, you could hit a green light at every intersection. What I've noticed, though, is that over time the light sync timing drifts. When I'm driving home these days, I usually hit every red light between the office and the freeway, but last week it seemed like I hit a green every time, so the timing had changed in my favor (which is not usually the case).
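The math behind that kind of "green wave" is simple: each signal just needs to turn green a fixed offset after the previous one, equal to the block length divided by the progression speed. A rough sketch with made-up spacing (not Houston's actual layout):

Code:

# Rough green-wave offset calculation -- hypothetical numbers, not real Houston data.
MPH_TO_FPS = 5280 / 3600              # 1 mph = ~1.467 ft/s

progression_speed_mph = 20            # the speed the wave is timed for
block_length_ft = 350                 # assumed distance between signals

speed_fps = progression_speed_mph * MPH_TO_FPS
offset_s = block_length_ft / speed_fps    # how much later each light should go green

print(f"Each signal goes green about {offset_s:.1f} s after the previous one.")
# ~11.9 s with these numbers; if a controller's clock drifts by a few seconds,
# the whole corridor stops flowing, which matches the sync "going off" over time.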

stealthn 08-28-2025 12:38 PM

Here you go: create your own hacking campaign and make millions:

https://www.nbcnews.com/tech/security/hacker-used-ai-automate-unprecedented-cybercrime-spree-anthropic-says-rcna227309

Alan A 08-28-2025 04:52 PM

Quote:

Originally Posted by masraum (Post 12523364)
I'm not an AI adopter yet. It seems like it's lacking. I hear lots of "use it for coding," but then I also hear lots of "I used AI for coding and none of the code worked." I guess if you're a coder and need to create something from scratch, maybe asking AI for code and then being able to take the AI framework and fix it is better than having to start from scratch yourself. I don't know. AI does do a lot of stuff well that is hard to do without AI.

The problem as I see it is that if we decided to save some dough rather than spending it on AI, we'd end up way behind and that would put us at a distinct disadvantage when it is mature. And by that point, I believe the uses for it will be considerable.

Not investing in AI today would be like being in the world in the stone age and thinking "metal work looks like something really expensive and hard. Metal will never catch on. We shouldn't invest in learning metal work." At some point, you'd be using stone axes and knives and points and your neighbors will be using swords and guns and you'll be screwed.

Caveat - I use LLMs, plus other ML stuff. I don’t train them - that gets expensive fast.

The usefulness for coding depends heavily on the model. Opus is actually pretty good as a time saver if your google-fu is poor; ChatGPT is offshored-to-India level. The really slick stuff happens when you use it as an agent and let it write the code, vs. cut-and-paste suggestions.

It’s great for things that require text answers; chatbots and help content are two easy examples. It’s also great for image generation. We have some commercial stuff that turned out really cool, but to play around, go look at something like NightCafe.

It’s also decent - and improving rapidly - at tool use. That’s where it calls [provided] APIs to ‘do stuff’ and either orchestrates multiple calls to synthesize complex behavior or adds value.

We have something that acts as an NLP front end for query generation. The user types “show me all the widgets currently on sale that are marked down 15% or more that I bought in the last six months” instead of trying to fill out a form. It generates a SQL query from the text, then runs it for them by calling a tool that executes the query and returns the answer in a structured format. It’s a gimmick, but it has potential.
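Roughly like this, to give the flavor - the names and schema here are made up, and call_llm just stands in for whatever chat-completion API you're using:

Code:

# Sketch of the text-to-SQL tool-use pattern -- illustrative only.
# `call_llm` is a placeholder for the actual model API; the table/schema are invented.
import sqlite3

SYSTEM_PROMPT = (
    "Translate the user's request into a single SQLite SELECT statement over the "
    "table purchases(item, price, discount_pct, purchased_at). Reply with SQL only."
)

def run_query(sql: str) -> list[tuple]:
    """The 'tool' the model is allowed to call: execute the query, return rows."""
    conn = sqlite3.connect("store.db")
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

def answer(question: str, call_llm) -> list[tuple]:
    sql = call_llm(system=SYSTEM_PROMPT, user=question)   # English -> SQL
    # A production system would validate/whitelist the SQL before running it.
    return run_query(sql)                                  # structured rows back to the user

# answer("widgets on sale, 15%+ off, bought in the last six months", call_llm)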

I read a paragraph somewhere that described an LLM as a huge chunk of hardware and software that’s been designed to guess the next word to write out. It helps - when using one - to think of it that way.
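That description is pretty literal. A toy version of the generation loop looks something like this - predict_next_token here is just a stand-in for the actual (enormous) model:

Code:

# Toy sketch of LLM text generation: repeatedly pick the next token from a
# probability distribution conditioned on everything written so far.
# `predict_next_token` is a placeholder for the real neural network.
import random

def predict_next_token(context: list[str]) -> dict[str, float]:
    """Stand-in model: probability for each candidate next token."""
    return {"widgets": 0.5, "are": 0.3, "on": 0.15, "<end>": 0.05}

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = predict_next_token(tokens)
        # Sample one token according to the model's probabilities.
        next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_tok == "<end>":
            break
        tokens.append(next_tok)
    return tokens

print(" ".join(generate(["the"])))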

The one thing I can see it doing - soon - is disintermediating your PCP. Microsoft has thrown a bunch of $ into models for this and the diagnosis numbers look real promising.

Same for lawyers and any other knowledge based profession.

The big problem will come when the AI companies have to transition from burning $ to making a profit. The Register had a thought-provoking article on that, which I'll link:

https://www.theregister.com/2025/08/15/are_you_willing_to_pay/


Ok I’m on a mobile device. That’s enough hunt and peck for tonight.

Eric 951 08-29-2025 02:32 AM

Quote:

Originally Posted by Mahler9th (Post 12523575)
And don't forget chips.

I am learning just a little about next-level chips for boards that go into AI data centers (reviewing some start up pitches) ... tons of investments already made and more coming.

And of course everything related to the expansion of data center infrastructure.

And don't forget power supply. I thought bitcoin miners used a lot of juice; they pale in comparison to the AI servers. The amount of power required isn't going to come from solar or wind, so I am curious how they are planning to feed the beast.

cabmandone 08-29-2025 04:05 AM

Quote:

Originally Posted by Eric 951 (Post 12523816)
And don't forget power supply. I thought bitcoin miners used a lot of juice; they pale in comparison to the AI servers. The amount of power required isn't going to come from solar or wind, so I am curious how they are planning to feed the beast.

We're having a discussion about that in my stock chat thread. It appears point-of-use generation through natural gas turbines, fuel cells, and microgrids is what they're doing to feed the beast.

masraum 08-29-2025 04:30 AM

I used to work for a company that produced power. We had plants all over the US, but weren't a huge company. At the time, I believe none of the plants were coal and possibly none were oil. I think they were almost all natural gas.

I went to a plant in the Houston area that existed solely to provide power to a Bayer chemical plant that was next door (and you had to wear phosgene sensor badges while there). Also, some of the steam produced was provided to Bayer. I would think huge data centers for AI would have similar setups.

Alan A 08-29-2025 04:30 AM

Quote:

Originally Posted by cabmandone (Post 12523833)
We're having a discussion about that in my stock chat thread. It appears point-of-use generation through natural gas turbines, fuel cells, and microgrids is what they're doing to feed the beast.

Nuclear is also back in the frame.

cabmandone 08-29-2025 04:42 AM

Quote:

Originally Posted by Alan A (Post 12523849)
Nuclear is also back in the frame.

I've been researching that as a potential target for investment. From everything I've read, it's still a ways out on the horizon.

Deschodt 08-29-2025 07:19 AM

So far I've found AI to be really good at replicating/imitating things and behaviors. I am not sure how "intelligent" it really is; that's not the same thing as processing power. It scans for patterns and things that work in the world, reproduces them, adds an extra finger, and the like....

For simple coding it's amazing to me because it saved me work in a language I knew poorly. For writing it's very good; it's had a lot of samples to trawl. As for something truly demonstrative of intelligence, I am not convinced. Witness the models that turn racist or stupid, mostly because that's the garbage out there on the web that AI trawls for data to learn from.

I am a little worried about AI, but less and less these days. I think it will kill some service desk/phone operator type jobs, since it can be good at imitating speech/text and following a decision tree. It will definitely HELP with a lot of decision-tree jobs like basic medicine/law (help, not replace), and it will help with interpreting images (X-ray, MRI; again, HELP). So yes, some jobs will suffer a bit, but AI is capable of some grade-A bull$hit too, and it won't take more than a few big, visible flops for people to chill out a bit about where to use it. We're not putting it in control like Skynet yet, I hope. Just my opinion; I could be wrong, as it's evolving very fast.

I'll change my opinion when it comes up with truly lateral thinking and cures cancer. Until then it's a tool to try more stuff faster (but in the same vein we already think in), with a high cost in energy and resources.

zakthor 08-29-2025 07:54 AM

I'm a coder. I worked in the AI group of a FAANG. LLMs like ChatGPT and Claude work by correlation; they have no semantics. A word has no meaning, it's just spit out because of probability. Give these things enough training data and they will implicitly know all the words that people have ever strung together, but they aren't ‘thinking’. It's why they'll give you diet advice about what sorts of rocks you should eat.

With all the hoopla, I have tried a few times to get it to write me code. It is great for generating UI and stupid stuff that takes no thought. I've generated app framework code in languages I didn't know and it worked ‘good enough’. But every single time I've tried to get it to do something where I'd need to think about it, it without fail makes totally egregious errors. I debug, I see the problem, I snip out those 11 lines and ask what might be wrong, and the LLM has no clue; it comes up with ridiculous suggestions like that I've discovered a revolutionary algorithm - where in fact there's an assignment two lines too high.
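To illustrate the kind of thing I mean (made-up code, not the actual case), the bug class is as dumb as this:

Code:

# Hypothetical example of the failure mode described above: an assignment
# sitting two lines too high, so the loop divides by a stale total.
def fraction_of_total(values):
    total = sum(values)                    # BUG: computed before the filtering below
    values = [v for v in values if v > 0]
    # total = sum(values)                  # <- the assignment belongs here
    return [v / total for v in values]     # quietly wrong whenever negatives were dropped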

The thing has no dataflow analysis; it's reading code like text and trying to correlate its way out of it. Foolish.

I still have friends in the business who defend the ML coding but admit… yeah… there's stuff it can't do. But they're getting better! They say a cool trick is to just start over and maybe you'll get lucky. Sigh.

I totally love using LLMs, but my bar now is boilerplate. I'm super experienced, so its output is easy to understand. What I won't do is let it loose on something complicated, because when I care, literally 80% of what it makes is bad, broken, and wrong.

What worries me a bit is who is going to debug anything if we're not training new programmers.

So… in conclusion… this money being spent on LLMs is maybe a giant bubble. What isn't a bubble is applying the same tech to real problems. The way they can be trained to play games can be adapted to any sort of real problem, and there they will be super effective. Think ‘automatic science’.

Mahler9th 08-29-2025 08:08 AM

"Casually listening to Bloomberg, one of the personalities just said the investment to build an AI architecture will be $3 to $4 trillion, that's trillion, dollars by the end of the decade."

A cursory search reveals that a few folks may have asserted those kinds of numbers, but likely at the center of the fruit is the Nvidia CEO in the last week or so.

Why?

Here is a quote:

"For a data center costing as much as $60 billion, Nvidia can capture about $35 billion, Huang said."

Huang is CEO of Nvidia.

So you can see why. What is the "ceiling" for Nvidia? Is there one? What is the 5 year outlook?




"Can anyone here lay that out in broad strokes? What's the return? What can and certainly will, go wrong? Where does this money come from?"

The return? Just like any other return in capital spending. And at the same time, nothing like what humans have ever experienced before.

Things have already gone wrong, and that will continue. The range of possibilities is vast. Let's ask the LLMs this time next year what has gone wrong and what is going wrong.

Obviously the greatest proportion of this type of spend is likely private enterprise, and if so, that will likely continue. But of course governments have spent and will also spend. And of course this is likely, if not almost certainly, a projection of global expenditures.



"What will the impact on humanity be?"

Profound.

Enormous, all by itself.

But then add robotics, ML and other rapid tech advancements.

Let's look at just one area... energy.

https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works


"How is $4 trillion better spent over the next 5 years?"

Perhaps that would be up to the enterprises spending the ducats, and of course any of those that support them can draw their own conclusions as the ROI area under the curve accumulates.

On the gubmint side... the same is true.

Impacts of technology advancement and related infrastructure spending have been studied for a long time. But of course, information from such studies is available to far more people now than it was in 1969.

My current hypothesis is that $$ spent on AI infrastructure build-out will continue to have positive ROI in many predictable ways, and in many ways we cannot yet predict.

Continue.... how many Nvidia employees have been made millionaires in the past 5 years? How has their "sudden" wealth impacted.... real estate? Angel investing? The list goes on and on.

masraum 08-29-2025 08:38 AM

Quote:

Originally Posted by zakthor (Post 12523966)
I'm a coder. I worked in the AI group of a FAANG. LLMs like ChatGPT and Claude work by correlation; they have no semantics. A word has no meaning, it's just spit out because of probability. Give these things enough training data and they will implicitly know all the words that people have ever strung together, but they aren't ‘thinking’. It's why they'll give you diet advice about what sorts of rocks you should eat.

With all the hoopla, I have tried a few times to get it to write me code. It is great for generating UI and stupid stuff that takes no thought. I've generated app framework code in languages I didn't know and it worked ‘good enough’. But every single time I've tried to get it to do something where I'd need to think about it, it without fail makes totally egregious errors. I debug, I see the problem, I snip out those 11 lines and ask what might be wrong, and the LLM has no clue; it comes up with ridiculous suggestions like that I've discovered a revolutionary algorithm - where in fact there's an assignment two lines too high.

The thing has no dataflow analysis; it's reading code like text and trying to correlate its way out of it. Foolish.

I still have friends in the business who defend the ML coding but admit… yeah… there's stuff it can't do. But they're getting better! They say a cool trick is to just start over and maybe you'll get lucky. Sigh.

I totally love using LLMs, but my bar now is boilerplate. I'm super experienced, so its output is easy to understand. What I won't do is let it loose on something complicated, because when I care, literally 80% of what it makes is bad, broken, and wrong.

What worries me a bit is who is going to debug anything if we're not training new programmers.

So… in conclusion… this money being spent on LLMs is maybe a giant bubble. What isn't a bubble is applying the same tech to real problems. The way they can be trained to play games can be adapted to any sort of real problem, and there they will be super effective. Think ‘automatic science’.

Good info, thanks. I saw a thing online the other day that was meant to be funny, and I thought it was. I'm just not sure if it was true or basically just a joke.

http://forums.pelicanparts.com/uploa...1756485468.jpg

zakthor 08-29-2025 01:15 PM

Quote:

Originally Posted by masraum (Post 12523988)
Good info, thanks. I saw a thing online the other day that was meant to be funny, and I thought it was. I'm just not sure if it was true or basically just a joke.

http://forums.pelicanparts.com/uploa...1756485468.jpg

I've had it refactor a few times. It scares me to try, but both times it did what I asked and saved me maybe 10 hours of tedium. It did make a few errors, but they were simple to fix.

So yeah, big thumbs up on the ability to refactor. Key: I was just asking for reorganization, not for it to cons up new approaches from existing working code.

I don't believe the current approach will ever be able to write good code.

Best characterization I have: the LLMs have recreated our intuition but not our logic or ‘executive function’. Funny, we think computers are logical, but right now these things are crazier than people.

GH85Carrera 08-29-2025 02:59 PM

https://www.news9.com/story/689ce9b641c17d2105424e83/google-9-billion-oklahoma-investment-pryor-stillwater-data-centers-workforce-programs-jobs-ai

Google is spending $9 billion in Oklahoma on a new data center. My wife's nephew works for some company that goes to data centers in Oklahoma, Texas, and Kansas swapping out failed components.

Mahler9th 08-29-2025 06:13 PM

"Google is spending 9 billion in Oklahoma on a new data center. My wife's nephew works for some company that goes to data centers in Oklahoma, Texas and Kansas and swapping out failed components."

So in this case, the spender is Google/Alphabet or whatever they're called.

I am almost certain they are publicly traded, and if so, it is likely that they have in some ways (consistent with gubmint laws) publicly expressed expectations related to AI infrastructure build-out expenditure ROI.

Their HQ is about 20 minutes from where I live.

Oh I see, their search engine provides ready access to an example:

https://www.utilitydive.com/news/google-cloud-blackstone-aws-us-ai-data-center-buildouts/753202/

$25b. Equal opportunity.

Yikes... ripple (the effect and not the wine):

https://www.theglobeandmail.com/investing/markets/stocks/FIX/pressreleases/33841020/play-it-cool-why-comfort-systems-usa-is-a-hidden-ai-winner/

https://www.emergenresearch.com/blog/top-10-companies-in-data-center-cooling-market-in-2024?srsltid=AfmBOoo-_wWikCKRFyNAXwSXBr1hhX2t4Vi-JXqj-SjvW5jDXxXBzp1P

