Pelican Parts Forums

Pelican Parts Forums (http://forums.pelicanparts.com/)
-   Off Topic Discussions (http://forums.pelicanparts.com/off-topic-discussions/)
-   -   And they said AI will replace programmers... ha! (http://forums.pelicanparts.com/off-topic-discussions/1175164-they-said-ai-will-replace-programmers-ha.html)

id10t 03-14-2025 05:15 AM

And they said AI will replace programmers... ha!
 
Via Slashdot - https://developers.slashdot.org/story/25/03/13/2349245/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead

"On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants."

pwd72s 03-14-2025 11:19 AM

HAL reborn?

masraum 03-14-2025 12:25 PM

That's AWESOME!

I'm sharing it at work, LOL!

zakthor 03-19-2025 06:16 AM

Not once has AI given me working code, so I remain a skeptic, but it recently helped me.

I had a cluster of 400 lines of source, a giant mess with no comments, that decoded a proprietary binary format; it looked like it came from a decompiler.

On a whim I pasted it into Claude and asked what the code was doing, then whether it looked like a known algorithm, etc. In about 5 minutes we figured out it was a very sloppy implementation of XOR with an array of random bytes - a one-time pad!

I asked it to rewrite the code and of course it made a mess: it missed all the boundary conditions and the output didn't work. It went straight back to acting like a BS-ing high schooler. I rewrote it in 12 lines of code and it worked the first time.
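For anyone curious, a short decode loop like the one described might look something like this (a hypothetical sketch; the names and sample data are made up, since the actual proprietary format isn't shown):

```java
// Hypothetical sketch: decoding a buffer that was XORed with an array of
// random key bytes (a one-time pad, when the key is random and used once).
public class XorDecode {
    // XOR each data byte with the corresponding pad byte.
    // XOR is its own inverse, so the same routine both encodes and decodes.
    static byte[] xorWithPad(byte[] data, byte[] pad) {
        if (pad.length < data.length) {
            throw new IllegalArgumentException("pad shorter than data");
        }
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ pad[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] plain = "skid marks".getBytes();
        byte[] pad = {12, -7, 33, 101, -2, 54, 9, -128, 77, 3};
        byte[] cipher = xorWithPad(plain, pad);
        // Applying the same pad again recovers the original bytes.
        System.out.println(new String(xorWithPad(cipher, pad))); // prints "skid marks"
    }
}
```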

Claude didn't know right away what the mess was, but I asked questions and we converged on identifying what it did. Very useful. I had never figured it out before because the process of refactoring would have been so tedious. I'm not sure I would have succeeded if the obfuscated algorithm had been much more complex.

I remain skeptical that non-programmers will be able to build reliable programs with LLM-based AI. The point of reliability is that the program works even in cases you haven't tested. An LLM isn't the right model; fundamentally it can only correlate and copy. It appears we've solved human intuition; now we need to teach computers to use logic.

id10t 03-19-2025 07:32 AM

Quote:

Originally Posted by zakthor (Post 12431326)
Not once has AI given me working code, so I remain a skeptic, but it recently helped me.

I do a lot of ETL work based on CSV or CSV-like files. I can copy/paste a header row into ChatGPT and have it generate a Java POJO that will import the file and match based on the OpenCSV bindings/annotations, and then ask it to create a DB2 CREATE TABLE SQL statement.

I used to do the same thing via shell scripting and complex search/replace stuff, but even having to "fix" a few things in the SQL statement (variable types and sizes, adding some columns to match our internal standards, grant statements), it is much quicker/easier to do it via ChatGPT than the command line.
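The scaffolding step described above can be sketched in plain Java: turn a header row into OpenCSV-style annotated fields plus a CREATE TABLE statement. This is only an illustration of the idea, not anyone's actual tooling; the table name, the blanket VARCHAR(255) type, and the String fields are assumptions, which is exactly the kind of thing that still needs fixing by hand afterward:

```java
// Hypothetical sketch: generate POJO field boilerplate (OpenCSV-style
// @CsvBindByName annotations) and a CREATE TABLE statement from a CSV header.
import java.util.Arrays;
import java.util.stream.Collectors;

public class CsvScaffold {
    // One annotated String field per CSV column (types left for manual fixup).
    static String pojoFields(String headerRow) {
        return Arrays.stream(headerRow.split(","))
                .map(String::trim)
                .map(col -> "@CsvBindByName(column = \"" + col + "\")\n"
                        + "private String " + col.toLowerCase() + ";")
                .collect(Collectors.joining("\n"));
    }

    // Every column as VARCHAR(255); sizes and types still need hand-editing.
    static String createTable(String tableName, String headerRow) {
        String cols = Arrays.stream(headerRow.split(","))
                .map(String::trim)
                .map(col -> "  " + col.toUpperCase() + " VARCHAR(255)")
                .collect(Collectors.joining(",\n"));
        return "CREATE TABLE " + tableName + " (\n" + cols + "\n);";
    }

    public static void main(String[] args) {
        String header = "first_name,last_name,email";
        System.out.println(pojoFields(header));
        System.out.println(createTable("CONTACTS", header));
    }
}
```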

zakthor 03-19-2025 08:33 AM

Quote:

Originally Posted by id10t (Post 12431370)
I do a lot of ETL work based on CSV or CSV-like files. I can copy/paste a header row into ChatGPT and have it generate a Java POJO that will import the file and match based on the OpenCSV bindings/annotations, and then ask it to create a DB2 CREATE TABLE SQL statement.

I used to do the same thing via shell scripting and complex search/replace stuff, but even having to "fix" a few things in the SQL statement (variable types and sizes, adding some columns to match our internal standards, grant statements), it is much quicker/easier to do it via ChatGPT than the command line.

With something like that, it's hopefully apparent immediately if it messes up. Or maybe you don't care if it's perfect. I've had random columns of data deleted when I asked for an array of structures to be reorganized; it's very frustrating to me when the tool isn't reliable. I'm honestly not sure when I'll be able to trust it.

I'm writing compiler, library, and system code in C++, and there's a lot that needs to be done a certain way that can't be caught through testing. We critically cross-examine each other in code review, and everything needs to be clear and justified. If I can't get clear reasons, explanations, and design choices for a blob of code, then it's not going in.

I have used it to try to write tests that exercise boundary conditions, and it succeeds about 25% of the time, which is great for how quick it is; but even then, everything it touches requires careful verification.

Captain Ahab Jr 03-19-2025 09:27 AM

011101000100011110000

Above is me trying to write in the language you guys speak :D

gacook 03-19-2025 11:58 AM

I feel ya, Captain.

I'm an IT guy by education and trade; I learned early on how much I love my coders because coding sucks and I never want to do it.

stevej37 03-19-2025 12:06 PM

Quote:

Originally Posted by Captain Ahab Jr (Post 12431437)
011101000100011110000

Above is me trying to write in the language you guys speak :D


No need to curse at us. :D

Captain Ahab Jr 03-19-2025 12:41 PM

Quote:

Originally Posted by stevej37 (Post 12431510)
No need to curse at us. :D

:D I just would have absolutely no idea where to even start if I was asked to write code

Arizona_928 03-19-2025 12:53 PM

ChatGPT has been doing this for the last few versions...


They also rewrite their boundary code.

id10t 03-19-2025 04:30 PM

Quote:

Originally Posted by Captain Ahab Jr (Post 12431522)
:D I just would have absolutely no idea where to even start if I was asked to write code

HelloWorld like the rest of us did :D
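For the record, the canonical first program looks like this (in Java, for instance; any language has its equivalent):

```java
// The traditional first program: print a greeting and exit.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!"); // prints "Hello, World!"
    }
}
```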

