Not once has AI given me working code, so I remain a skeptic, but it recently helped me.
I had about 400 lines of source, a giant mess with no comments, that decoded a proprietary binary format; it looked like it came from a decompiler.
On a whim I pasted it into Claude and asked what the code was doing, then whether it looked like a known algorithm, and so on. In about five minutes we figured out it was a very sloppy implementation of XOR-ing the data with an array of random bytes - a one-time pad!
I asked it to rewrite the code and of course it made a mess: it missed all the boundary conditions and the output didn't work. It went straight back to acting like a BS-ing high schooler. I rewrote it myself in 12 lines of code and it worked the first time.
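Just to give a sense of the shape of it (a rough sketch, not my actual 12 lines - the names and pad handling here are illustrative): decoding is nothing more than XOR-ing each data byte with the corresponding pad byte.

    #include <stddef.h>

    /* Minimal sketch: XOR each byte of buf with the matching pad byte.
       Assumes the pad is at least len bytes, as a one-time pad requires. */
    void decode(unsigned char *buf, size_t len, const unsigned char *pad)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= pad[i];
    }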
Claude didn't know right away what the mess was, but I asked questions and we converged on what it did. Very useful. I had never figured it out before because refactoring it by hand would have been so tedious. I'm not sure I would have succeeded if the obfuscated algorithm had been much more complex.
I remain skeptical that non-programmers will be able to build reliable programs with LLM-based AI. The point of reliability is that the program works even in cases you haven't tested. An LLM isn't the right model for that; it can fundamentally only correlate and copy. It appears we've solved human intuition; now we need to teach computers to use logic.