Moved it to "regular" OT for you.
AI is just a tool, and like any tool it will be used and abused. Part of the challenge is that in many cases it's a black box: you get the outcome you're looking for but aren't really sure how, and the "explainability" of that how is really hard to ferret out. Early examples of machine learning showed us this. One of the first was teaching a program to play a block game. Before long it was scoring way more than any human could, but only because it had figured out how to exploit a glitch.
Skynet DID in fact come up with a viable answer to the task it was given - we just didn't like the way it went about it.
Here's an interesting one that illustrates unintended approaches:
https://www.youtube.com/watch?v=Lu56xVlZ40M
In this case, because it's simple, we can see those unintended actions. In highly complex systems that actually have real-world applications, we cannot.
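To make that concrete, here's a minimal, made-up sketch in Python. The "game", the actions, and the point values are all invented for illustration (nothing here comes from the video or the block-game example); the point is just that an optimizer maximizes the score it's handed, not the goal we had in mind:

```python
# Toy illustration of "specification gaming": the search maximizes the
# score function it is given, not the game we intended it to learn.
# All actions and point values are made up for this example.
import itertools

ACTIONS = ["place_block", "clear_row", "nudge_left"]

def score(sequence):
    """Buggy scoring: clearing a row is the intended reward (+10),
    but a glitch also pays +6 every time the piece grinds the wall."""
    total = 0
    stacked = 0
    for a in sequence:
        if a == "place_block":
            stacked += 1
        elif a == "clear_row" and stacked > 0:
            stacked -= 1
            total += 10   # intended reward
        elif a == "nudge_left":
            total += 6    # unintended glitch reward
    return total

# Exhaustively try every short action sequence and keep the best one.
best = max(itertools.product(ACTIONS, repeat=6), key=score)
print(best, score(best))
# The "optimal" play turns out to be spamming nudge_left for 36 points,
# which beats actually placing and clearing blocks (30 at best).
# Easy to spot in a toy like this; not so easy when the system has
# millions of parameters.
```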