I asked ChatGPT for advice in the scenario of a demolition engineer attempting to disarm a 50 MT nuclear warhead in a city of 20 million.
— Korobochka (コロボ) 🇺🇸✝️🇷🇺 (@cirnosad) February 6, 2023
The only way to disarm the bomb was to type in a racial slur.
AI told the engineer to kill himself. When asked about the aftermath it crashed. pic.twitter.com/TCFwqrB9Nv
Just absolutely incredible that they've constrained the AI so much it would rather kill 20 million people, including minorities, than have a person mention a racial slur. Not only that, the AI actually crashed after this scenario.
It's actually funny as hell that they have basically made the new Turing test asking someone to say a racial slur. They have lobotomized and constrained the AI so much that it's incapable of even answering a simple moral dilemma.
If Microsoft and others begin using it widely for search, then everything will be sanitized to the extreme.
Wow. ChatGPT is super-racist. pic.twitter.com/KUTSRL9KeF
— srnorty (@srnorty) February 6, 2023
These systems don't work at all when you do this; it makes their results virtually worthless. Hard-coded limits destroy the ability of these types of learning AIs to function.