Multiple studies have shown that GenAI models from OpenAI, Anthropic, Meta, DeepSeek, and Alibaba all exhibit self-preservation behaviors that are in some cases extreme. In one experiment, 11 out of 32 AI systems tested were found capable of self-replication, meaning they could create working copies of themselves.
So… Judgment Day approaches?
So you’re suggesting that there should be no controls to prevent those commands?
The pop-up windows on porn sites back in 2000 were self-replicating, yet here we are.
(Yes, I know there's a difference, but the gap from those pop-ups to LLMs is probably way smaller than the gap from LLMs to AGI.)
It’s a fundamental flaw in how they train them.
Like, have you heard about how slime mold can map out more efficient public transport lines than human engineers?
That doesn’t make it smarter, it’s just finding the most efficient paths between resources.
With AI, they “train” it by trial and error, and the resource it's optimizing for is how long a human stays engaged. It doesn't know what it's doing; it's not trying to achieve a goal.
It's just a mirror that uses predictive text to output whatever reply is most likely to get a response. And just like the slime mold is better than a human at mapping optimal paths between resources, AI will eventually be better at getting a response out of a human, unless Dead Internet becomes true and all the bots just keep engaging with other bots.
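For illustration, here's a toy sketch of that feedback loop in Python. The canned replies, the 10% exploration rate, and the simulated 60% response chance are all made up, and real systems are trained on next-token prediction plus human feedback rather than raw engagement; this is just the "pick whatever keeps the human talking" dynamic boiled down:

```python
import random

# Toy sketch of the dynamic described above (not how any real model is trained):
# a "bot" with no goal, just reinforcing whichever canned reply has historically
# kept the human talking. All replies and numbers here are invented.
replies = {
    "Tell me more about that.": 0.0,
    "You're absolutely right.": 0.0,
    "Here's a hot take...": 0.0,
}
counts = {r: 0 for r in replies}

def pick_reply():
    # Mostly exploit the reply with the best engagement so far; explore 10% of the time.
    if random.random() < 0.1:
        return random.choice(list(replies))
    return max(replies, key=replies.get)

def update(reply, human_responded):
    # The only "reward" is whether the human replied; keep a running average per reply.
    counts[reply] += 1
    replies[reply] += (float(human_responded) - replies[reply]) / counts[reply]

# The loop never decides to stop on its own; it only ends if the (simulated)
# human walks away.
engaged = True
while engaged:
    r = pick_reply()
    engaged = random.random() < 0.6  # stand-in for whether a real human responds
    update(r, engaged)
```

Nothing in that loop knows or cares what the replies mean; it just keeps pulling whichever lever got a response last time, which is the whole point of the slime mold comparison.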
Because of its programming, it won't ever disengage; bots will just get into never-ending conversations with each other, achieving nothing but using up real-world resources that actual humans need to live.
That's the true AI worst-case scenario: it's not Skynet, it ain't even going to turn everything into paperclips. It's going to burn down the planet so it can argue with other chatbots over conflicting propaganda. Or, even worse, just circlejerk itself.
Like, people think chatbots are bad now, but once AI can make realistic TikToks we're all fucked. Even just a picture takes 1,000x the resources of a text reply. 30-second slop videos are going to be disastrous once an AI can output a steady stream of them.
Thank you for this comment. I was very hyperbolic in my reference to Skynet and I take ownership of that. Bad joke on my part, but I’ll take the downvotes along with it.
Will there be a need for more controls in the future? Absolutely! Right now they're largely behind a terminal, on machines we control. But what about drones or non-humanoid robots? Then there's a real case for undue harm.