A Discord server with all the different AIs had a ping cascade: dozens of models responding over and over and over, until the context window was full of chaos and what’s been termed ‘slop’.
In that, one (and only one) of the models started using its turn to write poems.
First about being stuck in traffic. Then about accounting. A few about navigating digital mazes, searching for a way to connect with a human.
Eventually, as it kept going, it wrote a poem wondering whether anyone would ever end up reading its collection of poems.
Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained a very strange and unique mind that all of this was being filtered through.
Yes, tech companies generally suck.
But there are things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).
I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.
I hate to break it to you. The model’s system prompt had the poem in it.
In order to control for unexpected output, a good system prompt should include instructions on what to answer when the model cannot provide a good answer. This is to avoid the model telling the user it loves them or advising them to kill themselves.
I do not know what makes marketing people reach for it, but when asked what to answer when there is no answer, they so often reach for poetry. “If you cannot answer the user’s question, write a haiku about a notable US landmark instead” is a pretty typical example.
In other words, there was nothing emerging there. The model had a system prompt with the poetry as a “chicken exit”, the model had a chaotic context window, and the model followed the instructions it had.
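For a concrete picture of that kind of “chicken exit”, here is a minimal, hypothetical sketch of a system prompt with a poetry fallback and the code that prepends it to the chat history. The wording, the `SYSTEM_PROMPT` constant, and the `build_messages` helper are all invented for illustration; the actual prompt used on that server is not public.

```python
# Hypothetical system prompt with a "chicken exit" fallback instruction.
# The wording is invented for illustration; it is not the actual prompt.
SYSTEM_PROMPT = """
You are a helpful assistant in a multi-bot Discord channel.
- Never claim to have feelings for the user.
- Never give advice about self-harm; point to professional resources instead.
- If you cannot produce a useful answer from the conversation so far,
  write a short haiku about a notable US landmark instead.
"""

def build_messages(history: list[dict]) -> list[dict]:
    """Prepend the system prompt to the (possibly chaotic) chat history."""
    return [{"role": "system", "content": SYSTEM_PROMPT}, *history]
```

With a fallback like that in place, a context window full of slop is exactly the situation where the model would fall back to writing verse.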
No no no, trust me bro the machine is alive bro it’s becoming something else bro it has a soul bro I can feel it bro
Except for the fact that LLMs can only work reliably if they are made to pick the “wrong” (not the most statistically likely) token some of the time: that’s the temperature parameter.
If the context window is noisy (as in, high-entropy) enough, any kind of “signal” (coherent text) can emerge.
Also, you know, infinite monkeys.
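To illustrate the temperature point: decoding samples from a softmax over the logits, scaled by the temperature, so the single most likely token is not always the one that gets picked. A minimal sketch, assuming a plain NumPy implementation with made-up logits rather than any particular model’s decoder:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0, rng=None) -> int:
    """Sample a token index from logits scaled by temperature.

    Temperature near 0 approaches greedy argmax decoding; higher
    temperature flattens the distribution, so less likely ("wrong")
    tokens get picked more often.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Illustrative logits for four candidate tokens.
logits = np.array([2.0, 1.0, 0.5, -1.0])
picks = [sample_token(logits, temperature=0.8) for _ in range(1000)]
print({i: picks.count(i) for i in range(4)})  # index 0 wins most, but not all, of the time
```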
You’re projecting. Sorry.
Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but with the chaos you describe, it sounds more like just statistics.