• floquant@lemmy.dbzer0.com

    In no way, given the chaotic context window from all the other models, were those tokens the appropriate next ones to pick, unless the generating world model predicting those tokens contained a very strange and unique mind within it that this was all being filtered through.

    Except for the fact that LLMs can only work reliably if they are made to pick the “wrong” token (not the most statistically likely one) some of the time: that’s the temperature parameter.
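
    For illustration, a minimal sketch of what temperature does to sampling (the toy logits and the function name are mine, not any particular model's API):

    ```python
    import numpy as np

    def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
        """Sample a token index from raw logits, softened by temperature."""
        # Higher temperature flattens the distribution (more "wrong" picks);
        # temperature -> 0 approaches greedy argmax (always the most likely token).
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()                          # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    # Toy vocabulary of 4 tokens: the top token wins far less often at high temperature.
    logits = [4.0, 2.0, 1.0, 0.5]
    print(sample_token(logits, temperature=0.2))   # almost always index 0
    print(sample_token(logits, temperature=1.5))   # other indices show up regularly
    ```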

    If the context window is noisy enough (as in, high-entropy), any kind of “signal” (coherent text) can emerge.

    Also, you know, infinite monkeys.