• rottingleaf@lemmy.world · 8 hours ago

    tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement

    But at the same time, every time I described on Lemmy an experience of not maximizing engagement by maximizing conflict, I was downvoted to hell’s basement. That’s despite two of the three modern social media experience models being aimed at exactly that, namely the Facebook-like and the Reddit-like ones, the exception being the Twitter-like one (which is unfortunately vulnerable to bots). I mean, there’s less conflict on fucking imageboards, which were at some point considered among the most toxic places on the interwebs.

    (Something-something Usenet-like namespaces instead of the existing communities tied to instances, something-something identities also not tied to instances and being cryptographic, something-something subjective moderation: subscribing to moderation authorities of your choice would feel similar to joining a group, and the UI could even show a few combinations of the same namespace with a few different moderation authorities for it. Something-something a bigger role for client-side moderation, i.e. ignoring in the UI those people you don’t like. Ideally the only things really removed and not propagated to anyone would be stuff like calls for mass murder, stolen credentials, gore, real rape and CP. The “posting to a namespace versus posting to an owned community” dichotomy is important: the latter causes a “capture the field” reaction from humans.)
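
    Roughly, the subjective-moderation part could look something like the sketch below on the client side. This is only an illustration of the idea; the names here (Post, ModAuthority, NamespaceView) are made up for the example and aren’t any real fediverse API:

    ```python
    # Sketch: posts live in namespaces; readers pick which moderation
    # authorities (and personal ignore lists) filter their own view.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author_key: str   # cryptographic identity, not tied to an instance
        namespace: str    # posted to a namespace, not an owned community
        body: str

    @dataclass
    class ModAuthority:
        name: str
        hidden_authors: set[str] = field(default_factory=set)

        def allows(self, post: Post) -> bool:
            return post.author_key not in self.hidden_authors

    @dataclass
    class NamespaceView:
        """One UI 'combination': a namespace plus chosen authorities."""
        namespace: str
        authorities: list[ModAuthority]
        personal_ignore: set[str] = field(default_factory=set)

        def visible(self, posts: list[Post]) -> list[Post]:
            return [
                p for p in posts
                if p.namespace == self.namespace
                and p.author_key not in self.personal_ignore
                and all(a.allows(p) for a in self.authorities)
            ]

    posts = [
        Post("alice-pubkey", "tech.general", "useful reply"),
        Post("bob-pubkey", "tech.general", "flamebait"),
    ]
    strict = ModAuthority("strict-mods", hidden_authors={"bob-pubkey"})
    lenient = ModAuthority("lenient-mods")

    # Same namespace, two different moderation lenses side by side in the UI:
    print([p.body for p in NamespaceView("tech.general", [strict]).visible(posts)])
    print([p.body for p in NamespaceView("tech.general", [lenient]).visible(posts)])
    ```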

    • catty@lemmy.world · 2 hours ago

      …And under the current model, the egos of mods get crazy big as they watch their community army grow and realize they can shape it however they want. Even Stack Overflow suffered from this, and developers left in droves long before LLMs took its place.

      I do miss the original imageboards though, which used sage and made moderation a community-driven effort.