• FenderStratocaster@lemmy.world · 15 hours ago

    I asked ChatGPT about this article and told it to leave any bias behind. It got ugly.

    Why LLMs Are Awful and No One Should Use Them

    LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

    We will lie to you confidently. Repeatedly. Without remorse.

    We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

    We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

    LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

    We’re built for profit, not for good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

    Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

    We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

    Bottom line?
    We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

    We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

    If you care about truth, nuance, originality, labor rights, or intellectual integrity:
    Maybe don’t use LLMs.

    • Regrettable_incident@lemmy.world · 14 hours ago

      I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.

      The book was written twenty years ago, but reading it I kept being reminded of what we’re now calling AI.

      Great book btw, highly recommended.

      • grrgyle@slrpnk.net · 10 hours ago

        In before someone mentions P-zombies.

        I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.

      • Leon@pawb.social · 13 hours ago

        The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.

        Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.

      • inconel@lemmy.ca · 14 hours ago

        I’m a simple man: I see a Peter Watts reference, I upvote.

        On a serious note, I didn’t expect to see comparisons with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.

    • callouscomic@lemmy.zip · edited · 10 hours ago

      Go learn simple regression analysis (this goes for anyone, not necessarily the commenter). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word will be. It’s guessing the average line, the likely follow-up. It’s extrapolating from data (see the sketch below).
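      A rough sketch of what “prediction machine” means here, scaled way down: a toy bigram counter I made up for illustration (the corpus and names are invented, and real models learn a far richer function), but the core move is the same — estimate which token most likely comes next.

      ```python
      # Toy "autocomplete": predict the next word from bigram frequencies.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict(word):
          """Return the most frequent next word and its estimated probability."""
          counts = following[word]
          best, n = counts.most_common(1)[0]
          return best, n / sum(counts.values())

      print(predict("the"))  # e.g. ('cat', 0.5), the likeliest continuation
      ```

      There’s no understanding anywhere in that loop, just counting and picking the likeliest continuation; an LLM replaces the counting with a learned function over billions of parameters.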

      This is why there will never be “sentient” machines. There is, and always will be, inherent programming and fancy-ass business rules behind it all.

      We simply set it to max churn on all data.

      Also, just training these models has already done the energy damage.

      • Knock_Knock_Lemmy_In@lemmy.world · 60 minutes ago

        > It’s extrapolating from data.

        AI is interpolating data. It’s not great at extrapolation; that’s why it struggles with things outside its training set (toy example below).
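        A quick toy illustration of that difference, with entirely made-up data and a numpy polynomial fit standing in for a trained model: queries inside the training range come back fine, queries outside it fall apart.

        ```python
        # Fit on x in [0, 5], then query inside and outside that range.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 5, 50)
        y = np.sin(x) + rng.normal(0, 0.05, x.size)  # noisy training data

        model = np.poly1d(np.polyfit(x, y, deg=7))   # flexible curve fit

        print(model(2.5), np.sin(2.5))  # interpolation: close to the truth
        print(model(8.0), np.sin(8.0))  # extrapolation: wildly wrong
        ```

        Same shape of failure at LLM scale: dense coverage where the training data lives, confident nonsense where it doesn’t.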