• 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 8 days ago

    It’s just semantics in this case. Catloaf’s argument centers entirely on the definition of the word “lie,” and while I agree with that definition, most people will understand the intended meaning from the context in which the word is used. AI does not tell the truth. AI is not necessarily accurate. AI “lies.”

    • snooggums@lemmy.world · 8 days ago

      AI returns incorrect results.

      In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.

      • FaceDeer@fedia.io · 7 days ago

        It’s not “anthropomorphic bullshit”; it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to refer to specific AI behaviours, and they use them in their scientific papers all the time.

      • thedruid@lemmy.world · 7 days ago

        No. It’s to make people who don’t understand LLMs cautious about placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.

        I can’t believe this is the supposed high level of discourse on Lemmy.