Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

  • AwesomeLowlander@sh.itjust.works · 23 hours ago

    It’s not ‘lying’ when they don’t know the truth to begin with. They could be trying to answer accurately and it’d still be dangerous misinformation.

  • Ilixtze@lemmy.ml · 1 day ago

    Not just health information. It is easy to make them LIE ABOUT EVERYTHING.

  • sugar_in_your_tea@sh.itjust.works · 16 hours ago

    I sincerely hope people understand what LLMs are and what they aren’t. They’re sophisticated search engines that aggregate results into natural language and refine those results based on baked-in prompts (in addition to what you provide), and if there are gaps, the LLM invents something to fill them.

    If the model was trained on good data and the baked-in prompt is reasonable, you can get reasonable results. But even in the best case, there’s still a chance that the LLM hallucinates something; that’s just how they work.

    For most queries, I’m looking for which search terms to use when checking original sources, or sometimes a reference to jog my memory about something I already know but am having trouble recalling (i.e. I will recognize the correct answer). For those use cases, it’s pretty effective.

    Don’t use an LLM as a source of truth, use it as an aid for finding truth. Be careful out there!

    • chuckleslord@lemmy.world · 14 hours ago

      No, it isn’t. It’s a fancy next-word generator. It knows nothing, can verify nothing, and shouldn’t be used as a source for anything. It is a text generator that sounds confident and mostly human, and that is it.

      • sugar_in_your_tea@sh.itjust.works · 14 hours ago

        That depends on what you mean by “know.” It generates text from a large bank of hopefully relevant data, and the relevance of the answer depends on how much overlap there is between your query and the data it was trained on. There are different models with different focuses, so pick your model based on what your query is like.

        And yeah, one big issue is the confidence. If users are aware of its limitations, it’s fine; I certainly wouldn’t put my kids in front of one without training them on what it can and can’t be relied on to do. It’s a tool, so users need to know how it’s intended to be used to get value from it.

        My use case is distilling a broad idea into specific things to do a deeper search for, and I use traditional tools for that deeper search. For that it works really well.

    • JandroDelSol@lemmy.world · 16 hours ago

      Don’t even use it as an aid for finding truth; it’s just as likely, if not more so, to give incorrect info.

      • sugar_in_your_tea@sh.itjust.works · 14 hours ago

        Why not? It’s basically a search engine for whatever it was trained on. Yeah, it’ll hallucinate sometimes, but if you’re planning to verify anyway, it’s pretty useful in quickly distilling ideas into concrete things to look up.

        • Lemminary@lemmy.world · 13 hours ago

          Yeah, I agree. It’s a great starting place.

          Recently I needed a piece of information that I couldn’t find anywhere through a regular search. ChatGPT, Claude and Gemini all gave similar answers, but it was only confirmed when I contacted the company directly; they took about 3 business days to reply.

  • Guidy@lemmy.world · 16 hours ago

    Meh. Google Gemini has given me great medical advice, always carefully couched in “but check with your doctor” and so on.

    I was surprised too.