In this case semantics matter because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.
It’s not “anthropomorphic bullshit”; it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours, and they use them in their scientific papers all the time.
No. It’s to make people who don’t understand LLMs cautious about placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.
I can’t believe this is the supposed high level of discourse on Lemmy.
AI returns incorrect results.