LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • Hawk@lemmynsfw.com
    2 days ago

    Well, that’s simply not true. The LLM is simply trained on patterns. Human history doesn’t really have clear rules the way programming languages do, so an LLM isn’t going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
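The retrieval-then-synthesis pipeline the comment describes can be sketched roughly as below. This is a toy illustration, not a real system: the lexical score is a stand-in for BM25, the "semantic" score is a bag-of-words cosine standing in for a learned embedding model, and the corpus, function names, and weighting are all invented for the example.

```python
# Toy sketch of hybrid (lexical + semantic) search over a small corpus.
# In a real pipeline the retrieved passages would then be passed to an
# LLM as context for summarisation; here that step is only noted.
from collections import Counter
import math

def tokenize(text):
    """Lowercase and strip trailing punctuation so 'systems.' matches 'systems'."""
    return [t.strip(".,;:").lower() for t in text.split()]

def keyword_score(query, doc):
    """Lexical overlap: fraction of query terms that appear in the document."""
    q = tokenize(query)
    d = set(tokenize(doc))
    return sum(1 for t in q if t in d) / len(q)

def embedding_score(query, doc):
    """Toy 'semantic' score: cosine similarity of bag-of-words count vectors
    (a stand-in for a real sentence-embedding model)."""
    qc, dc = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(qc[t] * dc[t] for t in qc)
    norm = (math.sqrt(sum(v * v for v in qc.values())) *
            math.sqrt(sum(v * v for v in dc.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query, corpus, alpha=0.5, top_k=2):
    """Blend lexical and semantic scores, return the top_k documents."""
    scored = [(alpha * keyword_score(query, d) +
               (1 - alpha) * embedding_score(query, d), d) for d in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:top_k]]

corpus = [
    "The fall of the Western Roman Empire reshaped European legal systems.",
    "Medieval guilds restricted social mobility in many cities.",
    "Printing presses spread literacy across early modern Europe.",
]
hits = hybrid_search("legal systems in Rome", corpus)
# An LLM would then synthesise an answer from the hits, e.g.:
# prompt = "Summarise these sources:\n" + "\n".join(hits)
```

The point of the blend: pure keyword matching misses paraphrases, while pure embedding search can drift off topic; weighting the two (the `alpha` knob here) is the usual compromise.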

    The big challenge we’re facing with media today is that many authors have no grounding in statistics, programming, or data science/ML.

    An LLM is not AI; it’s simply an application of a neural network over a large dataset that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.

    I would have killed for these tools a decade ago, and they’re an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it’s the next dot-com bubble.