• cygnus@lemmy.ca · 2 days ago

    Well, it does make sense in that the window during which we'd have AGI would be pretty short, since AGI would soon surpass human-level intelligence. That said, LLMs are certainly not going to get there, assuming AGI is even possible at all.

    • phdepressed@sh.itjust.works · 2 days ago

      We’re never getting AGI from any current or planned LLM or ML framework.

      These LLMs and ML programs do exceed human intelligence, but only within a narrow, limited framework.