• REDACTED@infosec.pub · 8 hours ago · +6/−33 · edited

    Fancy autocorrect? Bro lives in 2022

    EDIT: For the ignorant: AI has been in rapid development for the past 3 years. For those who are unaware, it can now also generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022 AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068

    • WhatAmLemmy@lemmy.world · 5 hours ago · +12/−3

      You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?

        • Traister101@lemmy.today · 3 hours ago · +10

          They can’t reason. LLMs, which is still the tech behind all the latest and greatest models like GPT-5 or whatever, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
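          A toy version of that next-token loop, to make the point concrete. The bigram table below stands in for the trained network; a real model scores ~100k possible tokens using every previous token as context. All probabilities here are made up for illustration:

```python
import random

# Toy next-token generator: a bigram table stands in for the network.
# "Generation" is nothing but repeated lookup-and-sample.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "end": 0.3},
    "dog": {"sat": 0.6, "end": 0.4},
    "sat": {"end": 1.0},
}

def generate(start, rng):
    tokens = [start]
    while tokens[-1] != "end":
        dist = bigram_probs[tokens[-1]]
        # Sample the next token in proportion to its probability --
        # this lookup-and-sample IS the whole generation step.
        tokens.append(rng.choices(list(dist), weights=dist.values())[0])
    return tokens[:-1]  # drop the end-of-sequence marker

print(" ".join(generate("the", random.Random(1))))
```

          Nowhere in that loop is there a step that checks whether the output is true, only whether it is likely.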

          Generating images works essentially the same way, but is more easily described as reverse JPEG compression. You think I’m joking? No, really: they start out with static and then transform that static using a bunch of wave functions they came up with during training. LLMs and the image generation stuff are equally able to reason, that being not at all whatsoever.
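          A cartoon of that “start with static” process. The `fake_denoiser` here cheats by knowing the target directly; a real diffusion model is a trained network that predicts what noise to remove. The numbers are purely illustrative:

```python
import random

# Cartoon diffusion sampler: begin with pure noise, then repeatedly
# nudge it toward "plausible image" values. No reasoning anywhere,
# just iterated arithmetic.
TARGET = [0.2, 0.9, 0.4, 0.7]          # stand-in for a 4-pixel "image"

def fake_denoiser(x):
    # Pretend network output: direction from current static toward data
    return [t - xi for xi, t in zip(x, TARGET)]

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in TARGET]  # step 0: pure static
for _ in range(20):                    # iterative denoising steps
    x = [xi + 0.3 * d for xi, d in zip(x, fake_denoiser(x))]

print([round(v, 2) for v in x])        # ends up very close to TARGET
```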

            • Traister101@lemmy.today · 28 minutes ago · +2

              If you truly believe that, you fundamentally misunderstand the definition of that word, or are being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second you genuinely just don’t understand: LLMs, the most advanced “AI” they are trying to sell everybody, are as capable of reasoning as any compression algorithm, JPG, PNG, WebP, ZIP, TAR, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they put random shit in there for complicated but important reasons.
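              That determinism point in code, with a toy distribution standing in for a model: the “model” is a fixed function, and the only variation comes from the sampling step. Pin the seed and the output repeats exactly:

```python
import random

# The "model" is just a fixed distribution over replies; variation
# comes only from the random sampling step.
probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

def sample_reply(seed, k=5):
    rng = random.Random(seed)          # this is the "random shit"
    return rng.choices(list(probs), weights=probs.values(), k=k)

print(sample_reply(42) == sample_reply(42))  # True: same seed, identical output
```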

              Again, to recap: LLMs and similar neural-network “AI” are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page is about a very specific term, “Reasoning System”, which would include stuff like standard video game NPC AI such as the zombies in Minecraft. I hope you aren’t stupid enough to say those are capable of reasoning.

            • cmhe@lemmy.world · 28 minutes ago · +2 · edited

              This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it: testing or validating where knowledge is contradictory.

              An LLM doesn’t understand the difference between hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with some probabilities.

              It cannot check if something is true; it just ‘knows’ that someone on the internet talked about something, sometimes with and often without (or with contradicting) resolutions…

              It is a gossip machine that tries to ‘reason’ about whatever it has heard people say.

    • sqgl@sh.itjust.works · 8 hours ago · +13/−1 · edited

      This comment, summarising the author’s own admission, shows AI can’t reason:

      this new result was just a matter of search and permutation and not discovery of new mathematics.

      • REDACTED@infosec.pub · 7 hours ago · +2/−12 · edited

        I never said it discovered new mathematics (edit: yet), I implied it can reason. This is a clear example of reasoning used to solve a problem.

        • xektop@lemmy.zip · 6 hours ago · +10/−1

          You need to dig deeper into how that “reasoning” works, but you got misled if you think it does what you say it does.

          • REDACTED@infosec.pub · 5 hours ago · +1/−6 · edited

            Can you elaborate? How is this not reasoning? Define reasoning to me

            Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.

            • NoMoreCocaine@lemmy.world · 4 hours ago · +4

              While that contains the word “reasoning”, that does not make it such. If this is about the new “reasoning” capabilities of the new LLMs: it was, if I recall correctly, found out that they’re not actually reasoning, just doing fancy footwork to appear as if they were reasoning, just like they do fancy dice rolling to appear to be talking like a human being.

              As in, if you just change the underlying numbers and names in a test, the models fail more often, even though the logic of the problem stays the same. This means it’s not actually “reasoning”; it’s just applying another pattern.
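              A sketch of what that kind of perturbation test does: keep the logic of a word problem fixed and randomize the surface names and numbers. Anything that actually reasons scores identically on every variant. The template and names below are made up for illustration:

```python
import random
import re

# Generate logically identical word-problem variants that differ only
# in surface details. A reasoning system's accuracy should be invariant
# under this transformation.
NAMES = ["Alice", "Bob", "Priya", "Chen"]
TEMPLATE = "{name} has {a} apples and buys {b} more. How many does {name} have?"

def make_variant(rng):
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    text = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b)
    return text, a + b   # the answer depends on the logic, not the names

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```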

              With the current technology we’ve gone so far into brute forcing the appearance of intelligence that it is becoming quite a challenge to diagnose what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forwards. At least with our current computer technology; I suspect we’ll need a breakthrough of some kind.

              But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so, when they tried the connectionist approach with hardware that could not parallel process, and with datasets made by hand rather than from stolen content. So we’re just reusing the approach we had before we tried “handcrafted” AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going we’ll get pretty convincing results, but I seriously doubt we’ll get proper reasoning with the current approach.

                • NoMoreCocaine@lemmy.world · 3 hours ago · +1

                  If we’re talking about Artificial INTELLIGENCE, then we should talk about “reasoning” as the ability to apply logic, not just match patterns. Because pure pattern matching is decidedly NOT reasoning: if the pattern changes even a little (change the names and numbers, keeping the logic intact), all models start showing failures. So, yes, some people decided to reframe what “reasoning” means in this context (moving goalposts), but I’m pretty sure that 99% of people who use the term when referring to AI don’t mean reasoning like that. Regardless, it’s not actually that interesting a discussion, nor do I actually care that much. So, sure, I’ll give you that point.