• Korhaka@sopuli.xyz · +5 · 1 day ago

        Would be amusing if they release a new version and then make the old version completely free to self-host, releasing it as a torrent. Just make Altman totally worthless.

  • HappyTimeHarry@lemm.ee · +14/-1 · 2 days ago

    Do they not know it works offline too?

    I noticed ChatGPT being pretty slow today compared to the local DeepSeek I have running, which is pretty sad since my computer is about a bajillion times less powerful.

    • Rogue@feddit.uk · +2 · 2 days ago

      Is it possible to download it without first signing up to their website?

        • Rogue@feddit.uk · +1 · 1 day ago

          Thanks

          Any recommendations for communities to learn more?

          Frustratingly, their setup guide is terrible. I eventually managed to get it running. I downloaded a model, and only after the download finished did it inform me I didn’t have enough RAM to run it; something it could have checked before the slow download process. Then I discovered my GPU isn’t supported, and running it on a CPU is painfully slow. I’m using an AMD 6700 XT and the minimum listed is the 6800: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
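          For anyone hitting the same wall: you can roughly sanity-check memory needs before pulling anything. This is just a back-of-envelope rule of thumb (my own sketch, not something from the ollama docs): parameters × bits-per-weight ÷ 8, plus some runtime overhead.

```python
def estimated_ram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a quantized model.

    params_billion:  model size in billions of parameters
    bits_per_weight: 4 for the Q4 quantizations commonly used by default
    overhead:        fudge factor for KV cache and runtime buffers
    """
    return params_billion * bits_per_weight / 8 * overhead

# A 7B model at Q4 needs roughly 4 GB; a 70B model roughly 42 GB.
print(round(estimated_ram_gb(7), 1))   # ~4.2
print(round(estimated_ram_gb(70), 1))  # ~42.0
```

          If the estimate is bigger than your free RAM (or VRAM, if you want it on the GPU), skip the download.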

            • Rogue@feddit.uk · +1/-1 · 1 day ago

              Thanks, I did get both set up with Docker; my frustration was that neither ollama nor open-webui included instructions on how to set up the two together.

              In my opinion, setup instructions should guide you to a usable setup. It’s a missed opportunity not to include a docker-compose.yml connecting the two. Is anyone really using ollama without a UI?
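              For what it’s worth, here’s the kind of compose file I’d have liked the docs to include. A minimal sketch only, untested against current image tags; it assumes the standard `ollama/ollama` and `ghcr.io/open-webui/open-webui` images, the default ports, and wires the UI to ollama over the compose network via `OLLAMA_BASE_URL`:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama   # keep downloaded models across restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"            # UI at http://localhost:3000
    environment:
      # Point the UI at the ollama container, not localhost
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

              With something like this, `docker compose up -d` should give a working pair; check the volume paths against each project’s current docs before relying on it.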

        • Kuvwert@lemm.ee · +2 · 22 hours ago

          https://ibb.co/wVNsn5H

          https://ibb.co/HpK5G5Pp

          https://ibb.co/sp1wGMFb

          https://ibb.co/4wyKhkRH

          https://ibb.co/WpBTZPRm

          https://ibb.co/0yP73j6G

          Note that my tests were via Groq and the R1 70B distilled Llama variant (the 2nd-smartest version, afaik)

          Edit 1:

          Incidentally… I asked a coworker to answer the same question. This is the summarized conversation I had:

          Me: “Hey Billy, can you answer a question? in under 3 seconds answer my following question”

          Billy: “sure”

          Me: “How many As are in abracadabra 3.2.1”

          Billy: “4” (answered in less than 3 seconds)

          Me: “nope”

          I’m gonna poll the office and see how many people get it right with the same opportunity the ai had.

          Edit 2: The second coworker said “6” in about 5 seconds

          Edit 3: Third coworker said 4, in 3 seconds

          Edit 4: I asked two more people and one of them got it right… but I’m 60% sure she heard me asking the previous employee. If she didn’t, we’re at 1/5

          I’m probably done with this game for the day.

          I’m pretty flabbergasted by the results of my very unscientific experiment, but now I can say (with a mountain of anecdotal juice) that at letter counting, R1 70B is wildly faster and more accurate than humans.
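          For reference, the ground truth the office poll is being graded against is trivial to check deterministically:

```python
# Count the letter "a" in the word from the quiz.
word = "abracadabra"
print(word.count("a"))  # 5 — so both the "4" and "6" answers were wrong
```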

            • rdri@lemmy.world · +5 · 1 day ago

              It didn’t; I just wanted a short reply. It failed, though, when I asked again in the same chat. But when asked to split the word into 2 parts, it became sure that the correct answer is 3.

              Here’s the fun part:

              Certainly! Let’s split “strawberry” into two equal parts and count the 'r’s in each:

              1. First half: “strawb”
                Letters: S, T, R, A, W, B
                'r’s: 1 (at position 3)

              2. Second half: “erry”
                Letters: E, R, R, Y
                'r’s: 2 (at positions 2 and 3)

              Now, summing them: 1 (first half) + 2 (second half) = 3 'r’s.

              Wait, this contradicts my earlier correction. It seems I made another mistake. Let’s carefully recount:

              • First half: “strawb”
                'r’s: 1 (at position 3)

              • Second half: “erry”
                'r’s: 1 (only one ‘r’ at position 2; the third letter is ‘r’ but already counted in the first half).

              Wait, no—this is getting confusing. Let’s re-examine the entire word:

              Full word: S, T, R, A, W, B, E, R, R, Y
              'r’s: Positions 3, 8, and 9 → 3 'r’s.

              I apologize for the back-and-forth. The correct total is 3 'r’s in “strawberry.” Thank you for your patience as I worked through this! 😊

              • blakenong@lemmings.world · +5/-2 · 1 day ago

                That’s a lot of processing just to count letters. Hopefully it can add numbers without splitting the number ;)

                • ikt@aussie.zone · +1 · 8 hours ago

                  That’s a lot of processing just to count letters

                  feel free to ask Google/Bing/Your favourite search engine to do the same :P

                • Pieisawesome@lemmy.world · +2 · 1 day ago

                  It’s because LLMs don’t work with letters. They work with tokens that are converted to vectors.

                  They literally don’t see the word “strawberry” in order to count the letters.

                  Splitting the letters apart probably separates them into individual tokens
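                  A quick sketch of the contrast. The subword split shown is illustrative only (real tokenizers vary by model), but the point holds: code sees characters, the model sees opaque chunks.

```python
word = "strawberry"

# Plain code sees individual characters, so counting is trivial:
print(word.count("r"))  # 3

# An LLM instead sees subword tokens, e.g. (hypothetical split):
tokens = ["str", "aw", "berry"]
assert "".join(tokens) == word
# The model receives numeric IDs for these chunks, never the letters,
# which is why "count the r's" only gets easier once the word is
# broken into single characters (each usually its own token).
```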

      • blakenong@lemmings.world · +9/-1 · 2 days ago

        No. It literally cannot count the number of R letters in strawberry. It says 2; there are 3. ChatGPT had this problem, but it seems to be fixed now. However, if you say “are you sure?”, it says 2 again.

        Ask ChatGPT to make an image of a cat without a tail. Impossible. Odd, I know, but it’s one of those weird AI issues.

        • SoftestSapphic@lemmy.world · +6/-3 · 2 days ago

          Because there aren’t enough pictures of tail-less cats out there to train on.

          It’s literally impossible for it to give you a cat with no tail because it can’t find enough to copy and ends up regurgitating cats with tails.

          Same for a glass of water spilling over, it can’t show you an overfilled glass of water because there aren’t enough pictures available for it to copy.

          This is why telling a chatbot to generate a picture for you will never be a real replacement for an artist who can draw what you ask them to.

          • JustARaccoon@lemmy.world · +4/-1 · 2 days ago

            Not really, it’s supposed to understand what a tail is, what a cat is, and which part of the cat is the tail. That’s how the “brain” behind AI works

            • SoftestSapphic@lemmy.world · +2/-3 · 2 days ago

              It searches the internet for cats without tails and then generates an image from a summary of what it finds, which contains more cats with tails than without.

              That’s how this Machine Learning program works

              • Kogasa@programming.dev · +2/-1 · 24 hours ago

                It doesn’t search the internet for cats, it is pre-trained on a large set of labelled images and learns how to predict images from labels. The fact that there are lots of cats (most of which have tails) and not many examples of things “with no tail” is pretty much why it doesn’t work, though.

              • FatCrab@lemmy.one · +2/-1 · 1 day ago

                That isn’t at all how something like a diffusion based model works actually.

          • vrighter@discuss.tchncs.de · +2/-2 · 1 day ago

            so… with all the supposed reasoning stuff they can do, and supposed “extrapolation of knowledge”, they cannot figure out that a tail is part of a cat, and which part it is.

            • Kuvwert@lemm.ee · +2 · 22 hours ago

              The “reasoning” models and the image generation models are not the same technology and shouldn’t be compared against the same baseline.

              • vrighter@discuss.tchncs.de · +1/-2 · 24 hours ago

                I’m not seeing any reasoning; that was the point of my comment. That’s why I said “supposed”.

      • Kuvwert@lemm.ee · +4 · 2 days ago

        Non-thinking prediction models can’t count the r’s in strawberry due to the nature of tokenization.

        However, OpenAI o1 and DeepSeek R1 can both reliably do it correctly.