• boonhet@lemm.ee · 9 hours ago

    What price point are you trying to hit?

    With regards to AI? None, tbh.

    With this super fast storage I have other cool ideas, but I don’t think I can get enough bandwidth to saturate it.

    • barsoap@lemm.ee · 7 hours ago

      With regards to AI? None, tbh.

      TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI; think 5-10 s/it), and reportedly smaller LLMs do too (haven’t tried, not interested). And the reason I’m eyeing a 9070 XT isn’t AI, it’s finally upgrading my GPU. Still, it would be a massive fucking boost for AI workloads.

    • gravitas_deficiency@sh.itjust.works · 8 hours ago

      You’re willing to pay $none to have hardware ML support for local training and inference?

      Well, I’ll just say that you’re gonna get what you pay for.

      • bassomitron@lemmy.world · 7 hours ago

        No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.