• Australis13@fedia.io · 38 points · 1 day ago

    The big win I see here is the amount of optimisation they achieved by moving from high-level CUDA down to lower-level PTX. This suggests that developing these models going forward can be made a lot more energy-efficient, and I hope that carries over to their execution as well. As it stands, “AI” (read: LLMs and image generation models) consumes far too many resources to be sustainable.
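    For anyone unfamiliar with what that move looks like in practice, here is a minimal sketch (my own illustration, not DeepSeek’s actual code) of the same warp shuffle written once with the CUDA intrinsic and once as hand-written inline PTX, which gives you direct control over the instruction emitted:

    ```cuda
    // CUDA level: the compiler chooses the instruction for you.
    __device__ float shuffle_cuda(float v) {
        return __shfl_down_sync(0xffffffff, v, 16);
    }

    // PTX level: you write NVIDIA's virtual ISA directly and control
    // exactly which instruction and operands are emitted.
    __device__ float shuffle_ptx(float v) {
        float r;
        asm volatile("shfl.sync.down.b32 %0, %1, 16, 0x1f, 0xffffffff;"
                     : "=f"(r)
                     : "f"(v));
        return r;
    }
    ```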

    • KingRandomGuy@lemmy.world · 5 points · edited · 19 hours ago

      What I’m curious to see is how well these types of modifications scale with compute. DeepSeek is restricted to H800s instead of H100s or H200s. These are gimped cards made to get around export controls, so they have lower memory bandwidth (~2 vs ~3 TB/s) and, most notably, much slower GPU-to-GPU communication (something like 400 GB/s vs 900 GB/s). The specific reason they used PTX here was to alleviate bottlenecks from that limited inter-GPU bandwidth, so I wonder whether the same tricks would still improve performance on H100s and H200s, where bandwidth is much higher.
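      To make the bottleneck concrete, the standard technique (this is my sketch of the general approach, not DeepSeek’s actual kernels, and `compute_chunk`/`pipeline_step` are made-up names) is to hide the slow interconnect by overlapping peer-to-peer copies with compute on separate streams; the slower the link, the more compute you need in flight to cover each transfer:

      ```cuda
      #include <cuda_runtime.h>

      // Hypothetical stand-in for one pipeline stage's math.
      __global__ void compute_chunk(float *data, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] *= 2.0f;
      }

      // Start copying the next shard to a peer GPU on one stream while
      // the current shard is processed on another. At ~400 GB/s the copy
      // takes roughly twice as long as at ~900 GB/s, so more compute has
      // to overlap it to keep the SMs from stalling.
      void pipeline_step(float *cur, float *next, float *peerBuf,
                         int peerDev, size_t bytes, int n,
                         cudaStream_t compute, cudaStream_t comm) {
          cudaMemcpyPeerAsync(peerBuf, peerDev, next, /*srcDevice=*/0,
                              bytes, comm);
          compute_chunk<<<(n + 255) / 256, 256, 0, compute>>>(cur, n);
      }
      ```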

      • mholiv@lemmy.world · 16 points · 1 day ago

        Kind of the opposite, actually. PTX is in essence NVIDIA-specific assembly, tied to NVIDIA the same way ARM or x86_64 assembly is tied to ARM or x86_64.

        At least with CUDA there are efforts like ZLUDA. CUDA is more like what Objective-C was on the Mac: basically tied to the platform, but at least you could in theory write a compiler for another target.
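        To see the lock-in at the instruction level, here is a tiny hand-written example (illustration only, not compiler output): even one PTX instruction names registers and opcodes from NVIDIA’s ISA, which mean nothing to an AMD or Intel GPU:

        ```cuda
        __device__ float add_ptx(float a, float b) {
            float r;
            // A single PTX instruction: add two .f32 registers. This text
            // is NVIDIA's virtual ISA, just as an x86_64 "addss" only has
            // meaning on x86_64.
            asm("add.f32 %0, %1, %2;" : "=f"(r) : "f"(a), "f"(b));
            return r;
        }
        ```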

        • KingRandomGuy@lemmy.world · 3 points · 1 day ago

          IIRC ZLUDA does support compiling PTX. My understanding is that this is part of why Intel and AMD eventually backed away from it: it’s not a great idea to tie yourself to an architecture you have no control over and no license to.

          OTOH, CUDA itself is just a set of APIs and their implementations on NVIDIA GPUs. Other companies can re-implement them. AMD has already done this with HIP.
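          As a concrete sketch of that point (the CUDA calls and their HIP counterparts are real APIs; the `demo` function itself is just a made-up example):

          ```cuda
          #include <cuda_runtime.h>

          // The CUDA runtime is an API surface, so another vendor can
          // supply the same shapes: AMD's HIP mirrors it almost 1:1, and
          // the hipify tools perform this rename mechanically.
          void demo(size_t bytes) {
              float *d;
              cudaMalloc(&d, bytes);      // HIP: hipMalloc(&d, bytes);
              cudaMemset(d, 0, bytes);    // HIP: hipMemset(d, 0, bytes);
              cudaDeviceSynchronize();    // HIP: hipDeviceSynchronize();
              cudaFree(d);                // HIP: hipFree(d);
          }
          ```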