• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: February 14th, 2024

  • This is my use case exactly.

    I do a lot of analysis locally, and this is more than enough for my experiments and research. 64 to 96 GB of VRAM is exactly the window I need.

    Plus this will replace GH Copilot for me. It’ll run voice models. I have diffusion model experiments I plan to run (not just image models) that are currently totally inaccessible to me locally. And I’ve got workloads that run for 2 or 3 days at 100% CPU/GPU, which are annoying to manage in the cloud.

    This basically frees me from paying for any cloud services in my personal life for the foreseeable future. I’m trying to keep as much as I can local.

    I’ve got tons of ideas I’m free to try out risk-free on this machine, and it’s the most affordable “entry level” solution I’ve seen.




  • My current workplace is going through exactly this.

    They dropped an open system we used, but the team managing the new one is so bureaucratic and disconnected from the people actually doing the work that it’s ridiculous.

    They reject every proposal/change unless it’s 100% perfect. I had a project delayed by four weeks because I didn’t end single-line docstrings with periods. They didn’t review the substance of the PR; they just commented on the docstrings and stopped, as if the rest had no merit. With two weeks between review cycles, it took three cycles to fix what could have been done in one.

    That whole team is just clearly a make-work program. They nitpick and bikeshed every issue, but they aggressively document all the busywork they do so they look super busy and important to the execs.

    I just want to get work done, but instead it’s a Sisyphean effort.