Damn you :-)
I’m sick and tired of the Capitalist religion and all its fanatic believers. Western right-wingers are the most propagandized and harmful people on Earth.
Damn you :-)
‘Competition’ is perpetual economic war. People and the environment suffer and die, and value is stolen by corps/oligarchs… You have been propagandized…
Yes, you are leaking data, but don’t panic. First of all, your mental health here and now is important - without it you won’t have energy for anything else. Next, it takes a lot of energy to de-google or de-corp, and while you don’t want to ‘leak’ now, in 6 months you’ll have your own private/FOSS talking AI assistant, and it will help you cut the ties to the last corporation then.
So, soon you’ll be more ‘invisible’ to the corps, and maybe you can live with the spying/manipulation for a moment longer? Not sure how long it takes for their AI to find you anyway, but at least they have to work for it…
Alternatively, get a free account at Groq (which also has Whisper STT) or SambaNova, and install/use Open WebUI for talking. These new hardware corps don’t train AI on free user interactions, and they probably don’t sell your information - yet. There are other methods for P2P sharing of AI resources, but they may not provide high enough quality or all modalities.
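If you want to try that route, a minimal setup sketch (assumes Python 3.11+ and a free Groq API key; the endpoint URL is Groq’s OpenAI-compatible API):

```shell
# Install Open WebUI from PyPI (official pip package)
pip install open-webui

# Start the web interface, by default on http://localhost:8080
open-webui serve
```

Then in Open WebUI’s admin settings, add an OpenAI-compatible connection pointing at `https://api.groq.com/openai/v1` with your Groq API key, and the hosted models (including Whisper for speech-to-text) show up as backends.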
Same here without a VPN. Getting by using the embedded YT player, but it’s not optimal. Seems that FreeTube users could win if we had a common cache like IPFS where watched videos are stored. If one user gets the whole video, everyone has access to it via IPFS. There would still be trouble with rare videos/first views, but YouTube would probably not block an IP if most of the YT videos were loaded from IPFS instead?
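The cache-or-fetch logic above can be sketched in a few lines. Everything here is a hypothetical stand-in - a real client would talk to an IPFS daemon’s HTTP API and hook into FreeTube’s downloader - but it shows the protocol: only the first viewer ever touches YouTube.

```python
# Sketch of the shared-cache idea: check a common content-addressed store
# before hitting YouTube. All names below are illustrative stand-ins.

shared_cache = {}  # stands in for IPFS: maps video id -> video bytes

def fetch_from_youtube(video_id):
    """Stand-in for a direct YouTube download (the expensive, blockable path)."""
    return b"raw-bytes-of-" + video_id.encode()

def get_video(video_id):
    """Serve from the shared cache if any peer already stored the video,
    otherwise fetch it once from YouTube and publish it for everyone."""
    if video_id in shared_cache:
        return shared_cache[video_id], "cache"
    data = fetch_from_youtube(video_id)
    shared_cache[video_id] = data  # 'ipfs add' in a real setup
    return data, "youtube"

# First viewer pays the YouTube request; every later viewer hits the cache.
_, src1 = get_video("dQw4w9WgXcQ")
_, src2 = get_video("dQw4w9WgXcQ")
print(src1, src2)  # youtube cache
```

The open problems the comment mentions map directly onto this sketch: rare videos are cache misses forever, and the first view always goes to YouTube.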
Just a quick thought.
Didn’t know what uBlue was, so here: https://universal-blue.org/
"The Universal Blue project builds a diverse set of continuously delivered operating system images using bootc. That’s nerdspeak for the ultimate Linux client: the reliability of a Chromebook, but with the flexibility and power of a traditional Linux desktop.
These images represent what’s possible when a community focuses on sharing best practices via automation and collaboration. One common language between dev and ops, and it’s finally come to the desktop.
We also provide tools for users to build their own image using our templates and processes, which can be used to ship custom configurations to all of your machines, or finally make the Linux distribution you’ve long wished for, but never had the tools to create.
At long last, we’ve ascended."
You can argue that a 4090 is more of a ‘flagship’ model on the consumer market, but it could just be a typo, and then you’d miss the point and the knowledge you could have gained:
“Their system, FlightVGM, recorded a 30 per cent performance boost and had an energy efficiency that was 4½ times greater than Nvidia’s flagship RTX 3090 GPU – all while running on the widely available V80 FPGA chip from Advanced Micro Devices (AMD), another leading US semiconductor firm.”
So they have found a way to use an ‘off-the-shelf’ FPGA for video inference, and to me it looks like it could match a 4090(?), but who cares. With this upgrade, these standard FPGAs are cheaper (running 24/7) and better than any consumer Nvidia GPU up to at least the 3090/4090.
And here from the paper:
"[problem] …sparse VGMs [video generating models] cannot fully exploit the effective throughput (i.e., TOPS) of GPUs. FPGAs are good candidates for accelerating sparse deep learning models. However, existing FPGA accelerators still face low throughput ( < 2TOPS) on VGMs due to the significant gap in peak computing performance (PCP) with GPUs ( > 21× ).
[solution] …we propose FlightVGM, the first FPGA accelerator for efficient VGM inference with activation sparsification and hybrid precision. […] Implemented on the AMD V80 FPGA, FlightVGM surpasses NVIDIA 3090 GPU by 1.30× in performance and 4.49× in energy efficiency on various sparse VGM workloads."
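To get a feel for what 1.30× performance at 4.49× energy efficiency implies, here is a back-of-envelope check. The ~350 W TDP for the RTX 3090 is my assumption, not a figure from the paper:

```python
# Back-of-envelope: implied power draw of the V80 FPGA setup.
# Assumption (NOT from the paper): RTX 3090 TDP of roughly 350 W.
gpu_power_w = 350.0
speedup = 1.30          # FlightVGM performance vs. the 3090
efficiency_gain = 4.49  # perf-per-watt vs. the 3090

# efficiency_gain = speedup / (fpga_power / gpu_power), so:
fpga_power_w = speedup / efficiency_gain * gpu_power_w
print(round(fpga_power_w))  # prints 101
```

In other words, under that TDP assumption, the FPGA delivers slightly more throughput at roughly 100 W - which is what makes the 24/7 running-cost argument interesting.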
You’ll have to look up what that means yourself, but expect a throng of bitcrap miner cards to be converted to VLM accelerators, and maybe new life for older/smaller/cheaper FPGAs?
Pretty cool with China’s focus on efficiency in the AI stack. DeepSeek was the first eye-opener for how to re-think efficiency, but it appears to happen on all levels of the stack.
FYI: the article is paywalled, so block JavaScript on the page with uBlock…
A few ideas/hints: If you are up for some upgrading/restructuring of storage, you could consider a distributed filesystem: https://wikiless.org/wiki/Comparison_of_distributed_file_systems?lang=en.
Also check fuse filesystems for weird solutions: https://wikiless.org/wiki/Filesystem_in_Userspace?lang=en
Alternatively perhaps share usb drives from ‘desktop’ over ip (https://www.linux.org/threads/usb-over-ip-on-linux-setup-installation-and-usage.47701/), and then use bcachefs with local disk as cache and usb-over-ip as source. https://bcachefs.org/
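A rough sketch of the usb-over-ip half, for the curious. Device paths, bus IDs, and host names are placeholders; check your distro’s `usbip` packaging and the bcachefs docs before trusting any of this:

```shell
# --- On the machine with the physical drive ('desktop') ---
modprobe usbip-host
usbipd -D                    # start the usbip daemon
usbip list -l                # find the bus id of the USB drive, e.g. 1-1
usbip bind -b 1-1            # export that device

# --- On the client ---
modprobe vhci-hcd
usbip attach -r desktop.lan -b 1-1   # drive now appears as a local USB device
```

For the bcachefs part, the idea would be a multi-device filesystem with the fast local disk as foreground/promote target and the attached usb-over-ip disk as background target, so reads are served from the local cache; the exact `bcachefs format` target flags are documented on bcachefs.org and worth verifying, since the tooling still changes.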
If you decide to expose your ‘desktop’, then you could also log in remotely and just work with the files directly on the ‘desktop’. This of course depends on the usage pattern of the files.
Not sure…
Hooman therapists cost a lot of money, and a shitload of people won’t get any help at all without AI.
So, I think it is fine. The potential damage is far less than no help at all. Just use a little common sense and don’t take anything as Gospel - just as when we see hooman therapists.