

And we’re just supposed to trust the word of a partisan hack. Ya, no.
I do get that there is a lot of intransigence in Federal IT. I was an IT and IS contractor for a couple of sites within the US FedGov, and there were places where “that’s the way we’ve always done it” was the trump card against any proposed change. That led to some abysmal security practices which should have resulted in a lot of management getting shown the door (and not just IT/IS management; culture gets set from the top). And I’ve worked at others where we had a large staff of folks whose entire job was ensuring compliance with all required cybersecurity controls and documentation. While I’ll be one of the first to state that compliance is not security, I have yet to see a site that got security mostly right which didn’t also have compliance on lock. If you are doing things the right way, compliance is actually pretty easy to achieve, since good documentation is the foundation of security. If you go into a site and they can’t even spell CMDB, expect a shitshow.
So ya, if the DHS team went to FEMA’s IT team asking for network diagrams, data flow diagrams, system and network baseline checklists, and system documentation, and the FEMA IT team’s response was “sorry, we don’t have that,” then yes, I would get cleaning house. Though I’d have started by figuring out whether the problem was the IT team just not getting it done, or the IT team being prevented from getting it done. My experience has been that IT teams are willing to patch and correct configurations; but that means downtime and risk to applications. So upper management sides with the application owners, who want five nines uptime on a “best effort” budget, and that ends up blocking patching and configuration changes. Also, if the IT team is spending 40 hours a week putting out fires and dealing with the blow-back from accumulated technical debt, that’s an upper management problem.
The problem, of course, is that the DHS is led by a two-bit partisan hack. And this administration is known for straight up lying to clear the board for its own partisan interests. I have zero faith that they did any sort of good faith analysis of the FEMA IT department, especially since this is the same administration which gave us Russian-compromised DOGE servers.
I know that, during my own move from Windows to Linux, I found that the USB drive tended to lag under heavy read/write operations. I didn’t experience that with Linux loaded directly on a SATA SSD. I also had some issues with my storage drive (an NVMe SSD) still using an NTFS file system. Once I went full Linux and ext4, it’s been nothing but smooth sailing.
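If you’re not sure which of your drives are still on NTFS, the mount table makes it easy to check. Here’s a minimal Python sketch that just reads /proc/mounts (a standard Linux interface); nothing in it is specific to my setup:

```python
# List the filesystem type of each mounted block device, so you can
# spot drives still on NTFS after a migration. Each /proc/mounts line
# is: device, mount point, fs type, options, dump, pass.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if device.startswith("/dev/"):
            print(f"{mountpoint:20} {device:15} {fstype}")
```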
As @MagicShel@lemmy.zip pointed out, performance will depend heavily on the generation of the USB device and port. I was using a USB 3.1 device in a USB 3.1 port (no idea on the generation; Gen 1 signals at 5 Gbit/s and Gen 2 at 10 Gbit/s). So speeds were ok-ish. By comparison, SATA 2 tops out at 3 Gbit/s (roughly 300 MB/s) and SATA 3 at 6 Gbit/s (roughly 600 MB/s). And while the interface numbers look similar on paper, a typical flash drive’s controller can’t sustain anywhere near its rated bandwidth, so the SATA SSD almost certainly blew the real transfer rate of my USB device out of the water. When I later upgraded to an NVMe drive, things just got better.
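If you want to put numbers on that rather than going by feel, a rough sequential throughput test is easy to write. This is a quick sketch, not a proper benchmark (fio is the real tool for that); the /mnt/usb and /home paths are placeholders for wherever your drives happen to be mounted, and the read figure is optimistic because Linux will serve a lot of it from the page cache:

```python
import os
import time

def throughput_mb_s(path, size_mb=256, block_kb=1024):
    """Write then read a test file sequentially at 'path' and
    return (write MB/s, read MB/s). Point 'path' at a file on
    the drive you want to test."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb

    # Sequential write, fsync'd so we time the drive and not the
    # OS page cache.
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    write_speed = size_mb / (time.perf_counter() - start)

    # Sequential read. Without dropping caches first, much of this
    # comes from RAM, so treat it as a best-case number.
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_speed = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_speed, read_speed

if __name__ == "__main__":
    # Placeholder mount points; substitute your own.
    for label, path in [("usb", "/mnt/usb/testfile"),
                        ("ssd", "/home/testfile")]:
        w, r = throughput_mb_s(path)
        print(f"{label}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```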
Overall, load times are the one place I wouldn’t trust testing Linux from a USB drive. It’s going to be slower and laggier than an SSD, while read/write performance should otherwise be comparable to what you saw on Windows. Though taking the precaution of either dual booting or backing up your Windows install certainly makes sense while you test things out.