

CPU and Memory use clock speed regulated by voltage to pass data back and forth with no gates between
could you please explain what you mean by no gates?
Reading up on RDP
Microsoft requires RDP implementers to obtain a patent license
there it is. good info to dig up jrgd, well done! shame we had to scroll so far in the thread to find these actual proper, highly relevant details.
well, everyone has to pick their battles, and perhaps RHEL just couldn’t fight this one out.
but imo i’d much rather see VNC get some upgrades under RHEL than continue the ever increasing microsoft-ization of linux
deleted by creator
edit: nvm i re-read what you wrote
i agree it does mostly fulfill the criteria for libre software. perhaps not in every way to the same spirit as other projects, but that is indeed a separate discussion.
~~how many communities are doing that right now? i suspect you may be drastically understating the barriers for that. but would be delighted to be proven wrong...~~
Thanks for the reference; from there I found the very impressive original Nature paper “A RISC-V 32-bit microprocessor based on two-dimensional semiconductors”. Fantastic stuff!!
From the paper, that’s almost a 40x improvement on comparable logic integration!
Some notes from the paper:
Typically this is where people like to shit on the design “cos muh GHz” etc, but tbf not only will people doubtless work on improving the clock speeds etc, but there’s plenty of applications where computation time or complexity isn’t so demanding, so i’m just excited by any breakthrough in these areas.
if this is a full RISC-V implementation in 2D materials, it’s a genuinely impressive breakthrough!!
afaict the topic of the article seems to be focusing on trust as in privacy and confidentiality
for the discussion i think we can extend trust as in also trusting the ethics and motivation of the company producing the “AI”
imo what this overlooks is that a community or privately made “AI” running entirely offline has the capacity to tick those boxes rather differently.
trusting it to be effective is perhaps an entirely different discussion however
feeling like you’ve been listened to can be therapeutic.
actionable advice is an entirely different matter ofc.
Or they’re just adding improvements to the software they heavily rely on.
which they can do in private any time they wish, without any of the fanfare.
if they actually believe in opensource let them opensource windows 7 1, or idk the 1/4 of a century old windows 2k
instead we get the fanfare as they pat themselves on the back for opensourcing MS-DOS 4.0 early last year (not even 8.0, which is 24 years old btw, but 4.0, which came out in 1986).
38 years ago…
MS-fucking-DOS, from 38 years ago, THAT’S how much they give a shit about opensource mate.
all we get is a poor pantomime which actually only illustrates just how stupid they truly think we are to believe the charade.
does any of that mean they 100% have to be actively shipping “bad code” in this project? not by any means. does it mean microsoft will never make a useful contribution to linux? not by any means. what it does mean is they’re increasing their sphere of influence over the project. and they have absolutely no incentive to help anyone but themselves, in fact the opposite.
as everyone knows (it’s not some deep secret the tech heads on lemmy somehow didn’t hear about) microsoft is highly dependent on linux for major revenue streams. anything a monolith depends on which they don’t control represents a risk. they’d be negligent if they didn’t try to exert control over it, and that goes for any organisation in their position. then factor in their widespread outspoken agenda against opensource (embrace, extend, extinguish) and the vastly lacking longterm evidence to match their claims of <3 opensource.
they’re welcome to prove us all wrong, but that isn’t even on the horizon currently.
1 yes yes, they claim they can’t because “licensing”, which is mostly (but not entirely) fucking flimsy. but ok, devil’s advocate: release the rest then. but nah.
yes they lost the battle, now they’re most likely aiming to win the war.
ah fair enough. i think that was the initial confusion from myself and perhaps the other user in this discussion. i didn’t realise your use cases.
it’s always a fun topic to discuss and got me thinking about some new ideas :)
afaik mmW is FR2
5G FR1 is sub x-band microwave
cool, sounds like you have most of the principles down.
what i didn’t yet see articulated with chat-e2ee is how the code itself gets verified to the user in the browser. it sounds to me like it assumes the server which serves the code is ‘trusted’, while the theoretically different server(s) which transmit the messages can be ‘untrusted’.
out of interest, do you actually mean no login, or do you mean no email-verified login?
i’m trying to understand your exact scenario.
but in general, the problem is where do you get your original key, or original hash to verify from? if they are both coming from the server, along with the code which processes them, then if the server is compromised, so are you.
thankfully browsers expose a lot of crypto APIs these days (as discussed in your link)
but you still need at minimum a secure key, a hash and trusted code to verify the code the server serves you. there are ofc solutions to this problem, but if the server is untrusted, you absolutely can’t get it from them, which means you have to get it from somewhere else (that you trust).
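to make the “get the hash from somewhere you already trust” point concrete, here’s a very rough sketch in python (the URL and expected hash are obviously placeholders i made up; in a real browser setting the same idea shows up as subresource integrity, but the page carrying the integrity attributes still has to come from somewhere you trust):

```python
import hashlib
import urllib.request

# hypothetical URL of the app bundle the server serves you -- placeholder only
BUNDLE_URL = "https://chat.example.org/app.js"

# the expected SHA-256 must come from a channel you already trust
# (a signed package, a friend, printed on paper, etc.).
# if this value comes from the same server as the code, the check proves nothing.
EXPECTED_SHA256 = "replace-with-a-hash-you-obtained-out-of-band"


def fetch_and_verify(url: str, expected_hex: str) -> bytes:
    """Download the code and refuse to use it unless its hash matches."""
    with urllib.request.urlopen(url) as resp:
        code = resp.read()
    digest = hashlib.sha256(code).hexdigest()
    if digest != expected_hex:
        raise ValueError(f"hash mismatch: got {digest}, expected {expected_hex}")
    return code


if __name__ == "__main__":
    bundle = fetch_and_verify(BUNDLE_URL, EXPECTED_SHA256)
    print(f"verified {len(bundle)} bytes of served code")
```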
Thanks for the distinctions and links to the other good discussions you’ve started!
For the invasive bits that are included, it’s easy enough for GrapheneOS to look over the incremental updates in Android and remove the bits that they don’t like.
That’s my approximate take as well, but it wasn’t quite what I was getting at.
What I meant is to ask ourselves: why is that the case? A LOT of it is because google wills it to be so.
Not only in terms of keeping it open, but also in terms of making it easy or difficult - it’s almost entirely up to google how easy or hard it’s going to be. Right now we’re all reasonably assuming they have no current serious incentives to change their mind. After all, why would they? The minuscule % of users who go to the effort of installing privacy enhanced versions of chromium (or android based os) are a tiny drop in the ocean compared to the vast majority of users running vanilla, who have probably never even heard of privacy enhanced versions.
excellent writeup with some high quality referencing.
minor quibble
Firefox is insecure
i’m not sure many people would disagree with you that FF is less secure than Chromium (hardly a surprise given the disparity in their budgets and resources)
though i’m not sure it’s fair to say FF is insecure if we are by comparison inferring Chromium is secure? ofc Chromium is more secure than FF, as your reference shows.
another minor quibble
projects like linux-libre and Libreboot are worse for security than their counterparts (see coreboot)
does this read like coreboot is proprietary? isn’t it GPL2? i might’ve misunderstood something.
you make some great points about open vs closed source vs proprietary etc. again, it shouldn’t surprise us that many proprietary projects or Global500 funded opensource projects, with considerably greater access to resources, often arrive at more robust solutions.
i definitely agree you made a good case for the currently available community privacy enhanced versions based on open source projects from highly commercial entities (Chromium->Vanadium, Android/Pixel->GrapheneOS) etc. something i think to note here is that without these base projects actually being opensource, i’m not sure eg. the graphene team would’ve been able to achieve the technical goals in the time they have, and likely with even less success legally.
so in essence, in the current forms at least, we have to make some kind of compromise: choosing something we know is technically more robust, but then needing to blindly trust the organisation’s (likely malicious) incentives. therefore as you identify, obviously the best answer is to privacy enhance the project, which does then involve some semi-blind trust in the extent of the privacy enhancement process - even assuming good faith in the organisation providing the privacy enhancement, there is still an implicit arms race where privacy-corroding features might be implemented at various layers and degrees of opacity vs the inevitably less resourced team trying to counter them.
is there some additional semi-blind ‘faith’ we’re also employing where we are probably assuming the corporate entity currently has little financial incentive in undermining the opensource base project because they can simply bolt on whatever nastiness they want downstream? it’s probably not a bad assumption overall, though i’m often wondering how long that will remain the case.
and ofc on the other hand, we have organisations whose motivation we supposedly trust (mostly…for now), but we know we have to make a compromise on the technical robustness. eg. while FF lags behind the latest hardening methods, it’s somewhat visible to the dedicated user where they stand from a technical perspective (it’s all documented, somewhere). so then the blind trust is in the purity of the organisation’s incentives, which is where i think the politically-motivated, wilfully-technically-ignorant mindset can sometimes step in. meanwhile mozilla’s credibility will likely continue to be gradually eroded, unless we as a community step up and fund them sufficiently. and even then, who knows.
there’s certainly no clear single answer for every person’s use-case, and i think you did a great job delineating the different camps. just wanted to add some discussion. i doubt i’m as up to date on these facets as OP, so welcome your thoughts.
I’m sick of privacy being at odds with security
fucking well said.
what a fucked timeline
browsers turning off specific extensions which protect us.
they shouldn’t even have a horse in this race. i mean we know why they do, but damn is it completely insane.
what’s also fucked is how normalised this is becoming.
all of that said, edge who?
fuck me lemmy is turning into an absolute reddit-esque cesspool shithole.
i do not understand why people are in here simping for cloudflare (presumably unpaid). do they have money in cloudflare? clearly they don’t have a fucking clue what’s really going on in the world, but what makes them think they need to actively police people (ie. downvote them) for pointing out issues with cloudflare??
this is beyond weird.
correct.
the level of unsubstantiated cope in this thread is mind boggling. from people many of whom should honestly know better.
(ok i see, you’re using the term CPU colloquially to refer to the processor. i know you obviously know the difference & that’s what you meant - i just mention the distinction for others who may not be aware.)
ultimately op may not require exact monitoring, since they compared it to standard system monitors etc, which are ofc approximate as well. so the tools as listed by Eager Eagle in this comment may be sufficient for the general use described by op?
eg. these - the screenshots look pretty close to what i imagined op meant
now onto your very cool idea of substantially improving the temporal resolution of measuring memory bandwidth…you’ve got me very interested with your idea :)
my initial sense is counting completed L3/L4 cache misses sourced from DRAM and similar events might be a lot easier - though as you point out that will inevitably accumulate event counts within a given time interval rather than capture each individual event.
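to illustrate what i mean by counting cache-miss events at finer intervals, here’s a very rough sketch (python wrapping `perf stat -I`; the “LLC-load-misses” event name and the 64-byte cache line are assumptions that vary per CPU, so treat it as a starting point rather than a real bandwidth meter):

```python
import subprocess

# very rough sketch: sample last-level-cache miss counts every 100 ms with
# `perf stat -I` and turn them into an approximate DRAM read rate.
# event names and the 64-byte line size are assumptions; uncore/IMC events
# would be more accurate where the platform exposes them.
CACHE_LINE_BYTES = 64
INTERVAL_MS = 100

cmd = [
    "perf", "stat",
    "-a",                    # system-wide
    "-I", str(INTERVAL_MS),  # print running counts every interval
    "-x", ",",               # CSV output, easier to parse
    "-e", "LLC-load-misses",
]

# perf stat writes its counts to stderr
proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
try:
    for line in proc.stderr:
        fields = line.strip().split(",")
        # interval CSV rows look roughly like: time,count,unit,event,...
        if len(fields) < 4 or not fields[1].isdigit():
            continue  # skip headers / "<not counted>" rows
        timestamp, count = float(fields[0]), int(fields[1])
        est_mb_s = count * CACHE_LINE_BYTES / (INTERVAL_MS / 1000) / 1e6
        print(f"t={timestamp:9.3f}s  misses={count:12d}  ~{est_mb_s:9.1f} MB/s")
except KeyboardInterrupt:
    proc.terminate()
```

where the platform exposes uncore/IMC memory-controller counters, sampling those instead of LLC misses would obviously land much closer to true DRAM bandwidth.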
i understand the role of parity bits in ECC memory, but i didn’t quite understand how & which ECC fields you would access, and how/where you would store those results with improved temporal resolution compared to event counts?
would love to hear what your setup would look like? :) which ECC-specific masks would you monitor? where/how would you store/process such high resolution results without impacting the measurement itself?