well said!
imo it’s not a coincidence the public are being steered away from supporting graphene. it’s one thing to see the general public do this, but seeing countless people who supposedly should know better, it’s quite disturbing.


admittedly i’m not up to date on all the drama, but i thought that graphene saw themselves as victims of alt attacks?


some of it is kind of inevitable when you see how far ahead of everyone else they are technically. when people shitting on their work just aren’t at their (technical) level, it seems to be very draining, and eventually leads to drama.


for a lot of people their relationship with windows is like that of an abusive partner, which is why you see a lot of the same excuses pop up


I don’t think they’re disputing any of that if it’s hosted locally (including safely remote accessed by you). i think they’re talking about it being fed to the cloud & commoditised, which is a valid concern imo.
without further explanations of OP’s intent i’m inclined to think this is perhaps the best approach


exactly
default: on
user: explicitly turns off
random “update”: defaults back on
Now wait 1 year
I fucking hate this timeline.
my first thought as well…how did we get to the point that this is a valid topic?
(not a comment about you OP, just the state of the world)


can you pls explain what you mean in more depth?
your original post is sufficiently vague that tbh i don’t blame people for assuming you were just bootlicking [which probably says more about the state of the world than about you as an individual, but honestly it’s not clear what you’re trying to say]
we all know a random citizen/local business presenting an identical calibre of evidence of repeated crimes would be extremely unlikely to routinely receive this degree of resource allocation.
so if it’s an idealised aspirational universal “order” you’re talking about, then obviously no one’s buying it - and i don’t think you are either. so what do you mean?
tar pits target the scrapers.
were you talking also about poisoning the training data?
two distinct (but imo highly worthwhile) things
tar pits are a bit like turning the tap off (or to a useless trickle). fortunately it’s well understood how to do it efficiently and it’s difficult to counter.
poisoning is a whole other thing. i’d imagine if nothing comes out of the tap the poison is unlikely to prove effective. there could perhaps be some clever ways to combine poisoning with tarpits in series, but in general they’d be deployed separately or at least in parallel.
bear in mind that to meaningfully deploy a tar pit against scrapers you usually need some permissions on the server, so it may not help much for the exact problem in the article (except for some short term fuckery perhaps). poisoning, otoh, is probably important for this problem
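to make the tar pit idea concrete, here’s a minimal sketch (my own illustration, not any particular project’s code) of the common trick: generate an endless “maze” of fake pages on the fly from a hash of the URL, so a scraper following links never runs out, while the server keeps zero state and does almost no work per page.

```python
import hashlib

def page_links(path: str, fanout: int = 3) -> list[str]:
    """Deterministically derive child links for a tarpit page.

    Every fake page links to `fanout` more fake pages, so a crawler
    that follows links wanders forever. The links are derived from a
    hash of the path, so nothing is stored server-side.
    """
    links = []
    for i in range(fanout):
        h = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        links.append(f"{path}/{h}")
    return links

def render_page(path: str) -> str:
    """Render a cheap throwaway HTML page full of maze links."""
    body = "".join(f'<a href="{link}">{link}</a>\n' for link in page_links(path))
    return f"<html><body>\n{body}</body></html>"
```

in practice you’d serve these pages with an artificial delay (the “useless trickle”) and only to clients that ignore robots.txt; real deployments like iocaine/nepenthes add that throttling on top.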
Imo signal protocol is mostly fairly robust, signal service itself is about the best middle ground available to get the general public off bigtech slop.
It compares favorably against whatsapp while providing comparable UX/onboarding/rendezvous, which is pretty essential to get your non-tech friends/family out of meta’s evil clutches.
Just the sheer number of people signal’s helped to protect from eg. meta, you gotta give praise for that.
It is lacking in core features which would bring it to the next level of privacy, anonymity and safety. But it’s not exactly trivial to provide ALL of the above in one package while retaining accessibility to the general public.
Personally, I’d be happier if signal began to offer these additional features as options, maybe behind a consent checkbox like “yes i know what i’m doing (if someone asked you to enable this mode & you’re only doing it because they told you to, STOP NOW -> ok -> NO REALLY, STOP NOW IF YOU ARE BEING ASKED TO ENABLE THIS BY ANYONE -> ok -> alright, here ya go…)”.
i think they mean future devices, not previously sold.
either way the thread is 99% invalid criticism of what is afaict one of the best projects of our generation


Google could snap its fingers tomorrow and lock down the ability to unlock bootloaders.
only valid point in the post afaict


is the machine the problem? that seems more like a philosophical or semantic debate.
the machine is not fit for the advertised purpose.
to some people that means the machine has a fault.
to others that means the human salesperson is irresponsibly talking bs about their unfinished product
imo an earnest reading of the logs has to acknowledge at least potential evidence of openai’s monetisation loop at play in a very murky situation.


It’s not any more conductive
quick note: you’re likely correct that the conductivity isn’t higher - that’s a property of the aluminium itself - but the conductance likely is, since it scales with the cross-section.
in other words, i second your suggestion of heavier duty foil (for EM reasons, skin effect etc) alongside the mechanical factors you mentioned.
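a quick back-of-envelope sketch of the distinction (illustrative numbers; the 16 µm vs 80 µm thicknesses are my assumptions for household vs heavy-duty foil): conductance of a strip is G = σ·A/L, so at fixed conductivity σ, thicker foil means proportionally higher conductance.

```python
def conductance(sigma_s_per_m: float, thickness_m: float,
                width_m: float, length_m: float) -> float:
    """Conductance G = sigma * A / L of a rectangular foil strip.

    sigma is the material's conductivity (a fixed property of the
    aluminium); G grows with cross-sectional area, i.e. thickness.
    """
    area = thickness_m * width_m
    return sigma_s_per_m * area / length_m

SIGMA_AL = 3.5e7  # conductivity of aluminium, S/m (approximate)

thin = conductance(SIGMA_AL, 16e-6, 0.3, 0.3)   # ~16 um household foil
thick = conductance(SIGMA_AL, 80e-6, 0.3, 0.3)  # ~80 um heavy-duty foil
# same material, same conductivity - but 5x the conductance
```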


tldr: VM->RDP seamless render
WinApps works by:
- Running Windows in a Docker, Podman or libvirt virtual machine.
- Querying Windows for all installed applications.
- Creating shortcuts to selected Windows applications on the host GNU/Linux OS.
- Using FreeRDP as a backend to seamlessly render Windows applications alongside GNU/Linux applications.


(ok i see, you’re using the term CPU colloquially to refer to the processor. i know you obviously know the difference & that’s what you meant - i just mention the distinction for others who may not be aware.)
ultimately op may not require exact monitoring, since they compared it to standard system monitors etc, which are ofc approximate as well. so the tools as listed by Eager Eagle in this comment may be sufficient for the general use described by op?
eg. these, screenshots looks pretty close to what i imagined op meant
now onto your very cool idea of substantially improving the temporal resolution of measuring memory bandwidth…you’ve got me very interested with your idea :)
my initial sense is counting completed L3/L4 cache misses sourced from DRAM and similar events might be a lot easier - though as you point out that will inevitably accumulate event counts within a given time interval rather than capturing individual events.
i understand the role of parity bits in ECC memory, but i didn’t quite understand how & which ECC fields you would access, and how/where you would store those results with improved temporal resolution compared to event counts?
would love to hear what your setup would look like? :) which ECC-specific masks would you monitor? where/how would you store/process such high resolution results without impacting the measurement itself?
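for what it’s worth, here’s the coarse counting approach i had in mind, as a sketch (the 64-byte line size and the specific numbers are assumptions; the right perf event names vary by CPU): convert DRAM-sourced miss counts per sampling interval into an approximate bandwidth.

```python
CACHE_LINE_BYTES = 64  # typical x86 line size; your CPU's fill size may differ

def approx_dram_bandwidth(miss_count: int, interval_s: float) -> float:
    """Approximate DRAM read bandwidth (bytes/s) from a count of
    LLC misses serviced by DRAM during a sampling interval.

    This is the aggregate-counting approach: you get one number per
    interval, not per-event timing, so temporal resolution is bounded
    by how short you can make the interval.
    """
    return miss_count * CACHE_LINE_BYTES / interval_s

# e.g. 10 million DRAM-sourced misses observed in a 10 ms window:
bw = approx_dram_bandwidth(10_000_000, 0.010)  # 6.4e10 bytes/s, i.e. 64 GB/s
```

note this only captures demand misses that actually hit DRAM - prefetchers and write traffic need their own events, which is part of why i’m curious about your ECC-based idea.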


CPU and Memory use clock speed regulated by voltage to pass data back and forth with no gates between
could you please explain what you mean by no gates?
i think i can answer this
personally, when i first encountered graphene’s radical statements about the terrible security landscape we’re all subjected to, i reacted quite negatively & assumed they were crazy.
then i actually checked the technical details of their claims, and fuck me, they turned out to be SCARILY correct.
most people don’t actually bother with the second part. and you end up with a classic “shoot the messenger” scenario, where the bearer of bad news is equated with the bad news itself & punished by the mob (because they feel it’s easier than actually facing the uncomfortable reality of the bad news).
that scenario can only play out for so long before the messenger gets sick of being shot every day & reacts badly to the crowd. then the crowd points at their poor reaction & uses it as further “evidence” against their character.