• 0 Posts
  • 31 Comments
Joined 5 years ago
Cake day: October 2nd, 2020

  • tar pits target the scrapers.

    were you talking also about poisoning the training data?

    two distinct (but imo highly worthwhile) things

    tar pits are a bit like turning the tap off (or to a useless trickle). fortunately it’s well understood how to do it efficiently and it’s difficult to counter.

    poisoning is a whole other thing. i’d imagine if nothing comes out of the tap the poison is unlikely to prove effective. there could perhaps be some clever ways to combine poisoning with tarpits in series, but in general they’d be deployed separately or at least in parallel.

    bear in mind, to meaningfully deploy a tar pit against scrapers you usually need some permissions on the server, so it may not help much with the exact problem in the article (except for some short-term fuckery perhaps). poisoning, otoh, probably is important for this problem
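    to make the "turning the tap to a trickle" bit concrete, here's a minimal sketch in python (everything here - link derivation, chunk sizes, delays - is illustrative, not taken from any particular tarpit tool): the maze of links is derived deterministically from the request path, so it's infinite but needs no state on the server, and each response is dripped out a few bytes at a time.

```python
# Minimal tar-pit sketch. A scraper that follows these links wanders an
# endless, stateless maze while each page arrives at a useless trickle.
import hashlib
import time

def fake_links(path: str, n: int = 5):
    """Deterministically derive n child links from the current path,
    so the maze is infinite but needs no server-side state."""
    h = hashlib.sha256(path.encode()).hexdigest()
    return [f"{path.rstrip('/')}/{h[i*8:(i+1)*8]}" for i in range(n)]

def render_page(path: str) -> str:
    """Build a trivial page that only links deeper into the maze."""
    links = "".join(f'<a href="{l}">{l}</a>\n' for l in fake_links(path))
    return f"<html><body>\n{links}</body></html>\n"

def drip(payload: str, chunk: int = 16, delay: float = 2.0):
    """Yield the page a few bytes at a time; the delay between chunks
    is what turns the tap down to a trickle."""
    for i in range(0, len(payload), chunk):
        yield payload[i:i + chunk]
        time.sleep(delay)
```

    you'd wire render_page/drip into whatever request handler your server exposes - which is exactly where the "you need some permissions on the server" caveat bites.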



  • ganymede@lemmy.ml to Privacy@lemmy.ml · Is Signal messaging really private?

    Imo the signal protocol is fairly robust, and the signal service itself is about the best middle ground available to get the general public off bigtech slop.

    It compares favorably against whatsapp while providing comparable UX/onboarding/rendezvous, which is pretty essential to get your non-tech friends/family out of meta’s evil clutches.

    Just for the sheer number of people signal’s helped protect from eg. meta, you gotta give it praise.

    It is lacking in core features which would bring it to the next level of privacy, anonymity and safety. But it’s not exactly trivial to provide ALL of the above in one package while retaining accessibility to the general public.

    Personally, I’d be happier if signal began to offer these additional features as options, maybe behind a consent checkbox like “yes i know what i’m doing (if someone asked you to enable this mode & you’re only doing it because they told you to, STOP NOW -> ok -> NO REALLY, STOP NOW IF YOU ARE BEING ASKED TO ENABLE THIS BY ANYONE -> ok -> alright, here ya go…)”.







  • (ok i see, you’re using the term CPU colloquially to refer to the processor. i know you obviously know the difference & that’s what you meant - i just mention the distinction for others who may not be aware.)

    ultimately op may not require exact monitoring, since they compared it to standard system monitors etc, which are ofc approximate as well. so the tools as listed by Eager Eagle in this comment may be sufficient for the general use described by op?

    eg. these - the screenshots look pretty close to what i imagined op meant

    now onto your very cool idea of substantially improving the temporal resolution of measuring memory bandwidth…you’ve got me very interested with your idea :)

    my initial sense is counting completed L3/L4 cache misses sourced from DRAM and similar events might be a lot easier - though as you point out that will inevitably accumulate event counts within a given time interval rather than capture individual events.

    i understand the role of parity bits in ECC memory, but i didn’t quite understand how & which ECC fields you would access, and how/where you would store those results with improved temporal resolution compared to event counts?

    would love to hear what your setup would look like? :) which ECC-specific masks would you monitor? where/how would you store/process such high resolution results without impacting the measurement itself?
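    fwiw, to sketch what the "count LLC misses per interval" approach could look like on linux (assumptions throughout: perf is installed, your cpu exposes an LLC-load-misses event - the exact event name varies by vendor/model, check `perf list` - and the 64-byte line size plus the csv column layout should be verified locally):

```python
# Sketch: interval-based DRAM-traffic estimation via perf counters.
# Each completed LLC load miss pulls roughly one cache line (commonly
# 64 bytes, an assumption) from DRAM, so per-interval miss counts give
# an approximate read-bandwidth figure.
import subprocess

CACHE_LINE_BYTES = 64  # typical, check your CPU

def sample_llc_misses(interval_ms: int = 100, duration_s: int = 1):
    """Run `perf stat` in interval mode (-I) with CSV output (-x,) and
    parse the lines it prints to stderr.
    Returns a list of (timestamp_s, miss_count) tuples."""
    cmd = [
        "perf", "stat", "-e", "LLC-load-misses",
        "-I", str(interval_ms), "-x", ",",
        "--", "sleep", str(duration_s),
    ]
    out = subprocess.run(cmd, capture_output=True, text=True)
    samples = []
    for line in out.stderr.splitlines():
        fields = line.split(",")
        # assumed layout: time,count,unit,event-name,...
        if len(fields) >= 4 and fields[3] == "LLC-load-misses":
            try:
                samples.append((float(fields[0]), int(fields[1])))
            except ValueError:
                pass  # skip "<not counted>" / "<not supported>" rows
    return samples

def approx_bandwidth(samples, interval_ms: int = 100):
    """Convert per-interval miss counts to rough MB/s."""
    return [m * CACHE_LINE_BYTES / (interval_ms / 1000) / 1e6
            for _, m in samples]
```

    the number is only ever approximate (prefetchers, non-temporal stores and write traffic all muddy it), which is part of why the higher-resolution ECC idea is so interesting.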





  • edit: nvm i re-read what you wrote

    i agree it does mostly fulfill the criteria for libre software. perhaps not in every way to the same spirit as other projects, but that is indeed a separate discussion.

    ~~how many communities are doing that right now? i suspect you may be drastically understating the barriers for that. but would be delighted to be proven wrong...~~




  • afaict the topic of the article seems to be focusing on trust as in privacy and confidentiality

    for the discussion i think we can extend “trust” to also mean trusting the ethics and motivations of the company producing the “AI”

    imo what this overlooks is that a community or privately made “AI” running entirely offline has the capacity to tick those boxes rather differently.

    trusting it to be effective is perhaps an entirely different discussion however

    feeling like you’ve been listened to can be therapeutic.

    actionable advice is an entirely different matter ofc.


  • Or they’re just adding improvements to the software they heavily rely on.

    which they can do in private any time they wish, without any of the fanfare.

    if they actually believe in opensource let them opensource windows 7[1], or idk the 1/4 of a century old windows 2k

    instead we get the fanfare as they pat themselves on the back for opensourcing MS-DOS 4.0 early last year (not even 8.0, which is 24 years old btw - 4.0, which came out in 1986).

    38 years ago…

    MS-fucking-DOS, from 38 years ago, THAT’S how much they give a shit about opensource mate.

    all we get is a poor pantomime which actually only illustrates just how stupid they truly think we are to believe the charade.

    does any of that mean they must be actively shipping “bad code” in this project? not by any means. does it mean microsoft will never make a useful contribution to linux? not by any means. what it does mean is they’re increasing their sphere of influence over the project. and they have absolutely no incentive to help anyone but themselves - in fact the opposite.

    as everyone knows (it’s not some deep secret the tech heads on lemmy somehow didn’t hear about) microsoft is highly dependent on linux for major revenue streams. anything a monolith depends on which they don’t control represents a risk. they’d be negligent if they didn’t try to exert control over it. and that’s for any organisation in their position. then factor in their widespread outspoken agenda against opensource, embrace, extend, extinguish and the vastly lacking longterm evidence to match their claims of <3 opensource.

    they’re welcome to prove us all wrong, but that isn’t even on the horizon currently.

    [1] yes yes, they claim they can’t because “licensing”, which is mostly (but not entirely) fucking flimsy. but ok, devil’s advocate: release the rest then. but nah.