• 0 Posts
  • 45 Comments
Joined 3 years ago
Cake day: July 11th, 2023

  • So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations around it, the investment money sunk into it, or the huge amount of resources (such as electricity) used by it.

    Specifically for Microsoft, there doesn’t really seem to be any area where MS’ core business value for customers gains from adding AI, in which case this “AI everywhere” strategy at Microsoft is an incredibly shit business choice that just burns money and damages brand value.

    It’s a shiny new tool that is really powerful and flexible and that everyone is trying to cram in everywhere. Eventually, most of those attempts will collapse in failure, probably causing a recession, and afterwards the useful use cases will become part of how we all do things. AI is now where the internet was in the late 80s - just beyond the point where it’s not just some academics fiddling with it in research labs, but not in any way a mature technology.

    Most gaming PCs from the 2020s can run a model locally, though it might need to be a pruned one, so maybe a little further along.
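    A minimal sketch of what running a pruned/quantized model on a gaming PC can look like, assuming llama-cpp-python is installed and a 4-bit GGUF model file has already been downloaded (the file path and settings below are illustrative, not specific recommendations):

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a 4-bit quantized GGUF model file is already on disk;
# the path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
    n_ctx=4096,       # context window size
)

out = llm("Summarize why quantized models fit on consumer GPUs.", max_tokens=128)
print(out["choices"][0]["text"])
```

    The quantization is what makes this fit: an 8B-parameter model at roughly 4 bits per weight is about 4-5 GB of weights, which sits comfortably in the 8 GB of VRAM common on mid-range gaming cards.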







  • “…that he just wants a propaganda bot that regurgitates all of the right wing talking points.”

    Then he has utterly failed with Grok. One of my new favorite pastimes is watching right wingers get angry that Grok won’t support their most obviously counterfactual bullshit, then try to argue it into saying something they can declare a win from.



  • Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.

    Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.







  • You may have nothing to fear right now, but you never know who’s going to be in office soon.

    The way I always explain it to people - take any additional government power or access to information you either don’t care about or actively support. Now imagine whoever you oppose/hate the most taking office and trying to use that against your interests. Are you still OK with them having that power? Same principle applies regardless of what power or who’s pushing for it.

    It’s like due process - you don’t want any category of alleged violation to be exempt from due process, and if you don’t understand why, then apparently someone needs to wrongfully accuse you of exactly that so you understand the problem.


  • If AI didn’t exist, it would’ve probably been astrology or conspiracy theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.

    Or hearing the Beatles’ White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time only to return at the right moment to save the day and take over LA. That one caused several murders.

    But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

    If you’re sufficiently detached from reality, nearly anything validates the psychosis.


  • To be clear, when you say “seeded from” you mean an image that was analyzed as part of building the image-classifying statistical model that is then essentially run in reverse to produce images, yes?

    And you are arguing that every image analyzed to calculate the weights of that model is in a meaningful way contained in every image it generated?

    I’m trying to nail down exactly what you mean when you say “seeded by.”


  • OK, so this is just the general anti-AI image generation argument where you believe any image generated is in some meaningful way a copy of every image analyzed to produce the statistical model that eventually generated it?

    I’m surprised you’re going the CSAM route with this and not just arguing that any AI generated sexually explicit image of a woman is nonconsensual porn of literally every woman who has ever posted a photo on social media.


  • “…was seeded with the face of a 15yr old and that they really are 15 for all intents and purposes.”

    That’s…not how AI image generation works? AI image generation isn’t just building a collage from random images in a database - the model doesn’t have a database of images within it at all - it just has a bunch of statistical weightings and net configuration that are essentially a statistical model for classifying images, being told to produce whatever input maximizes an output resembling the prompt, starting from a seed. It’s not “seeded with an image of a 15 year old”, it’s seeded with white noise and basically asked what that white noise would look like as (in this case) “woman porn miniskirt”, then that’s repeated a few times until the resulting image is stable (a rough sketch of that loop follows this comment).

    Unless you’re arguing that somewhere in the millions of images tagged “woman” being analyzed to build that statistical model there is probably at least one person under 18, and that any image of “woman” generated by such a model is necessarily underage because the weightings were impacted, however slightly, by that image or images. In which case you could also argue that all drawn images of humans are underage, because whoever drew them has probably seen a child at some point and therefore everything they draw is tainted by having been exposed to children ever.
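    A rough sketch of that noise-to-image loop, using the Hugging Face diffusers library (the model name, prompt, and step count are purely illustrative):

```python
# Rough sketch of "start from white noise, repeatedly denoise toward the prompt",
# using the Hugging Face diffusers pipeline. Model name, prompt, and settings
# are illustrative, not a statement about any particular service.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The "seed" is literally the RNG seed for the initial white-noise latent,
# not any stored training image.
generator = torch.Generator("cuda").manual_seed(1234)

image = pipe(
    "a watercolor painting of a lighthouse at dusk",  # the text prompt
    num_inference_steps=30,  # repeated denoising passes until the image stabilizes
    generator=generator,
).images[0]
image.save("out.png")
```

    Nothing in that loop looks up stored images; the only image-shaped data involved is the white-noise latent the generator produces and its progressively denoised versions.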