Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”

X’s current owner Elon Musk quickly replied, “I agree.”

  • kibiz0r@midwest.social · 6 days ago (+80/-1)

    IP law does 3 things that are incredibly important… but they were basically irrelevant from roughly 1995 to 2023.

    1. Accurate attribution. Knowing who actually made a thing is super important for the continued development of ideas, as well as just granting some dignity to the inventor/author/creator.
    2. Faithful reproduction. Historically, bootleg copies of things would often be abridged to save costs or modified to suit the politics of the bootlegger, but would still be sold under the original title. It’s important to know what the canonical original content is, if you’re going to judge it fairly and respond to it.
    3. Preventing bootleggers from outcompeting original creators through scale.

    Digital technology made these irrelevant for a while, because search engines could easily answer #1, digital copies are usually exact copies so #2 was not an issue, and digital distribution made #3 (scale) much more balanced.

    But then came AI. And suddenly all 3 of these concerns are valid again. And we’ve got a population who just spent the past 30 years living in a world where IP law had zero upsides and massive downsides.

    There’s no question that IP law is due for an overhaul. The question is: will we remember that it ever did anything useful, or will we exchange one regime of fatcats fucking over culture for another one?

      • odioLemmy@lemmy.world · 6 days ago (+12)

        Ask yourself this question: how does generative AI respect these 3 boundaries set by IP law? All providers of generative AI services should be required by law to explicitly state this.

        • Riskable@programming.dev · 6 days ago (+4/-2)

          I’m still not getting it. What does generative AI have to do with attribution? Like, at all.

          I can train a model on a billion pictures from open, free sources that were specifically donated for that purpose and it’ll be able to generate realistic pictures of those things with infinite variation. Every time it generates an image it’s just using logic and RNG to come up with options.

          Do we attribute the images to the RNG god or something? It doesn’t make sense for attribution to come into play here.
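
          A minimal sketch of what I mean by “logic and RNG” (toy numbers and categories, nothing like a real image model; the learned weights just stand in for whatever training produced):

          ```python
          import random

          # Toy "model": fixed weights that stand in for whatever training produced.
          # The categories and values are made up purely for illustration.
          learned_weights = {"meadow": 0.5, "forest": 0.3, "coastline": 0.2}

          def generate_scene(seed: int) -> str:
              """Deterministic logic plus an RNG: same frozen weights, endless variation."""
              rng = random.Random(seed)
              options = list(learned_weights)
              probs = list(learned_weights.values())
              return rng.choices(options, weights=probs, k=1)[0]

          # Every seed yields another variation from the same "knowledge".
          for seed in range(5):
              print(seed, generate_scene(seed))
          ```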

          • ComfortablyDumb@lemmy.ca · 5 days ago (+8)

            I would like to take a crack at this. There is a recent trend of “Ghiblifying” one’s picture, i.e. converting a photo into a Ghibli-style image. If the model had been trained only on free sources, this would not be possible.

            Internally, an LLM works by having networks that activate based on certain signals. When you ask it a question, it activates a network of similar-looking words and gives that back to you. When you convert an image, something similar happens. You cannot form these networks, and the thresholds at which they activate, without seeing copyrighted images from Studio Ghibli. There is no way in hell or heaven for that to happen.
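
            A toy sketch of that weights-and-thresholds point (one made-up “neuron”, nowhere near a real network): every number in it is derived from the training examples, so whatever was in those examples is baked into the weights.

            ```python
            # Toy "neuron": its weights are literally averages of the training examples,
            # so nothing it outputs exists independently of what it was shown.
            training_examples = [   # made-up feature vectors, e.g. (saturation, line_softness)
                (0.8, 0.9),         # imagine: a frame in the style being imitated
                (0.7, 0.8),
            ]

            # "Training": derive each weight directly from the examples.
            weights = [sum(ex[i] for ex in training_examples) / len(training_examples)
                       for i in range(2)]
            threshold = 1.0

            def activates(features) -> bool:
                """Weighted sum against a threshold: the 'signal' described above."""
                return sum(w * x for w, x in zip(weights, features)) >= threshold

            print(weights, activates((0.9, 0.9)), activates((0.1, 0.2)))
            ```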

            OpenAI trained their models on pirated material, just like Meta did. So when an AI produces an image in the style of something, it should attribute the person it actually took that style from. That’s not what’s happening. Instead it just makes more money for the thief.

            • Riskable@programming.dev · 2 days ago (+1)

              If you hired someone to copy Ghibli’s style, then fed that into an AI as training data, it would completely negate your entire argument.

              It is not illegal for an artist to copy someone else’s style. They can’t copy another artist’s work—that’s a derivative—but copying their style is perfectly legal. You can’t copyright a style.

              All of that is irrelevant, however. The argument is that training an AI on anything is somehow a violation of copyright. It is not. It is absolutely 100% not a violation of copyright to do that!

              Copyright is all about distribution rights. Anyone can download whatever TF they want and they’re not violating anyone’s copyright. It’s the entity that sent the person the copyrighted material that violated the law. Therefore, Meta, OpenAI, et al. can host enormous libraries of copyrighted data in their data centers and use that to train their AI. It’s not illegal at all.

              When some AI model produces a work that’s so similar to an original work that anyone would recognize it (“yeah, that’s from Spirited Away”), then yes: they violated Ghibli’s copyright.

              If the model produces an image of some random person in the style of Studio Ghibli, that is not violating anyone’s copyright. It is neither illegal nor immoral. No one is deprived of anything in such a transaction.

          • Carrot@lemmy.today · 5 days ago (+5)

            I think your understanding of generative AI is incorrect. It’s not just “logic and RNG.” It uses training data (read: both copyrighted and uncopyrighted material) to build a model of “correctness” or “expectedness”. If you then give it a pattern (read: a question or prompt), it checks its “expectedness” model for whatever should come next. If you ask it “how many cups in a pint”, it will check the most common thing it has seen after that exact string of words in its training data: 2. If you ask for a picture of something “in the style of Van Gogh”, it will spit out something with thick paint and swirls, as those are the characteristics of the pictures in its training data that have been tagged with “Van Gogh”. These responses are not brand new; they are merely a representation of the training data most likely to work as a response to your request. In this case, if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use that data must be given by the current copyright holder.
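
            A crude sketch of that “most common thing it has seen after that exact string” idea, using a plain frequency table over a made-up three-line corpus (real models generalize far beyond exact matches, so treat this only as an illustration):

            ```python
            from collections import Counter, defaultdict

            # Made-up "training data"; real corpora are billions of documents.
            corpus = [
                "how many cups in a pint 2",
                "how many cups in a pint 2",
                "how many cups in a pint two maybe",
            ]

            # Build a table: prompt -> counts of whatever token followed it.
            continuations = defaultdict(Counter)
            for line in corpus:
                tokens = line.split()
                for i in range(1, len(tokens)):
                    prompt = " ".join(tokens[:i])
                    continuations[prompt][tokens[i]] += 1

            def most_expected(prompt: str) -> str:
                """Return the continuation seen most often after this exact prompt."""
                return continuations[prompt].most_common(1)[0][0]

            print(most_expected("how many cups in a pint"))  # -> "2"
            ```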

            • Riskable@programming.dev · 2 days ago (+1)

              I think your understanding of generative AI is incorrect. It’s not just “logic and RNG”…

              If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.

              The data used to train an AI model is copyrighted. It’s impossible for something to exist without copyright (in the past 100 years). Even public domain works had copyright at some point.

              if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.

              This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to recite every single picture they’ve seen or book they’ve ever read whenever they produce something.

              • Carrot@lemmy.today · 2 days ago (+1)

                If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.

                Sure, but this is a bad faith argument. You can say this about anything. Everything is made up of other stuff, it’s what someone has done to combine or use those elements that matters. You could extend this to anything proprietary. Manufacturing equipment is just a handful of metals, rubbers, and plastics. However, the context in which someone uses those materials is what matters when determining if copyright laws have been broken.

                The data used to train an AI model is copyrighted. It’s impossible for something to exist without copyright (in the past 100 years). Even public domain works had copyright at some point.

                If the data used to train the model was copyrighted data acquired without explicit permission from the data owners, it itself cannot be copyrighted. You can’t take something copyrighted by someone else, put it in a group of stuff that is also copyrighted by others, and claim you have some form of ownership over that collection of works.

                This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to recite every single picture they’ve seen or book they’ve ever read whenever they produce something.

                You speak confidently, but I don’t think you understand the problem area enough to act as an authority on the topic.

                Laws can be different for individuals and companies. Hell, terms of use can be different for two different individuals, and the copyright owner actually gets a say in how their thing can be used by different groups of people. For instance, with some 3D art software, students can use it for free, but their use agreement says they cannot profit off of anything they make. Non-students have to pay, but can sell their work without consequences. Companies have to pay even more, but often get bulk discounts if they are buying licenses for their whole team.

                Artists have something of value: training data. We know it is valuable to AI companies, because artists are being approached by those companies asking to buy the rights to train their models on the artists’ work. If an AI company just uses an artist’s work as training data without permission, it is stealing the potential revenue the artist could have made selling it to a different AI company. Taking away the revenue potential of someone’s work is the basis for a copyright/fair-use violation.

      • finitebanjo@lemmy.world · 5 days ago (+1/-2)

        I’ve decided all of your comments are mine. I’m feeding them into an AI which approximates you, except it ends every statement with how stupid and lame it is. As a side effect, it talks a lot about gayness, in a derogatory manner.

        Would you like me to stop?

      • finitebanjo@lemmy.world · 5 days ago (+4/-1)

        Are we pretending metadata on images and sounds actually works and doesn’t get scrubbed almost immediately?
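
        For what it’s worth, the scrubbing really is trivial. A rough sketch with Pillow (file names are placeholders): simply re-saving a JPEG without passing its EXIF along drops the metadata, which is roughly what happens when platforms re-compress uploads.

        ```python
        from PIL import Image  # Pillow

        original = Image.open("photo_with_metadata.jpg")    # placeholder path
        print(dict(original.getexif()))                     # camera model, timestamps, maybe GPS...

        # Re-encoding without exif= writes no EXIF block at all.
        original.save("scrubbed.jpg", quality=85)

        print(dict(Image.open("scrubbed.jpg").getexif()))   # -> {} (attribution metadata gone)
        ```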