• RagingSnarkasm@lemmy.world · 41 points · 19 hours ago

    When you’re a scrappy little startup like OpenAI, you can’t afford to be paying “people” for their “work” and other nonsense like that.

  • Grimy@lemmy.world · 24 up, 1 down · edited · 20 hours ago

    We first use the DE-COP membership inference attack (Duarte et al. 2024) to determine whether a particular data sample was part of a target model’s training set. This works by quizzing an LLM with a multiple choice test to see whether it can identify original human-authored O’Reilly book paragraphs from machine-generated paraphrased alternatives that we present it with. If the model frequently correctly identifies the actual (human-generated) book text (for books published during the model’s training period) then this likely indicates prior model recognition (training) of that text.

    I’m almost certain OpenAI trained on copyrighted content, but this proves nothing other than its ability to distinguish between human-written and machine-written text.
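The multiple-choice test quoted above can be sketched roughly like this. This is a hedged illustration, not the paper's actual code: the LLM is stubbed out with a `recall_rate` parameter, and all function names here are made up for the sketch.

```python
import random

def stub_model_choice(n_options, true_index, recall_rate, rng):
    """Stand-in for the LLM under test: recognizes the human-authored
    original with probability recall_rate, otherwise guesses uniformly."""
    if rng.random() < recall_rate:
        return true_index
    return rng.randrange(n_options)

def quiz_accuracy(n_trials, n_options, recall_rate, seed=0):
    """Fraction of multiple-choice quizzes in which the model picks the
    original passage out of machine-generated paraphrases."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        true_index = rng.randrange(n_options)
        if stub_model_choice(n_options, true_index, recall_rate, rng) == true_index:
            correct += 1
    return correct / n_trials

# A model that memorized the text should beat the 1/n chance baseline;
# a text it never saw should score near chance (0.25 with 4 options).
in_training = quiz_accuracy(2000, 4, recall_rate=0.6)
held_out = quiz_accuracy(2000, 4, recall_rate=0.0)
print(f"suspected training text: {in_training:.2f} (chance = 0.25)")
print(f"held-out control text:   {held_out:.2f}")
```

The held-out comparison is the control a later reply suggests: a text that was definitely not in the training set should land near the chance baseline, so a gap between the two scores, rather than raw accuracy alone, is what signals membership.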

    • Echo Dot@feddit.uk · 5 points · edited · 12 hours ago

      The other problem is that even if their books are in the data set, there’s no evidence that they were taken directly from the source. OpenAI scrapes websites, right? And O’Reilly books are often pirated because of their predatory business model (they change their textbooks every year, meaning you can’t use a previous year’s secondhand book). So it’s entirely possible, although unlikely, that the content got in there from scraping a pirate site.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 5 points · 18 hours ago

      I mean, you can control for that by checking different texts, such as something that was definitely not in the training set.