To help train its AI models, Meta (and others) have been using pirated copies of copyrighted books without the consent of authors or publishers. The company behind Facebook and Instagram faces an ongoing class-action lawsuit brought by authors including Richard Kadrey, Sarah Silverman, and Christopher Golden, and it has already scored a major (and surprising) victory in the case: a California federal court concluded last year that using pirated books to train its Llama LLM did qualify as fair use.

You’d think this case would be as open-and-shut as it gets, but never underestimate an army of high-priced lawyers. Meta has now come up with the striking defense that uploading pirated books to strangers via BitTorrent also qualifies as fair use. It further claims that this is doubly good, because it has helped establish the United States’ leading position in the AI field.

Meta further argues that every author involved in the class action has admitted to being unaware of any Llama output that directly reproduces content from their books. If the authors cannot provide evidence of such infringing output or of damage to sales, Meta says, then this lawsuit is not about protecting their books but about attacking the training process itself (which the court has already ruled is fair use).

Judge Vince Chhabria now has to decide whether to allow this defense, a decision that will have consequences not only for this case but for many other AI lawsuits involving things like shadow libraries. The BitTorrent uploading and distribution claims are the last unresolved element of this particular lawsuit, which has been rumbling on for three years now.

    • artifex@piefed.social (OP) · ↑51 · 16 days ago

      As long as you’re rich enough to hire your own army of lawyers, probably.

      That said, it seems like when you’re rich enough to hire your own army of lawyers you can pretty much do whatever you want.

      • Kailn@lemmy.myserv.one · ↑6 · 16 days ago

        Well, that doesn’t sound civil or lawful at all; it’s more like the “rules” of dark-ages kingdoms, which don’t apply to a chosen few.

        If Meta and other big corps that support the US government get the special “avoid-judgment” card while you face punishment, then there’s no law, only bigotry.

        That would encourage individuals and small groups to keep their activities secret (go anonymous) and break the law whenever they can, because the “king and his followers” don’t follow their own “rules”.

        The US is not only getting dystopian, it’s committing primitive mistakes.

    • Dr. Moose@lemmy.world · ↑2 ↓1 · 15 days ago

      Yes, in fact there’s no framework or legal precedent right now, so everyone is already doing it. You can just scrape the web etc. and disregard IP ownership, because training AI is transformative work, as it should be.

  • Archangel1313@lemmy.ca · ↑89 ↓2 · 16 days ago

    I absolutely love the fact that all these companies are laying the legal groundwork to destroy intellectual property rights altogether. If they win enough of these cases, then every pirate on the open seas sails under a flag of amnesty.

      • lmmarsano@group.lt · ↑4 ↓11 · 16 days ago

        I wouldn’t be so confident without a legal argument to support your opinion.

        • Dead_or_Alive@lemmy.world · ↑10 · 15 days ago

          No legal argument is necessary. Just look at history. The rich and well connected have always lived by a different set of rules.

          See below:

          • Robert Richards (Du Pont heir): A 2014 Forbes article noted that Richards pleaded guilty to raping his 3-year-old daughter in 2008 and received probation instead of jail time, which caused public outrage.
          • August Busch IV: The former Anheuser-Busch CEO has been involved in past legal incidents, including a girlfriend’s overdose death at his house in 2010 and a car crash in 1983, but he faced no criminal charges in either case.
    • jabjoe@feddit.uk · ↑10 · 15 days ago

      Not all IP is self-serving. Even copyright isn’t always a bad thing, if you think of small artists, for example. My fear is mainly about CopyLeft, as I feel it’s been incredibly successful in pushing openness forwards. The megacorps hating it tells you it is doing its job. One of the things they love about LLMs and code is that they can license-wash away CopyLeft.

  • Paranoid Factoid@lemmy.world · ↑85 · 15 days ago

    So Meta gets to claim fair use with pure digital duplication, but archive.org doesn’t when it scans physical copies of books and only lends out the same number of copies as it owns in warehouses. That’s piracy.

    Got it.

  • Goodlucksil@lemmy.dbzer0.com · ↑44 · 16 days ago

    Classic “the end justifies the means” (bad) defense. If ISPs can send letters for torrenting, and Facebook torrented a lot, Facebook deserves a fair punishment.

  • Etterra@discuss.online · ↑25 · 15 days ago

    So when this works for them it’ll be precedent to allow the fair use pirating of all media and software, right?

    Oh never mind, I forgot that I don’t have billions of dollars to spend on lawyers. Never mind.

  • TheObviousSolution@lemmy.ca · ↑24 ↓1 · 16 days ago

    So we can pirate books too, as long as we aren’t able to reproduce them verbatim from memory?

    Judge Vince Chhabria either accepts whatever bribes he’s probably being offered and sides with Meta, or it will eventually go on to the Supreme Court, where they most definitely will. That’s the part of this that will work the most under an administration of no accountability.

  • melfie@lemy.lol · ↑17 · 16 days ago

    Looking forward to Jellyfin getting a LLM to train locally on movie preferences so everyone’s library is fair use. Wait, is this why LLMs are being shoehorned into everything?

  • Dr. Moose@lemmy.world · ↑14 · 15 days ago

    Honestly I agree with Meta here but this should apply to everyone. I think most people here conflate their hate for Meta with the factual reality of intellectual property.

    • SpaceMan9000@lemmy.world · ↑24 · 15 days ago

      I can hate both.

      People can also hate the fact that if you have enough money you can make everything legal.

      • Dr. Moose@lemmy.world · ↑2 ↓2 · 15 days ago

        What do you mean, you can hate both? What’s the other of your hates? Disregard for copyright absolutism?

    • discocactus@lemmy.world · ↑3 ↓1 · 16 days ago

      Unironically may become a legitimate defense. Although in that case, indiscriminately bombing gas stations in your town and extorting their owners should also be allowed but…

  • ArbitraryValue@sh.itjust.works · ↑10 · 16 days ago

    We’re going to end up in a situation where whatever is necessary to train AI is permitted, and the main question is whether that will be through (re)interpretation of existing law or the passage of a new law.

    • ctrl_alt_esc@lemmy.ml · ↑7 · 16 days ago

      Good thing I have a local model running that’s constantly learning, for precisely this reason.

        • XLE@piefed.social · ↑2 · 16 days ago

          If anything, this is proof you should be next in line for a large venture capital infusion!

  • andybytes@programming.dev · ↑10 ↓1 · 15 days ago

    So we subsidize these baby-killing bastards and they pull the broke-boy card. The United States is a brutal imperialist capitalist shithole… pffft, fuck capitalism.

  • ☂️-@lemmy.ml · ↑9 ↓1 · 16 days ago

    sure. thanks meta, anna’s archive will help me with my reading list.

  • ryathal@sh.itjust.works · ↑6 ↓1 · 16 days ago

    Arguing that training models isn’t fair use is going to be a massive uphill battle; it’s basically reading the book, but with a computer. It’s not actually a big deal to people, unless you hold the copyright to a ton of works and want a percentage of all the AI income these companies have made.

    Torrenting the books is almost certainly copyright infringement, but that carries a relatively low payout compared to the money these companies are getting for their models. The training being fair use means that rights holders can’t try to take any money from the model’s use. The statutory limits for infringement, even at per-work levels, aren’t significant compared to the legal cost of proving it happened.

    • OfCourseNot@fedia.io · ↑7 ↓1 · 16 days ago

      There’s an argument to be made that it is, in fact, not ‘reading’. The training of the model could be considered a lossy compression of the data. And streaming movies in a lossy compression format is not fair use, is it?

      • Fatal@piefed.social · ↑4 · 16 days ago

        It’s not the storage of the information that matters as much as the presentation. Google’s search index stores a huge amount of copyrighted material, even losslessly. But they only present small snippets at a time which is not considered copyright infringement. The question really is whether or not the information being presented by the models is in a format which is considered copyright infringement. So far, courts have not found that they are.

      • ryathal@sh.itjust.works · ↑2 · 16 days ago

        The model doesn’t stream out anyone’s content though. The article mentions that the plaintiffs have provided no examples of a prompt that creates anything substantial.

        Streaming a lossy compression would generally be infringement, but there is definitely a point where it becomes not infringement if it’s lossy enough.

        What a model generally stores is statistical, factual information that isn’t copyrightable in the first place: word counts, sentence lengths, sentiment analysis, and so on.
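
A toy sketch of the kind of non-expressive, aggregate statistics the comment above describes (the function name, sample text, and crude sentence splitting are invented purely for illustration; real training pipelines are far more elaborate):

```python
# Toy illustration: reducing a passage to aggregate statistics
# (word frequencies, sentence lengths, vocabulary size) from which
# the original expressive wording cannot be recovered.
from collections import Counter

def text_statistics(text: str) -> dict:
    # Crude sentence split on terminal punctuation; good enough for a sketch.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.lower().split()
    return {
        "word_counts": Counter(words),  # a frequency table, not prose
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_size": len(set(words)),
    }

stats = text_statistics("Call me Ishmael. Some years ago, never mind how long precisely.")
print(stats["vocabulary_size"], stats["avg_sentence_length"])  # 11 5.5
```

The frequency table and averages describe the text without containing it, which is the distinction the comment is drawing.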

    • FatCrab@slrpnk.net · ↑2 · 16 days ago

      Anthropic pirating books for their training corpus resulted in the biggest copyright settlement in history, well over a billion dollars. That is still being quibbled over, I believe, but they settled because they were likely to pay out more if the case went forward. So I’m not really sure where you’re coming from that infringement via torrenting does not result in monstrously large liability.

      • ryathal@sh.itjust.works · ↑2 · 16 days ago

        The judge in that case ruled the training wasn’t fair use for pirated books, which left them on the hook for potentially all revenue the model generated for them (likely a court-determined percentage), in addition to statutory damages. That is well north of 1.5 billion.

        • artifex@piefed.social (OP) · ↑3 · 16 days ago

          Which is kind of a pity. Anyone who’s ever written something on the net should be getting royalty checks from these fucks. I’m not exactly famous but I’ve written prolifically in my field of work and have gotten nearly word-for-word reproductions of my articles out of every big model I’ve tested since GPT-3.

        • FatCrab@slrpnk.net · ↑1 · 15 days ago

          Just noticed your reply and want to correct this. Anthropic settled; the 1.5 billion was not a judgment against them. Specifically, it covered the literal pirating of the training corpus. It had absolutely nothing to do with how training handled the data; they literally torrented an enormous portion of their training corpus.

          Anthropic DID try to argue that because they used the pirated material for training a model, it was fair use. The judge correctly decided that doesn’t make any fucking sense. Again, this is not about the models encoding data; it is literally just about the fact that these silly fucks torrented vast portions of their training corpus like college students building a porn library on college broadband.