So taking data without permission is bad, now?

I’m not here to say whether the R1 model is the product of distillation. What I can say is that it’s a little rich for OpenAI to suddenly be so very publicly concerned about the sanctity of proprietary data.

The company is currently involved in several high-profile copyright infringement lawsuits, including one filed by The New York Times alleging that OpenAI and its partner Microsoft infringed its copyrights and that the companies provide the Times’ content to ChatGPT users “without The Times’s permission or authorization.” Other authors and artists have suits working their way through the legal system as well.

Collectively, the contributions from copyrighted sources are significant enough that OpenAI has said it would be “impossible” to build its large language models without them. The implication is that copyrighted material had already been used to build these models long before these publisher deals were ever struck.

The filing argues, among other things, that AI model training isn’t copyright infringement because it “is in service of a non-exploitive purpose: to extract information from the works and put that information to use, thereby ‘expand[ing] [the works’] utility.’”

This kind of hypocrisy makes it difficult for me to muster much sympathy for an AI industry that has treated the swiping of other humans’ work as a completely legal and necessary sacrifice, a victimless crime whose benefits were so significant and self-evident that it wasn’t even worth having a conversation about beforehand.

A last bit of irony in the Andreessen Horowitz comment: There’s some handwringing about the impact of a copyright infringement ruling on competition. Having to license copyrighted works at scale “would inure to the benefit of the largest tech companies—those with the deepest pockets and the greatest incentive to keep AI models closed off to competition.”

“A multi-billion-dollar company might be able to afford to license copyrighted training data, but smaller, more agile startups will be shut out of the development race entirely,” the comment continues. “The result will be far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development.”

Some of the industry’s agita about DeepSeek is probably wrapped up in the last bit of that statement—that a Chinese company has apparently beaten an American company to the punch on something. Andreessen himself referred to DeepSeek’s model as a “Sputnik moment” for the AI business, implying that US companies need to catch up or risk being left behind. But regardless of geography, it feels an awful lot like OpenAI wants to benefit from unlimited access to others’ work while also restricting similar access to its own work.

  • proceduralnightshade@lemmy.ml

    I just don’t like the premise of a market where one has to sell their artistic labor in order to survive, or thrive. I’m on board with noncommercial licenses and everything because the reality looks different, but that was not my point. And neither was it the point of the original comment you replied to.