

But the people making money off of all of that are mad now, hence this article.
You can’t be sued over a style, and you can’t copyright one. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.
https://www.imdb.com/title/tt16369708/
https://www.imdb.com/title/tt15054592/
The dream is dead.
So you don’t interact with AI stuff outside of that? Have you seen any cool research papers or messed with any local models recently? Getting a bit of experience with the stuff can help you better inform people and see through the more bogus headlines.
It definitely seems that way depending on what media you choose to consume. You should try to balance the doomer scroll with actual research and open source news.
Ok, but is training an AI so it can plagiarize, often verbatim or with extreme visual accuracy, fair use? I see the first two articles argue that it is, but they don’t mention the many cases where the crawlers and scrapers ignored rules set up to tell them to piss off. That would certainly invalidate several cases of fair use.
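(For context, the “rules” in question are usually a site’s robots.txt file. Here’s a minimal Python sketch of the check a polite crawler is supposed to make before fetching anything; the bot name and URLs are placeholders, not any particular company’s crawler.)

```python
from urllib import robotparser

# Download and parse the site's robots.txt (placeholder domain).
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

user_agent = "ExampleTrainingBot"            # hypothetical crawler name
target = "https://example.com/some/article"  # hypothetical page

if rp.can_fetch(user_agent, target):
    print("robots.txt allows fetching", target)
else:
    # A well-behaved crawler stops here; the complaint above is about
    # scrapers that fetch the page anyway.
    print("robots.txt disallows fetching", target)
```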
You can plagiarize with a computer with copy & paste too. That doesn’t change the fact that computers have legitimate non-infringing use cases.
Instead of charging for everything they scrape, the law should force them to release all their data and training sets for free.
I agree
I’d wager 99.9% of the art and content created by AI could go straight to the trashcan and nobody would miss it. Comparing AI to the internet is like comparing writing to doing drugs.
But 99.9% of the internet is stuff that no one would miss. Things don’t have to have value to you to be worth having around. That trash could serve as inspiration for your 0.1%, or earn its creators the feedback they need to improve.
But the law is largely the reverse: it only denies the use of copyrighted works in certain ways. Using things “without permission” forms the bedrock on which artistic expression and free speech are built.
AI training isn’t only for mega-corporations. Setting up barriers like these only benefits the ultra-wealthy and will end with corporations gaining a monopoly on a public technology by making it prohibitively expensive and cumbersome for regular folks. What the people writing this article want would mean the end of open access to competitive, corporate-independent tools and would jeopardize research, reviews, reverse engineering, and even indexing information. They want you to believe that analyzing things without permission somehow goes against copyright, when in reality, fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich.
I recommend reading this article by Kit Walsh and this one by Tori Noble, staff attorneys at the EFF; this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries; and these two by Cory Doctorow.
Is Miyazaki going to go in on his son again?
Fuck 'em. I don’t care. I hope no one uses them.
I’m not discussing the use of private data, nor was I ever. You’re presenting a false dichotomy and trying to drag me into a completely unrelated discussion.
As for your other point: the difference between this and licensing for music samples is that the threshold for abuse is much, much lower. We’re not talking about hindering just expressive entertainment works; research, reviews, reverse engineering, and even indexing information would be up in the air. This article by Tori Noble, a staff attorney at the Electronic Frontier Foundation, should explain it better than I can.
Private conversations are something entirely different from publicly available data, and not really what we’re discussing here. Compensation for essentially making observations will inevitably lead to abuse of the system and deliver AI into the hands of the stupidly rich, something the world doesn’t need.
I mean realistically, we don’t have any proper rules in place. The AI companies, for example, just pirate everything from Anna’s Archive, and they’re rich enough to afford enough lawyers to get away with it. That’s unlike libraries, which pay for the books and DVDs on their shelves… So that’s definitely illegal by any standard.
You can make temporary copies of copyrighted materials for fair use applications. I seriously hope there isn’t a state out there that is going to pass laws that gut the core freedoms of art, research, and basic functionality of the internet and computers. If you ban temporary copies like cache, you ban the entire web and likely computers generally, but you never know these days.
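(To make “temporary copies” concrete: a rough Python sketch of what any browser or client does just to display a page; the bytes necessarily land in local memory before anything can be shown. The URL is just a placeholder.)

```python
from urllib.request import urlopen

# Fetching a page inherently makes a temporary local copy of it.
with urlopen("https://example.com/") as resp:
    page_bytes = resp.read()  # transient, in-memory copy of the work

print(f"Fetched {len(page_bytes)} bytes into local memory")
# The copy disappears when the process exits, but it had to exist
# for the page to be viewed at all; that's what rules on caching protect.
```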
Know your rights and don’t be so quick to bandwagon. Consider the motives behind what is being said, especially when it’s two entities like these battling it out.
You have to remember, AI training isn’t only for mega-corporations. By setting up barriers that only benefit the ultra-wealthy, you’re handing corporations a monopoly on a public technology by making it prohibitively expensive for regular people to keep up. These companies already own huge datasets and have whatever money they need to buy more. And that’s before they bind users to predatory ToS granting them exclusive access to user data, effectively selling our own data back to us. What some people want would mean the end of open access to competitive, corporate-independent tools and would leave us all worse off, with fewer rights than we started with.
The same people who abuse DMCA takedown requests for their chilling effects on fair use content now need your help to do the same thing to open source AI, their next great foe after libraries, students, researchers, and the public domain. Don’t help them do it.
I recommend reading this article by Cory Doctorow, and this open letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries. I’d like to hear your thoughts.
He’s not trying to get copyright for something he generated, he’s trying to have the court award copyright to his AI system “DABUS”, but copyright is for humans. Humans using Gen AI are eligible for copyright according to the latest guidance by the United States Copyright Office.
One of the provisions of fair use is the effects on the market. If your spambot is really shitting up the place, you may very well run afoul of the doctrine.
We’re saying the same thing here. It’s just that your characterization of gen AI as a “tech-enabled copying device” isn’t accurate. You should read this, which breaks down how all of this works.
The fair use doctrine allows you to do just that. The alternative would be someone being able to publish a book and then shutting anyone else out of publishing, discussing, or building on their ideas without them getting a kick-back.
The funny part is most of the headlines want you to believe that using things without permission is somehow against copyright, when in reality fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich. It’s sad watching people desperately try to become the kind of system they’re against.
Record them anyway. There’ll be more ways to de-anonymize them in the future.