Microsoft’s AI boss thinks it’s perfectly OK to steal content if it’s on the open web
(www.theverge.com)
copying is not theft
"Copying is theft" has been the corporations' argument for ages, but when they want our data and information to integrate into their business, suddenly they have the right to it.
If copying is not theft, then we have the right to copy their software and AI models as well, since they're available on the open web.
They got themselves into quite a contradiction.
You wouldn't download a car!
You realize that half of Lemmy is tying themselves in inconsistent logical knots trying to escape the reverse conundrum?
Copying isn't stealing and never was. Our IP system that artificially restricts information has never made sense in the digital age, and yet now everyone is on here cheering copyright on.
No, we don't; copying copyrighted material is copyright infringement, which is illegal. That doesn't make it theft, though.
Oversimplifying the issue makes for an uninformed debate.
any content you produce is automatically copyrighted
Issue is power imbalance.
There's a clear difference between a guy in his basement on his personal computer sampling music the original musicians almost never saw a single penny from, and a megacorp trying to drive creative professionals out of the industry in the hope that they can then hike up the prices to use their generative AI software.
Yeah, I'm not a fan of AI, but I'm generally of the view that anything posted on the internet, visible without a login, is fair game for indexing by a search engine, snapshotting in a backup (like the Internet Archive's Wayback Machine), or running user extensions on (including ad blockers). Is training an AI model all that different?
You can't be for piracy but against LLMs, for the same reason.
And I think most of the people on Lemmy are for piracy.
I'm not in favor of piracy or LLMs. I'm also not a fan of copyright as it exists today (I think we should go back to the 1790 US definition of copyright).
I think a lot of people here on lemmy who are "in favor of piracy" just hate our current copyright system, and that's quite understandable and I totally agree with them. Having a work protected for your entire lifetime sucks.
The problem with copyright has nothing to do with term limits. Those exacerbate the problem, but the fundamental problem with copyright and IP law is that it's a system of artificial scarcity where there's no need for one.
Rather than rewarding creators when their information is used, we ham-fistedly try to prevent others from using that information, so that people sometimes have to pay them to use it.
Capitalism is flat out the wrong system for distributing digital information, because as soon as information is digitized it is effectively infinitely abundant which sends its value to $0.
Copyright is not a capitalist idea, it's collectivist. See copyright in the Soviet Union, the initial bill of which was passed in 1925, right near the start of the USSR.
A pure capitalist system would have no copyright, and works would instead be protected through exclusivity (i.e. paywalls) and DRM. Copyright is intended to promote sharing by providing a period of exclusivity (a temporary monopoly on a work). Whether it achieves those goals is certainly up for debate.
Long terms go against any benefit to society that copyright might have. I think it does have a benefit, but that benefit is pretty limited and should probably only last 10-15 years. I think eliminating copyright entirely would leave most people worse off and probably mostly benefit large orgs that can afford expensive DRM schemes in much the same way that our current copyright duration disproportionately benefits large orgs.
Yes, it kind of is. A search engine just looks for keywords and links, and that's all it retains after crawling a site. It's not producing any derivative works, it's merely looking up an index of keywords to find matches.
An LLM can essentially reproduce a work, and the whole point is to generate derivative works. So by its very nature, it runs into copyright issues. Whether a particular generated result violates copyright depends on the license of the works it's based on and how much of those works it uses. So it's complicated, but there's very much a copyright argument there.
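For what it's worth, the "keywords and links" point can be made concrete with a toy inverted index (a minimal sketch of my own; the URLs and page text are made up for illustration):

```python
from collections import defaultdict

def build_index(pages):
    """Map each keyword to the set of URLs containing it.
    After crawling, this mapping is all the engine keeps."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

pages = {
    "example.com/a": "copyright law and fair use",
    "example.com/b": "training data and copyright",
}
index = build_index(pages)

# A query just returns pointers back to the originals;
# no new text is generated.
print(sorted(index["copyright"]))  # ['example.com/a', 'example.com/b']
```

The index only points you back to the source, whereas a generative model produces new text derived from what it ingested, which is where the copyright question gets murkier.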
My brain also takes information and creates derivative works from it.
Shit, am I also a data thief?
That depends, do you copy verbatim? Or do you process and understand concepts, and then create new works based on that understanding? If you copy verbatim, that's plagiarism and you're a thief. If you create your own answer, it's not.
Current AI doesn't actually "understand" anything, and "learning" is just grabbing input data. If you ask it a question, it's not understanding anything, it just matches search terms to the part of the training data that matches, and regurgitates a mix of it, and usually omits the sources. That's it.
It's a tricky line in journalism since so much of it is borrowed, and it's likewise tricky with AI, but the main difference IMO is attribution: good journalists cite sources; AI rarely does.
Derivative works are not copyright infringement. If LLMs are spitting out exact copies, or near-enough-to-exact copies, that’s one thing. But as you said, the whole point is to generate derivative works.
They absolutely are, unless it's covered by "fair use." A "derivative work" doesn't mean you created something that's inspired by a work, but that you've modified the work and then distributed the modified version.
None of those things replace that content, though.
Look, I dunno if this is legally a copyrights issue, but as a society, I think a lot of people have decided they're willing to yield to social media and search engine indexers, but not to AI training, you know? The same way I might consent to eating a mango but not a banana.
Didn't you hear? We stan draconian IP laws now because AI bad.
Is it that or is it that the laws are selectively applied on little guys and ignored once you make enough money? It certainly looks that way. Once you've achieved a level of "fuck you money" it doesn't matter how unscrupulously you got there. I'm not sure letting the big guys get away with it while little guys still get fucked over is as big of a win as you think it is?
Examples:
The Pirate Bay: Only made enough money to run the site and keep the admins living a middle class lifestyle.
VERDICT: Bad, wrong, and evil. Must be put in jail.
OpenAI: Claims to be non-profit, then spins off for-profit wing. Makes a mint in a deal with Microsoft.
VERDICT: Only the goodest of good people and we must allow them to continue doing so.
The IP laws are stupid but letting fucking rich twats get away with it while regular people will still get fucked by the same rules is kind of a fucking stupid ass hill to die on.
But sure, if we allow the giant companies to do it, SOMEHOW the same rules will "trickle down" to regular people. I think I've heard that story before... No, they only make exceptions for people who can basically print money. They'll still fuck you and me six ways to Sunday for the same.
I mean, the guys who ran Jetflicks, a pirate streaming site, are facing potential 48-year sentences. Longer than a lot of way more serious fucking crimes. I've literally seen murderers get half that.
But yeah, somehow, the same rules will end up being applied to us? My ass. They're literally jailing people for it right now. If that wasn't the case, maybe this argument would have legs.
But AI companies? Totes okay, bro.
The laws are currently the same for everyone when it comes to what you can use to train an AI with. I, as an individual, can use whatever public facing data I wish to build or fine tune AI models, same as Microsoft.
If we make copyright laws even stronger, the only ones getting locked out of the game are the little guys. Microsoft, Google, and company can afford to pay ridiculous prices for datasets. What they don't own mainly comes from aggregators like Reddit, Getty, Instagram, and Stack.
Boosting copyright laws would essentially kill all legal forms of open-source AI. It would force the open-source scene to go underground as a pirate network and lead to the scenario you mentioned.
Yes, it is a travesty that people are being hounded for sharing information, but the solution isn't to lock information up tighter by restricting access to the open web and saying that if you download something we put up to be freely accessed and then use it in a way we don't like, you owe us.
The solution to bad laws being applied unevenly isn't to apply the bad laws to everyone equally; it's to get rid of the bad laws.
That's law in general...