China is attempting to mirror the entire GitHub over to their own servers, users report
(infosec.exchange)
What's the limit? This needs to be absolutely explicit and easy to understand, because this is what LLMs are doing: they take hundreds of thousands of similar algorithms and create an amalgamation of them.
When is it copying and when is it "inspiration"? What's the line between learning and copying?
I disagree that it needs to be explicit. The current law is the fair use doctrine, which generally has more to do with the intended use than specific amounts of the text/media. The point is that humans should know where that limit is and when they've crossed it, with motive being a huge part of it.
I think machines and algorithms should have to abide by a much narrower understanding of "fair use," because they don't have motive or the ability to intuit when they've crossed the line. So scraping copyrighted works to produce an LLM should probably, generally, be illegal, imo.
That said, our current copyright system is busted and desperately needs reform. We should be limiting copyright to 14 years (as in the original Copyright Act of 1790), with an option to explicitly extend it for another 14 years. That way LLMs could scrape content published >28 years ago with no concerns, and most content published >14 years ago (esp. forums and social media, where copyright extension is incredibly unlikely). That would be reasonable IMO and sidestep most of the issues people have with LLMs.
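The 14+14-year proposal above is simple enough to state as code. This is a minimal sketch, assuming whole-year granularity; the function name and parameters are hypothetical, not any real law or library:

```python
# Hypothetical sketch of the proposed rule (NOT current law):
# a 14-year base copyright term, optionally extended once by 14 more years.
BASE_TERM = 14
EXTENSION = 14

def out_of_copyright(published: int, extended: bool, current_year: int) -> bool:
    """True if a work would be free to scrape under the proposed 14+14 rule."""
    term = BASE_TERM + (EXTENSION if extended else 0)
    return current_year - published > term

# A 2010 forum post with no extension: 15 years elapsed > 14-year term.
print(out_of_copyright(2010, extended=False, current_year=2025))  # True
# A 1995 work with an explicit extension: 27 years elapsed <= 28-year term.
print(out_of_copyright(1995, extended=True, current_year=2022))   # False
```

Under this rule, almost all social-media and forum content would fall out of copyright after 14 years, since few individual commenters would bother filing an extension.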
First, this conversation has little to do with fair use. Fair use is when there is an acceptable reason to use copyrighted material without permission, for example when you are making a parody or critique, or using it for educational purposes.
What we are talking about is the act of reading and/or learning and then using that information in order to synthesize new material. This is essentially the entire point of education. When someone goes to art school, they study many different artists and their techniques. They learn from these techniques as they merge them together in different ways to create novel art.
Everybody recognizes this is perfectly OK and to assume otherwise is absurd. So what we are talking about is not fair use, but extracting data from copyrighted material and using it to create novel material.
The distinction here is that you claim this process should become illegal when it is automated. Why?
My opinion is if it's legal for a human to do, it should be legal for a human to automate.
Sure, but that's not what LLMs are doing. They're breaking down works to reproduce portions of them in answers. Learning is about concepts; LLMs don't understand concepts, they just compare inputs with training data to produce synthesized answers.
The process a human goes through is distinctly different from the process current AI goes through. The process an AI goes through is closer to a journalist copy-pasting quotations into their article, which falls under fair use. The difference is that AI will synthesize quotations from multiple (many) sources, whereas a journalist will generally use just one at a time, but it's still the same process.