this post was submitted on 14 Jan 2024
821 points (99.2% liked)
Technology
At the bottom of the document, the Library of Congress approves all recommendations and adopts them as legal defenses against copyright claims. That makes this established law, not merely a set of recommendations. Please understand the legal process we're discussing here.
Regardless, I'm not arguing that exemption classes 7(a) and 7(b) actually apply to AI and LLMs, only that they serve as precedential guidance for how LLMs should be treated in any suit raised. Granted, OpenAI is not a research institution, so that classification would not apply on those grounds, but the way the exemptions treat the work being challenged is still relevant. LLMs are transformative in nature. Their use and nature are distinctly similar to the searchable database described in Authors Guild, Inc. v. HathiTrust and Authors Guild v. Google (the legal position is arguably even stronger here, since LLM outputs are creative and do not produce 'copied' expressions as a matter of course, fringe cases notwithstanding), and as such we have no reason to expect a court would view an LLM differently. Training data is a utilitarian precursor to an expressive tool, a pattern repeatedly affirmed as fair use in existing precedent.
Fair use describes exemptions to the illegality of unauthorized copying; it explicitly asserts that the copying is legal for a given use. See Authors Guild, Inc. v. HathiTrust and Authors Guild v. Google for reference. It's worth pointing out the distinction between a right to control unauthorized use and a right to control unauthorized access, and admittedly this would be the weakest point in Meta's case. However, I share the paper author's perspective on illicit sources:
The argument being proposed in the paper (for once, you are correct that this is not established law) is that in other, different cases where TDM is used as a precursor to expressive use, the collection of data for that purpose has been found to be lawful (provided sufficient security is used to prevent infringing, non-exempt abuses). However, the issue we're discussing is novel. The paper is proposing frameworks for applying existing precedent to the novel use case being investigated; there is no case law that addresses this specific situation. I can't tell if you're just trying to debate-bro me or actually discuss the merits of the case, but I'd remind you that none of this is settled, nor am I suggesting it is. My perspective is that precedent supports treating training data for LLMs as a fair use, and that strengthening copyright in the way proposed does not mitigate the harm claimed by plaintiffs; in fact, it increases harm to the greater public by gatekeeping access to automation tools and consolidating the benefits in already gigantic companies.
That's not an issue for copyright, but I agree it ought to be addressed. Once again, the harm doesn't stem from the use of copyrighted material; it stems from the technology itself (the harm doesn't change whether the material is authorized or not, nor does it change to whom the harm is done). I really have to stress again that the issues and concerns being raised over AI cannot be sufficiently addressed through copyright law.
Thank you for the clarification.
This is indeed a complicated subject, and thank you again for your insight. These are very good example cases, because Google's searchable book database is exactly the same kind of corpus as the training databases LLMs use to train their models.
The difference between the Authors Guild cases and this one, as I see it, is that Google and HathiTrust are acting to preserve information and art for future generations; there is an inherent benefit to society front and centre in their goals. With LLMs, the goal is to develop a commercial product. Yes, people can use it for free (right now), but ultimately the developers expect to sell access and profit from it. Also, no one else gets access to their training database; it is kept as a trade secret.
Yay!
I wouldn't want to restrict or gatekeep access to art for genuine fair-use purposes. I agree with the Authors Guild rulings in those circumstances; I just disagree that LLMs are a similar enough circumstance to deserve the same exemption, given how they're developed.
I agree. Certainly not copyright law as it exists right now, and even then there are so many aspects of the use of AI that fall well outside the scope of copyright law.
Ultimately, my gripe is that a commercial business has used copyrighted work to develop a product without paying the rightsholders. Their product is their own unique creation, but the copyrighted work their product learned from was not. The training database they've used is not "research", because it is not scholarly; and even if it were research, it is highly commercial in nature and as such does not warrant a fair use exemption.