this post was submitted on 26 Oct 2024
936 points (99.1% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ

 
[–] Facebones@reddthat.com 14 points 5 hours ago

All is legal in the eyes of capital.

[–] rasakaf679@lemmy.ml 26 points 9 hours ago
[–] PanArab@lemm.ee 31 points 10 hours ago* (last edited 10 hours ago) (1 children)

Who writes the laws? There's your answer.

I'm curious why https://www.falconfinance.ae/ cares about this though.

What the hell are they selling? https://www.falconfinance.ae/falcon-securities/

[–] TheOakTree@lemm.ee 7 points 3 hours ago

I did some digging. It's a parody finance website that makes it seem like you can invest in falcons and make a blockchain (flockchain) with them. Dig a little further, go to the linked forum, and you'll see it's just a community of people shitposting (mostly).

[–] CosmicTurtle0@lemmy.dbzer0.com 55 points 11 hours ago

To paraphrase Nixon:

"When you're a company, it's not illegal."

To paraphrase Trump:

"When you're a company, they just let you do it."

[–] ayyy@sh.itjust.works 41 points 12 hours ago
[–] Iunnrais@lemm.ee 141 points 16 hours ago (5 children)

Just let anyone scrape it all for any reason. It’s science. Let it be free.

[–] chicken@lemmy.dbzer0.com 3 points 8 hours ago (2 children)

The OP tweet seems to be leaning pretty hard on the "AI bad" sentiment. If LLMs make academic knowledge more accessible to people, that's a good thing for the same reason that what Aaron Swartz was doing was a good thing.

[–] umbrella@lemmy.ml 6 points 5 hours ago

I agree; my problem is that it won't.

[–] Ashelyn@lemmy.blahaj.zone 8 points 6 hours ago* (last edited 6 hours ago) (1 children)

On the whole, maybe LLMs do make these subjects more accessible in a way that's a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

The problem is that LLMs 'hallucinate' details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it's essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they're intelligent. They're very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.

[–] chicken@lemmy.dbzer0.com -1 points 5 hours ago* (last edited 5 hours ago) (2 children)

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you're wondering about something and don't know what you don't know, or have any idea where to start looking to learn what you want to know, a LLM is an incredible resource even with caveats and limitations.

Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and easier to work with, so the legal barriers to such a thing existing might not be able to stop it for long in practice.
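"Referencing sources at runtime" is usually done with retrieval: instead of trusting the model's memorized weights, the system looks up the most relevant passage in an indexed corpus and cites it alongside the answer. A toy sketch of the retrieval half, with a made-up two-document corpus and naive word-overlap scoring (real systems use embedding similarity, but the shape is the same):

```python
# Toy source-grounded lookup: fetch the most relevant passage from a
# corpus and cite it, rather than relying on the model's memory.
# The documents and the overlap scoring are illustrative, not a real corpus.

docs = {
    "smith2021.pdf": "open access improves citation rates in biology journals",
    "lee2019.pdf": "paywalled articles are shared informally among researchers",
}

def retrieve(query: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return the (source_id, passage) pair with the most query-word overlap."""
    words = set(query.lower().split())
    return max(corpus.items(), key=lambda kv: len(words & set(kv[1].split())))

source, passage = retrieve("does open access affect citation rates", docs)
print(f"{passage} [source: {source}]")
```

The point of the pattern is that every answer arrives attached to a checkable source ID, so a skeptical reader can go verify the passage instead of taking the model's word for it.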

[–] Excrubulent@slrpnk.net 3 points 4 hours ago

The phrase "synthesised expert knowledge" is the problem here, because apparently you don't understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity.

You're not exposing people to expert knowledge, you're exposing them to expert-sounding words that cannot be made accurate. Sometimes they're right by accident, but that is not the same thing as accuracy.

The fact you confused what the LLM is doing for synthesis is something loads of people will do, and this will just lend more undue credibility to its bullshit.

[–] Ashelyn@lemmy.blahaj.zone 1 points 4 hours ago* (last edited 4 hours ago)

People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.

Don't get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn't know where to start. My concern is with the cultural issues and expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it's possible to shoehorn through.

Addendum: local models can help with this issue, as they're on one's own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.

[–] sunzu2@thebrainbin.org 66 points 15 hours ago (4 children)

Yes, but it was MIT that pushed the feds to prosecute.

Never forget to name the proper perp.

Disgusting. And we subsidize their existence 🤡

[–] TheReturnOfPEB@reddthat.com 15 points 12 hours ago* (last edited 12 hours ago)

https://en.wikipedia.org/wiki/Carmen_Ortiz

Ortiz said "Stealing is stealing whether you use a computer command or a crowbar, and whether you take documents, data or dollars. It is equally harmful to the victim whether you sell what you have stolen or give it away."

So that was some bullshit, huh?

[–] Flocklesscrow@lemm.ee 21 points 13 hours ago

MIT releases financials and endowment figures for 2024:

The Institute’s pooled investments returned 8.9 percent last year; endowment stands at $24.6 billion

[–] What_Religion_R_They@hexbear.net 32 points 13 hours ago (3 children)

double standards are capitalism's lifeblood

[–] xiao@sh.itjust.works 13 points 11 hours ago

I'm still blaming MIT for that!

[–] doctortran@lemm.ee 13 points 12 hours ago* (last edited 6 hours ago) (3 children)

Can we be honest about this, please?

Aaron Swartz went into a secure networking closet and left a computer there to covertly pull data from the server over many days without permission from anyone, which is absolutely not the same thing as scraping public data from the internet.

He was a hero who didn't deserve what happened, but it's patently dishonest to ignore that he was effectively breaking and entering, plus installing a data harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior. Even your local library would call the cops if you tried to do that.

[–] TheDoctor@hexbear.net 59 points 11 hours ago

You left out the part where, instead of telling him to knock it off as soon as they learned about it and disciplining him internally as a student, the school contacted law enforcement and allowed him to continue doing it so they could prosecute him harder and make an example out of him. You'd think if he was as big of a threat as you're implying, they would have stopped what he was doing ASAP. And if you're going to be pedantic about leaving out details, maybe tell the whole thing. Maybe it's not "honest" enough if we haven't posted the full text of a documentary in a comment. That's clearly your call.

[–] UlyssesT@hexbear.net 25 points 11 hours ago

Can we be honest about this

Saying "can we be honest" isn't a magic spell that transmutes your opinion to fact.

patently dishonest to ignore that he was effectively breaking and entering, plus installing a data harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior

bootlicker

[–] sunzu2@thebrainbin.org 8 points 10 hours ago

After state prosecutors dropped their charges, federal prosecutors filed a superseding indictment adding nine more felony counts, which increased Swartz's maximum criminal exposure to 50 years of imprisonment and $1 million in fines.

Another bootlicker spotted.

[–] crmsnbleyd@sopuli.xyz 18 points 13 hours ago

Anything the rich and powerful do retroactively becomes okay

[–] EmbarrassedDrum@lemmy.dbzer0.com 29 points 15 hours ago (1 children)

and in due time, we'll hack OpenAI and get the sources from the chat module.

I've seen a few glitches before that made ChatGPT just drop entire articles in varying languages.

[–] FaceDeer@fedia.io 19 points 12 hours ago (1 children)

AI models don't actually contain the text they were trained on, except in very rare circumstances when they've been overfit on a particular text. That's considered a training error, and much work has gone into preventing it; it usually happens when many identical copies of the same data appear in the training set. An AI model is simply far too small to hold its training data, since there's no way text can be compressed that much.
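The size argument is easy to check with back-of-envelope arithmetic. The figures below are illustrative assumptions, not any lab's official numbers: roughly 15 trillion training tokens at about 4 bytes of raw text each, against an 8-billion-parameter model stored at 2 bytes (fp16) per weight:

```python
# Back-of-envelope: can a model's weights "contain" its training text?
# All numbers are illustrative assumptions, not official figures.

TRAINING_TOKENS = 15e12      # assumed training-set size in tokens
BYTES_PER_TOKEN = 4          # rough average bytes of raw text per token
PARAMS = 8e9                 # assumed parameter count
BYTES_PER_PARAM = 2          # fp16 storage per weight

corpus_bytes = TRAINING_TOKENS * BYTES_PER_TOKEN   # raw text volume
model_bytes = PARAMS * BYTES_PER_PARAM             # weight file size

ratio = corpus_bytes / model_bytes
print(f"corpus ≈ {corpus_bytes / 1e12:.0f} TB, model ≈ {model_bytes / 1e9:.0f} GB")
print(f"the corpus is ~{ratio:.0f}x larger than the weights")
```

Under these assumptions the corpus works out to about 60 TB against 16 GB of weights, a factor of roughly 3,750, which is far beyond what lossless text compression can achieve.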

[–] EmbarrassedDrum@lemmy.dbzer0.com 6 points 10 hours ago

Thanks! That actually makes a lot of sense.

Welp, guess I was wrong. So, back to .edu scraping!
