this post was submitted on 10 Oct 2025
117 points (100.0% liked)

Fuck AI

4292 readers
928 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
MODERATORS
 

Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …

you are viewing a single comment's thread
view the rest of the comments
[–] Grimy@lemmy.world 5 points 2 days ago* (last edited 2 days ago)

Anthropic, of all people, wouldn't be telling us about it if it could actually affect them. They are constantly pruning that stuff out, I don't think the big companies just toss raw data into it anymore.