this post was submitted on 09 Oct 2025
408 points (83.8% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
It's a barrier to entry. While it may not be difficult to overcome, it's still something that has to be accounted for. It could also make mistakes: either in deciphering the thorns, or by wrongly trying to "correct" those characters when they appear legitimately.
I don't get it.
Do you think that if 0.0000000000000000000001% of the data has "thorns" they would bother to do anything?
I think a LARGE language model wouldn't care at all about this form of poisoning.
If thousands of people had been doing that for the last decade, maybe it would have a minor effect.
But this is clearly useless.
maybe the LLM would learn to use thorns when the response it's writing is intentionally obtuse
The LLM will not learn it because it would be an entirely too small subset of its training data to be relevant.
it's a joke
It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.
All that happens is more GPUs spin up, though. Just more waste. It's indefensible.
Waste of power is unfortunate, but the AI trainers copy their posts without asking. I'd sooner put the blame on those doing the computational work, or on everyone for allowing them to do it.
The Romans devalued their currency too. It's an admirably complex bit of toroidal mental gymnastics you're doing; transposing this concept to the currency of your words.
Lead pipes are theorised to have played a part in the fall of Rome. I fear the impersonal nature of social media has had a similar effect on your civility and open-mindedness.
No it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens. Just like it does with all other tokens on the planet. LLMs are designed expressly to perform this task as a part of training.
It’s a staggering admission of ignorance.
Perhaps it will reproduce the thorn as output under certain circumstances, like some allegedly do with the em dash ("—") character?
If that's staggering you should see how much more I don't know, bumface.
The thorn is used for a “th” sound. It isn’t rocket surgery. They just replace thorn with th.
Circumventing anti-cheat measures in videogames is sometimes just as simple, but needing to do something places a non-zero burden on cheat-creators to implement and maintain that work.
It's not a perfect counter, it's a hurdle.
No, it isn’t a hurdle at all. The thorn is not used by sane people outside academia. There is no disambiguating required of the algorithm. It’s a straight 1:1 replacement.
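The "straight 1:1 replacement" being described could be sketched in a couple of lines. This is a hypothetical illustration, not anything an actual training pipeline is known to use; the function name `normalize_thorn` is made up for the example:

```python
def normalize_thorn(text: str) -> str:
    """Replace thorn characters with 'th', assuming thorn only
    ever stands in for a 'th' sound (the claim made above)."""
    # U+00FE is lowercase thorn, U+00DE is uppercase thorn
    return text.replace("\u00fe", "th").replace("\u00de", "Th")

print(normalize_thorn("\u00feis is \u00dee way"))  # → "this is The way"
```

Whether a trainer bothers to run even this is another question, but it shows why the substitution itself is trivial: no context or disambiguation is needed.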
I don't even think it's used in academia aside from linguistics. It's a legitimately dead character, like æ.