There's the interaction model and there's the technical organization.
The interaction model you're describing as good existed in unmoderated Usenet groups (personal kill lists to avoid reading someone), in Frost (vulnerable, abandoned, sad; I liked it more), and in FMS on Freenet.
However! As I was reminded yesterday, the things to ban include not just "wrong" opinions, but also executable binaries with probable trojans inside, murder/rape/CP material, spam, bots, and stolen credentials.
The problem of self-moderation being hard doesn't really exist. Giving the user control over their own communications has gone out of fashion, but just as e-mail clients once had local Bayesian filters, one could do the same today, probably even with some local AI tool; somehow everyone pretends that this family of programs doesn't exist for such purposes.
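To make that concrete, here's a minimal sketch of the kind of local Bayesian filter I mean, in Python. The class and labels are made up for illustration; the only point is that it runs entirely on the user's machine and learns from the user's own labels, like an old mail client's spam filter:

```python
# Minimal local Bayesian filter sketch - the same idea e-mail clients used for
# spam, applied to posts the user has personally labeled "hide" or "keep".
# All names here are illustrative, not taken from any existing tool.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"hide": Counter(), "keep": Counter()}
        self.doc_counts = {"hide": 0, "keep": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokens(text))

    def score(self, text):
        # Log-probability of each class, with Laplace smoothing so unseen
        # words don't zero everything out.
        scores = {}
        total_docs = sum(self.doc_counts.values()) or 1
        vocab = set(self.word_counts["hide"]) | set(self.word_counts["keep"])
        for label in ("hide", "keep"):
            total_words = sum(self.word_counts[label].values())
            logp = math.log((self.doc_counts[label] + 1) / (total_docs + 2))
            for w in tokens(text):
                logp += math.log(
                    (self.word_counts[label][w] + 1) / (total_words + len(vocab) + 1)
                )
            scores[label] = logp
        return scores

    def should_hide(self, text):
        s = self.score(text)
        return s["hide"] > s["keep"]

# Usage: trained only on what this particular user has flagged before.
f = NaiveBayesFilter()
f.train("free credentials dump inside, click here", "hide")
f.train("interesting article about mesh networking", "keep")
print(f.should_hide("fresh credentials dump, free access"))
```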
At the same time, ultimately someone has to do the filtering. What you are describing is your own preference in filtering; other people have other preferences. And expecting people to self-moderate posts with stolen credentials, when they are exactly the criminals those posts are aimed at, would be stupid.
So it's hard to decide. Fundamentally, a post with a CP image and a post with a Gadsden flag are the same kind of object. They even have a similar proportion of people willing to ban them (bigger for CP), but you can't just treat some point between them as a constant and design a post reputation system around it, so that propagation of the CP image is collectively stopped while the ancap flag image is still propagated by enough nodes. That point will move: there might be a moment when CP becomes more acceptable to users in a segment of the network (suppose there are many CP bots and we have temporarily failed to collectively detect and ignore them), or a moment when ancaps are so hated that they are flagged by a bigger proportion of users than CP is. Yet one is still a violation and the other still is not.
So, to avoid solving the hard problem, one can have a system similar to Usenet but multi-channel, with posts propagated every practical way: #1 store-and-forward nodes, i.e. network services like news servers and Nostr relays; #2 Retroshare-like p2p exchange between users (I'm ignorant in computer science, so my own toy program does this not very optimally, but rsync and git exist, so the problem is solvable); #3 export-import like in a floppinet; #4 a realtime notice network service like IRC. The necessary mechanism used as a filter is a moderation authority signing every post as pre-moderated, checked, banned and so on. The moderation authority shouldn't be a network service; it should be a participant of the system, with its "signature posts" propagated the same way as the material posts, because otherwise the load on the moderation authority's service would be too big, and the moment it went offline you'd lose a lot.
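A rough sketch of what such a signature post could look like, assuming Ed25519 via PyNaCl and field names I'm making up here; the only point is that it's a detached, signed verdict that can be verified offline and propagated like any other post:

```python
# Sketch of a moderation authority's "signature post": a detached, signed
# verdict about a material post, propagated through the same channels as the
# post itself. PyNaCl (Ed25519) and the field names are illustrative choices.
import hashlib
import json
import time

from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

def post_id(material_post: bytes) -> str:
    return hashlib.sha256(material_post).hexdigest()

def make_verdict(authority_key: SigningKey, material_post: bytes, verdict: str) -> dict:
    body = {
        "post": post_id(material_post),
        "verdict": verdict,          # e.g. "pre-moderated", "checked", "banned"
        "issued_at": int(time.time()),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    signed = authority_key.sign(payload)
    return {
        "body": body,
        "sig": signed.signature.hex(),
        "authority": authority_key.verify_key.encode().hex(),
    }

def check_verdict(verdict_post: dict) -> bool:
    payload = json.dumps(verdict_post["body"], sort_keys=True).encode()
    vk = VerifyKey(bytes.fromhex(verdict_post["authority"]))
    try:
        vk.verify(payload, bytes.fromhex(verdict_post["sig"]))
        return True
    except BadSignatureError:
        return False

# The authority is just another participant: it reads posts, emits these
# verdict objects, and any node can verify them later without contacting it.
authority = SigningKey.generate()
post = b"some material post, exactly as it propagates"
v = make_verdict(authority, post, "banned")
print(check_verdict(v))
```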
Then, on every kind of posts exchange, a storage server, a notice server or a user can configure whether they propagate further everything they have, or only material posts pre-moderated (or at least not banned) by specific moderation authorities, and whether they pass along all signature posts or only those authorities' signature posts.
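For example, a node's propagation policy could be as simple as this (again just a sketch with made-up names, reusing the verdict objects from above):

```python
# Sketch of a node-side propagation policy. A store-and-forward node lists the
# authorities it trusts and decides per material post whether to forward it.

def should_propagate(post_hash, verdicts, trusted, require_premoderation=False):
    relevant = [v for v in verdicts
                if v["body"]["post"] == post_hash and v["authority"] in trusted]
    if any(v["body"]["verdict"] == "banned" for v in relevant):
        return False
    if require_premoderation:
        # Stricter nodes forward only what a trusted authority has already looked at.
        return any(v["body"]["verdict"] in ("pre-moderated", "checked") for v in relevant)
    # Permissive nodes forward everything that isn't explicitly banned.
    return True
```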
However the user reading a hierarchy in such a system sees its contents, that is something they should be able to decide for themselves, using logical operators over the moderation authorities they have chosen.
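Something in this spirit, with combinators I'm inventing for the example (verdict structure the same as above):

```python
# Reader-side sketch: the user combines the authorities they've chosen with
# logical operators to decide what they personally see. The combinators are
# hypothetical; the point is that the policy lives entirely on the user's side.

def banned_by(authority_key):
    return lambda verdicts: any(
        v["authority"] == authority_key and v["body"]["verdict"] == "banned"
        for v in verdicts)

def checked_by(authority_key):
    return lambda verdicts: any(
        v["authority"] == authority_key
        and v["body"]["verdict"] in ("pre-moderated", "checked")
        for v in verdicts)

def NOT(rule):
    return lambda verdicts: not rule(verdicts)

def AND(*rules):
    return lambda verdicts: all(r(verdicts) for r in rules)

def OR(*rules):
    return lambda verdicts: any(r(verdicts) for r in rules)

AUTHORITY_A = "hex key of some authority the user trusts"   # placeholder
AUTHORITY_B = "hex key of another one"                      # placeholder

# "Show posts not banned by authority A, plus anything authority B has checked."
show_rule = OR(NOT(banned_by(AUTHORITY_A)), checked_by(AUTHORITY_B))
```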
If we assume that almost everyone almost everywhere refuses to propagate things flagged as CP/gore/fraud, it would be hard enough for a typical user to even receive them, even if their own setting is a wildcard. The "wrong" opinions, though, they will still get.
Then they can add the users holding those opinions to a personal kill list. Just like in the olden days.
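In code that's the most trivial layer of all (same made-up field names as above):

```python
# The last, purely local layer: a personal kill list keyed by author,
# exactly like a Usenet killfile.

kill_list = {"pubkey of someone I'm tired of"}   # author keys the user never wants to read

def visible(post, verdicts, show_rule):
    if post["author"] in kill_list:
        return False
    return show_rule(verdicts)
```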