this post was submitted on 20 Jan 2024
168 points (88.2% liked)

Futurology

[–] Endorkend@kbin.social 36 points 11 months ago (2 children)

It's obvious when watching Google and Bing results.

Try to find any sort of objective information and the first 3-4 pages will almost all be AI-generated garbage that took most of its information from some other highly outdated source that was garbage to begin with.

And since these sites are AI-driven, they can automatically manipulate search results and keep their dates and timestamps updated, so that whenever Google visits, the page is always the "newest" information.

[–] snooggums@kbin.social 19 points 11 months ago (1 children)

When the first five results are the same sentences worded slightly differently, like a freshman essay, it is not a good sign that I will find a real answer.

[–] Endorkend@kbin.social 19 points 11 months ago (3 children)

The most annoying thing is that almost all tech information has fallen victim to this shit.

We now have to go back to pre-2000s methods of searching sites: first identifying sites as reliable, and then relying on the sites' own search engines to not suck.

In some cases, this is workable.

In cases where the sites have integrated Google searches, this is even more useless than using Google itself.

[–] Lugh 7 points 11 months ago (5 children)

Someone should invent a search engine that allows for curated sources. For most things, I'd love to search among the top few thousand sites, and exclude everything else.

[–] Semi-Hemi-Demigod@kbin.social 6 points 11 months ago

Yahoo started out like this. They had humans curating the sites that they searched, and it was pretty good until the web got too big for that to be efficient.

[–] ApathyTree@lemmy.dbzer0.com 5 points 11 months ago* (last edited 11 months ago)

I haven't used Kagi, but I believe you can do exactly that with it. You do have to pay for the service, but that's probably a good thing.

This is a link to the features page. It allows you to permanently ban or boost results from specific domains, but you may need to put in some manual effort to make that happen; I don't really know if there are community-curated backbones or anything for that.

But you can also see if the result is popular, and they seem to work pretty hard to make their platform worth the spend. Everything I’ve heard from people who use it is good.

https://blog.kagi.com/kagi-features

[–] Endorkend@kbin.social 5 points 11 months ago

I've got exactly that running on my home network for tech stuff.

I've thought about opening it up, and I've even been thinking of building a group of people trustworthy enough to do the curation of sites, but I generally CBA to interact with people that much. I used to be highly active on forums like MadOnion/Futuremark, [H], etc., but those days are long behind me; these days I post a bit on Reddit and talk to my wife, and that's about it.

If things continue to go to shit as much as they have, I may open it up anyway, mostly because maintaining and re-curating sites is a drag on its own.

The number of sites that were once great tech spots and then got gulped up by the same ol' same ol' big tech sites and turned into generic shit: it's not that they've become uncountable, it's that it's almost every single one of them.

The best still seems to be simply posting questions on the few OG computer/tech forums that managed to survive.

For hardware and OS, places like ServeTheHome, [H], Anandtech, Techpowerup, etc.

For programming information, it's so murky I can't even suggest any specific sites anymore, not even Stack.

For phone/tablet info, even XDA is getting murky, mostly because a lot of users there only watch the forum for their specific device, so if yours isn't one used by a lot of people, info gets super limited.

It's gotten bad out there.

[–] Endorkend@kbin.social 4 points 11 months ago

No need to invent.

That's how search engines, including Google, Yahoo, and all the other big ones, originally worked.

You didn't get indexed by default.

You either got indexed by being submitted or by being referenced often by one or more well-represented sites.

It's only later in the game that they started crawling everything.

[–] Truck_kun@beehaw.org 1 points 11 months ago

Someone should invent a search engine that allows for curated sources. For most things, I'd love to search among the top few thousand sites, and exclude everything else.

While I was typing up and fleshing out an idea on curated source lists for search engines, your post beat me to the punch.

As others have said, a curated internet is very old-timey and kind of limited, but I think what I fleshed out could work well with the modern internet and be interesting. Maybe a major search engine might actually take up the task if user demand is there.

The quality of search results from Google has been trending downward for years, and maybe this would boost the quality of results again (albeit with their ads still stuck in the results).

[–] sevenapples@lemmygrad.ml 1 points 11 months ago

Google search with a site filter (e.g. linux site:lemmy.ml) will almost always be better than the site's own search function.
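For what it's worth, that trick can be scripted against a whole allowlist at once. Here's a minimal sketch (my own illustration, not from the thread; the domain list and helper name are just examples) that builds a Google query restricted to a hand-picked set of sites using the site: operator and OR:

```python
# Build a Google search URL restricted to a curated set of domains.
from urllib.parse import quote_plus

TRUSTED = ["lemmy.ml", "stackoverflow.com", "servethehome.com"]  # example allowlist

def curated_query(terms: str) -> str:
    # Produces e.g. "linux (site:lemmy.ml OR site:stackoverflow.com OR ...)"
    site_filter = " OR ".join("site:" + d for d in TRUSTED)
    return "https://www.google.com/search?q=" + quote_plus(terms + " (" + site_filter + ")")

print(curated_query("linux"))
```

The same query string works pasted straight into the search box, so the script is just a convenience.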

[–] Truck_kun@beehaw.org 1 points 11 months ago* (last edited 11 months ago) (1 children)

Well, maybe Google could add a curation feature (not curated by them, that would suck), whereby users can publish lists of trusted sites to search, and a user can optionally select a curated list from someone they trust, and Google will only search sites on that list.

Possibly allow multiplexing of lists.

So say I'm looking for computer security info: I could use a curated list of sites "Steve Gibson" trusts, plus a list of trustworthy sources "Bleeping Computer" uses, and anything I search for would use both lists as the base for the search.

Maybe it isn't even something people publish to the search engine; maybe they publish a file on their own site that people can point the search engine to, like, in Steve Gibson's case, the fictitious file grc.com/search.sources, or create a new file format like .cse (curated search engine), grc.com/index.cse.

Maybe allow individual lists to multiplex other lists. Something like this, multiplexing two lists plus some additional sites, subdomains, directories, and * for all subdomains:

multiplex: grc.com/search.cse

multiplex: bleepingcomputer.com/search.sources

arstechnica.com

*.ycombinator.com

stackoverflow.com

security.samesite.com

linux.samesite.com

differentsite.com/security

differentsite.com/linux
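To make the format concrete, here is a rough sketch of how an engine might expand a list like that into a flat allowlist. Everything in it is hypothetical (the file format, the function names, the in-memory stand-ins for fetched files); it just mirrors the example above:

```python
# Hypothetical resolver for the curated-list format sketched above.
# "multiplex:" lines pull in another published list; everything else is a site pattern.
from typing import Callable, Optional, Set

def expand_list(text: str, fetch: Callable[[str], str],
                seen: Optional[Set[str]] = None) -> Set[str]:
    seen = set() if seen is None else seen
    patterns: Set[str] = set()
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("multiplex:"):
            ref = line.split(":", 1)[1].strip()
            if ref not in seen:          # guard against lists that include each other
                seen.add(ref)
                patterns |= expand_list(fetch(ref), fetch, seen)
        else:
            patterns.add(line)           # e.g. "*.ycombinator.com" or "differentsite.com/linux"
    return patterns

# Toy usage, with an in-memory dict standing in for fetching the referenced files:
published = {
    "grc.com/search.cse": "arstechnica.com\n*.ycombinator.com",
    "bleepingcomputer.com/search.sources": "stackoverflow.com",
}
my_list = ("multiplex: grc.com/search.cse\n"
           "multiplex: bleepingcomputer.com/search.sources\n"
           "security.samesite.com")
print(sorted(expand_list(my_list, published.__getitem__)))
```

The engine would then limit crawling or result ranking to URLs matching the expanded patterns.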

Honestly it sounds like a horrible idea, but in a world filled with AI-made content, it may become a necessity.

Anyways, I officially put the above idea into the Public Domain. Anyone can use or modify it; feel free Google/Bing.

EDIT: It was posting all fake addresses on the same line, so trying to force them onto separate lines.

[–] Truck_kun@beehaw.org 1 points 11 months ago

Apparently in the time I spent thinking it through, typing it up, changing things, etc., someone else posted a curation idea, so maybe it's not such a bad idea after all. An AI-content internet is going to suck.

To expand on why it sounds like a horrible idea: mainly, if people rely on it too much, it creates a bubble and limits the ability to discover new things or ideas outside of that bubble. But if everything outside that bubble just sucks or is inaccurate, meh, what are you going to do? Especially if you are researching something you are working on (a paper, a project, maybe something with dire financial or safety consequences if you get it wrong) and need the information to be reliable.

[–] asdfasdfasdf@lemmy.world 3 points 11 months ago (1 children)

I wonder if this will push humanity to go back to books and libraries.

[–] Endward23 2 points 11 months ago

Books and texts on paper have one big advantage: they are not as easy to change as digital texts.

[–] RememberTheApollo@lemmy.world 30 points 11 months ago (1 children)

It found that most of the internet is translated, as 57.1 percent of the sentences in the corpus were multi-way parallel in at least three languages.

What a shit title. The article is basically about translating languages and goes on to say that AI is doing it badly.

Not that 50% of all web content is AI-generated.

Shame on whoever wrote this clickbait garbage.

[–] Lugh 10 points 11 months ago* (last edited 11 months ago) (3 children)

Not that 50% of all web content is AI-generated.

On the contrary. It explicitly states that it is.

To quote - "It found that most of the internet is translated, as 57.1 percent of the sentences in the corpus were multi-way parallel in at least three languages. "

So in other words, the majority of web content is AI translations of other content. As it's often poorly translated, or entirely mistranslated, it qualifies as "AI-generated garbage", hence the headline.

[–] CanadaPlus 7 points 11 months ago (1 children)

Technically, but I think I and a lot of other readers thought it was talking about original content from AIs, as opposed to translated content.

I have noticed those sites with answers to commonly searched questions, which look very convincing and have AI generated "authors" as well as a topic-specific URL, but then sometimes lose the plot of the question half way through. I almost fall for them, and I'm a crusty internet person, so I can only imagine how many people are just totally swallowing the info.

[–] snooggums@kbin.social 0 points 11 months ago (1 children)

'Original content' from AI is just regurgitated content with adjustments.

[–] CanadaPlus 1 points 11 months ago (1 children)

How many adjustments does it take before it's new content? If the answer is a lot, are humans ever original either?

[–] snooggums@kbin.social 2 points 11 months ago (1 children)

Humans are frequently unoriginal, which is why they get caught copying existing things with adjustments. But they also make new things based on existing things: works that add something new and significantly different, even if they reuse parts of what came before, in a way that is original.

The Thing, Predator, and Alien are all otherworldly beings who hunt humans, but would you consider them regurgitated content with adjustments?

The Thing is about fear of other people, with an alien monster.

Predator is about macho men being outclassed, with an alien monster.

Alien is about sexual assault, with an alien monster.

AI won't create anything comparable by accident, because these three movies aren't even the output of a single human. Hell, even books are not the output of a single person. They have editors and reviews and collaboration that involve sharing of knowledge and are influenced by experiences that AI won't just stumble upon by accident. AI will create the direct-to-video knockoffs that just copy existing media for profit, because AI is like an executive who always tries to make what is already proven to work because it is seen as reliable.

[–] CanadaPlus 1 points 11 months ago

Alright, that's a weaker claim (that is, less of an extraordinary claim) than I was expecting. LLMs aren't quite as good as a human at conceptual originality yet, and I can't prove they will catch up, especially if thematic subtext is the measure.

I guess I'll just say my original point stands then. There's a difference between something made from a prompt by ChatGPT, and something produced from a roughly equivalent text by a translation tool.

[–] Grimy@lemmy.world 4 points 11 months ago (1 children)

You do not generate text when you translate it. The two words have different meanings.

[–] Baahb@lemmy.world 2 points 11 months ago* (last edited 11 months ago) (1 children)

If you translate a sentence once using a computer, it's probably a translation. If you translate a translation, you are using a computer to regenerate computer-generated content, even if it started with a human seed in the first translation. The two words only have different meanings in specific contexts. They CAN mean the same thing, but don't necessarily, or even often.

In this case though, the article does suggest that AI is taking AI content and rewriting it, or translating from "English" to "English" a bunch of times. Which is both translation and generation.

[–] Grimy@lemmy.world 1 points 11 months ago* (last edited 11 months ago)

English-to-English would be rewording and would totally fall under AI-generated garbage, but the article doesn't seem to mention this. It's entirely about English to other languages, mostly in Africa and the global south.

Although taking articles and translating them is using AI, I don't think that's what most people associate with "AI-generated garbage", hence the clickbait.

It's an interesting article, I just think the headline is misleading.

[–] lowleveldata@programming.dev 2 points 11 months ago

Translating is not generating. It's intentionally misleading.

[–] TheOakTree@beehaw.org 12 points 11 months ago* (last edited 11 months ago)

I was trying to install a mod for a game yesterday. Gave it a Google search. The first 5 links were trash sites that literally just said "download the mod files," "install the mod," "enjoy the game."

No other instructions, no links, irrelevant images and captions. Just random filler details about the base game.

Lol.

[–] saltesc@lemmy.world 11 points 11 months ago

Oh, we know.

[–] Kolanaki@yiffit.net 9 points 11 months ago* (last edited 11 months ago)

Man, I have been accused of being AI tons of times in the last few years. I don't think people are very good at distinguishing reality from AI when it comes to text.

Researchers at the Amazon Web Services AI lab found that over half of the sentences on the web have been translated into two or more languages...

They attribute this to machine learning algorithms, yet even without those, translations of translations of translations also have decreasing accuracy when done by people.

[–] Lugh 6 points 11 months ago (1 children)

One of the ironies of Google leading so much cutting-edge AI development is that it is simultaneously poisoning its own business from within. Google Search is getting worse and worse, on an almost monthly basis, as it fills up with ever more SEO spam. Early adopters are abandoning it for ChatGPT-like alternatives, which means the mass market probably soon will too.

The other irony is that it will probably take AI to save us from AI-generated SEO spam. For everyone touting AI products that will write blogs and emails, there will be people selling products that detect their garbage and save you from wasting your time reading it.

[–] msage@programming.dev 4 points 11 months ago (1 children)

I would argue that most webpages have been generated without human input for a long time. Automated scam pages with sketchy download links were the norm years before any 'modern AI' was a thing.

[–] gandalf_der_12te@feddit.de 2 points 11 months ago

Yeah, the internet is mostly a tool for machines to communicate with one another.

[–] Endward23 3 points 11 months ago

I hear the message, but to be honest, I can't believe it. There must be something I don't get. But on second thought, in the Google search results, I do see a lot of dubious results.