this post was submitted on 15 Oct 2023
362 points (97.4% liked)

Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

[–] squaresinger@feddit.de 3 points 2 years ago (1 children)

Yeah, because people selling AI products have a great track record of predicting how their products will develop in the future. That's why Teslas don't have steering wheels any more, and Full Self Driving has been driving people incident-free from New York to California since 2017.

The thing with AI development is that it rapidly gets to 50% of the desired solution, but then it gets stuck there, never becoming consistently good enough that you can actually rely on it.

[–] Zeth0s@lemmy.world -1 points 2 years ago* (last edited 2 years ago) (1 children)

I don't really understand what that means. If the product is unreliable, people won't use it, and everything will stay as it is now. It's not a big issue. But it is already pretty reliable for many use cases.

Realistically, the real problem in the future will be monetization (which is what's causing Google's issues), not features.

[–] Phanatik@kbin.social 1 points 2 years ago (1 children)

Well, here's the thing. How often are you willing to dismiss the misses because of the hits? Your measure of unreliability is now subject to bias because you're no longer assessing the bot's answers objectively.

[–] Zeth0s@lemmy.world 0 points 2 years ago* (last edited 2 years ago)

I don't expect it to be 100% correct. I have realistic expectations built on experience. No source is 100% reliable. A friend might be 50% reliable, an expert maybe 95%, a random web page probably 40%... I don't know.

I've built up strategies to address uncertainty by applying critical thinking. It's not much different than in the past. In my experience, ChatGPT 4 is currently more reliable than a random web page on the first page of a Google search, unless I'm specifically searching for a trustworthy source such as the NHS or the Guardian.

The main problem is the drop in quality of search engines. For instance, I often start with ChatGPT 4 without plugins to focus my research. Once I understand what I should look for, I use search engines for targeted searches on official websites or documentation pages.
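
As a rough illustration of that two-step workflow, here's a minimal sketch in Python. It assumes access to the model via the OpenAI SDK rather than the ChatGPT web UI, and the model name, example question, and site list are purely illustrative, not anything the commenter specified:

```python
# Step 1: ask the model for orientation (what terms/concepts to look for).
# Step 2: build focused, site-restricted searches against official sources.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from urllib.parse import quote_plus

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def orient(question: str) -> str:
    """Use the model to figure out what to actually search for."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                "Briefly explain the key terms and concepts I should search for "
                f"to answer this question, without answering it directly: {question}"
            ),
        }],
    )
    return response.choices[0].message.content


def focused_search_urls(search_terms: str, sites: list[str]) -> list[str]:
    """Build site-restricted search queries for official/primary sources."""
    return [
        f"https://www.google.com/search?q={quote_plus(f'site:{site} {search_terms}')}"
        for site in sites
    ]


if __name__ == "__main__":
    # Hypothetical example question and documentation site.
    question = "How do I configure retention for Postgres WAL archiving?"
    print(orient(question))
    for url in focused_search_urls("WAL archiving retention", ["postgresql.org"]):
        print(url)
```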