this post was submitted on 23 May 2024
1842 points (98.6% liked)

Technology


Archive link: https://archive.ph/GtA4Q

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

A joke that people made when Google and Reddit announced their data sharing agreement was that Google’s AI would become dumber and/or “poisoned” by scraping various Reddit shitposts and would eventually regurgitate them to the internet. (This is the same joke people made about AI scraping Tumblr). Giving people the verbatim wisdom of Fucksmith as a legitimate answer to a basic cooking question shows that Google’s AI is actually being poisoned by random shit people say on the internet.

Because Google is one of the largest companies on Earth and operates with near impunity and because its stock continues to skyrocket behind the exciting news that AI will continue to be shoved into every aspect of all of its products until morale improves, it is looking like the user experience for the foreseeable future will be one where searches are random mishmashes of Reddit shitposts, actual information, and hallucinations. Sundar Pichai will continue to use his own product and say “this is good.”

[–] homesweethomeMrL@lemmy.world 27 points 6 months ago (2 children)

Wow, that is . . some art. There.

Fucksmith would probably approve.

[–] jabathekek@sopuli.xyz 22 points 6 months ago

The legendary hero, Fucksmith, of the by-gone age.

[–] SharkAttak@kbin.social 26 points 6 months ago (6 children)

It almost makes me regret deleting all of my comments..
almost.

[–] UnhingedFridge@lemmy.world 25 points 6 months ago

I haven't laughed this fucking hard all year. Good stuff.

[–] simplejack@lemmy.world 25 points 6 months ago (1 children)

It begins:

Me:

Have people tried using a coconut as a fleshlight? If so, what happened?

Gemini fed by Reddit:

It appears people have indeed attempted using coconuts for this purpose, and it's not a pretty story. There are accounts online of things going very wrong, like maggots. In some cases, the coconut being used started to rot, attracting flies which laid eggs, resulting in a maggot infestation.

[–] Eeyore_Syndrome@sh.itjust.works 25 points 6 months ago (1 children)

I was curious, so I fired up Gemini on my phone.

I would be sad if the glue didn't withstand baking temperatures 🥹😭

I deleted my Reddit account and submitted my GDPR erasure request on June 30th, when the CEO axed the API and sold out, but that's just me. Everyone's free to do what they want.

[–] andrew_bidlaw@sh.itjust.works 23 points 6 months ago (5 children)

Their probable way to solve it? Hire hundreds of $2/hour foreign workers to verify outputs.

[–] Kolanaki@yiffit.net 22 points 6 months ago (1 children)

It's weird because it's not exactly misinformation... If you're trying to make a pizza commercial and want that ridiculous cheese pull they always show.

[–] tal@lemmy.today 20 points 6 months ago* (last edited 6 months ago) (4 children)

Some food discoveries have been made by doing what I would call some alarmingly questionable stuff.

I was pretty shocked when I discovered how artificial sweeteners were generally discovered. It frequently involved a laboratory where unknown chemicals accidentally wound up in some researcher's mouth.

Saccharin

Saccharin was produced first in 1879, by Constantin Fahlberg, a chemist working on coal tar derivatives in Ira Remsen's laboratory at Johns Hopkins University.[21] Fahlberg noticed a sweet taste on his hand one evening, and connected this with the compound benzoic sulfimide on which he had been working that day.[22][23]

Cyclamate

Cyclamate was discovered in 1937 at the University of Illinois by graduate student Michael Sveda. Sveda was working in the lab on the synthesis of an antipyretic drug. He put his cigarette down on the lab bench, and when he put it back in his mouth, he discovered the sweet taste of cyclamate.[3][4]

Aspartame

Aspartame was discovered in 1965 by James M. Schlatter, a chemist working for G.D. Searle & Company. Schlatter had synthesized aspartame as an intermediate step in generating a tetrapeptide of the hormone gastrin, for use in assessing an anti-ulcer drug candidate.[54] He discovered its sweet taste when he licked his finger, which had become contaminated with aspartame, to lift up a piece of paper.[10][55]

Acesulfame potassium

Acesulfame potassium was developed after the accidental discovery of a similar compound (5,6-dimethyl-1,2,3-oxathiazin-4(3H)-one 2,2-dioxide) in 1967 by Karl Clauss and Harald Jensen at Hoechst AG.[16][17] After accidentally dipping his fingers into the chemicals with which he was working, Clauss licked them to pick up a piece of paper.[18]

Sucralose

Sucralose was discovered in 1976 by scientists from Tate & Lyle, working with researchers Leslie Hough and Shashikant Phadnis at Queen Elizabeth College (now part of King's College London).[16] While researching novel uses of sucrose and its synthetic derivatives, Phadnis was told to "test" a chlorinated sugar compound. According to an anecdotal account, Phadnis thought Hough asked him to "taste" it, so he did and found the compound to be exceptionally sweet.[17]

Maybe we'll find that glue pizza works.

[–] pkmkdz@sh.itjust.works 22 points 6 months ago* (last edited 6 months ago) (1 children)

And then they just slap a small disclaimer at the bottom of the page, "AI may make mistakes," and they're safe legally. I hope ~~there will be a class action lawsuit against them some day regardless.~~ this shit gets regulated before anyone hurts themselves.

[–] NotMyOldRedditName@lemmy.world 16 points 6 months ago* (last edited 6 months ago) (1 children)

Air Canada tried this and lost in court.

The AI gave wrong advice on a policy, a person acted on it, and then Air Canada said, nah dude, the AI was wrong, tough shit.

[–] can@sh.itjust.works 14 points 6 months ago* (last edited 6 months ago)

More info

Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.

In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.

"While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."

[–] Maggoty@lemmy.world 21 points 6 months ago

I'm just thinking of all the really dumb shit we all said on Reddit as satire. Oh I need to go search military meme stuff!

[–] CosmicCleric@lemmy.world 20 points 6 months ago* (last edited 6 months ago)

because its stock continues to skyrocket behind the exciting news that AI will continue to be shoved into every aspect of all of its products until morale improves,

Okay, I have to admit, this made me laugh. Definitely commentary, but still, a good read.

~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~

[–] NutWrench@lemmy.world 16 points 6 months ago (3 children)

They also highlight the fact that Google’s AI is not a magical fountain of new knowledge, it is reassembled content from things humans posted in the past indiscriminately scraped from the internet and (sometimes) remixed to look like something plausibly new and “intelligent.”

This. "AI" isn't coming up with new information on its own. The current state of "AI" is a drooling moron, plagiarizing any random scrap of information it sees in a desperate attempt to seem smart. The people promoting AI are scammers.

[–] andrewth09@lemmy.world 16 points 6 months ago (1 children)

Imagine using the resources of a small country just to generate responses to questions that have the same reliability and verifiability of your stoner older brother remembering something he read online.

[–] duffman@lemmy.world 16 points 6 months ago (1 children)

I Googled some extremely invasive weed (creeping buttercup) and Google suggested letting it be, quoting some awful Reddit comment.

[–] HawlSera@lemm.ee 16 points 6 months ago (1 children)

Can reddit just fucking die off?

[–] tal@lemmy.today 15 points 6 months ago* (last edited 6 months ago) (1 children)

For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for.

You don't have to pay anything to train on the wisdom of the crowd on the Fediverse!
