[–] Phegan@lemmy.world 144 points 5 months ago (6 children)

It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AIs is to make things up based on their training data. They don't learn, they generate. It's all made up, yet they want to slap it on a search engine as though it provided factual information.

[–] SpaceCowboy@lemmy.ca 30 points 5 months ago (1 children)

Yeah, I use ChatGPT fairly regularly for work. For a reminder of the syntax of a method I used a while ago, or for things like converting JSON into a class (which is trivial to do, but having ChatGPT do it saves me a lot of typing), it works pretty well.
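
For instance, the JSON-to-class conversion is the kind of rote transformation meant here (a made-up sketch; the JSON shape and the `Product` class are hypothetical):

```csharp
// Made-up input JSON:
//   { "id": 42, "name": "Widget", "price": 9.99 }

// The matching class you'd ask ChatGPT to generate:
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}
```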

But I'm not using it for precise, authoritative information; for that, I go to a search engine and find a trustworthy site.

Putting the fuzzy, usually-close-enough (but sometimes not!) answers at the top, when I'm looking for a site that will give me a concrete answer, just mixes two different use cases for no good reason. If Google wants to get into the AI game, they should put it on a separate page from search.

[–] KairuByte@lemmy.dbzer0.com 7 points 5 months ago

Yeah, it's damn good at translating between languages, or at things that are simple in concept but drawn out in execution.

Used it the other day to translate a complex EF method-syntax statement into query syntax. It got it mostly right and needed some tweaking, but it saved me about 10 minutes of humming and hawing to make sure I'd done it correctly (honestly, I don't use query syntax often).
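
Roughly the kind of conversion in question (a minimal sketch over plain in-memory data rather than a real EF query; the `orders` shape is made up):

```csharp
using System;
using System.Linq;

var orders = new[]
{
    new { Id = 1, Total = 120m },
    new { Id = 2, Total = 80m },
    new { Id = 3, Total = 200m },
};

// Method syntax, the style most EF code is written in...
var bigOrders = orders
    .Where(o => o.Total > 100)
    .OrderByDescending(o => o.Total)
    .Select(o => o.Id);

// ...and the same query in query syntax.
var bigOrders2 = from o in orders
                 where o.Total > 100
                 orderby o.Total descending
                 select o.Id;

Console.WriteLine(string.Join(", ", bigOrders2)); // 3, 1
```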

[–] enleeten@discuss.online 24 points 5 months ago

They give zero fucks about their customers; they just want to pump that stock price so their RSUs vest.

This stuff could give you incurable, highly viral brain cancer that would eliminate the human race, and they'd spend millions burying the evidence.

[–] KevonLooney@lemm.ee 10 points 5 months ago

True, and it's excellent at generating basic lists of things. But you need a human to actually direct it.

Having Google just generate whatever text is like mashing the keys on a typewriter: you get tons of perfectly formed letters that mean nothing. They make no sense because no human is guiding them.

[–] hellofriend@lemmy.world 8 points 5 months ago (1 children)

It's like the difference between being given a grocery list from your mum and trying to remember what your mum usually sends you to the store for.

[–] deadbeef79000@lemmy.nz 7 points 5 months ago* (last edited 5 months ago) (1 children)

... Or calling your aunt and having her yell things at you that she thinks might be on your Mum's shopping list.

[–] Malfeasant@lemmy.world 3 points 5 months ago (1 children)

That could at least be somewhat useful... It's more like grabbing some random stranger and asking what their aunt thinks might be on your mum's shopping list.

[–] deadbeef79000@lemmy.nz 3 points 5 months ago

... but only one word at a time. So you end up with:

  • Bread
  • Cheese
  • Cow eggs
  • Chicken milk
[–] ricecake@sh.itjust.works 5 points 5 months ago

I mean, it does learn; it just lacks reasoning, common sense, and rationality.
What it learns is which words should come next, via a very complex and nuanced way of deciding that can very plausibly mimic the things it lacks, since the best sequence of next words is very often coincidentally reasoned, rational, or in line with common sense. Sometimes, though, it's just lies that fit the form of a good answer.
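
A toy illustration of that "learning what comes next" idea (a deliberately crude sketch; real LLMs use neural networks rather than lookup tables, but the training objective is the same in spirit):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Count which word follows which in a tiny "training corpus".
var corpus = "the cat sat on the mat and the cat ran".Split(' ');
var nextWord = new Dictionary<string, Dictionary<string, int>>();

for (int i = 0; i < corpus.Length - 1; i++)
{
    if (!nextWord.TryGetValue(corpus[i], out var counts))
        nextWord[corpus[i]] = counts = new Dictionary<string, int>();
    counts[corpus[i + 1]] = counts.GetValueOrDefault(corpus[i + 1]) + 1;
}

// The most frequent word after "the" is "cat": a plausible continuation,
// produced without any understanding of cats.
var guess = nextWord["the"].OrderByDescending(kv => kv.Value).First().Key;
Console.WriteLine(guess); // "cat"
```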

I've seen some people work on using it the right way, and it actually makes sense. It's good at understanding what people are saying, and what type of response would fit best. So you let it decide that, and give it the ability to direct people to the information they're looking for, without actually trying to reason about anything. It doesn't know what your monthly sales average is, but it does know that a chart of data from the sales system filtered to your user, specific product and time range is a good response in this situation.
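
A minimal sketch of that pattern (everything here, including `ClassifyWithLlm`, is a hypothetical stand-in; the point is that the model only classifies the request, while the actual data comes from real systems):

```csharp
using System;

Console.WriteLine(Handle("what's my monthly sales average?"));
Console.WriteLine(Handle("some free-form question"));

// Hypothetical stand-in for a real LLM call: it only maps the request
// to a known intent; it never generates the answer itself.
static Intent ClassifyWithLlm(string query) =>
    query.Contains("sales") ? new Intent("sales_chart", "monthly")
                            : new Intent("unknown", query);

static string Handle(string query) => ClassifyWithLlm(query) switch
{
    // The numbers come from the sales system, not from the model.
    { Kind: "sales_chart", Detail: var range } =>
        $"[chart of {range} sales, pulled from the sales system]",
    // When unsure, ask a clarifying question instead of guessing.
    _ => "Could you clarify what you're looking for?",
};

record Intent(string Kind, string Detail);
```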

The only issue with Google insisting on jamming it into the search results is that their entire product already consisted of providing pointers to the "right" data.

What they should have done was leave the "information summary" stuff in its role as quick-fact lookup, restrict it to Wikipedia and curated lists of trusted sources (Mayo Clinic, CDC, National Park Service, etc.), and then give it the ability to ask clarifying questions about searches, like "are you looking for product recalls, or recall as a product feature?", to disambiguate the query.

[–] platypus_plumba@lemmy.world -4 points 5 months ago* (last edited 5 months ago) (2 children)

It really depends on the type of information you're looking for. Anyone who understands how LLMs work will know when they're likely to get a good overview.

I usually see the results as quick summaries from an untrusted source. Even if they aren't exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

Today I searched something like "Are owls endangered?". I knew I was about to get a great overview because it's a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn't trust it.

It has improved my search experience... but I do understand that people would prefer it to be 100% accurate, since it's a search engine. If you refuse to tolerate inaccurate results, or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.

[–] RageAgainstTheRich@lemmy.world 4 points 5 months ago (1 children)

I think the issue is that most people aren't that bright and will not verify information like you or me.

They already believe every facebook post or ragebait article. This will sadly only feed their ignorance and solidify their false knowledge of things.

[–] platypus_plumba@lemmy.world -3 points 5 months ago* (last edited 5 months ago)

These are the same people who didn't understand that Google promotes sites based on SEO, regardless of the accuracy of their content, and so trusted whatever was on the first page.

If people don't understand the tools they're using and don't double-check information from single sources, I think that's kind of on them. I have a dietician friend, and I usually get back to him after doing my "Google research" for my diets... so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn't a scholarly or research search engine.

[–] rogue_scholar@eviltoast.org 2 points 5 months ago

> you can just disable it

This is not actually true. Google re-enables it, and there is no account setting to disable AI results. There is a URL flag that can do it, but it's not documented, and you need a browser plugin to apply it automatically.