this post was submitted on 17 May 2024
496 points (94.8% liked)

Technology

[–] Lmaydev@programming.dev 33 points 6 months ago (7 children)

Honestly I feel people are using them completely wrong.

Their real power is their ability to understand language and context.

Turning natural language input into commands that can be executed by a traditional software system is a huge deal.

Microsoft released an AI-powered autocomplete text box, and it's genius.

Currently you have to type an exact text match in an autocomplete box. So if you type "cats" but the item is called "pets", you'll get no results. Now the AI can find context-based matches in the autocomplete list.
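To make that concrete, here's a toy sketch of context-based matching. The hand-made 3-d vectors stand in for the embeddings a real model would produce, and all item names are made up:

```python
import math

# Toy "embeddings": hand-made vectors standing in for what a trained
# model would produce for each word. Similar meanings point the same way.
EMBEDDINGS = {
    "cats":     [1.0, 0.7, 0.0],
    "pets":     [0.9, 0.8, 0.0],
    "invoices": [0.0, 0.1, 0.0],
    "cars":     [0.0, 0.2, 1.0],
}

def cosine(a, b):
    """Cosine similarity: ~1.0 means same direction, ~0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_complete(query, items, threshold=0.8):
    """Return autocomplete items semantically close to the query, best first."""
    q = EMBEDDINGS[query]
    scored = sorted(((cosine(q, EMBEDDINGS[i]), i) for i in items), reverse=True)
    return [i for score, i in scored if score >= threshold]

print(semantic_complete("cats", ["pets", "invoices", "cars"]))  # → ['pets']
```

An exact-match box returns nothing for this query; the similarity score is what lets "pets" surface.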

This is their real power.

Also, they're amazing at generating non-factual content: stories, poems, etc.

[–] noodlejetski@lemm.ee 52 points 6 months ago (2 children)

Their real power is their ability to understand language and context.

...they do exactly none of that.

[–] breakingcups@lemmy.world 23 points 6 months ago (1 children)

No, but they approximate it. Which is fine for most use cases the person you're responding to described.

[–] FarceOfWill@infosec.pub 19 points 6 months ago (1 children)

They're really, really bad at context. The main failure case isn't making things up; it's that text or imagery in one part of the result doesn't work with text or imagery in another part, because they can't even manage context across their own replies.

See images with three hands, bow strings that mysteriously vanish, etc.

[–] FierySpectre@lemmy.world -1 points 6 months ago

New models are really good at context; the amount of input that can be given to them has exploded fairly recently, so you can give whole datasets or books as context and ask questions about them.

[–] Lmaydev@programming.dev 0 points 6 months ago

They do it much better than anything you can hard code currently.

[–] Blue_Morpho@lemmy.world 26 points 6 months ago (1 children)

So if you type cats but the item is called pets you'll get no results. Now the AI can find context-based matches in the autocomplete list.

Google added context search to Gmail and it's infuriating. I'm looking for an exact phrase that I even put in quotes but Gmail returns a long list of emails that are vaguely related to the search word.

[–] Lmaydev@programming.dev 2 points 6 months ago (1 children)

That is indeed a poor use. Searching traditionally first and falling back to it would make way more sense.

[–] Blue_Morpho@lemmy.world 3 points 6 months ago

It shouldn't even fall back automatically. If I'm looking for an exact phrase and it doesn't exist, the result should be "nothing found", so that I can search somewhere else for the information. A prompt like "Nothing found. Look for related information?" would be useful.

But returning a list of related information when I need an exact result is worse than not having search at all.
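A minimal sketch of that behaviour, with made-up data: the exact-match step never silently hands off to a fuzzy search, it only offers to.

```python
def search(query, emails):
    """Exact-substring search that never silently falls back to fuzzy results."""
    hits = [e for e in emails if query.lower() in e.lower()]
    if hits:
        return hits
    # Offer the semantic fallback instead of running it automatically:
    return ["Nothing found. Look for related information?"]

emails = ["Vet appointment for the cats", "Invoice #42 from the garage"]
print(search("cats", emails))  # exact hit
print(search("dogs", emails))  # explicit prompt, not vaguely related emails
```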

[–] Apytele@sh.itjust.works 23 points 6 months ago

Yes and no. I've had to insert a LOT of meaning to get a story with any substance, and I've had to do a lot of editing to get good images. It's really good at giving me a figure that's 90% done, but that last 10% of touch-up still often takes me a day or so of work.

[–] hedgehogging_the_bed@lemmy.world 13 points 6 months ago (1 children)

Searching with synonym matching is decades old at this point. I worked on it as an undergrad in the early 2000s and it wasn't new then, just complicated. Google's version improved over other search algorithms for a long time, and then they trashed it by letting AI take over.

[–] Lmaydev@programming.dev 3 points 6 months ago* (last edited 6 months ago) (1 children)

Google's algorithm has pretty much always used AI techniques.

It doesn't have to be a synonym. That's just an example.

Typing diabetes and getting medical services as a result wouldn't be possible with that technique unless you had a database of every disease to search against for all queries.

The point is that with AI you don't need a giant lookup table of linked items, because those associations are already trained into the model.

[–] hedgehogging_the_bed@lemmy.world 1 points 6 months ago

Yes, synonym searching doesn't strictly mean the thesaurus. There are a lot of different ways to connect related terms and some variation in how they are handled from one system to the next. Letting machine learning into the mix is a very new step in a process that Library and Information Sci has been working on for decades.

[–] Voroxpete@sh.itjust.works 9 points 6 months ago (1 children)

That's called "fuzzy" matching, it's existed for a long, long time. We didn't need "AI" to do that.

[–] Lmaydev@programming.dev -1 points 6 months ago* (last edited 6 months ago) (1 children)

No it's not.

Fuzzy matching is a search technique that compares two strings while allowing some degree of difference between them, typically measured by edit distance or character overlap.

That handles mistyping etc. It doesn't allow context-based searching at all: "cat" doesn't fuzzy-match "pet", because there's no string similarity between them.

Also it is an AI technique itself.
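For what it's worth, Python's stdlib difflib shows the difference with a made-up item list: it happily matches near-spellings but never bridges cats→pets.

```python
from difflib import get_close_matches

items = ["pets", "cars", "rats"]

# Fuzzy matching scores *string* similarity, so near-spellings of "cats"
# come back as matches...
print(get_close_matches("cats", items))

# ...but "pets" never will: the strings barely overlap, even though it's
# the contextually right answer. That gap is what semantic matching fills.
```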

[–] hedgehogging_the_bed@lemmy.world 1 points 6 months ago (1 children)

Bullshit, fuzzy matching is a lot older than these AI LLMs.

[–] Lmaydev@programming.dev 1 points 6 months ago

I didn't say LLM. AI has existed since the 50s/60s. Fuzzy matching is an AI technique.

[–] Th4tGuyII@kbin.social 9 points 6 months ago

Exactly. The big problem with LLMs is that they're so good at mimicking understanding that people forget that they don't actually have understanding of anything beyond language itself.

The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.

Like in your example, an LLM doesn't know what a cat is, but it knows what words describe a cat based on training data - and for a search engine, that's all you need.

[–] not_amm@lemmy.ml 2 points 6 months ago

That's why I only use Perplexity. ChatGPT can't give me sources unless I pay, so I can't trust the information it gives me, and it also hallucinated a lot when coding; it was faster to search the official documentation than to correct and debug code "generated" by ChatGPT.

I use Perplexity + SearXNG, so I can search a lot faster and cite sources, and it also makes summaries of your search, so it saves me time when writing introductions and the like.

It sometimes hallucinates too and cites weird sources, but it's faster for me to correct it and search for better sources given the context and extra ideas. In summary, as long as you're checking the prompts and searching outside Perplexity as well, you already have something useful.

BTW, I try not to use it a lot, but it's way better for my workflow.