They're right
If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!
They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.
So LLMs are confident you say. Like a very confident man. A confidence man. A conman.
No one has asked so I am going to ask:
What is Elon University and why should I trust them?
I guess the "90% marketing" (re: Linus Torvalds) is working.
I wouldn't be surprised if that's true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it's only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.
It's a one trick pony.
That trick also happens to be a really neat trick that can make people think it's a swiss army knife instead of a shovel.
I don't think a single human exists who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No, obviously not, based on what we've seen so far. But the amount of information available to these LLMs is incredible and can be very useful. A library contains a lot of useful information too, but it isn't intelligent itself.
Aside from the unfortunate name of the university, I think that part of why LLMs may be perceived as smart or 'smarter' is because they are very articulate and, unless prompted otherwise, use proper spelling and grammar, and tend to structure their sentences logically.
Which 'smart' humans may not do, out of haste or contextual adaptation.
While this is pretty hilarious, LLMs don't actually "know" anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps "words" to other "words" to allow a computer to model language. I.e., all an LLM knows is that when it sees "I love", what probably comes next is "my mom | my dad | etc.". Because of this behavior, and because we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are essentially by chance mostly okay at "answering" a question. Really, though, they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
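To make the "next most likely word" idea concrete, here's a toy bigram sketch in Python. This is purely my own illustration (the training text is made up, and real LLMs use learned neural weights over huge contexts, not raw word counts), but the generation loop is the same idea: look at what came before, emit the most likely next word, repeat.

```python
# Toy bigram "language model": count which word follows which,
# then generate by repeatedly emitting the most likely next word.
from collections import Counter, defaultdict

# Made-up training text, echoing the "I love ..." example above.
training_text = "i love my mom . i love my dad . i love my dog .".split()

# follows[w] counts how often each word appears right after w.
follows = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    follows[word][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    """Greedily emit the single most likely next word, over and over."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("i"))  # e.g. "i love my mom . i" (fluent-ish, zero understanding)
```

A real model swaps the raw counts for a trained neural network and conditions on far more context, but it's still that same next-token step at the core.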
Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.
For example, why don't LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?
It's a rhetorical question; we know why, and I think we should focus on that, not on its effects. It's also much cheaper and easier to do than refilling years of quality education into individuals' heads.
This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit generator toys.
Already happened at my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
Given the US adults I see on the internet, I would hazard a guess that they're right.
There are a lot of ignorant people out there, so yeah, technically an LLM is smarter than most people.
Do the other half believe it is dumber than it actually is?
I wasn't sure from the title if it was "Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are." or "Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are." It's the former, although you could probably argue the latter is true too.
Either way, I'm not surprised that people rate LLMs' intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn't answer and get a decent answer. I feel like they're generally good at meeting people's expectations of a "smart person", even if they have major shortcomings in other areas.
oh my god 49% of LLM users are pathologically stupid.
and still wrong.
"US".... Even LLM won't vote for Trump
The funny thing about this scenario is that simply by thinking it's true, it actually becomes true.
This is hard to quantify. I use them constantly throughout my work day now.
Are they smarter than me? I'm not sure. Haven't thought too much about it.
What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.
I can read error stacks coming from my code, but before I've even read the first few lines, the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is it right, optimal, and free of new bugs? Maybe 75% of the time at this point. I can coax it, iterate on the solution myself, or fix it entirely myself with the understanding of the bug it gave me. The same bug might have taken me hours to figure out on my own.
My point is, I'm not sure how to compare smarter vs orders of magnitude faster.