this post was submitted on 13 Dec 2023
97 points (92.2% liked)

Mind-reading AI can translate brainwaves into written text: Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thou...

A system that records the brain's electrical activity through the scalp can turn thoughts into words with help from a large language model – but the results are far from perfect

[–] knightly@pawb.social 1 points 11 months ago (1 children)

But they don't "answer questions"; they just respond to prompts. You can't use them to learn anything without checking their responses against authoritative sources you should have used in the first place.

There's no intelligence there, just a plagiarism laundromat and some rules for formatting text like a 7th grader.

[–] Not_mikey@lemmy.world 0 points 11 months ago (1 children)

It can answer questions as well as any person. Just because you may need to check with another source doesn't mean it didn't answer the question; it just means you can't fully trust it. If I ask someone who the fourth U.S. president was and they say Jefferson, they still answered the question, they just answered it wrong. You also don't always have to check with another source, in the same way you wouldn't with a person, if the answer sounds right. If that person answered Madison and I faintly recalled it and thought it sounded right, I would probably not check their answer and would take it as fact.

For example, I asked ChatGPT for a chocolate chip cookie recipe once. I make cookies pretty often, so I would know if the recipe seemed off, but the one it provided seemed good; I followed it and made some pretty good cookies. It answered the question correctly, as shown by the cookies. You could argue it plagiarized, but while the ingredients and steps were pretty close to some recipes I found later, none were a perfect match, which is about as good as you can get with recipes, since they tend to converge on the same thing. The only real difference between most of them is the dumb story they give at the beginning, which thankfully ChatGPT doesn't do.

The 7th grader and plagiarism comments make me think you haven't played with them much or really tested them. I have had it write contracts, one of which I had reviewed by a lawyer who only had some small comments, as well as other letters and documents I needed for my mortgage and buying a home. All of these were looked over by professionals, and none of them realized a bot wrote them. None of them were plagiarized either, because the parameters I gave it and the output it created were far too unique to be in its training set.

[–] knightly@pawb.social 1 points 11 months ago (1 children)

It can answer questions as well as any person.

The 7th grader and plagiarism comment make me think you haven't played with them much or really tested them.

Of course I have; my employer has me shoehorning ChatGPT into everything, and I agree with what the research says: children can answer questions better than LLMs can.

https://techxplore.com/news/2023-12-artificial-intelligence-excel-imitation.html

Stochastic plagiarism is still plagiarism.

[–] Not_mikey@lemmy.world 0 points 11 months ago

That study is like giving a written test to an illiterate adult, seeing them do worse than a child, and concluding they aren't intelligent or innovative. Like I said earlier, intelligence is multi-faceted, and ChatGPT excels at rhetorical, conversational, and other types of written intelligence. It does not, as that study shows, do well at spatial manipulation, but that doesn't mean it's not intelligent. If you gave that same test to a paralyzed, blind person with little to no concept of spatial reality, they'd probably do just as badly. If you asked them to compose a short story or an essay, they might be good at it, because that's where their capabilities lie. That short story could still be innovative in its composition and characters, and could be way better than anything a child wrote.

You have to measure different types of intelligence with different tests. If you asked ChatGPT and a set of adults and children to write a short story about a wholly new subject, ChatGPT would beat most of the children and probably some of the adults.

And if that short story is about a new subject matter completely outside its training set, what/who is it plagiarizing from? You could say it's taking common tropes, themes, and story elements from other stories, but that's fundamentally what a lot of writing and culture is. If that's plagiarism, then you should be more worried about the Marvel franchise, as it's a plagiarism machine with way more cultural impact.