this post was submitted on 26 Aug 2025
85 points (98.9% liked)

There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) "Write an A-Level paper on the themes in Shakespeare's Romeo and Juliet."

I propose a fourth: AI is now as good as it's going to get, and that's neither as good nor as bad as its fans and haters think, and you're still not going to get an A on your report.

You see, now that people have been using AI for everything and anything, they're beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.

My take is that LLMs can speed up some work, like paraphrasing, but the time saved just gets diverted into verifying the output.

[–] pglpm@lemmy.ca 26 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

They can be useful when used "in the negative". In a physics course at an institution near me, students are asked to check whether the answers an LLM/GPT gives to physics questions are correct or not, and to explain why.

On the one hand, this puts the students with their backs against the wall, so to speak, because clearly they can't use the same (or another) LLM/GPT to do the checking, or they'd be going in circles.

But on the other hand, they actually feel empowered when they catch the errors in the LLM/GPT; they really get a kick out of that :)

As a bonus, the students see for themselves that LLMs/GPTs are often grossly or subtly wrong when answering technical questions.
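
(A hypothetical illustration of the kind of subtle error involved, mine rather than the course's: an LLM might claim that a projectile launched at speed v and angle θ has range R = v²·sin(θ)/g on level ground, when the correct result is R = v²·sin(2θ)/g. The two formulas happen to agree at θ = 60°, so a single spot check can miss the mistake; the students have to actually redo the derivation to catch it.)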

[–] Megaman_EXE@beehaw.org 11 points 2 weeks ago

I've heard of this kind of AI usage a few times now, and it seems so smart. You're learning by teaching, but also being trained in AI literacy and the pitfalls of AI. It encourages critical thinking and genuine learning at the same time.

[–] Powderhorn@beehaw.org 3 points 2 weeks ago

In addition to being a fucking brilliant idea for that course, this approach should be adapted more widely. I suspect, having been young once myself, that you're going to get far more buy-in from showing students how often it's wrong than from telling them not to use it.