this post was submitted on 21 May 2025

Futurology

[–] Opinionhaver@feddit.uk 2 points 3 days ago (2 children)

Have you actually tested these tools? Because while they’re not flawless, I’d still say that even in their current form, they’re pretty damn good.

I love how quickly we get used to things. ChatGPT would’ve been considered straight-up magic ten years ago - and now we’re just shitting on it for the edge cases where it gets something blatantly wrong, while completely dismissing the countless times it does exactly what it’s supposed to.

[–] SchizoDenji@lemm.ee 1 point 2 days ago

I have, and they degrade significantly whenever the context grows beyond 8,000 tokens.
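For context, here is a rough way to check whether a prompt is anywhere near that 8,000-token range. The 4-characters-per-token ratio below is only a common rule of thumb for English text, not any model's actual tokenizer; exact counts require the model's own tokenizer (e.g. OpenAI's tiktoken library).

```python
# Crude sketch: estimate token count from character length.
# The 4-chars-per-token ratio is a heuristic assumption, not a real tokenizer.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for English prose."""
    return int(len(text) / chars_per_token)

def exceeds_context_budget(text: str, budget: int = 8000) -> bool:
    """True if the estimate passes the claimed degradation threshold."""
    return estimate_tokens(text) > budget

short_prompt = "Summarize this paragraph."
long_prompt = "word " * 10000  # 50,000 characters, ~12,500 estimated tokens
print(exceeds_context_budget(short_prompt))  # False
print(exceeds_context_budget(long_prompt))   # True
```

Whether quality actually drops past that point varies by model; this only measures how close a given prompt is to the threshold the comment describes.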

[–] nimpnin@sopuli.xyz 5 points 3 days ago (1 child)

It’s approaching the lower levels of human reasoning, which, as we have realized over the past few years, isn’t that impressive.

[–] Infinite@lemmy.zip 2 points 3 days ago (1 child)

Yeah, I bet at least half of the major models would fall for disinformation and vote for fascists. The Turing test is seeing whether the AI will commit hate crimes with a legal justification, right? Definitely by next year.

[–] Opinionhaver@feddit.uk 0 points 3 days ago* (last edited 3 days ago)

> I bet at least half of the major models would fall for disinformation and vote for fascists.

And what is this assumption based on?