this post was submitted on 22 Aug 2023
166 points (94.1% liked)

Technology

34987 readers
170 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] bh11235@infosec.pub 62 points 1 year ago (6 children)

Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.

Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, as opposed to several cases where GPT has literally bailed us out of a difficult situation. e.g. some proof of concept needed to be written in a programming language that no one in the group had enough experience with. With no GPT, this could have easily cost someone a week. With GPT assistance -- proof of concept ready in less than a day.

Generative AI does suffer from a host of problems. Hallucinations, jailbreaks, injections, reality 101 failures, believe me I've encountered all these intimately as I've had to utilize GPT for some of my day job tasks, often against its own better judgment and despite its own woefully lacking capacity to deal with the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, and constitute a hard ceiling for where generative AI can go? Is there an "impossibility theorem for putting AI on autopilot"? Or are these limitations just artifacts we can engineer away and route around?

It seems like instead of having this discussion, it's become in vogue to wave around the issues triumphantly and implicitly declare the field successfully dunked on, and the discussion over. That's, to be blunt, reductive. Smartphones had issues, the early internet had issues. Sure, "they also laughed at Bozo the clown" and all that, but without a serious discussion of the landscape right now, of how far away we are from mitigating these issues and why, a lot of this "ha ha suck it AI" discourse strikes me as deeply performative. Like, suppose a year from now OpenAI solves hallucinations. The issue is just gone. Do all the cool kids who sneered at the invented legal precedents, crafted their image as knowing better than the OpenAI dweebs, elegantly implied how hallucinations are a cornerstone in how the entire field is a stupid useless dead end -- do they lose any face? I think they don't. I think this is why this sneering has become such a lucrative online professional sport.

[–] joe@lemmy.world 10 points 1 year ago (1 children)

It's anecdotal but I have found that the people who are "skeptical" (to use your word) about generative AI often turn out to be financially dependent on something that generative AI can do.

That it to say, they're worried it will replace them at their job and so they very much want it to fail.

[–] nuxetcrux@lemmy.world 6 points 1 year ago

You have to have some skin in the game for that kind of cognitive dissonance. I think some are even resentful they can't understand it. A 21st century cotton gin.

load more comments (4 replies)