this post was submitted on 24 Feb 2024
There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?
Image generation, video generation, self-driving cars (Level 4, meaning the driver doesn't need to pay attention at all times), and capable text comprehension and generation, whether it's used for translation, help with writing reports, or coding. And to top it all off, we have open-source models that are at least in the same ballpark as the closed ones, and those models can be run on consumer hardware.
Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.
I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between ChatGPT and open-source models means that people like us will probably continue to find ways of using AI as it improves, but what I keep seeing is botched applications, where neither the consumers nor the investors pushing AI really understand what it is or what it's useful for. It's like trying to dig a grave with a fork: people are going to throw away the fork and call it useless, not realising that's not how it's meant to be used.
I'm concerned about how the hype is playing out, because I wouldn't be surprised if people get so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.
I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.
I've seen a weird aspect of it from the science side, where people writing grant applications or writing papers feel compelled to incorporate AI into it, because even if they know that their sub-field has no reliable use-cases for AI yet, they're feeling the pressure of the hype.
Specifically, when I say the pressure of the hype, I mean that some of the best scientists I have known were pretty bad at the academic schmoozing that facilitates better funding and more prestige. In practice, businessmen in boardrooms are often the ones holding the purse strings and sometimes it's easier to try to speak their language than to "translate" one's research to something they'll understand.
Businessmen are just the public but with money.
Fair point. I personally think AI lives up to enough of the hype that there won't be another AI winter, but who knows. Some will obviously get disillusioned, but not enough.
Lol. It doesn't do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff being hyped as "AI" is just spam generators.
That's totally not how it works. Not only does nobody need such tools, but the technology was there well before the current state of AI.
Confidently incorrect.