At my work, it's become common for people to say "AI level" when giving a confidence score. Without anything else being said, everyone seems to understand the situation perfectly, even if they're hearing it for the first time.
Keep in mind, we have our own in-house models that are bloody fantastic, used across different sciences and research. We'd never talk ill of those, but that's not the first thing that comes to mind when people hear "AI" these days.
I'm a scientist who has become super interested in this stuff in recent years, and I've adopted the habit of calling the legit stuff "machine learning", reserving "AI" for the hype machine bullshit.
This hits hard. I was in college when I first learned that a machine had solved the double pendulum problem. Problem is, we have no idea how the resulting equation actually works. I remember thinking about all the stuff machine learning could solve. Then they over-hyped these LLMs that are good at ::checks my notes:: chatting with you...
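For anyone curious, here's a toy sketch of that situation in Python (my own illustration, not the setup from the actual study): fit a small neural network to the double pendulum's equations of motion. It learns to predict the dynamics quite well, yet nothing in its weights reads back out as an equation.

```python
# Toy sketch: a black-box model that "solves" double pendulum dynamics.
# Equal masses and arm lengths (m = l = 1) to keep the equations short.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

g = 9.81

def derivatives(s):
    """Analytic equations of motion; state columns are [th1, th2, w1, w2]."""
    th1, th2, w1, w2 = s[:, 0], s[:, 1], s[:, 2], s[:, 3]
    d = th1 - th2
    den = 3.0 - np.cos(2.0 * d)
    dw1 = (-3.0 * g * np.sin(th1) - g * np.sin(th1 - 2.0 * th2)
           - 2.0 * np.sin(d) * (w2**2 + w1**2 * np.cos(d))) / den
    dw2 = (2.0 * np.sin(d) * (2.0 * w1**2 + 2.0 * g * np.cos(th1)
                              + w2**2 * np.cos(d))) / den
    return np.column_stack([w1, w2, dw1, dw2])

# Sample random states and train a net to map state -> time derivative.
rng = np.random.default_rng(0)
states = rng.uniform([-np.pi, -np.pi, -2, -2], [np.pi, np.pi, 2, 2], (20000, 4))
X_train, X_test, y_train, y_test = train_test_split(
    states, derivatives(states), random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on unseen states:", model.score(X_test, y_test))
# High score, but model.coefs_ is ~5,000 opaque numbers, not physics.
```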
It's certainly becoming easier, but I don't like it.
We have a cool AI that's a big problem solver, but its outputs are complex. We've attached a GPT to it purely to act as a kind of translator or summariser, to save time understanding what the AI has done and why. It's great. But we definitely don't see the GPT as offering any sort of intelligence; it's just a reference-based algorithmic protocol bolted onto an actual AI. Protocols are, after all, a set of rules or processes to follow. The GPT isn't offering any logic, reasoning, planning, etc., which are still the conditions of intelligence in computer science. But it can certainly give off the impression of intelligence, since it's literally designed to impersonate it.
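The general pattern looks roughly like this (a rough sketch with a hypothetical solver payload, using the public OpenAI client as a stand-in for "a GPT"; I obviously don't know their actual stack):

```python
# Sketch of the "translator" pattern: a real solver does the work, and an LLM
# only rephrases its structured output for humans. The payload is made up.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical structured result from the actual problem-solving model.
solver_output = {
    "objective_value": 0.0132,
    "active_constraints": ["mass_balance_3", "temp_limit_reactor_2"],
    "decision_vars": {"feed_rate": 4.7, "reflux_ratio": 1.9},
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarise these solver results in plain language for "
                    "engineers. Do not add conclusions not present in the data."},
        {"role": "user", "content": json.dumps(solver_output)},
    ],
)
print(response.choices[0].message.content)
```

All the logic, planning, and optimisation happened in the solver before the LLM ever saw the result; the GPT step is pure presentation.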
Lmao, I'm doing the exact same thing. I'm a ChemE doing a lot of work on AI-based process controls, and I've coached my team to say "ML" or "machine learning" for these systems, because things like ChatGPT, which most people see as toys, are all anyone thinks of when they hear "AI." The other day someone asked me, "So have you gotten ChatGPT running the plant yet?" I laughed, said no, and explained the difference between what we're doing and the AI you see in the news. I've even had to include slides in just about every presentation on this saying "no, we are not just asking ChatGPT how to run the process," because that's the first thing that comes to mind, and it scares them, since ChatGPT is famously very prone to making shit up.