this post was submitted on 31 May 2025
213 points (86.9% liked)

Showerthoughts

34879 readers
753 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts:

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so its not a lot of work, but having a few more mods would help reports get addressed a little sooner.

Whats it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] Showroom7561@lemmy.ca 12 points 6 days ago (1 children)

AI LLMs have been pretty shit, but the advancement in voice, image generation, and video generation in the last two years has been unbelievable.

We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people... and it only took 2-3 years to get there.

But LLMs will have a long way to go because of how they create content. It's very easy to poison LLM datasets, and they get worse learning from other generated content.

[–] MiyamotoKnows@lemmy.world 2 points 3 days ago

Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is being accountable for it. Fight back! It's as easy as typing false stuff at the end of your comments. As an 88 year old ex-pitcher for the Yankees who just set the new world record for catfish noodling you can take it from me!

[–] ipkpjersi@lemmy.ml 21 points 6 days ago (2 children)

I'd argue it has. Things like ChatGPT shouldn't be possible, maybe it's unpopular to admit it but as someone who has been programming for over a decade, it's amazing that LLMs and "AI" has come as far as it has over the past 5 years.

That doesn't mean we have AGI of course, and we may never have AGI, but it's really impressive what has been done so far IMO.

[–] jacksilver@lemmy.world 9 points 6 days ago

If you've been paying attention to the field, you'd see it's been a slow steady march. The technology that LLMs are based in were first published in 2016/2017, ChatGPT was the third iteration of the same base model.

Thats not even accounting for all the work done with RNNs and LSTMs prior to that, and even more prior.

Its definitely a major breakthrough, and very similar to what CNNs did for computer vision further back. But like computer vision, advancements have been made in other areas (like the generative space) and haven't followed a linear path of progress.

[–] Tedesche@lemmy.world 3 points 6 days ago

Agreed. I never thought it would happen in my lifetime, but it looks like we’re going to have Star Trek computers pretty soon.

[–] Pulptastic@midwest.social 12 points 6 days ago (1 children)

It has slowed exponentially because the models get exponentially more complicated the more you expect it to do.

[–] linearchaos@lemmy.world 8 points 6 days ago

The exponential problem has always been there. We keep finding tricks and optimizations in hardware and software to get by it but they're only occasional.

The pruned models keep getting better so now You're seeing them running on local hardware and cell phones and crap like that.

I don't think they're out of tricks yet, but God knows when we'll see the next advance. And I don't think there's anything that'll take this current path into AGI I think that's going to be something else.

[–] moseschrute@lemmy.world 13 points 6 days ago (1 children)

It has taken off exponentially. It’s exponentially annoying that’s it’s being added to literally everything

load more comments (1 replies)
[–] conditional_soup@lemm.ee 12 points 6 days ago (4 children)

Well, the thing is that we're hitting diminishing returns with current approaches. There's a growing suspicion that LLMs simply won't be able to bring us to AGI, but that they could be a part of or stepping stone to it. The quality of the outputs are pretty good for AI, and sometimes even just pretty good without the qualifier, but the only reason it's being used so aggressively right now is that it's being subsidized with investor money in the hopes that it will be too heavily adopted and too hard to walk away from by the time it's time to start charging full price. I'm not seeing that. I work in comp sci, I use AI coding assistants and so do my co-workers. The general consensus is that it's good for boilerplate and tests, but even that needs to be double checked and the AI gets it wrong a decent enough amount. If it actually involves real reasoning to satisfy requirements, the AI's going to shit its pants. If we were paying the real cost of these coding assistants, there is NO WAY leadership would agree to pay for those licenses.

Yeah, I don't think AGI = an advanced LLM. But I think it's very likely that a transformer style LLM will be part of some future AGI. Just like human brains have different regions that can do different tasks, an LLM is probably the language part of the "AGI brain".

load more comments (3 replies)
[–] Etterra@discuss.online 9 points 6 days ago (1 children)

How do you know it hasn't and us just laying low? I for one welcome our benevolent and merciful machine overlord.

load more comments (1 replies)
[–] utopiah@lemmy.world 5 points 6 days ago

LOL... you did make me chuckle.

Aren't we 18months until developers get replaced by AI... for like few years now?

Of course "AI" even loosely defined progressed a lot and it is genuinely impressive (even though the actual use case for most hype, i.e. LLM and GenAI, is mostly lazier search, more efficient spam&scam personalized text or impersonation) but exponential is not sustainable. It's a marketing term to keep on fueling the hype.

That's despite so much resources, namely R&D and data centers, being poured in... and yet there is not "GPT5" or anything that most people use on a daily basis for anything "productive" except unreliable summarization or STT (which both had plenty of tools for decades).

So... yeah, it's a slow take off, as expected. shrug

[–] LovableSidekick@lemmy.world 5 points 6 days ago* (last edited 6 days ago) (1 children)

Things just don't impend like they used to!

[–] ivanafterall@lemmy.world 5 points 6 days ago

Nobody wants to portend anymore.

[–] neon_nova@lemmy.dbzer0.com 5 points 6 days ago (1 children)

I think we might not be seeing all the advancements as they are made.

Google just showed off AI video with sound. You can use it if you subscribe to thier $250/month plan. That is quite expensive.

But if you have strong enough hardware, you can generate your own without sound.

I think that is a pretty huge advancement in the past year or so.

I think that focus is being put on optimizing these current things and making small improvements to quality.

Just give it a few years and you will not even need your webcam to be on. You could just use an AI avatar that look and sounds just like you running locally on your own computer. You could just type what you want to say or pass through audio. I think the tech to do this kind of stuff is basically there, it just needs to be refined and optimized. Computers in the coming years will offer more and more power to let you run this stuff.

load more comments (1 replies)
[–] CheeseNoodle@lemmy.world 6 points 6 days ago

Iirc there are mathematical reason why AI can't actually become exponentially more intelligent? There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware and we're already pretty close to that theoretical limit. For an AI to go singulaity we would have to build it with enough initial intelligence that it could aquire both the resources and information with which to improve itself and start the exponential cycle.

[–] pyre@lemmy.world 3 points 6 days ago

how do you grow zero exponentially

[–] AdrianTheFrog@lemmy.world 3 points 6 days ago (2 children)

Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore's law). AI is being carried with that and still making many occasional gains on top of that. The thing with exponential growth is that it doesn't necessarily need to feel fast. It's always growing at the same rate percentage wise, definitionally.

[–] cabb@lemmy.dbzer0.com 3 points 6 days ago (1 children)

Moore's law is kinda still in effect, depending on your definition of Moore's law. However, Dennard Scaling is not so computer performance isn't advancing like it used to.

load more comments (1 replies)
[–] Inucune@lemmy.world 3 points 6 days ago

We once again congratulate software engineers for nullifying 40 years of hardware improvements.

[–] ilinamorato@lemmy.world 3 points 6 days ago

It has definitely plateaued.

[–] netvor@lemmy.world 3 points 6 days ago

That's only if the exponent is greater than 1.

load more comments
view more: next ›