It's going to be great when the AI hype bubble crashes
I didn't have the US becoming a banana republic on my bingo card tbf
why not
Yeah ten years seems like plenty of notice
When this puppy pops it's gonna splatter all of us with chunky bits.
Open models are going to kick the stool out. Hopefully.
GLM 4.5 is already #2 on LM Arena, above Grok and ChatGPT, and runnable on homelab rigs, yet it has just 32B active parameters (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.
I did not understand half of what you've written. But what do I need to get this running on my home PC?
I am referencing this: https://z.ai/blog/glm-4.5
The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a threadripper system.
GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.
You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).
https://github.com/ikawrakow/ik_llama.cpp/
But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.
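In case it helps, here's a rough sketch of the download step in Python, assuming the weights are published on Hugging Face under a repo id like zai-org/GLM-4.5 (an assumption on my part; check the blog post above for the official location). Quantizing and running would then go through ik_llama.cpp's own tooling.

```python
# Hedged sketch: fetch the GLM 4.5 weights with huggingface_hub.
# The repo id below is an assumption; verify it against the z.ai post.
# The full model is hundreds of GB, so point local_dir at a large disk.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="zai-org/GLM-4.5",  # assumed repo id
    local_dir="./glm-4.5",      # destination directory on disk
)
print(f"Weights saved to {local_dir}")
```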
You can probably just use ollama and import the model.
It’s going to be slow as molasses on ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at this moment anyway.
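For anyone curious what that ollama import route would actually look like, here's a rough sketch driven from Python. The weights file name is a placeholder, and as the reply above says, GLM 4.5 may not be supported by ollama's bundled runtime yet, so treat this as the general pattern rather than something guaranteed to work today.

```python
# Hedged sketch of importing a local GGUF file into ollama via a Modelfile.
# The quantized weights file name is hypothetical.
import subprocess
from pathlib import Path

gguf = Path("./glm-4.5-q4_k_m.gguf")  # placeholder: a quantized GGUF export

# ollama imports local weights through a Modelfile whose FROM line
# points at the GGUF file on disk.
Path("Modelfile").write_text(f"FROM {gguf}\n")

# Register the model with ollama, then run a quick smoke test.
subprocess.run(["ollama", "create", "glm-4.5-local", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "glm-4.5-local", "Say hello."], check=True)
```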
So is it smart to short the AI bubble? 👉👈
The question is when, not if. But guessing the "when" wrong gets expensive. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.
Best of luck!
Ooowee, they are setting up the US for a major bust, aren't they? I guess all the wealthy people will just have to buy up everything when it becomes dirt cheap. Sucks to have to own everything, I guess.
Recognizing from history where this all might lead, the prospect of a serious economic downturn being met with a widespread push for mass automation, paired with a regime overwhelmingly friendly to the tech and business class while running a campaign of oppression and prosecution against precarious manual and skilled laborers, should make us all sit up and pay attention.
Your kids will enjoy their new Zombie Twitter AI teacher with fabulous lesson plans like, "Was the Holocaust real or just a hoax?"
It's not only the tech bubble doing that.
It's also the tech bubble, and the pyramid scheme of the US housing sector will cause more financial issues as well, as will the whole credit card system.
May it go bust, God willing.
So what sound should it make when this bubble pops?