this post was submitted on 28 Oct 2024
1524 points (98.7% liked)

Technology

[–] brucethemoose@lemmy.world 327 points 3 weeks ago* (last edited 3 weeks ago) (32 children)

As a fervent AI enthusiast, I disagree.

...I'd say it's 97% hype and marketing.

It's crazy how much FUD is flying around, and it legitimately buries good open research. It's also crazy what these giant corporations are explicitly saying they're going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.

Talk to any long-time resident of localllama and similar "local" AI communities, the people who actually dig into this stuff, and you'll find immense skepticism, not the crypto-style AI bros you find on LinkedIn, Twitter and such, who blot everything out.

[–] falkerie71@sh.itjust.works 99 points 3 weeks ago (2 children)

For real. As a software engineer with basic knowledge of ML, I'm just sick of companies in every industry being so desperate to cling to the hype train that they're willing to slap "AI" on anything, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.

[–] Mikelius@lemmy.world 31 points 3 weeks ago (1 children)

For sure, it seems like 90% of AI startups are nothing more than front-end wrappers around a GPT instance.

[–] dan@upvote.au 21 points 3 weeks ago* (last edited 3 weeks ago)

They're all built on top of OpenAI which is very unprofitable at the moment. Feels like the whole industry is built on a shaky foundation.

Putting the entire fate of your company in a different company (OpenAI) is not a great business move. I guess the successful AI startups will eventually transition to self-hosted models like Llama, if they survive that long.

[–] Badland9085@lemm.ee 6 points 3 weeks ago (1 children)

As someone who was working really hard to get my company to use some classical ML (with very limited amounts of data), who has some knowledge of how AI works and just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a "good sell", and being pressured to think about how we can "use AI", was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.

[–] falkerie71@sh.itjust.works 3 points 3 weeks ago

This. Exactly.

[–] Blackmist@feddit.uk 29 points 3 weeks ago (1 children)

TSMC are probably making more money than anyone in this goldrush by selling the shovels and picks, so if that's their opinion, I feel people should listen...

There's little in the AI business plan other than hurling money at it and hoping job losses ensue.

[–] brucethemoose@lemmy.world 9 points 3 weeks ago

TSMC doesn't really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.

Altman's scheme is just a whole other level of crazy though.

[–] conciselyverbose@sh.itjust.works 19 points 3 weeks ago

Seriously, I'd love to be enthusiastic about it because it's genuinely cool what you can do with math.

But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it's pretty much impossible.

And on top of that LLMs are hugely overshadowing actual interesting approaches for funding.

[–] WoodScientist@lemmy.world 17 points 3 weeks ago (1 children)

I think we should indict Sam Altman on two sets of charges:

  1. A set of securities fraud charges.

  2. 8 billion counts of criminal reckless endangerment.

He's out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there's a good chance they won't be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he's telling the truth, he's endangering us all. If he's lying, then he's committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

[–] FlyingSquid@lemmy.world 4 points 3 weeks ago

"When you're rich, they let you do it."

[–] paddirn@lemmy.world 15 points 3 weeks ago (2 children)

I really want to like AI. I'd love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or for results I can't even trust or use.

[–] brucethemoose@lemmy.world 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

It's useful.

I keep Qwen 32B loaded on my desktop pretty much whenever it's on, as an (unreliable) assistant to analyze or parse big texts, do quick chores or write scripts, bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

It does "feel" different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals, data leakage and such.
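For the curious, here's a rough sketch of what "manipulating the prompt syntax" means in practice. Qwen-family models use the ChatML template, and local runners let you assemble that template by hand instead of going through a hosted chat API (the helper function name here is my own, not anything from a real library):

```python
def chatml_prompt(system: str, user: str) -> str:
    # Qwen-style ChatML: each turn is wrapped in im_start/im_end tags,
    # and the prompt ends with an open assistant turn for the model to fill.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# You'd feed this string straight into a local runner like llama.cpp.
prompt = chatml_prompt("You are a terse summarizer.", "Summarize: <big text here>")
```

With a cloud API that template is hidden behind the endpoint; locally you can tweak it, prefill part of the assistant turn, or strip the system message entirely.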

[–] brbposting@sh.itjust.works 3 points 3 weeks ago (1 children)

Attractive. You got some pretty solid specs?

Rue the day I cheaped out on RAM. soldered RAMmmm

[–] brucethemoose@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Soldered is better! It's sometimes faster, and definitely faster if it happens to be LPDDR.

But TBH the only thing that really matters is "how much VRAM do you have." Qwen 32B slots into 24GB, or maybe 16GB if the GPU is totally empty and you tune your quantization carefully. And the cheapest way to get there (until 2025) is a used MI60, P40 or 3090.
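The back-of-the-envelope math behind those numbers is simple: weights take params × bits-per-weight / 8 bytes, plus some headroom for KV cache and activations. A tiny sketch (the function and the ~2GB overhead figure are my own rough assumptions, not exact numbers):

```python
def approx_vram_gb(n_params_b: float, bits_per_weight: float,
                   overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate for a quantized model.

    n_params_b: parameter count in billions
    bits_per_weight: quantization level (e.g. ~5 bpw, ~3.5 bpw)
    overhead_gb: crude allowance for KV cache and activations
    """
    weight_gb = n_params_b * bits_per_weight / 8
    return weight_gb + overhead_gb

print(approx_vram_gb(32, 5.0))  # 22.0 -> fits a 24GB card
print(approx_vram_gb(32, 3.5))  # 16.0 -> possible with careful tuning
```

That's why 24GB cards are the sweet spot for 32B models, and why squeezing into 16GB means dropping to aggressive ~3.5 bpw quants.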

[–] dan@upvote.au 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I receive alerts when people are outside my house, using security cameras, Blue Iris, CodeProject AI, Node-RED and Home Assistant, with a Google Coral for local AI. Entirely local: no cloud services apart from Google's notification system to get notifications to my phone while I'm not home (which most Android apps use). That's a good use case for AI, since it avoids the false positives that occur with regular motion detection.
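The false-positive filtering boils down to alerting only on confident detections of the classes you care about, rather than on any pixel movement. A minimal sketch of that logic, assuming a detector that returns label/confidence pairs (the function and data shape are illustrative, not the actual CodeProject AI or Frigate API):

```python
def should_alert(detections, wanted=frozenset({"person"}),
                 min_confidence=0.7):
    """Alert only when a wanted object class is detected with enough
    confidence; this is what plain motion detection can't do."""
    return any(
        d["label"] in wanted and d["confidence"] >= min_confidence
        for d in detections
    )

# A cat and a swaying branch won't trigger; a confident person hit will.
should_alert([{"label": "cat", "confidence": 0.90},
              {"label": "person", "confidence": 0.82}])  # True
```

In a real setup this check lives in the NVR or in a Node-RED flow between the detector and the notification step.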

[–] WalnutLum@lemmy.ml 1 points 3 weeks ago (1 children)

I've been curious about the Google Coral, but its memory is so tiny I'm not sure what kinds of models you can run on it.

[–] dan@upvote.au 1 points 3 weeks ago

A lot of people use them for the use case I described (object detection for security cameras), using either Blue Iris or Frigate. They work pretty well for that use case.

Wake word detection is a good use case too (eg if you're making your own smart assistant).

The Coral site lists a few use cases.

[–] just_an_average_joe@lemmy.dbzer0.com 14 points 3 weeks ago (1 children)

The saddest part is, this is going to cause yet another AI winter. The first few were caused by genuine over-enthusiasm, but this one is purely fuelled by greed.

[–] sploosh@lemmy.world 9 points 3 weeks ago

The AI ecosystem is flooded; we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to shit-tier AI represents.

[–] tacosanonymous@lemm.ee 9 points 3 weeks ago (1 children)

Agreed, and that's why it's so dangerous. These tech bros are going to do damage with their shitty products. It seems like that's Altman's goal, honestly.

He wants money/power, and he is getting it. The rest of the AI field will forever be haunted by his greed.

[–] KSPAtlas@sopuli.xyz 6 points 3 weeks ago (3 children)

After getting my head around the basics of the way LLMs work, I thought, "people rely on this for information?" The models seem OK for tasks like summarisation, though.

[–] brbposting@sh.itjust.works 9 points 3 weeks ago

I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.

[–] dan@upvote.au 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

It's good for coding if you train it on your own code base. It's not great for writing very complex code, since the models tend to hallucinate, but it's great for common patterns and for straightforward questions specific to your code base that can be answered from existing code (e.g. "how do I load a user's most recent order given their email address?").
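For a sense of scale, the example question above is exactly the kind of thing such a model can answer: a join, an ORDER BY, and a LIMIT. A self-contained sketch with sqlite3 (the schema and table names are made up for illustration, standing in for "your code base"):

```python
import sqlite3

# Toy schema standing in for the real code base.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                     placed_at TEXT, total REAL);
INSERT INTO users  VALUES (1, 'a@example.com');
INSERT INTO orders VALUES (1, 1, '2024-10-01', 10.0),
                          (2, 1, '2024-10-20', 25.5);
""")

def most_recent_order(conn, email):
    # The kind of query an assistant familiar with the schema writes for you.
    return conn.execute("""
        SELECT o.id, o.placed_at, o.total
        FROM orders o
        JOIN users u ON u.id = o.user_id
        WHERE u.email = ?
        ORDER BY o.placed_at DESC
        LIMIT 1
    """, (email,)).fetchone()

most_recent_order(conn, "a@example.com")  # (2, '2024-10-20', 25.5)
```

Nothing exotic, but if you don't live in SQL all day, having it generated against your actual schema saves real time.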

[–] brbposting@sh.itjust.works 3 points 3 weeks ago

It's wild when you only know how to use SELECT in SQL, but after a dollar worth of prompting and 10 minutes of your time, you can have a significantly complex query you end up using multiple times a week.

[–] Damage@feddit.it 3 points 3 weeks ago (1 children)

TSMC's allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.

What's the source for that? It sounds hilarious

[–] brucethemoose@lemmy.world 13 points 3 weeks ago

https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.

[–] Evotech@lemmy.world 3 points 3 weeks ago (2 children)

It's selling the future, but nobody knows if we can actually get there.

[–] brucethemoose@lemmy.world 6 points 3 weeks ago

It's selling an anticompetitive dystopia. It's selling a Facebook monopoly vs selling the Fediverse.

We don't need $7 trillion of datacenters burning the Earth; we need collaborative, open source innovation.

[–] ininewcrow@lemmy.ca 3 points 3 weeks ago

The first part is true... no one cares about the second part of your statement.
