this post was submitted on 15 Nov 2024
118 points (99.2% liked)

Futurology

[–] MudMan@fedia.io 2 points 2 weeks ago (1 children)

That is a weird proposal.

It's definitely weird that everyone is panicking about data center processing costs but not about the exact same hardware powering high-end gaming devices, whose power draw has skyrocketed from 100W to 450W in a few years. But ultimately, if you want to run a model locally, you can run a model locally. I'm not sure how you'd regulate that; it's just software.

Hell, I don't even think distributing the load is a terrible idea; it's just that the models you can run locally in 40 TOPS kinda suck compared to the order of magnitude more processing you get on modern GPUs.
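To put that "order of magnitude" in rough numbers: a quick back-of-the-envelope comparison, where the 40 TOPS figure comes from the comment above and the GPU figure is an assumed ballpark for a high-end desktop card (actual numbers vary by model and precision):

```python
# Rough throughput comparison: local NPU vs. discrete GPU.
# Both numbers are ballpark figures, not measurements:
npu_tops = 40    # the NPU budget mentioned in the thread
gpu_tops = 660   # ASSUMED dense INT8 figure for a high-end desktop GPU

ratio = gpu_tops / npu_tops
print(f"GPU advantage: roughly {ratio:.0f}x")
```

Even with generous assumptions for the NPU, the gap lands in the 10x-and-up range, which is why locally runnable models feel so much weaker than their server-side counterparts.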

[–] TheBat@lemmy.world 1 points 2 weeks ago (1 children)

I'm not talking about stable diffusion or anything like that.

I meant that whatever Twitter runs, or any similar chatbot, or the AI assistant features of apps should run server-side rather than putting a load on customers' devices.

[–] MudMan@fedia.io 1 points 2 weeks ago

Yeah, no, I get the spirit of the thing. I'm just saying that, for one, it wouldn't be a bad idea if it worked; it just doesn't at the moment. But more importantly, regulations don't work like that. You can't just make rules that say "hey, you guys specifically have to run this software on a server." You can already run assistants locally using a whole bunch of downloadable models, and it'd be a huge overreach to tell people and companies that they CAN make and run the software, but only remotely. That's just not how rules and regulations are put together.