this post was submitted on 26 May 2024
729 points (98.4% liked)
Technology
Just fucking ban AI. The solution is so simple. AI will NEVER be a good solution for anything and it's just theft of information at its core. Fuck AI and fuck any company that uses the garbage.
AI, used in small, local models, as an assistance tool, is actually somewhat helpful. AI is how Google Translate got so good a decade or so ago, for instance; and how assistive image recognition has become good enough that visually-impaired people can potentially access the web just as proficiently as sighted people. LLM-assisted spell check, grammar check, and autocomplete show a lot of promise. LLM-assisted code completion is already working decently well for common programming languages. There are potentially other halfway decent uses as well.
Basically, if you let computers do what they're good at (objective, non-creative, repetitive, large-dataset tasks that don't require reasoning or evaluation), they can make humans better at what they're good at (creativity, pattern-matching, ideation, reasoning). And AI can help with that, even though they can't get humans out of the loop.
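To make the "computers are good at repetitive, large-dataset pattern tasks" point concrete, here is a toy sketch of how statistical autocomplete works under the hood. This is not an LLM, just a bigram frequency model, and all names and the sample corpus are made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows each word in the corpus, and how often."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(following, word):
    """Suggest the most frequent next word, or None if the word is unseen."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigrams(
    "the cat sat on the mat and the cat slept on the sofa"
)
print(suggest(model, "the"))   # prints "cat" (follows "the" twice in the corpus)
print(suggest(model, "sat"))   # prints "on"
```

The computer does the objective, repetitive counting over the whole dataset; a human still has to judge whether the suggestion is actually what they meant to type. Real LLM autocomplete replaces the frequency table with a neural model, but the division of labor is the same.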
But none of those things put dollar signs in VCs' eyes. None of those use cases get executives thinking, "hey, maybe we can fire people and save on the biggest single recurring expense any corporation puts on its balance sheet." None of these make worried chip manufacturers breathe a sigh of relief that they can continue making the line go up after Moore's Law finally kicks the bucket. None of those things make headlines in late-stage capitalism. Elon Musk can't use any of those things as smokescreens to distract from his mismanagement of the (formerly) most consequential social media brand in history. None of that gives former crypto bros that same flutter of superiority.
So the hype gets pumped up to insane levels, which makes the valuations inflate, which makes them suck up more data heedless of intellectual property, which makes them build more power-hungry data centers, which means they have to generate more hype (based on capabilities the technology emphatically does not have and probably never will) to justify all of it.
Like with crypto. Blockchain showed some promise in extremely niche, low-trust environments; but that wasn't sexy, or something that anyone could sell.
Once the AI bubble finally breaks, we might actually get some useful tools out of it. Maybe. But you can't sell that.
AI is already hugely useful and will continue to get more useful as the tech evolves. I know that change upsets you, but the reality is that you were not born at the high point of humanity or the endpoint of history.
It's going to make the world better for a lot of people, just like the internet did, despite the endless assertions that it was a gimmick, a scam, and a mistake from people who were likely your age now when the internet was emerging.
The funniest thing to me is seeing this community, which holds people like Aaron Swartz up as a hero, demand the exact opposite of everything he believed in and fought for. Information wants to be free, yet you want to lock every piece of it in perpetual, impenetrable copyright just to halt the development of tech.
No one wants eternal copyright, but copyright does deserve to exist in a limited form; I have always advocated for a 14- or 17-year copyright term. AI, however, discards ANY copyright: it takes not only copyrighted works but random people's posts (e.g. scraping posts from Lemmy), and even content from your own computer or device thanks to Microsoft, Google, and Apple, and integrates all of it into the AI hell. That is absolutely a terrible thing no matter your stance on copyright. And it abuses all that data to generate laughably wrong answers (at times dangerous ones, like answers suggesting people off themselves, or presenting something harmful or lethal as safe). Given that the whole thing is simply a piece-fitting algorithm (calling it "AI" is just laughable, really), it will NEVER -- EVER -- improve to the point where it's useful. That's not how computers work. We've spent decades trying to get self-driving cars working, and they're still just as dangerous and unreliable as they were on day one.
It's a thief and won't ever get anything reliably correct. Plain and simple. The only recourse is to ban it.