"I've asked ChatGPT about xyz" , and "how to use chatGPT for xyz" in my experience gets me downvotes fast.
People are quick to presume you have no ability to fact-check anything and that you will follow its advice blindly (which, mind you, you were never asking for in the first place) instead of ever asking a human, for example about medical conditions, but not limited to that topic. People presume you are trying to eliminate the human factor from the equation completely and are quick to remind you of your sins; god forbid you ever use a chatbot to test ideas, ask for a summary on a topic so you can expand your research later, or get creative with it in any way. If you do, most people don't like to know.
I think the bigger problem is that each answer it gives basically destroys a forest
That, and it's filling once-useful search engines with useless and even dangerous gibberish.
Yeah, that's a big one. Search engines had been getting worse, but the decline was turbocharged by all the LLM hype. Search engines are practically unusable now.
LLMs, in their primary and most common uses, are planet-burning trash, but the important thing is the contrarian in this thread found a way to feel superior to those who don't like it when material conditions get worse.
well then it's all been worth it
To be fair: "for each answer it gives", nah. You can even run a model on your home computer (see the sketch below). It might not be so bad if we just had an established model and asked it questions.
The "forest destroying" is really in training those models.
Of course at this point I guess it's just semantics, because as long as it gets used, those companies are gonna be non-stop training those stupid models until they've created a barren wasteland and there's nothing left...
So yeah, overall pretty destructive and it sucks...
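For anyone curious what "run a model on your home computer" actually looks like, here is a minimal sketch using the Hugging Face transformers text-generation pipeline. It assumes transformers and torch are installed, and the model name is just an illustrative pick of a small open model, not a recommendation; it also gives a rough feel for how slow local inference can be on modest hardware.

```python
# Minimal local-inference sketch using the Hugging Face transformers pipeline.
# Assumes `transformers` and `torch` are installed; the model below is an
# arbitrary small open model chosen only for illustration.
import time
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = "Briefly explain the difference between training a model and running one."
start = time.time()
result = generator(prompt, max_new_tokens=128, do_sample=False)
elapsed = time.time() - start

text = result[0]["generated_text"]
print(text)
# Very rough throughput estimate: whitespace-separated words per second,
# not real tokenizer tokens.
print(f"~{len(text.split()) / elapsed:.1f} words/sec on this machine")
```

Nothing here touches a data center: the weights are downloaded once and inference runs entirely on your own CPU or GPU, which is why the per-answer cost is a very different question from the training cost.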
Training a model takes more power than what? Generating a single poem? Using it to generate an entire 4th grade class's essays? Answering all the questions asked in Hawaii for six months? What is the scale? The break-even point for training is far, far less than total usage.
Have you ever used one locally? Depending on your hardware, it's anywhere from glacially slow to morgue's-AC slow. To the average person on the average computer it is nearly unusable, relative to the instant gratification of the web interface.
That gives you a sense of the resources required to do the task at all, but it doesn't scale linearly. Two computers aren't twice as fast as one; scaling is closer to logarithmic, with diminishing returns. In the end, this means one 100-word response uses the equivalent of 3 bottles of water.
How many queries are made per hour? How does that scale over time with increased usage of the same model? It adds up to more than training a model. A lot more (see the rough back-of-envelope sketch below).
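To put that break-even intuition in concrete terms, here is a rough back-of-envelope sketch in Python. Every number in it is a made-up placeholder rather than a measurement of any real model; the only point it illustrates is that a one-time training cost gets overtaken once query volume is high enough.

```python
# Back-of-envelope break-even sketch. All figures are illustrative
# placeholders, NOT measurements of any specific model or service.
TRAINING_ENERGY_KWH = 10_000_000   # hypothetical one-time training cost
ENERGY_PER_QUERY_KWH = 0.003       # hypothetical per-query inference cost
QUERIES_PER_DAY = 100_000_000      # hypothetical global query volume

break_even_queries = TRAINING_ENERGY_KWH / ENERGY_PER_QUERY_KWH
days_to_break_even = break_even_queries / QUERIES_PER_DAY

print(f"Inference energy matches training energy after ~{break_even_queries:,.0f} queries")
print(f"At the assumed volume, that's roughly {days_to_break_even:.0f} days of usage")
```

With these placeholder figures the cumulative inference cost passes the training cost in about a month, which is the shape of the argument above: whatever the real numbers are, ongoing usage dominates once a model is popular enough.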
Yeah you make a really good point there! I was perhaps thinking too simplistically and scaling from my personal experience with playing around on my home machine.
Although realistically, it seems the situation is pretty bad because freaky-giant-mega-computers are both training models AND answering countless silly queries per second. So at scale it sucks all around.
Minus the terrible fad-device-cycle manufacturing aspect, if they're really sticking to their guns on pushing this LLM madness, do you think this wave of onboard "AI chips" will make any impact on lessening natural resource usage at scale?
(Also off-topic, but I wonder how much of a sweet juicy exploit target these "AI modules" will turn out to be.)
It's really opaque; we won't know the environmental impact right away. Part of the larger problem is that while individual users like you and me have some impact, it's nothing compared to enterprise usage at scale. Every website, app, and operating system with an AI button makes it even easier for users to interface with AI, leading to more queries. Not only that, those queries and responses are collected and used to make further queries.
Should AI usage stay stable, improved hardware would decrease carbon output, but we should be cautious about jumping to that conclusion. What is more likely is that increased efficiency will lead to increased usage, perhaps at an accelerated rate given the anticipation of even more technological breakthroughs down the line.
All that said, I'm really not a doomer. It's important we all consider the cost of our choices. The way I see it, we are all going to die eventually. I'm old enough it will probably be from something else.
Okay, that's a valid point, and one nobody has come up with so far. Congrats
ChatGPT is awful for the environment and should not be used
If you have fact-checked it, why not just say that wherever you did that is where you got the answer from? People are right to be skeptical of "ChatGPT says so", and if you've used it as the start of your research rather than as your entire research then just saying "I asked ChatGPT" is no different to "I googled it", and nobody would much like you saying that either. How you found the information is less important than where you found it.
These are precisely the kind of presumptions people make. I'm never making an argument "because ChatGPT says so". And yes, you are absolutely right: chatbot answers are on par with search engine results, if not even less reliable on occasion. My point is that I'm not using any of the information as evidence, counterpoints, or even advice. People take a stand as if I were.
For example, I once asked ChatGPT about a sensation I feel on my skin after heavy exercise, because googling didn't give me satisfactory results. GPT didn't either, but it gave me a list of close matches. The sensation itself was never a problem for me, never something I intended to change, never something I would consider going to a doctor for, and if I never knew what was causing it my life would carry on just the same. I was simply curious.

And out of curiosity I asked here, and the majority of the answers were "you shouldn't be asking randoms online, how dare you" and "this is a question for a doctor, don't ask a chatbot for medical advice". Both stances baffled me. Nowhere in my post did I say anything suggesting I was in pain or discomfort, that I wanted to change anything about it, or that I expected people to tell me how to make it go away. Nothing. I just wanted to know what it was, period. People presume.
On the other hand, it's totally cool and good to drag around a big cross of contrarianism in a totally-not-self-righteous way because your treat printers were criticized, amirite?