I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!
Quickly, ask AI how to improve or practice critical thinking skills!
ChatGPT et al.: "To improve your critical thinking skills, you should rely completely on AI."
That sounds right. Lemme ask Gemini and DeepSink just in case.
“Deepsink” lmao sounds like some sink cleaner brand
Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI tells them.
This isn't a profound extrapolation. It's akin to saying "Kids who cheat on the exam do worse in practical skills tests than those that read the material and did the homework." Or "kids who watch TV lack the reading skills of kids who read books".
Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.
This isn't predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.
It's just not something particular to AI. If you use any kind of 3rd party analysis in lieu of personal interrogation, you're going to suffer in your capacity for future inquiry.
Seriously, ask AI about anything you have expert knowledge in. It's laughable sometimes... However, you need to know the subject to know it's wrong. At face value, if you have no expertise, it sounds entirely plausible, but the details can be shockingly incorrect. Do not trust it implicitly about anything.
Corporations and politicians: "oh great news everyone... It worked. Time to kick off phase 2..."
- Replace all the water trump wasted in California with brawndo
- Sell mortgages for eggs, but call them patriot pods
- Welcome to Costco, I love you
- All medicine replaced with raw milk enemas
- Handjobs at Starbucks
- Ow my balls, Tuesdays this fall on CBS
- Chocolate rations have gone up from 10 to 6
- All government vehicles are cybertrucks
- trump nft cartoons on all USD, incest legal, Ivanka new first lady.
- Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
- FDA doesn't inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
that "ow, my balls" reference caught me off-guard
I love how you mix in the Idiocracy quotes :D
I hate how it just seems to slide in.
- Handjobs at Starbucks
Well that's just solid policy right there, cum on.
You mean an AI that literally generates text by applying a mathematical function to input text doesn't do reasoning for me? (/s)
I'm pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.
It's funny because I never get what I want out of AI. I've been thinking this whole time "am I just too dumb to ask the AI to do what I need?" Now I'm beginning to think "am I not dumb enough to find AI tools useful?"
Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.
Oh you can guarantee they won't forget how to vote 😃
Microsoft will just make a subscription AI for that, BaaS.
You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.
I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.
Let me ask chatgpt what I think about this
Well thank goodness that Microsoft isn't pushing AI on us as hard as it can, via every channel that it can.
Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I've had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.
No shit.
I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.
AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.
Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.
Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability completely.
Gen Zs are TERRIBLE at searching things online in my experience. I’m a sweet spot millennial, born close to the middle in 1987. Man oh man watching the 22 year olds who work for me try to google things hurts my brain.
To be fair, the web has become flooded with AI slop. Search engines have never been more useless. I've started using Kagi and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.
Damn. Guess we oughtta stop using AI like we do drugs/pron 😀
Unlike those others, Microsoft could do something about this considering they are literally part of the problem.
And yet I doubt Copilot will be going anywhere.
Remember the line: personal computers were "bicycles for the mind."
I guess with AI and social media it's more like melting your mind or something. I can't find another analogy. "A baseball bat to the leg for the mind" doesn't quite roll off the tongue.
I know Primeagen has turned off Copilot because he said the "copilot pause" is daunting and affects how he codes.
Of course. Relying on a lighter kills your ability to start a fire without one. It's nothing new.
Really? I just asked ChatGPT and this is what it had to say:
This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.
Not sure if sarcasm..
The one thing I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask it to cite its sources. AIs can absolutely bullshit without knowing it, and asking for the sources is critical for double-checking.
I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can't even do basic math or summaries of a PDF and the data contained within.
I was impressed with how good these things are at interpreting human fiction, jokes, writing and feelings. Which is really weird in the context of our perceptions of what AI would be like; it's the exact opposite. The first AIs aren't emotionless robots, they're whiny, inaccurate, delusional and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but certainly not worth upending society over; it's still just a huge novelty.
Idk man. I just used it the other day for recalling some regex syntax and it was a bit helpful. If you ask it to generate a regex for you, though, it often won't do that successfully. What it can do is break down an existing regex and explain it to you.
Ofc you all can say "just read the damn manual", and sure, I could do that too, but asking a generative AI to explain a script can be just as effective, something like the breakdown sketched below.
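To give a sense of what that kind of breakdown looks like, here's a minimal sketch; the date pattern and group names are hypothetical examples I picked, not anything from the article or a particular model's output:

```python
import re

# Hypothetical pattern: match an ISO-style date like "2025-02-14".
# re.VERBOSE lets each piece carry an annotation, which is roughly
# the piece-by-piece explanation an AI (or a patient human) would give.
DATE_RE = re.compile(r"""
    ^                # start of string
    (?P<year>\d{4})  # four-digit year
    -                # literal hyphen separator
    (?P<month>\d{2}) # two-digit month
    -                # literal hyphen separator
    (?P<day>\d{2})   # two-digit day
    $                # end of string
""", re.VERBOSE)

m = DATE_RE.match("2025-02-14")
if m:
    print(m.group("year"), m.group("month"), m.group("day"))  # 2025 02 14
```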
When it was new to me I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of input. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.
I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.
Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.
It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.
I mostly use it for wordy things like filling out the review forms HR makes us do and writing templates for messages to customers.
I was talking to someone who does software development, and he described his experiments with AI for coding.
He said that he was able to use it successfully and come to a solution that was elegant and appropriate.
However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.
I'm a senior software dev who uses AI to help me with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to hunt down any references to the problem I was having through online forums in the hopes that somebody else had figured out how to solve the issue, but now I can ask AI and it generally gives me the answer I'm looking for.
If I had AI when I was still learning core engineering concepts I think shortcutting the learning process could be detrimental but now I just need to know how to get X done specifically with Y this one time and probably never again.
Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.
Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.