Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. Help blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Again: LLMs don't know anything. They don't have a "knowledge base" like you claim, as in a database where they look up facts. That is not how they work.
They give you the answer that sounds most like a plausible response to whatever prompt you give them. Nothing more. It is surprising how well it works, but it will never be 100% fact-based.
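To illustrate the point, here is a toy sketch of that "most likely continuation" behaviour. This is not how a real LLM is implemented (real models use neural networks over tokens, not word counts); the corpus and function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration (NOT a real LLM): a bigram model that, like an LLM,
# has no database of facts -- only statistics about which word tends to
# follow which. It always emits the most likely continuation it has seen.
corpus = "the sky is blue the sky is blue the sky is green".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent successor -- the continuation that
    # "sounds most likely", regardless of whether it is factually true.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("is"))  # prints "blue": seen twice, vs "green" once
```

The model answers "blue" not because it knows anything about the sky, but because that continuation was statistically most common in its training data; the same mechanism, scaled up enormously, is why an LLM's output sounds right without being grounded in looked-up facts.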
No internet research is ever 100% fact-based, with or without AI; it always depends on the sources you use and the fact-checking you do, comparing several sources against each other. As said, in this respect AIs used as search assistants are more reliable than pure chatbots. The mentioned Andisearch was created precisely for this reason, as the very first one centred on web content and privacy, long before all the others. The statements of its devs are clear about it.
Some time ago, this appeared from ChatGPT.
I asked the same question in Andisearch, and its answer was this.
Differences in reasoning and ethics: this is why I have used Andi for more than 3 years now, with no hallucinations or BS since then.