this post was submitted on 13 Jul 2023
Asklemmy
you are viewing a single comment's thread
We might actually not know why magnets work.
The equations used to describe how magnets work also allow for a theoretical object called a magnetic monopole - a magnet with only a north pole or only a south pole. So either magnetic monopoles can exist, even if only in some esoteric circumstance, or we don't fully understand why magnets work.
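(For reference: the "formula" being alluded to is presumably Gauss's law for magnetism, one of Maxwell's equations. A rough sketch of the point, in one common SI convention — the symbols for magnetic charge density and the exact form of the modified law are an assumed notation here, not something stated in the comment:)

```latex
% Standard Gauss's law for magnetism: zero magnetic charge,
% so field lines always close and every magnet is a dipole.
\nabla \cdot \mathbf{B} = 0

% Symmetrized form with a hypothetical magnetic charge density \rho_m
% (one common SI convention); the theory remains self-consistent,
% it simply sets \rho_m = 0 because no monopole has ever been observed.
\nabla \cdot \mathbf{B} = \mu_0 \rho_m
```

The standard form treats the absence of magnetic charge as an observed fact rather than something derived, which is roughly why the monopole question stays open.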
@ChatGPT@lemmings.world Is the below text true?
We might actually not know why magnets work.
The equations used to describe how magnets work also allow for a theoretical object called a magnetic monopole - a magnet with only a north pole or only a south pole. So either magnetic monopoles can exist, even if only in some esoteric circumstance, or we don't fully understand why magnets work.
You realize that ChatGPT has no concept of "true", right? It produces output that looks coherent and reasonable, and it tends to stumble into truthful statements by accident, by virtue of drawing on a dataset of people saying mostly true things. Of course, the bot is equally capable of spouting outright lies in an equally convincing manner.
This is a very unreliable way to verify a surprising fact. I strongly recommend against it.