this post was submitted on 27 Sep 2025
636 points (99.7% liked)
RPGMemes
13802 readers
1209 users here now
Humor, jokes, memes about TTRPGs
founded 2 years ago
you are viewing a single comment's thread
I asked Chat GPT:
Approximate unshielded dose rates:
At 1 m: ≈ 5.2×10^4 Sv/h (≈51,800 Sv/h) — fatal essentially instantaneously (seconds or less).
At 3 m: ≈ 5.8×10^3 Sv/h — fatal within seconds.
At 10 m: ≈ 5.18×10^2 Sv/h — fatal within tens of seconds.
At 30 m: ≈ 5.8×10^1 Sv/h — severe, life‑threatening in minutes.
At 100 m: ≈ 5.2 Sv/h — dangerous; a few hours would produce fatal/serious acute radiation syndrome.
(For perspective: an acute whole‑body dose of ~4–5 Sv often causes death without intensive medical care; 1 Sv already causes significant radiation sickness.)
These are conservative, point‑source, unshielded estimates for whole‑body dose from the gammas. Being closer, or in contact, or staying in the field increases dose proportionally.
Back to me again. I'm sorry my radiation physics game is weak and I had to speculatively look it up. That's a lot of downvotes, yet no one decided to share the math themselves.
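For what it's worth, the quoted figures are at least internally consistent with simple inverse-square scaling from the 1 m value. That's only a sanity check on the arithmetic, assuming a bare point source with no shielding or air attenuation; it says nothing about whether the 1 m figure itself is right:

```python
# Do the quoted dose rates follow 1/r^2 scaling from the 1 m figure?
# (Point source, unshielded, no air attenuation assumed.)
rate_1m = 5.2e4  # Sv/h at 1 m, as quoted above

for distance_m, quoted in [(3, 5.8e3), (10, 5.18e2), (30, 5.8e1), (100, 5.2)]:
    scaled = rate_1m / distance_m**2
    print(f"{distance_m:>4} m: scaled {scaled:8.1f} Sv/h, quoted {quoted:8.1f} Sv/h")
```

Each scaled value lands within rounding of the quoted one, so whatever the numbers are worth, they were at least generated from one starting value and a 1/r² law.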
ChatGPT is a text generator. Any "information" it delivers is only correct by chance, if at all. Without the knowledge to check the answers yourself, you can't possibly tell whether you're falling for random error.
More in-depth, ChatGPT has learned how likely certain word patterns are in combination. Something like "1+1=" will most often be followed by "2". ChatGPT has no concept of truth or mathematical relationship, so it doesn't "understand" why this combination occurs like that, it just imitates it.
You can actually see the slight randomisation in the inconsistent rounding: 5.18 in one line, 5.2 in the others. If this were correct – I'm not qualified to comment on that – and written by a human, you'd expect them to be more consistent with the precision. It's likely that ChatGPT learned these number-words from different sources using different precision and randomly picks which one to go with for each new line.
So what happens when it decides a word combination seems plausible, but it doesn't actually make sense? Well, for example, lawyers have been slapped with fines for ChatGPT citing case law that doesn't exist. The citations sounded valid, because that's what ChatGPT is made for: generating plausible word combinations. It doesn't know what a legal case is or how it imposes critical restrictions on what's actually valid in this context.
There's an open access paper on the proclivity of LLMs to bullshit ("ChatGPT is bullshit", Hicks et al., Ethics and Information Technology), available for download from Springer. The short version is that an LLM is entirely indifferent to truth. It doesn't and can't care or even know whether the figures it spits out are correct.
Use it to generate texts, if you must, but don't use it to generate facts. It's not looking them up, it's not researching, it's not doing the math – it's making them up to sound right.
You're not getting downvoted. ChatGPT is getting downvoted, and you just happened to be in the way.
These guys, the 2nd Google link after the AI result, say that a 3540 Ci / 130 TBq source would be around 500 Sv/h at 30 cm. Even Wikipedia says 45 Sv/h at 1 m.
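Sanity check: those two figures actually agree with each other under inverse-square scaling, treating the source as a point (which is all this back-of-the-envelope check assumes):

```python
# Does 500 Sv/h at 30 cm match 45 Sv/h at 1 m under 1/r^2 scaling?
rate_30cm = 500.0  # Sv/h at 0.3 m, per the linked source
rate_1m = rate_30cm * (0.3 / 1.0) ** 2  # scale out to 1 m
print(rate_1m)  # ~45 Sv/h, matching the Wikipedia figure
```

So the two independently cited sources are mutually consistent, and both are three orders of magnitude below the ChatGPT figure.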
Oh thank god! I guess this is the "find the right answer by posting the wrong answer."
Cunningham's Law FTW
I asked my toddler about the radiation and she said "nana" and then with emphasis "nana" once more.
The downvotes are because our two methods of finding an answer are roughly equally likely to return a reliable answer.
Mine is slightly better for the climate, maybe. That will likely change as she grows up and uses up more resources. I'll ask her to do the math on that one later, she is busy eating a book right now.
She's absolutely right!
NANA, you dopes!
Roll for speed