this post was submitted on 11 Feb 2024
16 points (80.8% liked)

Futurology

[–] kwedd@feddit.nl 4 points 9 months ago (2 children)

Is there no risk of the LLM hallucinating cases or laws that don't exist?

[–] RedditWanderer@lemmy.world 6 points 9 months ago

How to use ChatGPT to ruin your legal career.

AI does help with discovery: firms no longer need to spend eight days scanning emails before trial. But they'll still need lawyers and junior lawyers.

[–] Bipta@kbin.social 2 points 9 months ago* (last edited 9 months ago)

GPT-4 is dramatically less likely to hallucinate than GPT-3.5, and we're barely at the start of the exponential growth curve.

Is there a risk? Yes. But humans make things up too, if you think about it, and all AI has to do is be better than humans, a milestone that's already within sight.