this post was submitted on 20 Oct 2024

Futurology
[–] hendrik@palaver.p3x.de 30 points 1 month ago* (last edited 1 month ago) (1 children)

We'll see about that. AI is currently approaching the trough of disillusionment on the Gartner hype cycle. That's certainly not something one of the largest AI companies will admit to, but it's probably still true.

And btw, the article doesn't load for me. Not sure if it's my browser or if I'm getting geo blocked... But the page is just white. No text.

[–] BluesF@lemmy.world 4 points 1 month ago (1 children)

This headline certainly seems sensational, but I've also started seeing some really nice uses of LLMs cropping up. Some of the newer API features make them a lot more practical for developing things other than simple chat bots. It remains to be seen whether the value delivered is worth the energy/data costs long term, but LLMs in general seem to be finding their feet in some ways.

[–] hendrik@palaver.p3x.de 3 points 1 month ago* (last edited 1 month ago) (1 children)

Sure. I'm mainly basing my opinion on some more recent research (which I can't find right now) that had some disheartening numbers on AI use in programming. As far as I remember, it said that at the end of the day it saves some time, though not a lot, and on the flip side the code produced by programmers with the help of AI has significantly more bugs in it. Which makes me doubt it's a good fit to replace professionals (at this time).

And secondly, the stock prices of companies like Nvidia tell us some of the hot air in the AI bubble is escaping. I'd say things are calming down a bit, not accelerating.

And regarding law, there is this funny story from a bit ago: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/
Well, maybe funny for everyone except that lawyer and his client. And science hasn't made fundamental progress on hallucinations since then. I'd say AI will start replacing professionals once that gets solved, and that's when it will become massively useful.

And of course it's already very useful within some more narrow use cases.

[–] BluesF@lemmy.world 2 points 1 month ago (1 children)

Oh yeah, I'm talking about calling the LLM with code, not using the LLM to help write the code. They still suck at providing anything reliant on factual accuracy. What they are very good at is extracting meaning from text, e.g. taking a user's natural language request and deciding what to do with it from a set of options.

[–] hendrik@palaver.p3x.de 3 points 1 month ago

Sure. I believe that's called "intent classification" and has been around in natural language processing for quite some time.
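
For illustration, here's a minimal sketch of that kind of LLM-driven intent classification, assuming the OpenAI Python client; the model name, the intent list, and the example request are placeholders, not anything from the article:

    # Pick exactly one intent from a fixed set using an LLM (assumed openai>=1.0 client).
    from openai import OpenAI

    INTENTS = ["schedule_meeting", "cancel_meeting", "billing_question", "other"]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_intent(user_message: str) -> str:
        prompt = (
            "Classify the user's request into exactly one of these intents: "
            + ", ".join(INTENTS)
            + ". Reply with the intent name only.\n\nUser: "
            + user_message
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content.strip()
        # Fall back to "other" if the model replies with something outside the set.
        return answer if answer in INTENTS else "other"

    print(classify_intent("Can you move my appointment to next Tuesday?"))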

[–] HexesofVexes@lemmy.world 24 points 1 month ago* (last edited 1 month ago) (2 children)

A simple lawyer AI bot almost indistinguishable from the real thing:

    import time

    fees = 0.0
    while True:
        fees += 250.00  # bill another $250
        time.sleep(60)  # every minute
[–] glimse@lemmy.world 5 points 1 month ago (1 children)

I think lawyers actually charge per 15 minutes

[–] mosiacmango@lemm.ee 4 points 1 month ago

15-minute or 6-minute intervals are common. Both divide evenly into an hour (quarters or tenths).

[–] stiephelando@discuss.tchncs.de 3 points 1 month ago

Make it say: "It depends." after each loop and you're set.

[–] just_another_person@lemmy.world 21 points 1 month ago (1 children)

Lolz, no. Like they were going to revolutionize the engineering space and get rid of all of them?

[–] sunzu2@thebrainbin.org 1 points 1 month ago (1 children)

Exactly the same, prolly... LLMs are useful for any "learned" profession, but so far I have not seen one perform beyond college-level work.

I guess they can be developed further, but there isn't enough training data to make a model as good as a proper professional.

Once that dataset is available, I can see LLMs starting to take some real jobs: legal, or anyone else whose job is jockeying paper, spreadsheets, or code on a computer.

[–] just_another_person@lemmy.world 8 points 1 month ago

LLMs are sorting tools. They're not capable of novel ideation, only derivative output. The only thing this might help with is research. Not to mention federal and state regulations require human representation to file anyway.

[–] Taleya@aussie.zone 13 points 1 month ago (1 children)

Word-salad batshit nonsensical lawyer - dude, if I wanted that I'd just rep myself.

[–] MattMatt@lemmy.world 3 points 1 month ago

Let's prompt inject a Sovereign Citizen lawyer

[–] Wanderer@lemm.ee 9 points 1 month ago

Good.

Sounds like we need to start talking about the four day work week and we can move from there.

[–] Lugh 8 points 1 month ago (2 children)

People have often tended to think about AI and robots replacing jobs in terms of working-class jobs like driving, factories, warehouses, etc.

When it starts coming for the professional classes, as it is now starting to, I think things will be different. It's a long-observed phenomenon that many well-off sections of the population hate socialism, except when they need it - then suddenly they are all for it.

I wonder what a small army of lawyers in support of UBI could achieve?

[–] Eldritch@lemmy.world 14 points 1 month ago (1 children)

The legal profession won't touch it until it's been 100% proven that hallucinations have been completely eliminated. And when it comes to anything Sam Altman says, people rarely regret doubting him.

[–] Hacksaw@lemmy.ca 5 points 1 month ago

Not all lawyers are that smart or careful. And these are just the ones who let the AI do the work without checking or validating anything! For every lawyer THIS dumb, there are hundreds who let AI do the grunt work but actually validate the output for hallucinations. The main problem is that the AI makes them worse lawyers to have, because if the AI misses something in the research, so will the lawyer.

https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

https://bc.ctvnews.ca/a-b-c-lawyer-submitted-fictitious-cases-generated-by-chatgpt-to-the-court-now-she-has-to-pay-for-that-mistake-1.6788128

[–] Umbrias@beehaw.org 1 points 1 month ago

LLMs are no path to socialism, and it's unfortunate that people are being tricked into believing that a small collection of ultra-rich capitalists owning middle- and upper-class jobs in a more literal sense will somehow bring that about. It's neither here nor there, since LLMs will never get there, but still unfortunate.

[–] HootinNHollerin@lemmy.world 7 points 1 month ago* (last edited 1 month ago) (1 children)

Sam Altman watched Terminator and rooted for the machines.

Then Sam Altman watched The Matrix and rooted for the machines.

I mean, if you watch The Second Renaissance it's hard not to root for the machines in The Matrix.

[–] Etterra@lemmy.world 7 points 1 month ago

Why, did they give it the launch codes?

[–] Steve@startrek.website 5 points 1 month ago
[–] LordJer@beehaw.org 3 points 1 month ago* (last edited 1 month ago) (1 children)

Do you think the average consumer is going to want an AI to represent them in court? People are still going to need lawyers to explain the law in layman's terms. For example, I work in tax law, and clients already struggle to understand what inventory capitalization a la Code Section 263A is, and why they need to adhere to it. I see how large language models can be useful, but I wonder if the hype is akin to cryptocurrency or NFTs.

[–] Rogue@feddit.uk 2 points 1 month ago (1 children)

There is a hell of a lot of hype, but some of it is justified.

ChatGPT is really good at explaining stuff. Try asking it to explain inventory capitalization, then just repeatedly ask it to explain it simpler and simpler and simpler. Then ask why, repeatedly. It has a hell of a lot more patience than a human, and the client is going to be far less embarrassed repeatedly asking an AI than a human.
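
As a rough sketch of that "keep asking it to simplify" pattern (again assuming the OpenAI Python client; the model name and prompts are placeholders):

    # Repeatedly ask an LLM to simplify its own explanation (assumed openai>=1.0 client).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{"role": "user", "content": "Explain inventory capitalization."}]
    for _ in range(3):  # ask for a simpler version a few times
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        answer = response.choices[0].message.content
        print(answer)
        print("---")
        # Keep the conversation going and ask for a simpler rewording.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": "Explain it simpler."})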

I'd also expect it to be pretty good at picking out relevant case law if you feed it a specific issue. The problem, though, is that at some point it will just make shit up, and it'll seem absolutely legit, so you'll accept it without question.

[–] Umbrias@beehaw.org 1 points 1 month ago

Infinite patience to produce bullshit has extremely limited utility.