this post was submitted on 21 Mar 2024
128 points (97.8% liked)

Futurology

[–] LanternEverywhere@kbin.social 1 points 8 months ago (1 children)

He missed seeing it by just a few years. We're clearly in the early stages of it starting to happen for real.

[–] bane_killgrind@kbin.social 3 points 8 months ago (1 children)

There's an optimistic and a cynical perspective there.

The optimist says yes.

The cynic says these LLMs are just statistical generation models: they produce outputs that are statistically similar to their training data, conditioned on a prompt.

That's not AI; that's derivative work, automated.
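
To make that concrete, here is what "statistical generation conditioned on a prompt" looks like in miniature. This is only a toy word-level bigram sampler (every name in it is made up for illustration), not how a real LLM is implemented, but it captures the cynic's framing: the model can only continue a prompt with continuations it has seen in its training data, in proportion to how often it saw them.

```typescript
// Toy "statistical generation" model: a word-level bigram sampler.
// Illustrative sketch only -- real LLMs use learned neural networks over
// tokens, not literal frequency tables, but the framing is the same:
// outputs are sampled to be statistically similar to the training text,
// conditioned on the prompt.

type Counts = Map<string, Map<string, number>>;

// Count how often each word follows each other word in the training text.
function train(corpus: string): Counts {
  const counts: Counts = new Map();
  const words = corpus.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    const next = counts.get(words[i]) ?? new Map<string, number>();
    next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
    counts.set(words[i], next);
  }
  return counts;
}

// Sample the next word in proportion to how often it followed `word` in training.
function sampleNext(counts: Counts, word: string): string | undefined {
  const next = counts.get(word);
  if (!next) return undefined;
  const total = [...next.values()].reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [candidate, count] of next) {
    r -= count;
    if (r <= 0) return candidate;
  }
  return undefined;
}

// Extend the prompt one word at a time, like autoregressive generation.
function generate(counts: Counts, prompt: string, maxWords = 20): string {
  const out = prompt.toLowerCase().split(/\s+/).filter(Boolean);
  for (let i = 0; i < maxWords; i++) {
    const next = sampleNext(counts, out[out.length - 1]);
    if (!next) break;
    out.push(next);
  }
  return out.join(" ");
}

const model = train("the cat sat on the mat the cat ate the fish");
console.log(generate(model, "the cat")); // e.g. "the cat sat on the mat ..."
```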

[–] meyotch@slrpnk.net 1 points 8 months ago* (last edited 8 months ago) (1 children)

But it's really good at spitting out JavaScript code that works the first time you run it. Of all the languages I have tried an LLM assistant with, the JavaScript output is the best. I'm guessing that's because it had almost every working webpage on the internet to learn from.

I mention this because: how is being able to construct working code from a plain-language description not a type of intelligence? Perhaps a narrow form, but the proof is in the pudding: it outputs working code that fits an arbitrary purpose.
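
As a concrete, made-up example of what I mean: a plain-language prompt like "write me a debounce function" usually comes back as something that runs on the first try, roughly along these lines (a TypeScript sketch of the kind of output, not an actual model transcript):

```typescript
// Hypothetical example of the sort of code an LLM assistant produces from
// the plain-language prompt "write me a debounce function".
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: the callback only fires after typing pauses for 300 ms.
const onType = debounce((text: string) => console.log("search:", text), 300);
onType("f"); onType("fo"); onType("foo");
```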

Just bringing that up for discussion. I don't really care if LLMs are 'intelligent' or not, but the utility is obvious. Even if the LLM isn't smart, it still speeds progress by acting as an extension of my own so-called intelligence.

[–] bane_killgrind@kbin.social 1 points 8 months ago

It's just another set of grammar rules. It's telling a story about variables.