this post was submitted on 24 Jan 2025
93 points (100.0% liked)

technology

(page 2) 40 comments
[–] culpritus@hexbear.net 62 points 1 week ago (2 children)

A second Chinese AI has hit the western hype bubble.

[–] peppersky@hexbear.net 31 points 1 week ago (5 children)

These things suck and will literally destroy the world and the human spirit from the inside out no matter who makes them

[–] xiaohongshu@hexbear.net 31 points 1 week ago* (last edited 1 week ago) (1 children)

I think this kind of statement needs to be more elaborate to have proper discussions about it.

LLMs can really be summarized as “squeezing the entire internet into a black box that can be queried at will”. They have many use cases, but even more potential for misuse.

All forms of AI as we know them (artificial intelligence in the literal sense, i.e. not artificial general intelligence or AGI) are just statistical models. They do not have the capacity to think, have no ability to reason, and cannot critically evaluate or verify a piece of information, which can equally come from a legitimate source or from some random Reddit post (the infamous case of Google's AI telling you to put glue on your pizza can be traced back to a Reddit joke post).
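To make the “statistical model” point concrete, here's a toy sketch (nothing remotely like a real transformer; just bigram counts over a made-up corpus, with the glue-on-pizza example baked in for illustration) of what predicting the next token purely from statistics looks like:

```python
from collections import Counter, defaultdict

# Toy corpus: the model has no idea "glue" is a joke -- it's just another count.
corpus = "put glue on pizza . put cheese on pizza . put sauce on pizza".split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word -- right or wrong."""
    return counts[word].most_common(1)[0][0]

print(predict("on"))  # "pizza": the only statistic the model has for "on"
```

A real LLM swaps the bigram counts for a transformer with billions of parameters, but the objective has the same shape: reproduce the statistics of the training text, with no step anywhere that checks for truth.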

These models are built by training a transformer architecture, which has very good memory retention, on datasets scraped from the entire internet, and more recently with reinforcement learning from human feedback to reduce their tendency to produce incorrect output (i.e. hallucinations). Even then, these datasets require extensive tweaking and curation: OpenAI famously employed Kenyan workers at less than $2 per hour to perform the tedious dataset annotation used for training.

Are they useful if you just need to pull up a piece of information that is not critical in the real world? Yes. Are they useful if you don't want to do your homework and just want the algorithm to solve everything for you? Also yes (of course, there is an entire discussion to be had about future engineers/doctors who are “trained” by relying on these AI models and then go on to do real things in the real world without developing the capacity to think/evaluate for themselves). Would you ever trust one if your life depended on it (i.e. building a car, a plane or a house, or treating an illness)? Hell no.

A simple test case is to ask yourself whether you would ever trust an AI model over a trained physician to treat your illness. A human physician has access to real-world experience that an AI will never have (no matter how much medical literature it devours on the internet), and has the capacity to think and reason, and thus the ability to respond to anomalies that have never been seen before.

An AI model needs thousands of images to learn the difference between a cat and a dog; a human child can learn that from just a few examples. Without a huge input dataset (annotated by an army of underpaid Kenyan workers), the accuracy is simply crap. The fundamental process of learning is very different between the two, and until we have made advances toward AGI (which is as far as you could get from the current iterations of AI), we'll always have to deal with the potential misuses of AI in our lives.

[–] yogthos@lemmygrad.ml 22 points 1 week ago (2 children)

that's a deeply reactionary take

[–] peppersky@hexbear.net 11 points 1 week ago (2 children)

LLMs are literally reactionary by design but go off

[–] shath@hexbear.net 18 points 1 week ago (2 children)

they "react" to your input and every letter after i guess?? lmao

[–] Hermes@hexbear.net 37 points 1 week ago (2 children)

Hard disk drives are literally revolutionary by design because they spin around. Embrace the fastest spinning and most revolutionary storage media gustavo-brick-really-rollin

[–] comrade_pibb@hexbear.net 13 points 1 week ago (1 children)

sorry sweaty, ssds are problematic

[–] Hermes@hexbear.net 17 points 1 week ago

Scratch a SSD and a NVMe bleeds.

[–] culpritus@hexbear.net 10 points 1 week ago

Sufi whirling is the greatest expression of revolutionary spirit in all of time.

[–] bobs_guns@lemmygrad.ml 12 points 1 week ago (1 children)

Pushing glasses up nose further than you ever thought imaginable *every token after

[–] shath@hexbear.net 10 points 1 week ago

hey man come here i have something to show you

[–] plinky@hexbear.net 9 points 1 week ago (2 children)

It's a model with a heavy Cold War liberalism bias (due to the information fed into it); unless you prompt it otherwise, you'll get freedom/markets/entrepreneurs out of it for any problem. And people are treating them as the gospel of an impartial observer shrug-outta-hecks

[–] xiaohongshu@hexbear.net 13 points 1 week ago* (last edited 1 week ago) (1 children)

The fate of the world will ultimately be decided on garbage answers spewed out by an LLM trained on Reddit posts. That's what the future leaders of the world will base their decisions on.

[–] iByteABit@hexbear.net 10 points 1 week ago (2 children)

That's not the technology's fault though, it's just that the technology is produced by an imperialist capitalist society that treats cold war propaganda as indisputable fact.

Feed different data to the machine and you will get different results. For example, if you train a model only on declassified CIA documents, it will be able to answer questions about the CIA's real historical role. Add a subjective point of view on these events and it can either answer you with right-wing bullshit, if that's what you gave it, or with a Marxist analysis of the CIA as the imperialist weapon that it is.

As with technology in general, its effect on society lies in the hands that wield it.

[–] peppersky@hexbear.net 4 points 1 week ago (3 children)

"let's just use autocorrect to create the future this is definitely cool and not regressive and reactionary and a complete recipe for disaster"

[–] crime@hexbear.net 26 points 1 week ago* (last edited 1 week ago) (2 children)

It's technology with many valid use-cases. The misapplication of the technology by capital doesn't make the tech itself inherently reactionary.

[–] peppersky@hexbear.net 1 points 6 days ago

LLMs literally cannot do anything other than reproduce the data they have been given. The closer the output is to the input, the better the model is judged to be. Now if the input is "all the data that capitalism has produced", then the expected output is "an infinite amount of variations on that data". That's why it is reactionary.
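That “closer the output is to the input, the better” framing is literally the training objective: the loss is the cross-entropy between the training data's token distribution and the model's. A minimal sketch, with hand-rolled two-token distributions assumed purely for illustration:

```python
import math

def cross_entropy(data_dist, model_dist):
    """Average surprise of the model on data drawn from data_dist (in nats)."""
    return -sum(p * math.log(model_dist[tok])
                for tok, p in data_dist.items() if p > 0)

data    = {"cat": 0.5, "dog": 0.5}   # what the training corpus looks like
copycat = {"cat": 0.5, "dog": 0.5}   # a model that reproduces the data exactly
deviant = {"cat": 0.9, "dog": 0.1}   # a model that strays from the data

# Gradient descent pushes the model toward the lower loss, i.e. the copycat.
assert cross_entropy(data, copycat) < cross_entropy(data, deviant)
```

Whether that objective makes the technology inherently reactionary is the argument being had above; the math itself only says the optimum is a faithful echo of the corpus.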

[–] Dessa@hexbear.net 8 points 1 week ago (4 children)

It's incredibly power hungry.

[–] yogthos@lemmygrad.ml 20 points 1 week ago

The context of the discussion is that it's already 50x less power hungry than just a little while ago.

[–] crime@hexbear.net 14 points 1 week ago* (last edited 1 week ago) (1 children)

For now. We've been seeing great strides in reducing that power hunger recently, including by the LLM that's the subject of this post.

That also doesn't make it inherently reactionary.

[–] enkifish@hexbear.net 11 points 1 week ago (2 children)

We've been seeing great strides in reducing that power hunger recently, including by the LLM that's the subject of this post.

Due to the market economy in both the United States and China, further development of LLM efficiency is probably the worst thing that could possibly happen. Even if China did not want to subject LLMs to market forces, it is going to need to compete with the US. This will further accelerate the climate disaster.

[–] crime@hexbear.net 15 points 1 week ago (1 children)

Again, an issue with capitalism and not the technology itself.

[–] enkifish@hexbear.net 8 points 1 week ago (1 children)

Well I agree with you there. Too bad there's all this capitalism.

[–] crime@hexbear.net 8 points 1 week ago (3 children)

For now. Are we supposed to just halt all technological progress because capitalism is inevitably going to misuse it? Should we stop trying to develop new medical treatments and drugs because capitalism is going to prevent all but the wealthiest from accessing them in our lifetime?

Regardless, my point was that the tech itself isn't inherently reactionary. Not that it won't be misused under capitalism.

[–] GaryLeChat@lemmygrad.ml 5 points 1 week ago

Vacuum tubes were too

[–] tripartitegraph@hexbear.net 14 points 1 week ago* (last edited 1 week ago) (7 children)

This is a stupid take. I like the autocorrect analogy generally, but this veers into Luddism.
Let me add: the way we're pushed to use LLMs is pretty dumb and a waste of time and resources, but the technology has pretty fascinating use-cases in materials and drug discovery.

[–] Pili@hexbear.net 8 points 1 week ago* (last edited 1 week ago) (3 children)

In the meantime, it's making my job a lot more bearable.

[–] makotech222@hexbear.net 22 points 1 week ago (3 children)

how do you measure the performance of an llm? ask it how many 'r's there are in 'strawberry' and count how many times you have to say 'no that's wrong' until it gets 3
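For what it's worth, the strawberry test mostly probes tokenization rather than reasoning: the model sees subword tokens, not letters, so character counting is indirect for it while being trivial in code. A quick illustration (the token split below is made up; real BPE tokenizers split differently per model):

```python
word = "strawberry"

# Counting characters directly is exact and trivial.
assert word.count("r") == 3

# But a model sees something like subword tokens, not individual letters
# (this particular split is hypothetical, for illustration only).
tokens = ["straw", "berry"]
assert "".join(tokens) == word  # the 'r's are buried inside opaque token IDs
```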

[–] yogthos@lemmygrad.ml 27 points 1 week ago (8 children)

Basically speed and power usage to process a query. Also, there's been tangible progress in doing reasoning with unsupervised learning, seen in DeepSeek R1 and in approaches such as neurosymbolics. These types of models can actually explain the steps they take to arrive at an answer, and you can correct them.
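The speed side of that can be sketched as a plain tokens-per-second timer; `generate` below is a stand-in for whatever inference call you'd actually benchmark, and the dummy model just splits the prompt so the sketch runs on its own:

```python
import time

def tokens_per_second(generate, prompt):
    """Time one inference call and report its output-token throughput."""
    start = time.perf_counter()
    output_tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(output_tokens) / elapsed

# Dummy "model": pretends each whitespace-separated word is one token.
rate = tokens_per_second(lambda p: p.split(), "how many r's are in strawberry")
print(f"{rate:.0f} tokens/sec")
```

Measuring the power side would need a hardware counter (e.g. the GPU's reported draw) sampled alongside the timer; that part is outside what a pure-Python sketch can show.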

[–] Dyno@hexbear.net 22 points 1 week ago

it requires fewer tons of CO2 to tell you that 757 * 128 = 3042

[–] peppersky@hexbear.net 17 points 1 week ago (1 children)

They use synthetic AI generated benchmarks

It's computer silicon blowing itself basically
