[–] j4k3@lemmy.world 7 points 1 month ago (2 children)

I don't think the issue is AI at all. The issue is a culture of wealth extraction instead of investment in growth. When increased efficiency frees up funds that could be reinvested in their skilled labor, companies instead reduce the workforce and throw that skilled labor away rather than adding and expanding value. It is a culture of decay and irrelevance through decline, one that assumes things are already good enough and there is no room to grow or do better. Efficiency does not have to mean reduction; it is a value multiplier, and what people do with that value is the important factor. I think AI is a potential value multiplier, but any negative outlook has nothing to do with the tool and everything to do with a culture of incompetent decay through consolidation and parasitic manipulation.

[–] Lugh 3 points 1 month ago* (last edited 1 month ago) (2 children)

Most Western countries have at least half of their economies ruled by free-market principles, with civil servants, the military, healthcare in most countries, etc., being the non-market parts of the economy.

The logic of AI and robotics that can do most jobs for pennies an hour is that the free-market part of the economy will just devour itself from within. It needs humans with incomes to survive, yet by its own internal logic it will destroy those incomes.

[–] j4k3@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

No it does not. This is the cultural bias failure I was talking about: it assumes the present is some idiot's end game. We are still primitive and nowhere near even a remotely significant chunk of the age of scientific discovery. All the hype about completeness and about what is known is quite dubious; if you dig just below the surface of headlines, you'll see how little humans actually know.

One day, a very long time from now, all of your technology will be biological. We have only barely scratched the surface of an understanding of the subject, and this is where all future technological developments will expand. We will be a footnote in the stone age of silicon, with our massively irresponsible energy use and waste. That distant civilization will look back on us the way we look back at the early history of civilization in Mesopotamia. The present cultural stupidity is how we are totally blind to our place in the timeline and to the enormous potential ahead long after we are gone.

The assumption that AI and automation mean reduction is a complete fallacy. It is just as stupid as saying efficient farming techniques will make all humans lazy, stop working, and go extinct. Technology allows for further specialization; it always has had this effect. Imbeciles fail to further specialize and add value, and these fools lead to decline and decay because they extract wealth instead of investing it. This extraction culture is the only problem. It has been growing like a cancer for decades now, and AI is just the latest excuse for a culture of reductionist imbeciles.

[–] Trainguyrom@reddthat.com 1 points 1 month ago

It comes back to the apocryphal tale that Henry Ford paid his factory workers a wage high enough to afford the vehicles they built. While there's sound logic in respecting demand-side economics, eventually something has to give in this overall decline that's been allowed to continue.

[–] Kyrgizion@lemmy.world 3 points 1 month ago (1 children)

At my company, a marketeer recently left to pursue another opportunity elsewhere. I cautiously probed if they might be looking for a replacement.

They weren't. They just trained a local LLM on the hundreds of articles and pieces of copy she'd written, so she's effectively been replaced by AI, 1:1.
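For anyone curious what "training a local LLM on her articles" typically involves: it's usually a lightweight fine-tune of an existing model, not training from scratch. Here's a minimal sketch, assuming a LoRA fine-tune with the Hugging Face transformers/peft/datasets stack; the base model name, the `articles/` folder, and the hyperparameters are all illustrative, not what my company actually did:

```python
# Minimal sketch: LoRA fine-tune of a small local model on a folder of
# articles. Model name, paths, and hyperparameters are illustrative only.
from pathlib import Path

from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "meta-llama/Llama-3.2-1B"  # any small causal LM would work here

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Wrap the base model with low-rank adapters so only a tiny fraction
# of the weights are actually trained.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
)

# Load every article as one training document and tokenize.
texts = [p.read_text() for p in Path("articles/").glob("*.txt")]
ds = Dataset.from_dict({"text": texts}).map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments("copy-style-lora", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("copy-style-lora")  # saves adapter weights only
```

The point of LoRA is that only a few million adapter weights get trained, so a few hundred articles and a single consumer GPU are enough to pick up someone's prose style.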

[–] j4k3@lemmy.world 4 points 1 month ago

Sounds like some stupid people to work for, or maybe she wasn't doing much of anything in the first place. Training on hundreds of articles is only going to reproduce a style of prose; tuning a model for depth of scope and analysis is much more challenging. At present, an AI can't get into anything politically adjacent and cannot abstract across subjects at all. It can understand these aspects to a limited degree when the user's prompts include them, but it cannot generate like this. It cannot write anything like I can.

I suggest getting to know these limitations well. Knowing them will help you write in ways that make training on your text useless, and show you how to differentiate yourself. The way alignment bias works is the key aspect to understand: if you can see the patterns of how alignment filters responses and steers them into gutters, you can begin to intuit the limitations caused by alignment and the inability to abstract effectively. The scope of focus in a model is very limited. It can be broad and shallow or focused and narrow, but it cannot do both at the same time. If a model can effectively replace someone, those same limitations must have applied to the person.

An intelligent company would use the freed-up resources for better research and sources, or for expanding its value in other ways, instead of throwing that value away or extracting it.