I don't think the issue is AI at all. The issue is a culture of wealth extraction instead of investment in growth. When increased efficiency frees up funds that companies could reinvest in their skilled labor, they instead reduce the workforce and throw that skilled labor away rather than adding and expanding value. It is a culture of decay and irrelevance through decline: things are already good enough, and there is no room to grow or do better. Efficiency does not have to mean reduction. It is a value multiplier, but what people do with that value is the important factor. I think AI is a potential value multiplier, but any negative outlook has nothing to do with the tool and everything to do with a culture of incompetent decay through consolidation and parasitic manipulation.
Most Western countries have at least half of their economies ruled by free-market principles, with civil servants, the military, healthcare in most countries, etc., being the non-market parts of the economy.
The logic of AI and robotics that can do most jobs for pennies on the hour is that the free-market part of the economy will just devour itself from within. It needs humans with incomes to survive, yet by its own internal logic it will destroy those incomes.
No, it does not. This is the cultural bias failure I was talking about. It assumes the present is some idiot's end game. We are still primitive and nowhere near even a remotely significant chunk of the age of scientific discovery. All of the hype about completeness and what is known is quite dubious. If you dig just below the surface of headlines, you'll see how little humans actually know.

One day, a very long time from now, all of your technology will be biological. We have only barely scratched the surface of an understanding of the subject. This is where all future technological development will expand. We will be a footnote in the stone age of silicon, with our massively irresponsible energy use and waste. That is the distant civilization that will look back on us as we look back at the early history of civilization in Mesopotamia. The present cultural stupidity is how we are totally blind to our place in the timeline and to the enormous potential ahead long after we are gone.

The assumption that AI and automation mean reduction is a complete fallacy of fools. It is just as stupid as saying efficient farming techniques will make all humans lazy and stop working, leading to extinction. Technology allows for further specialization; it always has had this effect. Imbeciles fail to further specialize and add value. These fools lead to decline and decay because they extract wealth instead of investing it. This extraction culture is the only problem. It has been growing like a cancer for decades now. AI is just the latest excuse for a culture of reductionist imbeciles.
It comes back to the apocryphal tale that Henry Ford paid his factory workers a wage high enough to afford the vehicles they built. While there's sound logic in respecting demand-side economics, eventually something is going to give in this overall decline that's been allowed to continue.
At my company, a marketer recently left to pursue another opportunity elsewhere. I cautiously probed whether they might be looking for a replacement.
They weren't. They just trained a local LLM on the hundreds of articles and copy texts she'd written, so she's effectively been replaced 1:1 by AI.
Sounds like some stupid people to work for, or maybe she wasn't doing much of anything in the first place. Training on hundreds of articles is only going to reproduce a style of prose; tuning a model for depth of scope and analysis is much more challenging. An AI can't get into anything politically adjacent, and at present it cannot abstract across subjects at all. It can understand these aspects to a limited degree when the user's prompts include them, but it cannot generate like this. It cannot write anything like I can.

I suggest getting to know these limitations well. That knowledge will help you write in ways that make training on your text useless, and help you see how to differentiate. The way alignment bias works is the key aspect to understand. If you can see the patterns of how alignment filters and creates guttered responses, you can begin to intuit the limitations due to alignment and the inability to abstract effectively. The scope of focus in a model is very limited: it can be broad and shallow or focused and narrow, but it cannot do both at the same time. If it can effectively replace you, the same limitations must apply to the person.
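For anyone wondering what "trained a local LLM on her articles" typically means in practice, here's a minimal sketch of that kind of style fine-tune, assuming the Hugging Face transformers, peft, and datasets libraries; the model name and data path are placeholders, not whatever that company actually used:

```python
# Minimal sketch: LoRA fine-tuning a small local model on a folder of articles.
# Assumes transformers, peft, and datasets are installed; model name and
# data path below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"  # placeholder: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters; only the adapters are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# One article per .txt file in ./articles (placeholder path).
dataset = load_dataset("text", data_files={"train": "articles/*.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-lora", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Note what this does and doesn't do: the adapters nudge the model toward the surface statistics of the corpus, which is exactly why it reproduces a style of prose without gaining any new depth of analysis.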
An intelligent company would use the extra resources for better research and sourcing, or for expanding its value in other ways, instead of throwing them away or extracting them.
When I think about this issue, I sometimes rate future scenarios on a scale of 1 to 10, with 10 being "most confident it will occur" and 1 being "least able to definitively predict."
I give UBI a 4 on that scale. It may well occur, but there are different ways of achieving the same goal, so who knows.
One of the few things I rank at 10 is that the day is coming when AI and robotics will be able to do most work, even jobs not yet invented, for pennies on the hour.
The logical follow-on is that the day will also have to come when society realizes this is happening, understands it, and begins to prepare for its new reality. This is going to seem scary to many people; they will see only the destructive aspects as the old ways of running the world crumble.
This is how I read what this research is talking about - signs of that awakening becoming more widespread. We badly need politicians who start telling us what the world is going to be like afterwards, and who paint a hopeful vision of it.
Seems reasonable; people who've used generative AI are more likely to know how good it really is. I find that most of the people who dismissively call it all "slop" haven't actually tried using it much.
I'm using it more and more and find it very useful. I do a lot of writing for work; AI voice transcription and AI grammar checks are invaluable, not to mention having an AI voice read my writing back as a form of copy editing.
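If you want to try the read-back trick, here's a minimal sketch using pyttsx3, an offline text-to-speech library; this is just one way to do it, and the filename is a placeholder:

```python
# Minimal sketch: have a synthetic voice read a draft aloud for proofreading.
# Assumes pyttsx3 is installed (pip install pyttsx3).
import pyttsx3

def read_back(path: str) -> None:
    """Read the text file at `path` aloud, slightly slowed for proofreading."""
    with open(path, encoding="utf-8") as f:
        draft = f.read()
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # words per minute; default is ~200
    engine.say(draft)
    engine.runAndWait()  # blocks until the whole draft has been spoken

read_back("draft.txt")  # placeholder filename
```

Hearing awkward phrasing read aloud catches things my eye skips right over.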
Also great for visual stuff, and for providing sound for videos.
However, the hallucination problem is a real roadblock. I would never trust the current generation of AI models with an important decision.