Kirk
Oh yeah absolutely, but I also think the goal of the AI companies is not to actually create a functioning AI that could "do a job 20% as good as a human, but 90% cheaper", but to sell fancy software, whether it works or not, and leave the smaller companies holding the bag after they lay off their workforce.
Right? It actually makes me feel insane that the topic of "humans working less" is never among the selling points of these products.
Honestly I suspect that rather than some nefarious capitalist plot to enslave humanity, it is just more evidence that the software can't actually do what the people selling it to big corporations claim it can do.
This bit at the end, wow:
Gartner still expects that by 2028 about 15 percent of daily work decisions will be made autonomously by AI agents, up from 0 percent last year.
Agentic AI is wrong 70% of the time. Even if you assume a human employee is barely competent and wrong 49% of the time, is it really more efficient to replace them?
I like where your head's at, but Mastodon's system of verification seems much easier to me and doesn't rely on a third party.
Also this is not a news article, it's an opinion essay. It's perfectly reasonable to read this at any time.
For YouTube tutorial videos I have no issue with relying on GPT, but I think it's important to recognize that the translation of art is art. I don't feel good about the idea of something without a soul or perspective interpolating a work of art from one culture and language into another that might be wildly different from where it started.
That all said, I think Crunchyroll and anyone else using AI art without disclosing it absolutely should be honest about it.
Lemmy does feel more and more like 4chan every day...
The term "reasoning model" is as gaslighting a marketing term as "hallucination". When an LLM is "reasoning", it is just running the model multiple times. As this report implies, using more tokens appears to increase the probability of producing a factually accurate response, but the AI is not "reasoning", and the "steps" of its "thinking" are just bullshit approximations.
You are the OP. You literally removed someone's tweet from its original context (or reposted it without fact-checking) and presented it here with an entirely different, false context. The fact that it's being misinterpreted is 100% on you for presenting it inaccurately, not on the guy whose words you misrepresented.
I actually upvoted this before deciding to fact-check, which took me no more than ten seconds.