And yet plenty of other ML experts will say that LLMs can't be the path to AGI simply because of the limitations of how they're built. Meaning that experts in the field do think AGI could be possible, just not in the way it's being done with those products. So if you're ranting against the marketing of LLMs as some Holy Grail that will come alive... again, that was my initial point.
The interesting thing is that you went after my line about AGI>>>ASI, so I'm curious why you think a machine that could do anything a human can do, thinking or otherwise, would stop there? I'm assuming AGI happens, of course, but once that occurs, why is that the end?
well i don't assume agi is a thing that can feasibly happen, and the well-deserved ai winter will get in the way at any rate
i'll say more: if you think it's remotely possible, you've fallen for openai propaganda
The concept of AGI existed long before OpenAI. Long before Sam Altman was born, even.
It's a dream that people have been actively working on for decades.
While I will say that binary architecture is unlikely to get us there, AGI itself is not a pipe dream.
Humans will not stop until we create a silicon mind.
That said, a silicon mind would have a lot of processing power, but would not have any sort of special knowledge. If it pulls knowledge from the Internet, it might end up dumber for it.
Without being an idiot about it: yes, medical science is trying to conquer death, or at least old age.
Will we get there in my lifetime? Maybe. Maybe not. Is there someone alive today who might have life extension treatments to keep them young and healthy at 100 years old? That's almost a guarantee.
funny thing that you say that, because my normal day job is in pharma. nobody serious is trying to make people live forever; there are enough real problems with treating and curing diseases as it is. we don't know the first thing about the primary cause of alzheimers that would actually be useful in treating it, or even in keeping it from getting worse; we might have just barely figured out, maybe, what the cause of depression is; we are clueless about the finer details of other mental illnesses; and there's a dazzling array of thousands upon thousands of cancers and autoimmune and degenerative diseases. if you wanted immortality, you'd have to figure all of it out, and make it work, and then some. whether you like it or not, people will keep dying, maybe slightly later and maybe after enjoying more years of healthy life, but it will still happen. that is, as long as climate change doesn't get too hard in the way, because then even that won't happen
but there are also grifters and fantasists and downright idiots who thought their favourite scifi was a documentary, and they sincerely believe, or sell the belief, that cryonics or brain uploading or unrestricted use of magic pills or fusion with the holy machine or a variety of other overhyped bullshit is real and will save them, and that they will become oligarchs eternal. this is especially true of the current tech billionaires who grew up on these scifi works and took them too seriously, and who also have disposable money to be grifted out of. in particular peter thiel, who has a downright pathological fear of death after being traumatized as a kid, when he was around a slave-operated uranium mine in occupied Namibia that fueled the South African nuclear weapons program (i'm not making this up, look it up on your own)
immortality is maybe the last great promise of alchemy that wasn't either solved by modern science or abandoned, and futurists and altmed and others will proudly carry that mantle, as long as the cash flows that is
If a pharmacist tells me that no one is seriously working on life extension and age reversal, or more accurately, conquering the aging process, then who am I to disbelieve?
And your evidence for this is that we don't currently understand Alzheimer's, a disease that has hundreds of teams actively studying it. We will crack it. It's only a matter of time.
Then you pivot to mental health, as if that mattered to life extension or age reversal. Topics that have hundreds of teams doing active research.
Will every team produce results? No. Are there also grifters in the field? Yes. But given time, the research will win out. Because we as humans will keep at it until it does.
Maybe that's the part you don't understand: that human desire to keep pushing. To see that next hill, and conquer it.
I'm not a pharmacist, I said I work in pharma. Specifically, I'm in drug development. To simplify my job just for you: what I do is design and make tools for biologists to do whatever they want to whatever protein they want. These tools get gradually improved in multiple ways that are hard to predict without testing, and in an arduous process that can easily take multiple years and immense capital, some of them (again, hard to predict at the beginning) make it to clinical trials, which take even more money, last a couple more years, and where 90% of candidates fail anyway. There's no amount of pattern matching or simulation that can get you out of this problem; if you want to figure it out, you have to go to the lab and get real-world data, and if you don't do that, you won't figure it out, ever.
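To make that 90% figure concrete, here's a rough sketch (my simplification: it assumes candidates succeed independently at ~10%, which real programs don't, but it shows the scale of the attrition):

```python
# Back-of-envelope only: if ~90% of candidates that reach clinical trials fail,
# how many candidates do you need in the clinic to expect at least one approval?

def p_at_least_one_approval(n_candidates: int, clinical_success_rate: float = 0.10) -> float:
    """Probability that at least one of n independent candidates gets approved."""
    return 1.0 - (1.0 - clinical_success_rate) ** n_candidates

for n in (1, 5, 10, 20):
    print(f"{n:2d} candidates in the clinic -> "
          f"{p_at_least_one_approval(n):.0%} chance of at least one approval")
```

And every one of those candidates is years of lab work and capital before it even reaches that 10% lottery.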
I mentioned Alzheimer's for a reason. Not only are there hundreds of teams studying it now, that has been the case for maybe thirty-odd years, and in all that time progress in understanding the mechanism of this disease has been abysmal and perhaps misguided in the first place - the popular amyloid hypothesis has been completely barren in terms of finding a treatment. All interventions based on it tried in the past 25 years have failed. Unless you count approval of drugs that don't work as a sort of success, then take it, I guess. It just seems that after all these years there are multiple pieces of the puzzle missing, and we might not even know what they are. We don't even precisely know what the role of amyloid plaque is: whether it's a cause of the illness, a result of adaptation to something else, an inert side effect, or what. What we know is that it's associated with the disease, but also that removing it with antibodies (those failed approved drugs) does nothing. This state might very well continue into the future, maybe for decades more, and we have no way of knowing either way. There are some other hypotheses being tested, but again, nothing will be known about them before someone gets any kind of result. Of course you don't have to trust me, but consider what Derek Lowe has to say about it; he's been in this field for thirty years now and has overseen the development of many pharmaceuticals.
In terms of hypothetical extreme life extension, mental health would be pretty important, because the prospect of living for decades with an incurable mental disorder would be downright miserable, and things like Alzheimer's would prevent that extreme lifespan extension in the first place. I also wanted to highlight how many gaps in knowledge there are in neurobiology, which was talked about just two comments up, and which would be pretty important if, say, someone wanted to make a silicon copy of a human brain.
Depression in particular: there's been a reasonable shot at a new mechanism a couple of years back, but it touches on new things, and at any rate we might have a new pharmaceutical out of it in 15 years or so. Then maybe it'll turn out to be good for some kind of dementia or an autoimmune disease or something else, but before anyone tries that (and this is conditional on finding that pharmaceutical in the first place) it's completely unknown what it might bring. Or it might just turn out not to work for some other obscure reason anyway.
Maybe to you the prospect of a solution being years or decades away in the best case, or maybe never, seems completely alien, but in many fields that rely on real-world data it's everyday reality, and with the kind of background I have, your tech solutionism comes off as extremely arrogant and misguided. But if you want to listen to Kurzweilite drivel instead, I won't stop you. Have a nice day.
Ah, so your whole point is, we don't have this tech today, so we never will.
Fuck man, there were people who had never even heard of heavier than air flight, watched it happen, and then before they died, watched men walk on the moon on their brand new TVs.
I never said that we'd have this shit today, but there's likely someone alive today who will live for multiple centuries, because we're starting to understand the mechanics of aging.
Mental health is not a roadblock to longer lives. Sure, we have to watch out, but it's not a show stopper.
The Alzheimer's drugs are mostly just greed. As far as late-onset Alzheimer's goes, it's genetic, and we can edit our genomes. There are YouTube videos of a guy in his garage lab (temporarily) curing his lactose intolerance through gene editing. An ounce of prevention is worth a pound of cure.
Just saying "well, we don't have this right this instant, so we never will" is lazy and ignores the hundreds of teams worldwide who are working on this and other problems.
We can already fully simulate roundworms and other small organisms; this is only a question of compute power. Even if you don't believe we can make a better architecture than conventional neural networks, in the future we will have enough power to straight up simulate a normal biological brain.
you ignore that moore's law hit a brick wall a couple years back, and also you're vastly underestimating the complexity of the human brain, or even of a smaller mammalian brain
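for a sense of scale, here's a rough sketch with commonly cited ballpark numbers (the per-second update rate is purely an illustrative assumption, and this ignores all the chemistry that isn't "neurons and synapses"):

```python
# Rough scale comparison: the worm people talk about simulating vs. a human brain.
c_elegans_neurons = 302        # C. elegans has a fully mapped connectome
c_elegans_synapses = 7_500     # approximate
human_neurons = 8.6e10         # ~86 billion, a common estimate
human_synapses = 1.0e14        # low-end estimate; some put it closer to 1e15

print(f"neuron ratio:  {human_neurons / c_elegans_neurons:.1e}x")
print(f"synapse ratio: {human_synapses / c_elegans_synapses:.1e}x")

# Naive cost guess: if every synapse needed ~1,000 updates per simulated second,
# a whole-brain sim would need ~1e17 synapse-updates/s before modeling anything else.
updates_per_synapse_per_s = 1_000
print(f"synapse updates/s: {human_synapses * updates_per_synapse_per_s:.1e}")
```

and that's before you ask where the wiring diagram and all the synaptic weights are supposed to come from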
That you won't even discuss the hypotheticals or AGI in general indicates you've got a closed mind on the subject. I'm totally open to the idea that AGI is impossible, if it can be demonstrated that intelligence is a strictly biological phenomenon. Which would mean showing that it has to be biological in nature. Where does intelligence come from? Can it be duplicated in other ways? Such questions led to the development of ML and AI research, and yes, even LLM development, trying to copy the way brains work. That might end up being the wrong direction, and silicon intelligence may come from other methods.
Saying you don't believe it can happen doesn't prove anything except your own disbelief that something else could be considered a person. I've asked many questions of you that you've ignored, so here's another: if you think only humans can ever have intelligence, why are they so special? I don't expect an answer, of course; you don't seem to want to actually discuss it, only deny it.
no, i'm gonna stop you right there. llms weren't made to mimic the human brain, or anything like that; llms were made as tools to study language. it's categorically impossible for llms to provide anything leading to agi; these things don't think, don't research, don't hallucinate, don't have agency or cognition, don't have working memory the way humans do; these things do one thing and one thing only: generate the string of tokens most likely to follow a given prompt, given what was in the training data. that's it; that's all there is to it. i know you were promised superhuman intelligence in a box, but if you're using a chatbot, all the intelligence there is your own; if you think otherwise you're falling for a massive ELIZA effect, a thing that has been around for fifty years now, augmented by a blizzard of openai marketing propaganda, helped by tech journalists who never questioned these hypesters, funded by fake nerd billionaires of silicon valley who misremembered old scifi and went around building torment nexii, but i digress
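to be concrete, that entire "one thing" fits in a toy sketch like this (python; `model` is just a hypothetical stand-in for the trained network, not any real api; real systems add sampling tricks and fine-tuning on top, but the loop is the same):

```python
# Toy sketch of autoregressive generation: pick the next token, append it, repeat.
# `model` is a hypothetical callable that returns a probability for every token
# in the vocabulary given the context so far.
import random

def generate(model, prompt_tokens: list[int], max_new_tokens: int = 50) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)                                  # one probability per vocabulary token
        next_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
        tokens.append(next_token)                              # no goals, no memory, no world model:
    return tokens                                              # just "what token tends to come next"
```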
i'm not saying that intelligence is exclusively, always, entirely a biological thing, but i do think that the state of neuroscience, psychology, and also the computational side of research is woefully short of anything resembling a pathway to a solution to this problem. instead, this is what i think is going to happen:
llms are a dead end in this sense, but these things also take the bulk of ai/ml funding now, so all the other approaches are starved in terms of funding. historically, after every period of intense hype of this nature comes an ai winter; this one is bound to happen too, and it might be worse, since it looks like it also fueled an investment bubble propping up a large part of the american economy. so when the bubble pops, on top of the historically usual negative sentiment stemming from overpromising and underdelivering, there's gonna be resentment about aibros worming their way into management and causing mass layoffs, replacing juniors with idiot boxes and lobotomizing any future pipeline of seniors, etc etc.
what typically happened next is that a steady supply of research in cs/math departments of many universities accumulated over low tens of years, and when some good enough new development happened, and everyone had forgotten the previous failures, the hype train started again. this step will be slowed down by both the current american administration cutting off funding to many types of research, and the incoming bubble crash that will make people remember for a long time what kind of thing aibros are up to.
when, not if, the most credulous investors' money thrown into openai, including softbank's, gets burnt through, which i think might take a couple of years tops (i would be very surprised if any of these overgrown startups doesn't end up a smoking crater within five years), very few people will want to have anything to do with any of this, and when the next ai spring happens, it might be well into the 40s or 50s, and by then i'd guess climate change effects will be too strong to ignore in favour of catching another hype train; there are gonna be much more pressing issues. this is why i think that anything resembling agi won't come up during my lifetime, and if you want to discuss gpt41 overlords in the year 3107, feel free to discuss it with someone else.
Neural nets weren't designed around human neuron behavior? I did not realize that.
The rest is a rant against LLMs and the companies using them to profit on ignorance, and I don't even disagree with the points. It's just unfortunate that it's not directed at me, since my very first comment was concerning the movie AGI and the theories around the what-ifs. It's almost as if those who are militantly anti-AI don't even understand what they're arguing against sometimes.