The few things I'm not buying out of principle are such that I wouldn't even know if someone else bought them or not. But no, I don't care. There's nothing I'm not buying because I think the company that produces it is literally Hitler.
Perspectivist
You mean french-fry sauce, because that's all it's good for.
It's Finnish
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply cannot replicate by any other means.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.
Älä välitä, ei se villekään välittänyt, vaikka sen väliaikaiset välihousut jäi väliaikaisen välitystoimiston väliaikaisen välioven väliin.
Rough translation: Don’t worry about it - Ville didn’t worry either when his temporary long johns got caught in the temporary side door of the temporary temp agency.
The fact that you have to completely rewrite my argument into a strawman before you can attack it tells me all I need to know about who I’m dealing with here. Have a great day.
We’re not even remotely close.
That’s just the flip side of the coin that claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
- Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
- Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.
The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
There’s nothing circular in what I said. I made a conditional claim: propaganda aimed at the regime, not the people, is justified when that regime is authoritarian. That’s not “assuming the conclusion” - it’s stating a position based on a distinction you seem eager to ignore. Disagree with it if you want, but at least engage with the actual logic.
Find an ETF index fund that’s highly diversified across both sectors and regions, with total expenses under 0.5%, and set up an automatic monthly investment into it. It’s the boring way to invest - but unless you’ve got a crystal ball and can predict the future, I wouldn’t start gambling on individual stocks. This is basically the same advice Warren Buffett would give you.
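To see why that sub-0.5% expense cap matters, here's a rough sketch comparing a cheap ETF against a pricier fund over a long horizon. The contribution amount, the 7% gross return, and both expense ratios are hypothetical illustration numbers, not a forecast or financial advice:

```python
def end_value(monthly, years, gross_annual=0.07, expense_ratio=0.002):
    """Future value of a fixed monthly contribution, net of fund expenses.

    Assumes a constant gross annual return and deducts the expense
    ratio from it before compounding monthly (a simplification).
    """
    net_monthly = (1 + gross_annual - expense_ratio) ** (1 / 12) - 1
    total = 0.0
    for _ in range(years * 12):
        total = total * (1 + net_monthly) + monthly
    return total

# 200/month for 30 years, same gross return, different fees:
cheap = end_value(200, 30, expense_ratio=0.002)   # 0.2% index ETF
pricey = end_value(200, 30, expense_ratio=0.015)  # 1.5% active fund
print(f"0.2% fund: {cheap:,.0f}")
print(f"1.5% fund: {pricey:,.0f}")
print(f"fee drag:  {cheap - pricey:,.0f}")
```

Even a seemingly small fee difference compounds into a large gap over 30 years, which is the whole case for keeping total expenses low and letting the automatic monthly contribution do the work.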