This is the right answer
Wait, so you built a pool using removable USB media and were surprised it didn't work? Lmao
That's like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.
ZFS is a low-level system meant for PCIe or SATA, not USB, which sits many layers above SATA and PCIe. Rsync was the right choice for this scenario: it's a higher-level program that doesn't care about anything other than the data itself and will work over USB, Ethernet, Wi-Fi, etc. But you've got to understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there just because it wasn't designed for your specific use case.
Hahaahahahahahhaahahahahhaahhahahahahahahahahhaha
We're so fucked
People judging parents doesn't sit that well with me. Another example is Brian Laundrie's parents, who did the exact opposite and attracted everyone's ire for it, as if it was a simple decision to turn in or cover for your homicidal son. Society vs family, where either choice is a failure towards the other side, is the kind of scenario you get snipped over.
Man, 8 years ago Elon was my literal hero, paving the road to a new space era and pushing for the solar & EV revolution. I ate that shit up, right up till the cave incident.
How times have changed....now all I want is billionaire steak with my fava beans and an appropriated chianti
Right? Very politically charged title. Death to the IDF and all, but come on.
I'm from the Americas, but not a crazy-ass gringo. I'm 32, engaged, got a good job, a good group of friends, don't struggle too much in life, and everything's good. School and high school still legitimately terrify me, I get nightmares over it, and I actually got snipped to avoid even the chance of having to put someone through that shit all over again... among a couple of other reasons.
If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by the marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs; we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that somehow, because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're an inferior or less valid existence.
You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly due to costs and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so of course continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say "trivial" only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.
I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to properly recognize whether or not we're on the right path to achieving a completely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.
What if the vampire limitation extends to the digital world? What if a vampire can't be a hacker because they need permission from the admin on the target system?
Musk losing it and calling that other dude a pedo during this event is what got me to start hopping off the Musk train, so I kinda feel thankful for it?
Or tarot cards like ZA WARUDO