JGrffn

joined 2 years ago
[–] JGrffn@lemmy.world 1 points 1 week ago (1 children)

If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it into some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is somehow an inferior or less valid existence.

You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished at all. The fact that we can even compare a delusional model to a person with severe mental illness is already a big win for the technology, even though it's meant as an insult.

I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there, because the machine that speaks exactly like us (and often better than us, unlike any other being on this planet) has some other faults or limitations, is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to achieving a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

[–] JGrffn@lemmy.world 1 points 1 week ago (1 children)

What if the vampire limitation extends to the digital world? What if a vampire can't be a hacker because they need permission from the admin on the target system?

[–] JGrffn@lemmy.world 10 points 1 month ago (8 children)

Musk losing it and calling that other dude a pedo during this event is what got me to start hopping off the musk train, so I kinda feel thankful for it?

[–] JGrffn@lemmy.world 1 points 1 month ago

I think the "terrible" lip sync is actually just French lip sync, tbh

[–] JGrffn@lemmy.world 3 points 1 month ago (3 children)

I host a Plex server for close to 70 friends and family members across multiple parts of the world. I have over 60 TB of movies, TV shows, anime, anime movies, and FLAC music, and everyone can connect directly to my server via my reverse proxy and my public IPs. This works on their phones, their TVs, their tablets, and their PCs. I have people of all ages using my server, from very young kids to the very old grandparents of friends. Some friends share their accounts with their families, so I've probably already hit 100+ people using my server. Everyone can request whatever they want through Overseerr with their Plex account, and everything shows up pretty much instantly once it's found and downloaded. It works almost flawlessly, locally or remotely, from anywhere in the world. I don't even reside in the same home as my Plex server. I paid for my lifetime pass over 10 years ago.

Can you guarantee that I can move over to Jellyfin and that every single person currently using my Plex server will keep the same level of experience and quality of life they have now? Because if you can't, you just answered your own question. Sometimes we self-host things for ourselves and can deal with some pains, but sometimes we need something that works for more people than just us, and that's when we have to make compromises. Plex is not perfect, and it is actively becoming enshittified, but I can't simply dump it and replace it with something very much meant for local or single-person use rather than actively serving tens to hundreds of people off a server built with off-the-shelf components.

[–] JGrffn@lemmy.world 7 points 1 month ago

I saw a brilliant explanation some time ago that I'm about to butcher back into a terrible one, bear with me:

Think about two particles traveling together. When one gets tugged, it in turn tugs the other one with it. This tug takes some time: one particle essentially "tells" the other to come with it, so there's some level of information exchange happening between the two, and that exchange happens at the speed of light. The travel distance between the two particles is pretty much a straight line, and pretty short, so you essentially don't notice this effect; it's that fast.

Now think about what happens when those two particles start going faster. The information exchange still happens, and it still happens at the speed of light, but now that the particles are moving in some direction, the exchange still seems to go straight from particle A to particle B, while in reality it travels "diagonally," since it has to cover the extra distance added by the particles' motion. This is the crucial part: what happens as those particles get closer to the speed of light? The information exchange has to cover the very small distance between the particles, plus the added distance from traveling near the speed of light. At first it's pretty easy to cover this distance, but eventually you'd have to cover the entire distance light travels in a given moment, PLUS the distance between the two particles, which can't happen, since nothing can go faster than that speed.

That's essentially why you can never reach the speed of light, and why the more massive an object, the less speed it can achieve: all those particles have to communicate with each other, and that takes longer and longer the closer to the speed of light the whole object moves.

See, this also perfectly explains what you're asking: from the frame of reference of the particles, the information goes to them in a straight line, so time acts normally for them. But from an external perspective, that information moves along a vector, taking a long time to reach the other particle, since it has to cover the distance of near light speed in one direction plus the distance between the two particles in another, for a total vector distance that is enormous rather than negligible. At some point you never see the information reach the other particle; in other words, time for the whole object has slowed to a near halt. This explains why time feels normal for the party traveling fast: they can't know they're slowed down, since the information exchange is essentially the telling of time. But the external observer sees that slowdown happen, and in fact gets a compounded effect, since the particles also communicate their state to the observer at the speed of light, and the distance between the observer and the particles keeps changing.

This also explains why the particles might see everything around them happening a lot faster than it should: not only is it taking them longer to get updates between themselves, they're also running into the information from everything around them very quickly, essentially receiving information from external sources faster than from themselves. That causes the effect of seeing everything happen faster and faster, until it all seems to happen at once at the speed of light.
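
The "diagonal path" picture above is essentially the classic light-clock derivation of time dilation: if the signal between the particles covers the hypotenuse of the straight-line exchange plus the sideways motion, then (c·t′)² = (c·t)² + (v·t′)², which rearranges to the familiar gamma factor. A rough numeric sketch of that (just to show how fast the slowdown compounds near c; the function name is mine, not from any source):

```typescript
// Time-dilation factor from the light-clock ("diagonal path") argument:
// the signal covers the hypotenuse, so (c*t')^2 = (c*t)^2 + (v*t')^2,
// giving t' = t / sqrt(1 - (v/c)^2).
const C = 299_792_458; // speed of light in m/s

function dilationFactor(v: number, c: number = C): number {
  if (Math.abs(v) >= c) {
    throw new RangeError("nothing with mass reaches c");
  }
  return 1 / Math.sqrt(1 - (v / c) ** 2);
}

// The closer to c, the longer each "information exchange" takes
// from the outside observer's point of view:
for (const frac of [0.1, 0.9, 0.999]) {
  console.log(`v = ${frac}c -> gamma = ${dilationFactor(frac * C).toFixed(2)}`);
}
```

At 10% of c the effect is barely over half a percent, but at 99.9% of c each exchange takes over 22 times longer, which is the "slowed to a near halt" behavior described above.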

Here's the guy who made it all click for me, since I'm pretty sure I tangled more than one of you up with this long read: https://youtu.be/Vitf8YaVXhc

[–] JGrffn@lemmy.world 24 points 1 month ago

Is Mr Fink aware of the fate of the last CEO who tried to extract profits from those in need? Because I'm starting to think he and his lot are asking for a refresher on the matter.

[–] JGrffn@lemmy.world 0 points 1 month ago

I don't hate AI; I hate the system that's using AI for purely profit-driven, capitalism-founded purposes. I hate the marketers, the CEOs, the bought lawmakers, and the people with only a shallow understanding of this whole system and its implications who become part of it and defend it. You see the pattern here? Take AI out of the equation and the problematic system remains. AI should've been either the beginning of the end for humanity, in a Terminator sort of way, or the beginning of a new era of enlightenment and technological advancement. Instead we got fast-tracked late-stage capitalism doubling down on dooming us all, for text we don't have to think about writing, while burning entire ecosystems to achieve it.

I use AI on a near-daily basis and find it useful; it's helped me solve a lot of issues, and it's a splendid rubber ducky for bouncing ideas off. I know people will disagree with me here, but there are clear steps toward AGI here which cannot be ignored; we absolutely have systems in our brains which operate in a very similar fashion to LLMs, we just have more systems doing other shit too. Does anyone here actually think about every single word that comes out of their mouth? Has nobody ever said something they immediately had to backtrack on because they were lying for some inexplicable reason, or skipped too many words, slurred their speech, or simply didn't arrive anywhere with what they were saying? Dismissing LLMs as advanced autocomplete absolutely ignores the fact that we're doing exactly the same shit ourselves, with some more systems in place to guide our yapping.

[–] JGrffn@lemmy.world -2 points 2 months ago

Bitcoin went from under like 5k in 2020 to over 100k in 2024. The problem isn't Bitcoin, it's people thinking they can easily outperform Bitcoin by buying into shitcoins that are clearly Ponzi schemes and thinking they'll know when to get out. Ask me how I know.

Meanwhile, a friend wasn't tempted by the shitcoins, simply bought BTC and ETH and held, now he's easily 4x'd his money by doing as close to nothing as possible and most importantly, not touching fucking shitcoins.

Bitcoin isn't a gamble; a gamble is a gamble. Just like you can treat the S&P 500 as a retirement fund, or as the source of your next options gamble.

[–] JGrffn@lemmy.world 1 points 2 months ago

I've heard of people having breakthroughs/ego deaths while meditating, so it can definitely get there by the looks of it

[–] JGrffn@lemmy.world 4 points 3 months ago (2 children)

Update your BIOS, or hope for an update if you're already current. I had similar behavior with an SN850X 4 TB on a new system with two of them. As soon as I got my motherboard up to date, the problems ended.

 

As if it wasn't bad enough that they want me to use a random internet service to add a keyboard to a USB wireless receiver, they have the balls to show this to Firefox users. I clicked out of pure curiosity, as I'm not even remotely interested in involving a corporate web service in getting my keyboard connected to my computer. This is the message you get now in Logi Options software if you have a Unifying Receiver:

For the curious: https://logiwebconnect.com

EDIT: some people in the thread have pointed out that the error message shown to Firefox users is because Firefox does not implement the WebUSB API, citing security concerns. That still doesn't justify needing a web app to pair peripherals with a PC.
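
For context, WebUSB is exposed as `navigator.usb` in Chromium-based browsers and simply doesn't exist in Firefox, so a page like this presumably feature-detects it before showing the browser-compatibility banner. A minimal sketch of that check (the function name is hypothetical; I have no idea what Logitech's page actually runs):

```typescript
// Feature-detect the WebUSB API. Chromium exposes it as navigator.usb;
// Firefox deliberately does not implement it, citing security concerns,
// so this check fails there and the page falls back to an error banner.
function webUsbAvailable(nav: { usb?: unknown } | undefined): boolean {
  return !!nav && typeof nav.usb !== "undefined";
}

// In the browser you'd call it as webUsbAvailable(navigator):
// Chromium -> true, Firefox -> false.
```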
