just_another_person

joined 2 years ago
just_another_person@lemmy.world 20 points 10 hours ago

So here we are...

just_another_person@lemmy.world 12 points 11 hours ago

That's why they have minuscule user bases. Ain't nobody gaming on nouveau, bruh

Thanks, I hate it

Still pretty rapey vibes

just_another_person@lemmy.world 2 points 22 hours ago* (last edited 21 hours ago)

What in the actual fuck is this? This looks like a rape scene from a movie.

just_another_person@lemmy.world 11 points 23 hours ago

Here's their plan:

  1. Claim open investigations to not release certain files
  2. Stall for the holidays
  3. When someone calls yet another referendum or forces testimony in Congress again...stall
  4. Someone in Congress finally admits the files released are not complete because they have seen the unredacted versions
  5. Stall again

They will ratchet up all the bullshit pain they are inflicting on Americans through ICE as much as they possibly can in this time, and try to force Representatives to relent and back off any further action.

just_another_person@lemmy.world 49 points 1 day ago

Which means it's going to be bullshit, doctored files, or the same things we already have.

just_another_person@lemmy.world 19 points 1 day ago

Did it...not have that already? I swear it did, but honestly I thought Exchange was dead long ago.

Fedora 100% has acceleration; you just seem to be missing something. Testing on a clean distro isn't a good way to find where the issue is in your existing install.

Did you switch from an Nvidia card by chance? Did you check if you might have blacklisted AMD drivers?

Reboot and check `dmesg` for any obvious errors, and run `lsmod | grep amd` to see what, if anything, is loaded. If nothing is loaded, I almost guarantee you have something blacklisted.
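Roughly what I mean, as a copy-pasteable sketch (standard paths on Fedora; the AMD GPU kernel module is named `amdgpu`, so grep for that specifically):

```shell
# 1. Kernel log: any amdgpu/firmware errors during boot?
sudo dmesg | grep -iE 'amdgpu|firmware' || true

# 2. Is the amdgpu kernel module actually loaded?
lsmod | grep amdgpu || echo "amdgpu not loaded"

# 3. Any blacklist entry that would keep it from loading?
grep -rn "blacklist amdgpu" /etc/modprobe.d/ 2>/dev/null || echo "no blacklist entry found"
```

If step 2 shows nothing and step 3 turns up a `blacklist amdgpu` line, that's your culprit: delete the line (or the leftover Nvidia-era conf file containing it) and reboot.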

From your own linked paper:

> To design a neural long-term memory module, we need a model that can encode the abstraction of the past history into its parameters. An example of this can be LLMs that are shown to be memorizing their training data [98, 96, 61]. Therefore, a simple idea is to train a neural network and expect it to memorize its training data. Memorization, however, has almost always been known as an undesirable phenomena in neural networks as it limits the model generalization [7], causes privacy concerns [98], and so results in poor performance at test time. Moreover, the memorization of the training data might not be helpful at test time, in which the data might be out-of-distribution. We argue that, we need an online meta-model that learns how to memorize/forget the data at test time. In this setup, the model is learning a function that is capable of memorization, but it is not overfitting to the training data, resulting in a better generalization at test time.

Literally what I just said. This passage specifically addresses the problem I mentioned, and the paper goes on in exacting detail about why this doesn't exist in production tools for the general public (it'll never make money, and honestly, it's slow). In fact, there's a minor argument later on that bolting on a separate supporting memory system means the result shouldn't even be called an LLM, and the referenced papers linked at the bottom dig even deeper into the exact limitations I mentioned of models used this way.

just_another_person@lemmy.world 6 points 1 day ago

It most certainly did not...because it can't.

You find me a model that can take multiple disparate pieces of information and combine them into a new idea without being fed a pre-selected pattern, and I'll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New...that's what novel means.

I could pointlessly link you to papers or blogs from researchers explaining this, or you could just ask one of these things yourself, but you're not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.

Here's Terrence Kim describing how they set it up using GRPO: https://www.terrencekim.net/2025/10/scaling-llms-for-next-generation-single.html

And then another researcher describing what actually took place: https://joshuaberkowitz.us/blog/news-1/googles-cell2sentence-c2s-scale-27b-ai-is-accelerating-cancer-therapy-discovery-1498

So you can obviously see...not novel ideation. They fed it a bunch of training data, and it correctly used the alignment between different patterns to say "If it works this way otherwise, it should work this way with this example."

Sure, it's not something humans had gotten to yet, but that's the entire point of the tool. Good for the progress, certainly, but that's its job. It didn't come up with some new idea about anything, because it works from the data it's given and the logic boundaries of the tasks it's set to run. It's not doing anything super special here, just doing it very efficiently.

 
 

Been saying this for years, but the desperation has become insane. They aren't just sinking, they are throwing icebergs and mines out in front of the ship.

 

Found this interesting 15m short from a few years ago. Love both these actors, so thought I'd share.

 

Possibly one of the most perfect pop songs ever made

7 points | submitted 1 month ago* (last edited 1 month ago) by just_another_person@lemmy.world to c/videos@lemmy.world