this post was submitted on 28 Jan 2025
1071 points (97.5% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

founded 2 years ago
you are viewing a single comment's thread
[–] echodot@feddit.uk 3 points 1 day ago* (last edited 1 day ago) (1 children)

Yes, I know, but what I'm saying is they're just repackaging something that OpenAI did; you still need OpenAI making advances if you want R1 to ever get any brighter.

They aren't training on large data sets themselves, they are training on the output of AIs that are trained on large data sets.
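For context, the "training on the output of AIs" approach being described here is essentially knowledge distillation: a student model is trained to match a teacher model's temperature-softened output distribution rather than raw data labels. A minimal illustrative sketch of the standard distillation loss (all names and numbers below are made up for illustration, not from any actual model):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T gives a softer distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is penalized for diverging from the teacher's outputs.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that exactly matches the teacher incurs zero loss;
# a mismatched student is penalized.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0 when outputs match
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive on mismatch
```

In a real training loop this loss would be minimized over the student's parameters across many teacher-generated outputs, which is why the student never needs direct access to the teacher's original training data.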

[–] InputZero@lemmy.world 1 points 1 day ago

Oh, I totally agree; I probably could have made my comment less argumentative. It's not truly revolutionary until someone can produce an AI training method that doesn't consume the energy of a small nation to get results in a reasonable amount of time. And that's not even mentioning the fact that these large data sets already include everything and it's still not enough. I'm glad there's a competitive project, even if I'm going to wait a while and let smarter people than me suss it out.