this post was submitted on 28 Jan 2025
866 points (94.4% liked)

Office space meme:

"If y'all could stop calling an LLM "open source" just because they published the weights... that would be great."

top 50 comments
[–] Jocker@sh.itjust.works 114 points 2 days ago (1 children)

Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company "OpenAI"

[–] intensely_human@lemm.ee 42 points 1 day ago (2 children)

Especially after it was founded as a nonprofit with the mission to push open source AI as far and wide as possible, ensuring a multipolar AI ecosystem in which AIs keep each other in check, so that AI stays respectful and prosocial.

[–] Prunebutt@slrpnk.net 25 points 1 day ago

Sorry, that was a PR move from the get-go. Sam Altman doesn't have an altruistic cell in his whole body.

[–] SoftestSapphic@lemmy.world 16 points 1 day ago (1 children)

It's even crazier that Sam Altman and other ML devs have said that current machine learning models reached the peak of what they're capable of years ago

https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/

But that doesn't mean shit to the marketing departments

[–] Hobbes_Dent@lemmy.world 11 points 1 day ago (1 children)

“Look at this shiny.”

Investment goes up.

“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”

Investment goes up.

“Look at this shiny.”

Investment goes up.

“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”

[–] SoftestSapphic@lemmy.world 14 points 1 day ago (1 children)

I like how when America does it we call it AI, and when China does it it's just an LLM!

[–] Prunebutt@slrpnk.net 7 points 1 day ago* (last edited 1 day ago)

I'm including Facebook's LLM in my critique. And I dislike the current hype on LLMs, no matter where they're developed.

And LLMs are not "AI". I've called them "so-called 'AIs'" waaay before.

[–] Xerxos@lemmy.ml 29 points 2 days ago* (last edited 1 day ago) (2 children)

The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyright-protected material.

They published the weights AND their training methods which is about as open as it gets.

[–] Prunebutt@slrpnk.net 20 points 2 days ago (1 children)

They could disclose how they sourced the training data, what the training data is and how you could source it. Also, did they publish their hyperparameters?

They could just not call it Open Source if they can't open source it.
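
To make the hyperparameter point concrete: a hypothetical sketch of the minimum a reproducible training release would need to pin down. Every name and value below is invented for illustration, not taken from any actual release.

```python
# Hypothetical reproducibility manifest -- all names/values invented.
training_recipe = {
    "data_sources": ["<corpora list + acquisition scripts>"],
    "tokenizer": "<published tokenizer + vocabulary>",
    "hyperparameters": {
        "learning_rate": 3e-4,
        "lr_schedule": "cosine",
        "batch_size_tokens": 4_000_000,
        "context_length": 4096,
        "optimizer": "AdamW",
        "seed": 42,
    },
}
```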

[–] Naia@lemmy.blahaj.zone 11 points 2 days ago (1 children)

For neural nets, the method matters more. Data would be useful, but at the amounts these things get trained on, the specific data matters little.

They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
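
That size gap is easy to sanity-check. A back-of-envelope sketch with illustrative numbers (not any specific model's real figures):

```python
# Back-of-envelope: weights vs. training data (illustrative numbers only).
params = 70e9                  # a 70B-parameter model
bytes_per_param = 2            # fp16/bf16 storage
model_gb = params * bytes_per_param / 1e9
print(f"weights: ~{model_gb:.0f} GB")        # ~140 GB

tokens = 15e12                 # a 15T-token training corpus
bytes_per_token = 4            # rough average for raw text
data_tb = tokens * bytes_per_token / 1e12
print(f"training data: ~{data_tb:.0f} TB")   # ~60 TB, ~400x the weights
```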

[–] Poik@pawb.social 3 points 1 day ago

That... doesn't align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, so sorry if my response is a bit scattered), throwing more data at a problem always improves it more than the method does. And the method can only be simplified with more data. Outside of some neat tricks that modern deep learning has decided are hogwash and "classical", at least, but most of those don't scale enough for what is being looked at.

Also, datasets inherently impose bias upon networks, and it's easier to create adversarial examples that fool two networks trained on the same data than the same network twice freshly trained on different data.

Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that's kind of the silver standard just because most modern state of the art models differ so minutely from each other in performance nowadays.

Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and should be the standard once they leave us alone.
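
For readers unfamiliar with the term above: an adversarial example is an input perturbed just enough to flip a model's prediction. A minimal sketch of the classic FGSM method, using a toy stand-in model (illustrative only, not code from the discussion):

```python
# Minimal FGSM sketch: perturb an input in the direction that raises the loss.
import torch

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Nudge each input value a small step along the gradient's sign.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Linear(8, 2)   # toy stand-in for a trained network
x = torch.rand(1, 8)            # one input
y = torch.tensor([0])           # its true label
x_adv = fgsm(model, x, y)       # perturbed copy that increases the loss
```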

[–] maplebar@lemmy.world 7 points 1 day ago

Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this "AI" shit is basically just freeware, if anything; it's about as "open source" as Winamp was back in the day.

[–] Ugurcan@lemmy.world 18 points 2 days ago (3 children)

There are lots of problems with the new lingo. We need to come up with new words.

How about “Open Weightings”?

[–] Agent641@lemmy.world 5 points 2 days ago

That's fat shaming

[–] billwashere@lemmy.world 5 points 2 days ago

That sounds like a segment on “My 600lb Life”

[–] surph_ninja@lemmy.world 12 points 1 day ago (1 children)

Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.

[–] Prunebutt@slrpnk.net 7 points 1 day ago

Nah, just a 21st century Luddite.

[–] theacharnian@lemmy.ca 14 points 2 days ago (5 children)

Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.

[–] Poik@pawb.social 4 points 1 day ago

... Statistical engines are older than personal computers, with the first statistical package developed in 1957. And AI professionals would have called them trained models. The interpreter is code, the weights are not. We have had terms for these things for ages.

[–] Prunebutt@slrpnk.net 5 points 2 days ago* (last edited 2 days ago) (1 children)

There were efforts. Facebook didn't like those. (Since their models wouldn't be considered open source anymore.)

[–] theacharnian@lemmy.ca 4 points 1 day ago

I don't care what Facebook likes or doesn't like. The OSS community is us.

[–] Dkarma@lemmy.world 10 points 2 days ago (30 children)

I mean, that's all a model is, so... Once again, someone who doesn't understand anything about training or models is posting borderline misinformation about AI.

Shocker

[–] FooBarrington@lemmy.world 19 points 1 day ago

A model is an artifact, not the source. We also don't call binaries "open-source", even though they are literally the code that's executed. Why should these phrases suddenly get turned upside down for AI models?

[–] intensely_human@lemm.ee 15 points 1 day ago

Representing a model only by its weights is like representing a codebase only by its binary.

Training data is a closer analogue of source code than weights.
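
The weights-as-binary analogy is easy to demonstrate: a weights release is a file of raw numbers carrying no trace of what produced them. A minimal sketch, with a toy model standing in for an LLM:

```python
import torch

model = torch.nn.Linear(4, 2)                 # toy stand-in for an LLM
torch.save(model.state_dict(), "weights.pt")  # the "weights release"

blob = torch.load("weights.pt")
print(blob["weight"])  # raw float tensors -- the "binary"
# The corpus and training loop that produced these numbers -- the "source
# code" in this analogy -- are nowhere in the file.
```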

[–] verstra@programming.dev 17 points 2 days ago (1 children)
[–] Preflight_Tomato@lemm.ee 3 points 1 day ago

Yes please, let's use this term, and reserve "open source" for its existing definition in the academic ML setting: weights, methods, and training data. These models don't readily fit into existing terminology for structural and logistical reasons, but when someone says "it's got open weights" I know exactly what set of licenses and implications it may have without further explanation.

[–] WraithGear@lemmy.world 59 points 2 days ago* (last edited 2 days ago) (24 children)

Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn't that just trained from the other AI? It's not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What's the issue here?

[–] Prunebutt@slrpnk.net 37 points 2 days ago (18 children)

Seems kinda reductive about what makes it different from most other LLMs

The other LLMs aren't open source, either.

isn’t that just trained from the other AI?

Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.

And the way it uses that data, afaik, is open and editable, and the license to use it is open.

From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.

[–] KillingTimeItself@lemmy.dbzer0.com 30 points 2 days ago (4 children)

I mean, if it's not directly factually inaccurate, then it is open source. It's just that the specific block of data they used and operate on isn't published or released, which is pretty common even among open source projects.

AI just happens to be in a fairly unique spot where that data is actually, like, pretty important. Though nothing stops other groups from creating an openly accessible set through something like distributed computing, which seems to be having a fancy new-kid-on-the-block moment in AI right now.

[–] fushuan@lemm.ee 14 points 2 days ago* (last edited 2 days ago)

The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig part of what makes AI work is the trained model, and a big part of the source of a trained model is the training data.

When they say open source, 99.99% of people will understand it to mean that everything is verifiable, and it just is not. This is misleading.

As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development; people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the broader field LLMs belong to) that I have read in the past. Both code and training data are provided.

An example from the computer vision world: darknet and YOLO: https://github.com/AlexeyAB/darknet

This is the repo with the code to train and run darknet models, and they provide pretrained models, called YOLO. They also provide links to the original dataset the YOLO models were trained on. THIS is open source.
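
As a sketch of those three pieces (code, weights, data): loading the published YOLO artifacts with OpenCV's darknet loader. This assumes yolov4.cfg and yolov4.weights have been downloaded from the repo's releases.

```python
# Sketch: running published darknet/YOLO artifacts (files assumed downloaded).
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
# yolov4.cfg     = the architecture definition (the "code")
# yolov4.weights = the trained parameters (the artifact)
# Retraining the .weights yourself needs the third piece -- the training
# data (MS COCO for the stock YOLO models), which the repo links to.
```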

[–] FooBarrington@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

But it is factually inaccurate. We don't call binaries open-source, we don't even call visible-source open-source. An AI model is an artifact just like a binary is.

An "open-source" project that doesn't publish everything needed to rebuild isn't open-source.
