abruptly8951

joined 2 years ago
[–] abruptly8951@lemmy.world 1 points 1 week ago

They were invented *by 9k bc :)

[–] abruptly8951@lemmy.world 1 points 3 weeks ago (2 children)

Can you go into a bit more detail on why you think these papers are such a home run for your point?

  1. Where do you get 95% from? These papers don't really go into much detail on human performance, and 95% isn't mentioned in either of them

  2. These papers are for transformer architectures using next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply

  3. These papers assume early stopping. Have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot)

  4. These papers only consider finite-size datasets, and relatively small ones at that. E.g., how many "tokens" would a 4-year-old have processed? I imagine that question should be somewhat quantifiable (see the back-of-envelope sketch after this list)

  5. These papers do not consider multimodal systems.

  6. You talked about permanence; does a RAG solution not overcome this problem?
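
For point 4, a quick back-of-envelope sketch in Python; every figure in it is an assumption picked for illustration, not a measurement:

```python
# Rough token count a 4-year-old might have "processed" via heard
# speech alone. All constants are assumptions for illustration only.
WORDS_HEARD_PER_DAY = 15_000   # assumed daily exposure, ballpark from child-language studies
TOKENS_PER_WORD = 1.3          # assumed typical subword-tokenizer ratio
YEARS = 4

tokens = WORDS_HEARD_PER_DAY * TOKENS_PER_WORD * 365 * YEARS
print(f"~{tokens / 1e6:.0f}M tokens")  # ~28M, orders of magnitude below LLM pretraining corpora
```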

I think there is a lot more we don't know about these things than we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic

[–] abruptly8951@lemmy.world 3 points 1 month ago

Unfortunately not; here is a little kitchen-sink demo, though: https://myst-nb.readthedocs.io/en/latest/authoring/jupyter-notebooks.html

MyST-NB is probably the place to start looking, btw; I forgot to mention it in my previous post

[–] abruptly8951@lemmy.world 3 points 1 month ago (2 children)

I use Sphinx with MyST Markdown for this, and usually Plotly Express to generate the JS visuals. Jupyter Book looks pretty good as well
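
For anyone curious, a minimal sketch of that setup; the package names are real, but the theme and the figure are just illustrative choices:

```python
# conf.py -- minimal Sphinx configuration for the setup described above.
# Assumes `pip install sphinx myst-nb plotly` (adjust to taste).
extensions = [
    "myst_nb",   # executes MyST markdown / .ipynb files and embeds their outputs
]
html_theme = "furo"  # any Sphinx theme works; furo is just an example
```

Then a notebook cell like `px.scatter(px.data.iris(), x="sepal_width", y="sepal_length", color="species")` comes out as an interactive JS figure in the built HTML.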

[–] abruptly8951@lemmy.world 3 points 4 months ago

Feeding the troll 🤷‍♂️ "agenda driven"? What does that even mean 😆

No one said other languages aren't allowed. Submit a patch and prepare yourself for years of painstaking effort.

[–] abruptly8951@lemmy.world 6 points 9 months ago (4 children)

Devil's advocate: splatting, DLSS, and neural codecs, to name a few things that will change the way we make games

[–] abruptly8951@lemmy.world 12 points 1 year ago

I don't really follow your logic; how else would you propose to shape the audio in a way that is not "just an effect"?

Your analogy to real life does not take into account that the audio source itself is moving, so there is an extra variable beyond the plain stereo signal, which is what spatial audio is modelling

And your muffling example sounds a bit oversimplified, maybe? My understanding is that the spatial effect is produced by phase-shifting the L/R signals slightly
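
To illustrate, a toy numpy sketch of that idea; the tone and delay values are assumptions for demonstration:

```python
# Delay one channel by a fraction of a millisecond (an interaural time
# difference) and a centered mono source appears shifted to one side.
import numpy as np

SR = 48_000                          # sample rate in Hz (assumed)
t = np.arange(SR) / SR               # one second of time steps
mono = np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone

itd = int(0.0004 * SR)               # ~0.4 ms delay, within the human ITD range
left = mono
right = np.concatenate([np.zeros(itd), mono[:-itd]])  # right ear hears it later

stereo = np.stack([left, right], axis=1)  # source now seems pulled to the left
```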

Finally, why not go further: "I don't listen to speaker audio because it's all just effects and mirages made to sound like a real sound; what, only 2^16 discrete positions the diaphragm can be in?" :p

[–] abruptly8951@lemmy.world 15 points 1 year ago

There is a huge difference, though.

One is making hardware; the other is copying books into your training pipeline.

The copy occurs in the dataset preparation.

[–] abruptly8951@lemmy.world 10 points 1 year ago (1 children)

Privacy-preserving federated learning is a thing: essentially you train a local model and send the weight updates back to Google rather than the data itself... but it's also early days, so who knows what vulnerabilities may exist
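
Roughly the idea, as a toy sketch; this is illustrative numpy only, not any real Google API:

```python
# Federated averaging in miniature: each client computes a weight delta
# on its own data, and only that delta ever leaves the device.
import numpy as np

def local_update(global_w, client_data, lr=0.1):
    """One client step of least-squares SGD; returns a weight delta."""
    X, y = client_data
    grad = X.T @ (X @ global_w - y) / len(y)
    return -lr * grad                      # raw X, y stay on-device

rng = np.random.default_rng(0)
w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(50):
    deltas = [local_update(w, c) for c in clients]
    w += np.mean(deltas, axis=0)           # server averages the updates (FedAvg)
```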

[–] abruptly8951@lemmy.world 4 points 2 years ago (3 children)

You need rebase instead. Merge just creates useless commits and makes the diffs harder to comprehend (all changes are shown at once, whereas with rebase you fix the conflicts in the commit where they happened)

Then, instead of your branch-of-branch strat, you just rebase onto main daily, and you're golden when it comes time to PR
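
Something like this daily flow (a sketch; the branch name is a placeholder):

```
git checkout my-feature
git fetch origin
git rebase origin/main        # replay your commits on top of fresh main
# resolve any conflict inside the commit where it happened, then
git rebase --continue
git push --force-with-lease   # rewritten history needs a safe force push
```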

[–] abruptly8951@lemmy.world 1 points 2 years ago

For me, the Infinity subscription bypass stopped working, so I finally made the switch
