bignate31

joined 1 year ago
[–] bignate31@lemmy.world -1 points 2 days ago

It's hype like this that breaks the public's trust when "AI doesn't change anything". Don't get me wrong: AlphaFold has done incredible things. We can now create computational models of proteins in a few hours instead of a decade. But the difference between a computational model and the actual thing is like the difference between a piece of cheese and yellow plastic: they both melt nicely, but you'd never want one of them in your quesadilla.

[–] bignate31@lemmy.world 10 points 3 weeks ago

oh. go get a therapist--not physical; mental. they're insanely expensive, but you can spend the next three months shopping around and by the new year you'll have found someone you like!

[–] bignate31@lemmy.world 1 points 4 weeks ago (1 children)

invite me and I'll bring my own alcohol. spread looks delicious!

[–] bignate31@lemmy.world 4 points 4 weeks ago

Another great example (from DeepMind) is AlphaFold. Because there's relatively little data on protein structures (only ~175k in the PDB), you can't really build a model that requires millions or billions of structures. That's coupled with the fact that getting the structure of a new protein in the lab is really hard, and that most proteins are highly conserved across species (you share about 60% of your genes with a banana).

So the researchers generated a bunch of "plausible yet never seen in nature" protein structures (that their model thought were high quality) and used them for training.

Granted, even with AlphaFold's incredible progress, it still hasn't produced any biological breakthroughs (80% accuracy is much better than the 60% we were at 10 years ago, but still not nearly where we really need to be).

Image models, on the other hand, are quite sophisticated, and many of them can "beat" humans or look "more natural" than an actual photograph. Trying to eke the final 0.01% out of a 99.9% accurate model is when the model collapse happens--the model starts to learn from the "nearly accurate to the human eye but containing unseen flaws" images.
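To make the collapse mechanism concrete, here's a totally toy sketch (nothing to do with how image models are actually trained): treat each "generation" of the model as resampling from the previous generation's output. Rare examples (the tails) get dropped a little each round, so diversity only ever shrinks.

```python
import random

def bootstrap_generations(data, generations, seed=0):
    """Toy model-collapse demo: each 'model generation' trains on
    samples drawn from the previous generation's output. Rare items
    (the distribution's tails) disappear, so diversity shrinks."""
    rng = random.Random(seed)
    current = list(data)
    diversity = [len(set(current))]  # how many distinct examples survive
    for _ in range(generations):
        # "train" on samples of the previous generation's output
        current = [rng.choice(current) for _ in range(len(current))]
        diversity.append(len(set(current)))
    return diversity

# Start with 1000 distinct "data points" and iterate for 20 generations:
div = bootstrap_generations(range(1000), generations=20)
```

After 20 rounds the unique count is far below the original 1000 -- that's the tail-loss that "nearly accurate but subtly flawed" synthetic images cause at scale.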

[–] bignate31@lemmy.world 8 points 1 month ago

Yeah, I grew up in Fahren-wasteland, but have lived in Celsi-heaven for 7 years. I embraced it, and now when someone says "40 FUCKING DEGREES!!" I know exactly what they're talking about. It's hot. You probably don't have an air con. It's misery.

[–] bignate31@lemmy.world 3 points 4 months ago

oooh. design intricate sandwiches! sounds like a lovely holiday!!

[–] bignate31@lemmy.world 34 points 4 months ago (2 children)

Favourite part of the whole article:

A spokesperson for Truth Social said, “It’s hard to believe that Reuters, once a respected news service, has fallen so low as to publish such a manipulative, false, defamatory and transparently stupid article as this one purely out of political spite.”

"You never saw what you thought you saw. And even if you did, it was entirely justified and your interpretation was extreme."

[–] bignate31@lemmy.world 1 points 4 months ago (1 children)

Yeah, the problem is how to sanitise effectively. You've gotta be able to find a way to automatically strip out "bad" things from your training data (via an "oracle"). But if you already had that oracle, you could just slap it on your final product (e.g. Search) and make all the "bad" things disappear before they hit the user (via some sort of filter).
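To illustrate the circularity (totally toy example; `is_bad` here is a stand-in for the oracle you don't actually have):

```python
def sanitise_corpus(corpus, is_bad):
    """Strip 'bad' examples before training; requires a reliable oracle."""
    return [doc for doc in corpus if not is_bad(doc)]

def filter_outputs(results, is_bad):
    """The exact same oracle applied at serving time, e.g. on search results."""
    return [r for r in results if not is_bad(r)]

# Hypothetical oracle -- in reality building this classifier IS the hard part.
def is_bad(text):
    return "badword" in text

corpus = ["clean doc", "has badword inside", "another clean doc"]
print(sanitise_corpus(corpus, is_bad))  # ['clean doc', 'another clean doc']
```

Same predicate either way, so if you can write it well enough to sanitise training data, you could have just bolted it onto the product.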

[–] bignate31@lemmy.world 2 points 6 months ago

it's just reliable. especially with remote work, everything is "over ssh", and you can create a very consistent environment with only a few config files

the amount of AI you can get into these IDEs is impressive, though. probably the only reason I'd ever make the switch

[–] bignate31@lemmy.world 5 points 7 months ago (1 children)

I left the first time for Trump... but moved to the UK just in time for Brexit. Should've picked Taiwan I guess
