this post was submitted on 31 Jan 2024
9 points (61.5% liked)

Futurology

top 12 comments
[–] mozz@mbin.grits.dev 18 points 10 months ago* (last edited 10 months ago) (1 children)

What the heck is this source? Excerpts:

recall the recent condensing of regulation and timescales in the biomedical industry, ‘Operation Warp Speed’

in chess or Go I expect that in the overwhelming majority of positions (even excluding the opening book & endgame databases), the best move is known and the engines aren't going to change the choice no matter how long you run them

(That second one is him quoting some researcher, but this transparently absurd statement simply slides past unnoticed, and is indeed cited as support for something similar he's saying, which also seems dubious to me.)

This month, the US was fairly quiet. In fact, only a few US-based AI labs announced new models (generally, I don’t count Llama finetunes, nor OpenAI’s minor model updates).

I barely pay attention to this stuff, and even I noticed the CodeLlama 70B release, which I would describe as significant. Simply stacking up the number of papers and declaring that one side of the equation is making more progress because it's releasing more things (while specifically saying he "doesn't count" the two most prolific US sources of commonly-used models) is very weird. If you're going to write an article about comparative output, you can look at benchmarks, try the models yourself, or see what results they claim in their papers.

There is a whole conversation to be had about AI research in China, and I'm 100% open to the idea that I and the rest of the West are missing something important, but it would have been nice to see this citation-less statement:

many of them significantly outperforming ChatGPT

Backed up by something more than:

I’m not sure what China’s prolific output means

He also compares things to GPT-3.5 (in his head, not by testing). Personally I dislike using 3.5 as a baseline for anything, because something clearly better has been available to consumers for quite some time. GPT-4 is the model to beat, and the model most US researchers compare their work against when they publish.

Etc etc. In short:

BOOOOOOOOOOOOOOO

BOOOOOOOOOOOOOOOO

[–] Endward23 2 points 10 months ago
[–] possiblylinux127@lemmy.zip 5 points 10 months ago* (last edited 10 months ago) (1 children)

Honestly, these metrics are pretty useless. "Innovation" is not a valid unit of measurement.

It is a bit concerning that China could use machine learning to censor images automatically, country-wide. Imagine a protest where the police just disappear out of the photo; all you would see is people running.

[–] sik0fewl@kbin.social 3 points 10 months ago

I did 7 innovations just by myself.

[–] MrCookieRespect@reddthat.com 4 points 10 months ago (2 children)

And all the Chinese ones are propaganda machines.

[–] mozz@mbin.grits.dev 2 points 10 months ago

Kind of, yeah. There's a whole conversation about how the Chinese government was at one point hamstringing all Chinese AI development by trying to punish researchers (with for-real, Chinese-prison punishment) if their models ever said anything politically suspect. I don't know if that's still going on, but past or present, it is asinine and counterproductive to a degree that's honestly a little hard to believe. At the same time, China's command economy can muster resources toward a priority like AI development in a way that simply doesn't happen in the West, so I could easily believe there are good things coming out that people here are unaware of. It'd be good to learn about these things; I wish this article reported some of their substance so we could.

[–] Lugh 0 points 10 months ago (1 children)

Saying all Chinese-language media is propaganda just seems like a conspiracy theory.

[–] MrCookieRespect@reddthat.com 2 points 10 months ago (1 children)

But it isn't. It's literally written into Chinese law.

[–] mozz@mbin.grits.dev 1 points 10 months ago* (last edited 10 months ago) (1 children)

I don't think it's literally mandated to be propaganda for the CCP, but it's definitely illegal to write anything against them, which becomes a distinction without a difference if you look at it for long enough.

Interestingly enough, I tried to find the story I remember about the Chinese AI researcher who actually did get punished because his model said something about Chairman Mao, and I can't find it now. The more recent stories I found made it sound like they're taking a much more sensible tack now (though still a state-mandated-oppression-friendly one; it's just not wildly illogical and counterproductive the way it apparently used to be):

https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/

https://www.technologyreview.com/2023/10/18/1081846/generative-ai-safety-censorship-china/

[–] MrCookieRespect@reddthat.com 0 points 10 months ago

If your LLM isn't against China, it's either not working, ethically questionable, or made for propaganda.

[–] Endward23 1 points 10 months ago
[–] Lugh 1 points 10 months ago* (last edited 10 months ago)

Most English-language media is poor at reporting on China, so sometimes you come across facts like this that make you sit up and take notice, though I wish I had more context for them.