ConstableJelly

joined 1 year ago
[–] ConstableJelly@kbin.social 25 points 11 months ago (5 children)

The first two books in the series were "Fellowship of the King" and "The Two Trees," so...I'm not entirely convinced they were even very original stories...

[–] ConstableJelly@kbin.social 11 points 11 months ago (1 children)

One of the earliest pieces of media I can remember consuming was the mid-90s TV show Viper, where James McCaffrey played the main character. I remember very little about the show except James's face and that he played his character cool as fuck.

I've been replaying Alan Wake and Control recently, and I have such a soft spot for his roles in them because I loved that stupid show when I was a kid.

[–] ConstableJelly@kbin.social 7 points 11 months ago* (last edited 11 months ago)

I...don't think that's what the referenced paper was saying. First of all, Toner didn't co-author the paper from her position as an OpenAI board member, but as a CSET director. Secondly, the paper didn't intend to prescribe behaviors to private-sector tech companies, but rather to investigate "[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence" by exploring "costly signals...as a policy lever."

The full quote:

> By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this study, Anthropic enhanced the credibility of its commitments to AI safety by holding its model back from early release and absorbing potential future revenue losses. The motivation in this case was not to recoup those losses by gaining a wider market share, but rather to promote industry norms and contribute to shared expectations around responsible AI development and deployment.

Anthropic is being used here as an example of "private sector signaling," which could theoretically manifest in countless ways. Nothing in the text indicates that OpenAI should have behaved in exactly the same way; rather, the Anthropic example is held up as a successful contrast to OpenAI's allegedly failed use of the GPT-4 system card as a signal of its commitment to safety:

> To more fully understand how private sector actors can send costly signals, it is worth considering two examples of leading AI companies going beyond public statements to signal their commitment to develop AI responsibly: OpenAI’s publication of a “system card” alongside the launch of its GPT-4 model, and Anthropic’s decision to delay the release of its chatbot, Claude.

Honestly, the paper seems really interesting to an AI layman like me, and it explores a critically important subject: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.

[–] ConstableJelly@kbin.social 14 points 1 year ago (1 children)

Google may be evil, but you can't deny they still attract top talent.

[–] ConstableJelly@kbin.social 3 points 1 year ago

A generation living too late to explore the Earth and too early to explore space, and doomed besides to live out the long era between a fledgling, pre-corporatized internet and a free and open post-corporatized internet (which I consider inevitable, eventually, because a capitalist, enshittified internet can't sustain itself indefinitely...right?).