Lugh

joined 1 year ago
[–] Lugh 1 points 5 months ago* (last edited 5 months ago)

What concerns me is that a lot of these efforts seem to be political in nature and are tied to mitigating the inevitable decline of the fossil fuel industry. It often makes more sense to speed up the adoption of renewables and drop the use of fossil fuels. Fossil fuel use still hasn't peaked. That is mainly driven by China, which is still building new coal and gas electricity plants. However, the peak year of fossil fuel use is very near; there is some speculation it may even be this year for oil. From then on it will be in steady decline, so of course that industry is going to do everything it can to delay.

[–] Lugh 10 points 5 months ago* (last edited 5 months ago) (1 children)

This is interesting as it runs counter to what many people think about current AI. Its performance seems directly linked to the quality of the training data it has. Here the opposite is happening; it has poor training data and still outperforms humans. It's not surprising the humans would do badly in this situation too; it's hard to keep up to date on things that you may only encounter once or twice in your entire career. It's interesting to extrapolate from this observation as it applies to many other fields.

One of the authors of the paper goes into more detail on Twitter.

[–] Lugh 4 points 5 months ago (4 children)

Four months is a long time in 2020s AI development. OpenAI debuted Sora in February this year but hasn't publicly released it. Now a Chinese company called Kuaishou has gotten ahead of them with a model it calls Kling. Kuaishou is TikTok's biggest competitor in China and has a video-sharing app used by 200 million people. Presumably, that is where all its training data came from. Unlike Sora, Kling is available to some of the public.

This tech still doesn't look ready to upend the TV and movie industry. It produces 5-second clips, but who wants a 90-minute movie made up of nothing but 5-second clips?

[–] Lugh 1 points 5 months ago (1 children)

This technology is still very much at the proof-of-concept stage. What's fascinating is that they did not expect the neural tissue to be healed in the way it was, and they don't even understand how the stem cell robots did it.

That is a problem though. How do you develop the potential for something, when you don't really know what it may be able to do, or how it may be able to do it?

[–] Lugh 17 points 5 months ago (3 children)

Not sure I consider this futurology,

Yeah, it's borderline. But I posted it as the excuse/reasoning centers around AI. Microsoft's plans for 'Recall' are a huge invasion of privacy and stem from the use of AI too. It's the topic of AI & privacy that merits discussion.

[–] Lugh 1 points 5 months ago

Although I wouldn't want the final decision to rest with AI, it makes a lot of sense to use it for the preparatory and research work leading up to decisions.

[–] Lugh 1 points 5 months ago* (last edited 5 months ago) (1 children)

I gave you a shout out in the main sidebar as an extra thank you.

[–] Lugh 8 points 5 months ago* (last edited 5 months ago) (1 children)

Lots of people in the world of SEO, marketing, and copywriting are excited about the possibility of creating vast quantities of AI-generated text. They have a problem. Human waking hours are finite, and many of us may be near our upper limit for absorbing new content.

The OP examines the other side of this: how AI's advances allow us to analyze text. It seems obvious to me this will have more profound effects than the ability to generate it. Consider one aspect of this.

AI should allow us to analyze the logic in politicians' speeches in real time. There are over a hundred logical fallacies, and they are a standard part of political debate. So much so that if you took all the logical fallacies out of political debate, what would you have left? Soon people may have the ability to easily find out.
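To make the idea concrete, here is a minimal toy sketch of real-time fallacy flagging. A real system would use an LLM or a trained classifier; the regex patterns, fallacy names, and sample sentences below are my own illustrative assumptions, not anything from a production tool.

```python
# Toy sketch: flag candidate logical fallacies in speech transcripts.
# The patterns are deliberately crude stand-ins for a real classifier.
import re

FALLACY_PATTERNS = {
    "ad hominem": re.compile(r"\bmy opponent is (a|an) \w+", re.I),
    "appeal to fear": re.compile(r"\bif we don't\b.*\b(disaster|ruin|destroyed)\b", re.I),
    "bandwagon": re.compile(r"\beveryone (knows|agrees)\b", re.I),
}

def flag_fallacies(sentence: str) -> list[str]:
    """Return the names of fallacy patterns matched in the sentence."""
    return [name for name, pat in FALLACY_PATTERNS.items() if pat.search(sentence)]

speech = [
    "Everyone knows my plan is the only serious option.",
    "My opponent is a liar and cannot be trusted.",
    "We will invest in renewable energy over the next decade.",
]

for line in speech:
    print(line, "->", flag_fallacies(line))
```

Running each sentence through `flag_fallacies` as it is transcribed is the "real-time" part; the hard problem, which this sketch sidesteps, is recognizing fallacies that don't announce themselves with surface wording.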

[–] Lugh 3 points 5 months ago

a lot more to writing than a prompt

This particular tech seems a lot more than merely providing a prompt. The users also write the story and dialogue. The AI just produces the visuals.

[–] Lugh 20 points 5 months ago (3 children)

Palantir's panel at a recent military conference where they joked and patted themselves on the back about the work their AI tools are doing in Gaza was like a scene with human ghouls in the darkest of horror movies.

Estimates vary as to how many of the 30,000-40,000 dead in Gaza are military combatants, but they seem to average about 20%. This seems like a terrible record of failure for an AI tool whose makers tout its precision.

Why does the US government want to reward and endorse this tech? Why aren't people more alarmed? By any measure, surely Palantir's demonstrated track record is one of failure. The Israel-Hamas war is the first time the world has seen AI used in significant warfare. It's a grim indication for the future.

[–] Lugh 4 points 5 months ago

I think YouTube shows what a world is like where everyone can make video content. Most is bad, some is good, and a small amount is very good. Like any human endeavor.

[–] Lugh 20 points 5 months ago

A particularly aggressive form of colorectal cancer runs in my family. My grandmother, an aunt, and other relatives have all died of it in their fifties.

This is still at the clinical trial stage, but the approach could work for many other types of cancer too. Fingers crossed it's as successful as possible, and available as a treatment very soon.
