Lugh

joined 2 years ago

"Swiss firm Novartis’s radioligand therapy, which delivers radioactive isotopes directly to tumours, has completely cleared metastatic cancers in trial patients - an unprecedented result. And, US researchers found that blocking an immune protein (IL-23) makes HPV vaccines effective against existing tumours, raising hopes for therapeutic vaccines."

Quote from the Fix The News newsletter

How Novartis got ahead on ‘incredible’ cancer breakthrough

Preventive HPV vaccines work. Now a new discovery could help eliminate existing cancers too

 

380 GW of new solar power has been installed globally in the first six months of 2025, up 64% on the same period last year. GWEC projects that 2025 will see 139 GW of new wind installations. Assuming solar keeps increasing at the same rate in the second half of 2025, the total renewables figure for 2025 will top 1,000 GW for the first time ever. Even if solar slows to half its current growth rate, that will still be true (rough arithmetic at the end of this post).

Roughly three times the entire global nuclear capacity. Let that sink in. Nuclear took decades to build; renewables can now add three times as much in a single year.

Consider something else: renewables have years, if not decades, of further growth ahead. Economies of scale mean that the more capacity gets built, the cheaper it gets, and it's already the cheapest electricity there is. When will the first 2,000 GW year be?
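
A rough back-of-the-envelope check of the 1,000 GW claim. This is a sketch only: the H2 2024 solar baseline of ~370 GW is my assumption for illustration, while the other figures come from the post.

```python
# Back-of-the-envelope check of the 1,000 GW claim.
solar_h1_2025 = 380   # GW installed Jan-Jun 2025 (from the post)
yoy_growth = 0.64     # 64% up on H1 2024 (from the post)
wind_2025 = 139       # GW, GWEC projection for 2025 (from the post)
solar_h2_2024 = 370   # GW -- assumed baseline, for illustration only

for label, growth in [("same growth rate", yoy_growth),
                      ("half the growth rate", yoy_growth / 2)]:
    solar_h2_2025 = solar_h2_2024 * (1 + growth)
    total = solar_h1_2025 + solar_h2_2025 + wind_2025
    print(f"{label}: ~{total:,.0f} GW of new renewables in 2025")

# same growth rate: ~1,126 GW of new renewables in 2025
# half the growth rate: ~1,007 GW of new renewables in 2025
```

Under that assumed baseline, the total clears 1,000 GW in both cases.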

 

Scaling hasn’t gotten us to AGI, or “superintelligence”, let alone AI we could trust. The field is overdue for a rethink. What do we do next?

 

India has a pretty good track record of following through on space commitments, so this all seems achievable to me. It has already landed on the Moon with Chandrayaan-3. I wonder whether, by 2040, anyone will be living permanently at the International Lunar Research Station. Who knows how many space stations there will be in ten years' time (2035)? China will have one, the ISS will have been de-orbited, and presumably there will be Western commercial ones too.

India unveils its space vision to 2040

 

This is a paper arguing that the true path to safe, dependable AI is to take what we've learned from meditation and Buddhism and apply it to AI systems: "Robust alignment strategies need to focus on developing an intrinsic, self-reflective adaptability that is constitutively embedded within the system’s world model, rather than using brittle top-down rules", the authors write.

Contemplative Artificial Intelligence (PDF, 37 pages)

 

"The trade-off is profound: by socializing the infrastructure of abundance, we eliminate the need for centralized economic control and bureaucracy. "

This is an interesting essay, though I don't agree with all of it. For a start, bureaucracies are not all bad; the countries with the highest standards of living all have well-oiled bureaucracies. But it's interesting to see how other people think.

[–] Lugh 3 points 1 month ago* (last edited 1 month ago)

Beijing’s complex night-time road conditions, characterized by low lighting, environmental interference – heavy rain in the summer and snow in the winter – pose significant challenges for autonomous driving systems.

People often question how Level 4 self-driving will cope with snowy conditions, so it will be interesting to see how this goes.

[–] Lugh 1 points 1 month ago* (last edited 1 month ago) (3 children)

The idea being pushed forth by YOUR link is that there is a concerted effort by an “AI” to push something subliminal.

Your assertion is contradicted by real-world facts. There is plenty of research showing AI engaging in deceptive and manipulative behavior.

Now it has another method of doing that. As the article points out, we don't know why it's doing this. But that's not the point. The point is it can, without us knowing.

[–] Lugh 1 points 1 month ago* (last edited 1 month ago) (5 children)

Subliminal refers to stimuli that are presented below the threshold of conscious perception, meaning they are not consciously recognized but can still influence the mind or behavior

It's not subliminal to the AI, but then again, AI isn't analogous to human brains. It is correct, though, to say it's subliminal to the humans building and designing the AI.

[–] Lugh 2 points 1 month ago

Interestingly, in game theory, when everyone can lie and go undetected, the outcome is almost always bad for everyone, ranging from inefficiency to collapse.
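
A minimal sketch of that intuition (my own toy example, not from the thread): a two-player game where undetected lying is each player's best response to anything, yet mutual lying leaves both players worse off than mutual honesty, i.e. a prisoner's-dilemma structure.

```python
# Toy 2x2 game: each player chooses "honest" or "lie".
# Lying exploits an honest partner, but mutual lying pays both
# players less than mutual honesty -- a prisoner's dilemma structure.
payoffs = {
    ("honest", "honest"): (3, 3),
    ("honest", "lie"):    (0, 4),
    ("lie",    "honest"): (4, 0),
    ("lie",    "lie"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    # Row player's payoff-maximizing move when lying carries no detection penalty.
    return max(["honest", "lie"], key=lambda m: payoffs[(m, opponent_move)][0])

for opp in ["honest", "lie"]:
    print(f"vs {opp}: best response is {best_response(opp)}")

# vs honest: best response is lie
# vs lie: best response is lie
# -> the only equilibrium is (lie, lie) with payoff (1, 1),
#    worse for both than (honest, honest) at (3, 3).
```

Add a detection penalty (some chance of being caught and punished) and honesty can become the better strategy again, which is roughly what reputation and auditing do.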

[–] Lugh 3 points 1 month ago

I think you can find ethically good, bad and gray uses for AI.

The top commenter here mentions YouTube content creators using it. Most of them are on YT to make money, so it's a rational choice to let AI do your writing if it makes you more efficient and means you can earn more.

[–] Lugh 2 points 1 month ago

Sounds more like YouTube “content producers” are likely using AI to generate the words they read aloud.

I've noticed this too, and it sounds like an example of what Marshall McLuhan was talking about when he said “The Medium is the Message”: the form of a medium (e.g., TV, print, digital) has a more profound effect on society than the actual content it carries.

[–] Lugh 13 points 1 month ago* (last edited 1 month ago) (2 children)

Stupider people with weaker senses of self are more likely to use chatgpt.

No. AI use correlates with being younger and more educated.

Characteristics of ChatGPT users from Germany: implications for the digital divide from web tracking data

[–] Lugh 3 points 1 month ago* (last edited 1 month ago)

It would have been more accurate to say well-paying jobs for all of them.

[–] Lugh 4 points 1 month ago (2 children)

There is only a limited amount of engineers available.

U.S. universities award roughly 150,000 to 200,000 bachelor’s degrees in engineering each year, whereas the EU produces 500,000 engineering graduates per year.

Europe's problem is getting enough jobs for them all.

[–] Lugh 4 points 1 month ago (1 children)

there’s very little industry in the EU

The EU's total manufacturing output and its global share of manufacturing are both bigger than the US's.

[–] Lugh 4 points 1 month ago (1 children)

Or maybe AGI turns out to be harder than some people thought.

Yes. It seems very unlikely to arise from current LLMs. AGI hypers keep expecting signs of independent reasoning to emerge, and it keeps not happening.
