xxce2AAb

joined 2 weeks ago
[–] xxce2AAb@feddit.dk 24 points 1 week ago (3 children)

It's like reading an article about a petrol refining company, who, having prior experience with gasoline as a useful and profitable substance, decides to seek venture capital for the development of a petrol-based fire-extinguisher. They obtain the funding - presumably because some people with money just want to see the world burn and / or because being rich and having brains are not necessarily strongly correlated - but after having developed the product, tests conclusively prove the project's early detractors right: The result is surprisingly always more fire, not less. And they "don't know how to fix it, while still adhering to the vision of a petrol-based fire-extinguisher".

[–] xxce2AAb@feddit.dk 89 points 2 weeks ago (15 children)

Eh. It's not like we'd be getting anywhere at sub-light speeds regardless, and a working Alcubierre drive isn't exactly right around the corner.

On the other hand, it might make it harder for anybody with working FTL to get to us, which is probably a good thing. If they saw how we're conducting ourselves at the moment, orbital bombardment would be the best we could hope for.

[–] xxce2AAb@feddit.dk 19 points 2 weeks ago

Does he love his wife? I can't recall a single sign of actual affection. A wife is just one more thing to check off the "list of things to have", along with a house, a job and a son to carry on... Well, I don't think Vernon knows what exactly. Probably some ill-defined set of Proper British Traditions. To be fair, Petunia is using him in the exact same way for precisely the same reason. And the money, of course.

[–] xxce2AAb@feddit.dk 2 points 2 weeks ago (1 children)

Ah yes, the printing press: Likely the most cumulatively subversive memetic infection vector ever conceived.

[–] xxce2AAb@feddit.dk 1 point 2 weeks ago (1 children)

Here I'm imprecisely using "LLM" as a general stand-in for "machine learning". The only role I see for LLMs in that kind of endeavor is to let researchers ask natural-language questions about the dataset and get results. But with that correction made, yes, even simple polynomial partitioning of hyper-dimensional datasets is incredibly good at detecting clustering/correlations/patterns no human would ever be able to perceive, and is helpful in other ways - predicting (i.e. guessing) properties of hitherto unknown compounds or alloys based on known properties of existing ones, which has been very helpful in everything from chemistry through materials science to plasma physics.

Point is, there are plenty of useful and constructive uses for these technologies, but those are not the ones actually being funded. What investors are throwing money at is either tools that rip off other people's work without compensation, enable positive (in the bad cybernetic sense) feedback loops with users, or aim to replace large parts of the workforce with nothing to replace the jobs lost, which will obviously do nothing good for societal or economic stability.
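
For the curious, here's a minimal sketch of the sort of thing I mean - the "compounds", their descriptors and the hidden relationship are all invented for illustration, and the scikit-learn pipeline is just one simple way to do it, not anyone's actual research code:

```python
# Fit a simple polynomial model to a high-dimensional synthetic dataset and
# use it to "guess" the property of an unseen sample. Everything here is
# made up for illustration purposes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 2000 hypothetical compounds, each described by 20 numeric descriptors.
X = rng.normal(size=(2000, 20))
# A hidden property depending nonlinearly on a few descriptors, plus noise.
y = 3.0 * X[:, 0] * X[:, 1] - 2.0 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Degree-2 polynomial features let a plain linear model pick up pairwise
# interactions and squared terms a human eyeballing 20 columns never would.
model = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X_train, y_train)

print("R^2 on held-out compounds:", round(model.score(X_test, y_test), 3))
# Predict the property of a single "new" compound the model has never seen.
print("Predicted property:", model.predict(X_test[:1])[0])
```

No LLM anywhere in sight, and it still finds structure in a 20-dimensional space - which is the point.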

[–] xxce2AAb@feddit.dk 19 points 2 weeks ago (6 children)

Define 'mind-control'. Transcranial magnetic stimulation has been perfectly capable of changing people's broad moods since 1985 and is being actively used to treat depression right now. The underlying technology is only going to get more precise, especially as more research on spintronics is done for other purposes. Sure, right now our understanding of how what goes on in a given brain translates to 'thoughts' is insufficient to change those thoughts in any reliable way, but there's little doubt that by the time we do understand it, the technology to act on that understanding will be around.

[–] xxce2AAb@feddit.dk 5 points 2 weeks ago (3 children)

Yeah. While I agree that "Europe isn't the US" and that we definitely need "smarter AI rules", I highly doubt my idea of what that means matches that of those corporate entities.

By all means, use an LLM to chew through huge scientific datasets in search of correlations a human would never have noticed, or to come up with a 400-page mathematical "proof" that can at least inform a human-driven refinement process to achieve actual understanding, but practically every other use of "AI" I've seen so far is a blursed waste of power at best and societally corrosive at worst.
