this post was submitted on 25 Apr 2025
142 points (83.8% liked)

Not The Onion


Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Please also avoid duplicates.

Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

founded 2 years ago

One of the stupidest things I've heard.

I also would like to attach the bluesky repost I found this from:

@leyawn.bsky.social says: "is my calculator horny?" our tech columnist asks. "i entered 5318008 into it and turned it upside down. what i saw surprised me" https://bsky.app/profile/leyawn.bsky.social/post/3lnldekgtik27

all 44 comments
[–] nandeEbisu@lemmy.world 3 points 10 hours ago

I have a preconceived conclusion about my anthropomorphized view of a statistical model with some heuristics around it. People who know what they're talking about say I'm wrong, but I need an idea for an article to write that people will read.

[–] minorkeys@lemmy.world 5 points 13 hours ago (1 children)

If we lose perspective that computer systems are machines, we're fucked. Stop personifying computer systems just because they make you feel things. JFC.

"Many of you feel bad for this lamp. That is because you crazy [sic]. It has no feelings..."

[–] IndustryStandard@lemmy.world 2 points 12 hours ago

AI lamp turns off and on with free will!

[–] cypherpunks@lemmy.ml 2 points 11 hours ago* (last edited 11 hours ago)

Do tech journalists at the New York Times have any idea what they're talking about? (spoiler)

'We’re going to talk about these stories.'

The author of this latest advertorial, Kevin Roose, has a podcast called "Hard Fork".

Here he and his co-host attempt to answer the question "What’s a Hard Fork?":

kevin roose: Casey, we should probably explain why our podcast is called “Hard Fork.”

casey newton: Oh, yeah. So our other names didn’t get approved by “The New York Times” lawyers.

kevin roose: True.

casey newton: And B, it’s actually a good name for what we’re going to be talking about. A “hard fork” is a programming term for when you’re building something, but it gets really screwed up. So you take the entire thing, break it, and start over.

kevin roose: Right.

casey newton: And that’s a little bit what it feels like right now in the tech industry. These companies that you and I have been writing about for the past decade, like Facebook, and Google, and Amazon, they’re all kind of struggling to stay relevant.

kevin roose: Yeah. We’ve noticed a lot of the energy and money in Silicon Valley is shifting to totally new ideas — crypto, the metaverse, AI. It feels like a real turning point when the old things are going away and interesting new ones are coming in to replace them.

casey newton: And all this is happening so fast, and some of it’s so strange. I just feel like I’m texting you constantly, “What is happening? What is this story? Explain this to me. Talk with me about this, because I feel like I’m going insane.”

kevin roose: And so we’re going to try to help each other feel a little bit less insane. We’re going to talk about these stories. We’re going to bring in other journalists, newsmakers, whoever else is involved in building this future, to explain to us what’s changing and why it all matters.

casey newton: So listen to Hard Fork. It comes out every Friday starting October 7.

kevin roose: Wherever you get your podcasts.

This is simply not accurate.

Today the term "hard fork" is probably most often used to refer to blockchain forks, which I assume is where these guys (almost) learned it, but the blockchain people borrowed the term from forks in software development.

In both cases it means to diverge in such a way that re-converging is not expected. In neither case does it mean anything is screwed up, nor does it mean anything about starting over.

These people, whose job it is to cover technology at one of the most respected newspapers in the United States, are so clueless that they have an entirely wrong definition for the phrase they chose as the title of their podcast.

"Talk with me about this, because I feel like I’m going insane."

But, who cares, right? "Hard fork" sounds cool and the Times is ON IT.

[–] Dadifer@lemmy.world 16 points 20 hours ago (2 children)

Do humans deserve human rights? A more relevant debate.

[–] easily3667@lemmus.org 3 points 14 hours ago

They don't have them so...

[–] possiblylinux127@lemmy.zip 4 points 19 hours ago* (last edited 19 hours ago)

I wonder if Gemma is actually a white man

It is sadly common for LLMs to be racist and biased against people of color, so maybe they are all secretly white racist males

[–] rockerface@lemm.ee 25 points 22 hours ago

I wish ads felt pain when I skipped them

[–] stoly@lemmy.world 1 points 13 hours ago

This could potentially be a concept in 100 years but is a stupid question for now.

[–] SouthFresh@lemmy.world 9 points 23 hours ago

"Does autocorrect cry when I don't use its corrections?"

[–] Thedogdrinkscoffee@lemmy.ca 24 points 1 day ago

We can't even give humans human rights. AI will have to get in line.

[–] Mammothmothman@lemmy.ca 56 points 1 day ago (2 children)

What kind of fluff "journalism" is this?

[–] toy_boat_toy_boat@lemmy.world 35 points 1 day ago* (last edited 1 day ago) (1 children)

thanks. my first thought was, "are you fucking kidding me?"

but this is what all the money wants us to think about "AI", which is definitely not intelligence. they want everyone to accept that pattern recognition is indistinguishable from intelligence.

edit - alcohol makes me talk in circles

[–] CluckN@lemmy.world 5 points 22 hours ago

It sucks but they do have an audience. I have older family members who swear ChatGPT has a “personality” because it will reply when they thank it.

[–] pwalshj@lemmy.world 30 points 1 day ago

The pride of cancelling my 20 year subscription continues to swell.

[–] Anarki_@lemmy.blahaj.zone 12 points 1 day ago (1 children)

Does my phone feel pain when I drop it?

[–] wabafee@lemmy.world 2 points 1 day ago

Why don't you ask it?

[–] cronenthal@discuss.tchncs.de 23 points 1 day ago (1 children)

Wow, and in the NYT no less. This will make a lot of people a lot more stupid. I guess the AI grift needs to go on for a while longer.

[–] leisesprecher@feddit.org 20 points 1 day ago (1 children)

I really wonder what's going on in the editors' minds here.

The entire premise of the article is "all experts say no, but I think yes" - why would anyone publish this, about any topic? If there were an actual debate, with some contrarian but credentialed experts arguing in favor of sentience, you could make a case for running it. But this article is blatant science denial. Climate change deniers and antivaxxers use the exact same approach: "facts say X, but my feelings say Y".

[–] cronenthal@discuss.tchncs.de 7 points 1 day ago (1 children)

I guess articles like this create high engagement; they are the very definition of rage-bait.

What's saddening is the complete lack of integrity on every level of the publisher. Surely they must know that this is blatant misinformation, but they just don't care.

Stuff like this does have consequences, it shapes the discussion and leads to bad decisions and outcomes. But like in so many instances, everyone is fine with it as long as they can convince themselves that they won't be affected by the results of their own actions.

[–] futatorius@lemm.ee 2 points 22 hours ago

It shows that the East Coast metropolitan elite that is the source of most top-line journalists is collectively pig-ignorant about tech matters. NYT's tech coverage is mainly puff pieces tracking the hype cycle of the tech du jour. I've never seen anything insightful from them. It's like listening to lawyers discuss tech. Without my iron self-control, there would have been so many defenestrations.

[–] futatorius@lemm.ee 4 points 22 hours ago

I know from the way it looks at me that my spreadsheet loves me.

These are the same type of people who believed ELIZA to be sentient.

Ok so: Measure of a Man is one of my all time favorite Star Trek episodes, but come the fuck on. We are so, so far away from that. Maybe worry more about humans, right now, and the world we live in, instead of some nebulous fucking future that we won’t even goddamn reach if we don’t pay attention to, you know, humans and the world we live in.

[–] jmcs@discuss.tchncs.de 12 points 1 day ago* (last edited 1 day ago) (1 children)

Before we even get close to having this discussion, we would need an AI capable of experiencing things and developing an individual identity. And this runs completely opposite to the goals of the corporations developing AIs, because they want something that can be mass deployed, centralised, and as predictable as possible - i.e. not individual agents capable of experience.

If we ever have a truly sentient AI it's not going to be designed by Google, OpenAI, or Deepmind.

[–] pennomi@lemmy.world 5 points 1 day ago (1 children)

Yep, an AI can’t really experience anything if it never updates the weights during each interaction.

Training is simply too slow for AI to be properly intelligent. When someone cracks that problem, I believe AGI is on the horizon.
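The distinction being drawn here, that inference leaves the weights untouched while only a training step changes them, can be sketched in a few lines (a toy illustration, not any real model's API; the network, shapes, and learning rate are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4))

def infer(x, w):
    """Inference: a pure function of the input; the weights are never modified."""
    return np.tanh(x @ w)

def train_step(x, target, w, lr=0.1):
    """A single gradient-style update: only here do the weights change."""
    y = np.tanh(x @ w)
    err = y - target
    grad = np.outer(x, err * (1 - y ** 2))  # backprop through tanh
    return w - lr * grad

x = rng.standard_normal(4)
before = weights.copy()
_ = infer(x, weights)
assert np.array_equal(weights, before)       # chatting with the model changes nothing
weights = train_step(x, np.zeros(4), weights)
assert not np.array_equal(weights, before)   # training is the only thing that does
```

However many times you call `infer`, the model is exactly the same afterward, which is the sense in which a deployed LLM "experiences" nothing between training runs.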

[–] cjoll4@lemmy.world 1 points 17 hours ago (1 children)
[–] pennomi@lemmy.world 1 points 15 hours ago

Artificial General Intelligence, or basically something that can properly adapt to whatever situation it's put into. AGI isn't necessarily smart, but it is very flexible and can learn from experience like a person can.

[–] possiblylinux127@lemmy.zip 1 points 19 hours ago (3 children)

I don't see any reason why this can't be discussed. I think people here are just extremely anti AI. It is almost like forcing AI on people was a bad idea.

[–] nandeEbisu@lemmy.world 3 points 10 hours ago

I think there's a useful discussion for why these technologies can be effective at getting people to connect with them emotionally, but they themselves don't experience emotions any more than a fictional character in a book experiences emotion.

Our mental model of them can, but the physical representation is just words. In the book I'm reading there was a brutal torture scene. I felt bad for the character, but if there was an actual being experiencing that kind of torment, making and reading the book would be horrendously unethical.

[–] lime@feddit.nu 5 points 13 hours ago (1 children)

i don't even understand why it's worth discussing in the first place. "can autocomplete feel?" "should compilers form unions?" "should i let numpy rest on weekends?"

wake me up when what the marketers call "ai" becomes more than just matrix multiplication in a loop.
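The "matrix multiplication in a loop" quip is close to literal: stripped of attention and other details, a neural network forward pass is a loop of matmuls and elementwise nonlinearities. A toy sketch (layer count and sizes are arbitrary for illustration):

```python
import numpy as np

def forward(x, layers):
    """One input's path through a stack of layers: matmul, nonlinearity, repeat."""
    for w in layers:               # the loop
        x = np.maximum(0.0, x @ w)  # a matrix multiply plus ReLU, nothing else
    return x

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(12)]
out = forward(rng.standard_normal(8), layers)
assert out.shape == (8,)
```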

[–] possiblylinux127@lemmy.zip 1 points 12 hours ago

If it were a broad discussion of intelligence then I could see it.

I do agree that we are nowhere close to anything that resembles actual intelligence

[–] easily3667@lemmus.org 1 points 14 hours ago

Nobody forced it on anybody. My work uses gdocs, I just never turned Gemini on. Easy.

[–] zbyte64@awful.systems 8 points 1 day ago

Can our AI fall in love with a human? Scientists laughed at me when I asked them but I found this weird billionaire to pay me to have sex with his robot.

[–] LanguageIsCool@lemmy.world 1 points 20 hours ago

Everybody poops

[–] Rhoeri@lemmy.world 3 points 1 day ago

So… the headline answered the question and people still read the article?

[–] cm0002@lemmy.world 3 points 1 day ago (1 children)

Gemini in its current form? No, but it is a fair question to ask for the future

[–] justOnePersistentKbinPlease@fedia.io 2 points 1 day ago (1 children)

Yeah, twenty years from now at the very least.

[–] KernelTale@programming.dev 7 points 1 day ago (1 children)

Yeah, but it's like fusion. It's been 20 years away for the last 60 years.

Realistically, as a dev who watched AI develop from cheap parlor tricks to very expensive and ecosystem-crunching fancy parlor tricks that managers think will replace all of their expensive staff who actually know how to design and create:

Modern "AI" is fundamentally incapable of actual thought. They are very advanced and impressive statistical engines, but the technology is incapable of thinking at a fundamental level.