SunsetFruitbat

joined 2 years ago
[–] SunsetFruitbat@lemmygrad.ml 11 points 16 hours ago* (last edited 16 hours ago)

I'm going to feel sorry for them, considering that the people using it are generally those who don't have anyone else to talk to, or who feel like they can't. They shouldn't be punished by cops showing up, possibly to kill them, because some people reviewing a flagged conversation decided it was worth calling the pigs. Just because they used ChatGPT to talk about something doesn't mean they should be punished for it.

I think it's worth asking why someone would go to ChatGPT to talk about their issues in the first place. If anything, this just shows a heightening contradiction in a lot of societies where mental health is either not taken seriously or outright punished, plus a few other factors like alienation. In a way it reminds me of how people say to reach out if you're struggling, but if you start talking about suicidal thoughts, the response tends to be "go away" or "go talk to a therapist" who, much like OpenAI, will call the police depending on how specific you are about those suicidal thoughts, or whether there are thoughts of harming others. Mostly because lots of people don't know how to handle these things, don't take them seriously, or belittle someone going through a mental health crisis. And that's not even getting into other things, like whether someone can afford to see a therapist or mental health professional at all.

[–] SunsetFruitbat@lemmygrad.ml 17 points 16 hours ago* (last edited 16 hours ago)

It's not too surprising that ChatGPT will resort to flagging conversations for people to decide whether or not to call the police, especially with the push to hold places like OpenAI accountable for things like suicide; this seems to be partly a response to that and to other mental health episodes. It's worth noting that it's less the LLM doing this and more a human review team. It's also worth pointing out that a lot of other online platforms work the same way. If I posted very specific information about harming myself or others somewhere else, it would definitely be reviewed by people and might or might not be reported to the police.

More reason to use alternatives like DeepSeek or, even better, local LLMs for privacy. Especially since OpenAI can get fucked for lots of reasons.

Last month, the company's CEO Sam Altman admitted during an appearance on a podcast that using ChatGPT as a therapist or attorney doesn't confer the same confidentiality that talking to a flesh-and-blood professional would — and that thanks to the NYT lawsuit, the company may be forced to turn those chats over to courts.

Also, this is funny! Therapists will do the exact same thing: if you come in with specific plans to hurt yourself or others, they will call the police. They have to. I'm speaking from personal experience of having a psychiatrist call the police on me. It would be nice if mental health systems could become detached from the carceral system. As it stands, that link turns mental health care into a form of social control and oppression that punishes someone for being in a mental health crisis.

[–] SunsetFruitbat@lemmygrad.ml 2 points 3 days ago* (last edited 3 days ago)

I also reported it because seeing comments like that is generally upsetting. This isn't to defend the shooter, because I don't care much about the shooter. It's more that I had a relative who almost went to prison for life because of three-strikes bs, and he wasn't stupid.

[–] SunsetFruitbat@lemmygrad.ml 16 points 6 days ago

Not to mention how the American Psychological Association, for example, contributed to the CIA torture program generally and at Abu Ghraib. Then, to go to something else, there's how psychiatry was used in colonization. To quote what Fanon said:

[...]Since 1954 we have drawn the attention of French and international psychiatrists in scientific works to the difficulty of “curing” a colonized subject correctly, in other words making him thoroughly fit into a social environment of the colonial type.

[–] SunsetFruitbat@lemmygrad.ml 12 points 2 weeks ago* (last edited 2 weeks ago)

China is literaly Israel`s biggest trading partner as of end 2024.

Nonsense and misleading. China isn't "Israel's" largest trading partner. Going by your own source, the United States is, with Ireland second. To quote it: "The biggest importers of Israeli products were the United States with $17.3bn, Ireland with $3.2bn and China with $2.8bn." Also important to note that the U.S. is there at $17.3bn compared to China's $2.8bn.

If you're talking about what "Israel" buys from other countries, that overlooks the fact that lots of countries buy from China. But let's go into specifics. What "Israel" buys from China: "China primarily exported electric vehicles, mobile phones, computers and metals." Compare that to the United States: "The United States sold Israel explosive munitions, diamonds, electronics and chemical products. Israel receives billions in US military aid, much of which is spent on American-made weapons, effectively boosting US exports."

Which is a world of difference. To add, it feels silly linking that 2018 article questioning whether China is abandoning Palestine, considering the Beijing Declaration from last year. Perhaps it would be better to focus on the United States, which uses "Israel" as a forward base? Even if China stopped trading, it wouldn't change much, given how much support "Israel" gets from the United States and the other global north countries that continue to prop it up. The issue isn't with China, but with the United States.

[–] SunsetFruitbat@lemmygrad.ml 1 points 1 month ago* (last edited 1 month ago)

Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on the practitioners associated with using descriminative models to do object detection or image segmentation in endoscopy and those of using generative models to accomplish other tasks

You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model do it and maybe doing something else with their attention (maybe not, only the practitioner can know what else they might do).

But we're specifically talking about discriminative models here, not generative ones. They aren't using generative models for this. If you want to talk about generative models and the other tasks that actually use them, go for it, but that has nothing to do with CADe, since CADe doesn't use generative models. I do understand that you're trying to connect the two through your claim that they're "forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model" do it.

But doesn't that have less to do with AI and more to do with tools in general, since tools can also cause 'neurological effects'? Lots of other things fall under this, especially if we leave out the machine learning part. Health professionals, for example, tend to use pulse oximeters to take heart rate as well as oxygen levels while they're busy doing something else, like asking questions, before glancing at the reading, judging whether it seems right, and noting it down. They too are using a tool in place of attention they could have spent on the problem themselves: they could easily take a heart rate manually, but usually they don't unless something warrants further investigation. Obviously pulse oximeters aren't generative or discriminative AI; it's just an example of attention being "forfeited".

Either way, it just seems disingenuous, since it focuses on a superficial similarity, the "practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model do it", and treats the two as if they're the same. Each has a distinct purpose. Sure, if we focus only on the general form of these tools, both being products of machine learning, then they're the same, but in their content or essence there is a clear difference in what they're built to do. They function differently and do different things; otherwise there would be no distinction between discriminative and generative at all. That distinction only disappears if we go by a superficial similarity like taking attention away, which these tools also share with plenty of other tools that aren't discriminative or generative AI.
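Just to make that distinction concrete, here's a toy sketch of the interface difference I'm talking about. None of this is a real model; the two functions are hypothetical stubs, only there to show that a discriminative detector takes an existing frame and returns labels/boxes over it, while a generative model takes a prompt and produces brand-new content.

```python
# Toy sketch only: hypothetical stubs, not any real CADe or generative system.
from dataclasses import dataclass
from typing import List


@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int
    score: float  # confidence that this region looks like a polyp


def discriminative_detect(frame: List[List[int]]) -> List[Box]:
    """Discriminative / CADe-style: locate structure that already exists in the frame."""
    # Placeholder: a real detector would run a trained CNN over the frame here.
    return [Box(x=40, y=60, w=32, h=32, score=0.91)]


def generative_produce(prompt: str) -> List[List[int]]:
    """Generative-style: synthesize brand-new content that never existed."""
    # Placeholder: a real model would sample from a learned distribution.
    return [[0] * 64 for _ in range(64)]


if __name__ == "__main__":
    fake_frame = [[0] * 64 for _ in range(64)]
    print(discriminative_detect(fake_frame))        # boxes over the existing frame
    print(len(generative_produce("a colon wall")))  # a newly made-up 64x64 image
```

Same training machinery underneath, completely different job: one annotates what the camera already sees, the other fabricates something new.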

I didn’t mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still do that function themselves, compare their own ideas of where to look in the image with the model’s output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by their model. We don’t know and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It’s probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in way that avoids eroding the user’s competence.

I have to ask: what "both regions"? This is all happening on a live camera feed; there is only one region, the camera feed, or rather the person's insides that they're investigating. If CADe sees something it thinks is a polyp and highlights it on the camera feed for the health professional to look at further, there isn't any other region. CADe isn't producing a different model of someone's insides and putting that on the camera feed. It does produce a rectangle to highlight something, but that still falls within the region of the camera feed of someone's insides. Either way, a health professional still has to look further inside someone and investigate.
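For what it's worth, here is roughly what that rectangle amounts to in code. This is a rough sketch under big assumptions, not any vendor's actual software: it uses OpenCV with a webcam standing in for the scope feed, and detect_polyps() is a made-up stub where the trained model would go.

```python
# Rough illustrative sketch, assuming OpenCV (pip install opencv-python) and a
# webcam at index 0 standing in for the endoscope feed. detect_polyps() is a
# hypothetical stub, not a real CADe model.
import cv2


def detect_polyps(frame):
    # Stand-in for the trained detector: a real system would return
    # boxes + confidence scores computed from this frame.
    h, w = frame.shape[:2]
    return [(w // 3, h // 3, w // 4, h // 4, 0.90)]


cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, bw, bh, score) in detect_polyps(frame):
        # The overlay is the entire output: a highlight drawn on the same
        # feed the clinician was already watching.
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
        cv2.putText(frame, f"check this ({score:.2f})", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

There's still only the one feed; the model just marks a spot on it for a human to go look at.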

I can see the argument that someone performing a colonoscopy with CADe might rely mainly on the program to highlight things while just passively looking around, but that speaks to the individual rather than the AI, and it amounts more to negligence. Another thing to note is that there are false positives, so even someone relying on it to highlight things still has to investigate further instead of taking the highlight at face value, which still requires competency, like determining whether a mass is a polyp or something normal. And nothing stops a health professional without CADe from being negligent either: missing subtler polyps because they don't immediately recognize them and moving on, seeing what they think at first glance is a polyp only for it to turn out not to be one after further investigation, or, if they're being negligent, skipping the further investigation and just calling something a polyp at the end of the day when it isn't one.

Either way, I don't really see that CADe changes much beyond prompting health professionals to investigate something further. The one scenario where I can see it being an issue is if they couldn't access the tool due to technical issues or other reasons; then yes, deskilling would be a problem. On the other hand, they could just reschedule or wait until it's back up, though that still sucks for the patient considering the preparation those exams require. But that's only an issue if over-reliance is an issue, which I can see maybe being a problem to an extent. At the same time, CADe pushes health professionals to investigate further, since it doesn't solve anything on its own, and just because they use it doesn't mean their competency is thrown out the window.

Besides, those things can likely be addressed in ways that reduce deskilling while still using CADe. CADe in general does seem helpful to an extent; it's just another tool in the toolbox, and a lot of these criticisms, like the ones about attention or deskilling, fall outside of AI and into more general territory that isn't unique to AI but applies to tools and technology as a whole.

[–] SunsetFruitbat@lemmygrad.ml 1 points 1 month ago* (last edited 1 month ago) (2 children)

How so? How is it related to generative AI, then? It's not generating images or text. Sure, it's trained like any other machine learning system, but going by various videos on computer-aided polyp detection, it just puts a green rectangular box over what it thinks is a polyp for the health professional to investigate or check out. Other videos on computer-aided polyp detection show the same thing. It's literally image recognition, and that's it. So genuinely, how is that generative "AI" or related to it? What images is it generating? Phones and cameras have the same kind of function, using image recognition to detect faces and such. Is there something wrong with image recognition? I'm genuinely confused here.

in both cases, a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a “machine learning will do it” button

That is such a bad framing of this, especially if you willingly overlook how even this news article and the study mention that CADe has improved detection numbers:

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

and to go to that comment from Omer that the news article linked:

A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

To add, just watch the video linked, or find other videos of computer-aided polyp detection, and tell me how the machine is solving the problem. Or this video, which covers it more in depth: https://www.youtube.com/watch?v=n9Gd8wK04k0 titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He..." from SAGES.

But again, the health professional is looking at a monitor showing a live camera feed, and all the technology does is highlight things in a green rectangular box for them to investigate further. How is the machine taking their attention away when it's telling health professionals "hey, you might want to take a closer look at this!" and directing their attention toward something to investigate further? Seriously, I don't understand. How is that bad?

Another thing: just because a health professional is putting their attention on something doesn't mean they won't miss something, and again, I don't see what's wrong with CADe highlighting something the health professional might have missed, especially since all it does is highlight things on a monitor showing the camera feed for a health professional to check out further.

What am I missing here?

[–] SunsetFruitbat@lemmygrad.ml 1 points 1 month ago* (last edited 1 month ago)

I think it does, since the AI in question is simply image recognition software. I don't exactly see how it affects cognitive abilities. I can see how it can affect skills, sure, and perhaps there could be an over-reliance on it! But for cognitive abilities themselves, I don't see it. Something else: it's important to note that the drop is measured relative to non-AI-assisted detection, and that doesn't necessarily mean it's bad. To go to the news article under all of this:

AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

and to go back to that comment from Omer:

A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

It could be argued that the AI helped more. However, I think the better questions are: if AI is helping health professionals detect more, what is the advantage of going back to non-AI-assisted? Why should non-AI-assisted be preferred if AI-assisted helps more? Is this really a problem, and if it is, what would help? I think it's clear that it does help to an extent, so just getting rid of it doesn't seem like a solution, but I don't know! I'm not a health professional who works with this stuff or is involved in this work.

There is a video that covers more about CADe here: https://www.youtube.com/watch?v=n9Gd8wK04k0 titled "Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He..." from SAGES.

I just genuinely don't see what's wrong with CADe, especially if it helps health professionals catch things they might have missed to begin with. Again, CADe simply highlights things for health professionals to investigate further; what's wrong with that?

To add, just because something is mentally outsourced doesn't necessarily mean that's bad. I don't think Google Maps killed people's ability to navigate; it just made it easier, no? Should we go back to a compass and paper maps? Or, even further, navigate by the stars and bust out our sextants? Besides, mental offloading can be good, freeing us up to do more; it just depends on what the end goal is. I don't necessarily see what's wrong with mentally offloading things.

I also don't understand your example about getting run over. I wouldn't want to get hit by either vehicle, since both can kill or cause lifelong injuries.

I'm not going to go into those other articles much since that's veering into another topic. I do understand that LLMs have a tendency to make people over-reliant on them or to take their output at face value, but I don't agree with the notion that they're making things worse or causing something as serious as cognitive impairment; that is a very big claim, and millions and millions of people are using this stuff. I do think, however, that there should be more public education on these things, like using LLMs properly and not taking everything they generate at face value. To add, alongside a lot of those studies I would be interested in what studies are coming out of China too, since they also have this stuff.

What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.

Somehow I was supposed to get that from this?

Chronic AI use has functionally the same effects as deep frying your brain

It's a bit unfair to assume I was somehow supposed to get that from that single sentence, because what you said here is a lot different from the other statement. And with that added context, forgive me! There's nothing wrong with it, given that added context.

cw: ableism

It's just that I really don't like it when criticism of AI turns into people saying others are getting "stupid". Besides the ableism there, millions of people use this stuff, and it also reeks of, how do I word this, "everyone around me is a sheep and I'm the only enlightened one" kind of stuff. People aren't stupid, and neither are the people who use any of this, and I dislike this framing especially when it's framed as this stuff causing "brain damage", when it isn't, and your comment without that added context felt like it was saying that.

[–] SunsetFruitbat@lemmygrad.ml 2 points 1 month ago* (last edited 1 month ago) (2 children)

Except it doesn't, any more than using a computer "deep fries" someone's brain, and framing this stuff in terms of "deep frying" feels gross and dehumanizing. Besides that, the study mainly seems to be about something like image recognition, and the AI in question has little to do with generative AI or LLMs. The study doesn't even mention LLMs.

And to go back to that news article, they reference this comment by Omer Ahmad near the bottom:
https://info.thelancet.com/hubfs/Press%20embargo/AIdeskillingCMT.pdf

Computer-aided polyp detection (CADe) in colonoscopy represents one of the most extensively evaluated uses of AI in medicine, demonstrating clinical efficacy in multiple randomised controlled trials (RCTs).”

using this different article https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence

These systems are based on software that, as the colonoscope snakes through the colon, scans the tissue lining it. The CAD software is “trained” on millions of images from colonoscopies, allowing it to potentially recognize concerning changes that might be missed by the human eye. If its algorithm detects tissue, such as a polyp, that it deems suspicious, it lights the area up on a computer screen and makes a sound to alert the colonoscopy team.

So how exactly is this causing "deep frying"?

[–] SunsetFruitbat@lemmygrad.ml 3 points 1 month ago* (last edited 1 month ago) (5 children)

But the study isn't even about LLMs? It doesn't really say, but this video talks about the kind of AI used for this stuff: https://www.youtube.com/watch?v=mq_g7xezRW8

which... really isn't LLMs? I'm going to assume the AI in question is like the one in that video, and it just reminds me of image recognition software. To add, nowhere does the study even mention LLMs. Rather unrelated, but hearing others talk about someone's "brain atrophy" also just feels gross, since it comes across as dehumanizing.

To go back to that news article, they reference this: https://info.thelancet.com/hubfs/Press%20embargo/AIdeskillingCMT.pdf in which they refer to

Computer-aided polyp detection (CADe) in colonoscopy represents one of the most extensively evaluated uses of AI in medicine, demonstrating clinical efficacy in multiple randomised controlled trials (RCTs)."

and to use this different article https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence

These systems are based on software that, as the colonoscope snakes through the colon, scans the tissue lining it. The CAD software is “trained” on millions of images from colonoscopies, allowing it to potentially recognize concerning changes that might be missed by the human eye. If its algorithm detects tissue, such as a polyp, that it deems suspicious, it lights the area up on a computer screen and makes a sound to alert the colonoscopy team.

So it is image recognition and has nothing to do with generative AI.

[–] SunsetFruitbat@lemmygrad.ml 10 points 2 months ago

It's more like the latter, since there was also this from a few months ago: https://www.aljazeera.com/news/2025/5/8/israel-retrofitting-dji-commercial-drones-to-bomb-and-surveil-gaza [Israel retrofitting DJI commercial drones to bomb and surveil Gaza]
