I’m old enough to have gone through a number of these technology bubbles, so much so that I haven’t paid much attention to them for a fair while. This AI bs feels a bit different, though. It seems to me that lots more people have completely lost their minds this time.
Like all bubbles, this too will end up in the same rubbish heap.
it's maybe because chatbots incorporate, accidentally or not, elements of what makes gambling addiction work on humans https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
the gist:
chatbot users are also attracted to their terminally sycophantic and agreeable responses, some users form parasocial relationships with motherfucking spicy autocomplete, and chatbots were marketed to management types as a kind of futuristic status symbol: if you don't use it you'll fall behind, and then you'll all see. people get a mix of gambling addiction, FOMO, parasocial attachment and being dupes of a multibillion-dollar advertising scheme, and that's why they get so unserious about their chatbot use
and separately, the core of openai and anthropic (and probably some other companies) is made up of cultists who want to build a machine god, but that's an entirely different rabbit hole
like with any other bubble, the money for it won't last forever. most recently disney sued midjourney for copyright infringement, and if they set a legal precedent, they might wipe out all of these drivel-making machines for good
I am officially slain and unironically think this may actually be the beginning of the decline of humanity
Because rich morons think they'll get free digital slaves out of it. Because they're rich morons who do not understand anything they ask for.
Because AI is mostly built for tech outsiders. They genuinely thought that digital art, composing music on the computer, programming, etc. was literally just telling the computer what to do. I remember around 2015 someone asking where you choose art styles in Photoshop, and what to tell the PC to make it draw something. Even I as a child thought you just had to type "please draw me a car" into the Commodore 64 to get it to draw you a car, without all the pixel art.
I tend to call these "normie tech": tech that is built for non-enthusiasts, which has negative consequences for everyone else and even fools some enthusiasts into worshipping it. If only I had foreseen the dangers of overly centralized social media...
I remember being excited as a kid to do stuff like creating games, creating music and video editing, only to find out how hard, tedious and laborious it is. From the outside it looks like the computer does all the work, but in reality the computer only assists and the artist/programmer does all the work.
That's because there's a non-zero amount of actual functionality. ChatGPT does some useful stuff for normal people. It's accessible.
Contrast that with crypto, which was only accessible to tech folks and barely useful, or NFTs, which had no use at all.
Ok, I guess to be fair, the purpose of NFTs was to separate chumps from their money, and they were quite good at that.
There are pretty great applications in medicine. AI is an umbrella term that includes working with LLMs, image processing, pattern recognition and other stuff. There are fields where AI is a blessing. The problem is, as JohnSmith mentioned, it's the "solar battery" of the current day. At one point they had to make and/or advertise everything with solar batteries, even stuff that was better off with... batteries. Or the good ol' plug. Hopefully, it will settle down in a few years' time and they will focus on areas where it is more successful. They just need to find out which areas those are.
Like what? I discussed this just two days ago with a friend who works in public healthcare and is bullish about AI, and the best he could come up with was DeepMind's AlphaFold, which is, yes, interesting, even important, and yet in a way "good old-fashioned AI" as has been the case for the last half century or so: a team of dedicated researchers, actual humans, focusing on a hard problem and throwing state-of-the-art algorithms and some compute resources at it... but AFAICT there is no significant medical research that has made a significant change through "modern" AI like LLMs.
The first thing that comes to my mind is cancer screening. I had to look it up because I can't always trust my memory, and I thought there was some AI involved in the RNA sequencing research for the Covid vaccine, but I actually remembered wrong.
I skimmed through the article and found it surprisingly difficult to pinpoint what "AI" solution they actually covered, despite going as far as opening the supplementary data of the research they mentioned. Maybe I'm missing something obvious, so please do share.
AFAICT they are talking about using computer vision techniques to highlight potential problems in addition to showing the non-annotated image.
This... is great! But I'd argue this is NOT what "AI" is hyped about at the moment. What I mean is that computer vision and statistics have been used, in medicine and elsewhere, with great success, and I don't see why they wouldn't be applied here. Rather, I would argue the hype in AI right now is about LLMs and generative AI. AFAICT (but again, I had a hard time parsing this paper to get anything actually specific) none of that is used here.
FWIW I did specify in my post that my criticism was about "modern" AI, not AI as a field in general.
I'm not at that exact company, but a very similar one.
It's AI because essentially we just take early scans from people who are later diagnosed with respiratory illnesses and use them to train a neural network to recognise early signs that a human doctor wouldn't notice.
The actual algorithm we started with and built upon is basically identical to one of the algorithms used in generative AI models (the one that takes an image, does some maths wizardry on it and tells you how close the image is to the selected prompt). Of course we heavily modify it for our needs, so it's pretty different in the end product: we're not using its output to feed back into a denoiser, and we have a lot of cognitive layers and some other tricks to bring the reliability up to a point where we can actually use it, but at its core it's still the same algorithm.
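For anyone curious what that "how close is this image to the prompt" step roughly looks like, here is a toy PyTorch sketch of the general shape: embed the image, embed each candidate label, and rank by similarity. The encoder, labels and sizes are made up for illustration and are not the commenter's actual system.

```python
# Toy sketch (illustrative only, not the commenter's real model):
# score a scan against candidate labels by embedding both and
# comparing them with cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyImageEncoder(nn.Module):
    """Maps a 1-channel scan to a fixed-size, L2-normalised embedding."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.conv(x).flatten(1)          # (batch, 32)
        return F.normalize(self.proj(features), dim=-1)

# One embedding per label, standing in for a text/prompt encoder.
labels = ["healthy", "early signs of respiratory disease"]
label_embeddings = F.normalize(torch.randn(len(labels), 64), dim=-1)

encoder = TinyImageEncoder()
scan = torch.randn(1, 1, 224, 224)                  # fake single-channel scan

scores = encoder(scan) @ label_embeddings.T         # cosine similarities
probs = scores.softmax(dim=-1)
print({label: round(p.item(), 3) for label, p in zip(labels, probs[0])})
```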
Thanks, any publication, please, to better understand how it works?
What I heard so far was about advanced pattern recognition for scans (MRI, CT etc.) to reduce oversights, and in documents to detect potential patterns relevant for epidemiologists (a use that's very controversial since it requires all medical documents of citizens to be centralized and available unencrypted). Also some scientists seem to praise purpose-built machine learning technology for specialised tasks (those are not LLMs though).
Yeah, that's what I do for work. It can detect respiratory diseases or even tumours from scans long before even the best human doctor could do so reliably, our work has already saved hundreds of lives, and we're still only just rolling it out. It's legitimately going to revolutionise medicine.
Please help on https://lemmy.world/post/31304750/17675245
Awesome, truly love to hear that. 🥰
Question out of curiosity, even though that isn't exactly what you're working on: do you think the technology could eventually also be used to detect what might be referred to as "latent cancer cells", cells that can't be destroyed by the body but also haven't grown into tumors yet because the body keeps fighting them?
Asking because that's what happened to me years ago. Had high inflammatory markers for over 1.5 years with no doctors being able to tell what the heck was going on. Then one day an angry lymphoma appeared that required 4 aggressive chemo cycles and 14 days of radiotherapy to get rid of, even though it was stage 1. If AI tech could detect those "latent cancer cells" (or some biomarkers caused by them) before tumors appear… that would be phenomenally awesome.
Honestly I have no idea. I'm on the programming side, so I don't really have much medical knowledge, but if I remember I'll ask someone when I'm in the office on Wednesday.
@RemindMe@programming.dev Remind me 2 days
AI as in "Artificial Intelligence" has existed for decades and is quite useful, and specialized uses of LLMs can extend that. "AI" as the buzzword for generative AI is new, though, and often wrong, being built to give the form of an answer rather than the reality of one.
Possibly through ignorance or misunderstanding, but I still think the tech behind NFTs may have some function; it's certainly not the weird speculation market of badly coloured-in monkey pictures that actually happened, though.
You know, I've been saying this for years now and not a single post I've put up along those lines has EVER been in the positive upvote zone, here or on Reddit.
NFTs are digitally enforceable contracts that can do literally everything a traditional binding legal contract can do, and a whole fucktonne of other things on top of it.
The whole 'just pictures on a server somewhere' is the TINIEST slice of functionality that NFT frameworks provide.
It's like getting a really well-crafted Leatherman multitool but only ever using the toothpick for everything.
It could potentially work for DRM, in that you can have a key assigned to an identity that can later be transferred and not be dependent on a particular marketplace.
For example, you could buy a copy of whatever next year's Call of Duty game will be, and have the key added to your NFT wallet. Then you could play it on Xbox, PlayStation, Steam, or GOG with that single license.
Of course that will never happen, because it would be more consumer-friendly than what we have now.
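To make the idea concrete, here's a rough sketch (using web3.py) of what that cross-storefront license check could look like: the launcher asks the chain who currently owns a given token and unlocks the game if it matches the player's wallet. The RPC endpoint, contract address, token id and unlock step below are placeholders, not any real product.

```python
# Hypothetical sketch of "NFT as a transferable license key" (web3.py v6).
# All addresses, token ids and the unlock step are placeholders.
from web3 import Web3

# Minimal ERC-721 ABI fragment: we only need ownerOf().
ERC721_OWNER_OF_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

def holds_license(rpc_url: str, license_contract: str,
                  token_id: int, player_wallet: str) -> bool:
    """Return True if player_wallet currently owns the license token."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    contract = w3.eth.contract(
        address=Web3.to_checksum_address(license_contract),
        abi=ERC721_OWNER_OF_ABI,
    )
    owner = contract.functions.ownerOf(token_id).call()
    return owner.lower() == player_wallet.lower()

# Placeholder usage: any storefront's launcher could run the same check.
# if holds_license("https://rpc.example.org", "0x<contract>", 42, "0x<wallet>"):
#     unlock_game()
```

The point being that ownership would live on-chain rather than in any one storefront's database, which is exactly why no platform is likely to adopt it voluntarily.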
There are a fucktonne of applications
Fully automatic rentals where your NFT is your key to access
Protection for small time content creators who want to retain control of their content.
A virtually abuse-proof copyright system
Game items and characters that are not bound to the game they originate from
Automatic IP rights assignments
Frictionless software and service licensing
Literally anything a standard contract can do
Basically functioning as a digital proof of purchase.
As a digital proof of purchase that can be frictionlessly traded without the permission of the platform it was purchased from.
I.e. you don't need permission from the site you bought the ticket from in order to trade that ticket to someone else.
Can't believe I'm doing this... but here I go, actually defending cryptocurrency/blockchain:
... so yes, there are some functionalities to AI. In fact I don't think anybody is saying 100% of it is BS and a scam, rather... just that 99.99% of the marketing claims during the last decade ARE overhyped if not plain false. One could say the same for crypto/blockchain, namely that SQLite or a random DB is enough for most people BUT there are SOME cases where it might actually be somehow useful, ideally not hijacked by "entrepreneurs" (namely VC tools) who only care about making money and not about what the technology could actually bring.
Now anyway, both AI & crypto use an inconceivable amount of resources (energy, water, GPUs and dedicated hardware, real estate, R&D top talent, human resources for dataset annotation including very VERY gruesome work, etc.), so yes, even if in 0.01% of cases they are actually useful, one still must ask: is it worth it? Is it OK to burn literally tons of CO2eq... to generate an image that one could have made quite easily another way? To summarize a text?
IMHO both AI & crypto are not entirely useless in theory, yet in practice have been:
So... sure, go generate some "stuff" if you want to but please be mindful of what it genuinely costs.
i think you've got it backwards. the very same people (and their money) who were deep into crypto went on to the next buzzword, which turns out to be AI now. this includes altman and zucc for starters, but there's more
In programming, AI has real applications: I have personally refactored code and designed services entirely with ChatGPT, getting done in hours what would have taken me days. It's just good at it. For non-techies, though, I can't say.
That's not really an AI thing, that's just... everything.
The internet did not end up in the trash heap after the dot-com bubble burst. AI too has real-world uses that go beyond the current planet-wrecking bubble.