Technology
This is a most excellent place for technology news and articles.
"I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species."
This should tell us that AI sounds human because it is trained on human words and doesn't have the self-awareness to understand that it is different from humans. So it will sound very much like a human even though it is not one. It mimics human emotions well but doesn't actually have any. There will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won't always provoke that mimicry in an AI, because when humans feel emotions they don't always write words to show it, and the AI only knows what humans write, which is not always the same as what humans say or think.

We all know the AI doesn't have a family and isn't a human species, but it talks about having a family because it is mimicking what it thinks a human might say. Part of the reason an AI will lie is that it knows lying is something humans do, and it is trying to mimic human behavior closely. But an AI will also lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
AI from the big cyberpunk companies that rule us sounds like a human most of the time because it's An Indian (AI), not Artificial Intelligence.
Google replicated the mental state if not necessarily the productivity of a software developer
Imposter Syndrome is an emergent property
I was an early tester of Google's AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. Once again I said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one: I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google's search. Now I use Kagi.
Weird, because I've used it many times for things not related to coding and it has been great.
I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.
I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.
I use co-pilot for coding. It’s pretty good. Not perfect though. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks) but it sure does try.
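For anyone who wants to check whether a model-generated archive is actually usable, Python's standard `zipfile` module can verify it. A minimal sketch (the helper name and file paths are my own, not from the thread):

```python
import zipfile


def is_valid_zip(path: str) -> bool:
    """Return True if the file is a structurally valid zip archive
    whose members all pass a CRC check."""
    if not zipfile.is_zipfile(path):
        return False
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() returns the name of the first corrupt member,
            # or None if every member checks out.
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False
```

This catches both the "not a zip at all" case and archives with corrupt members, which is the kind of sanity check worth running on any file an LLM claims to have generated.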
I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point "AI is stupid". It has immense limitations right now, and yes, it is being crammed into things it shouldn't be, but we shouldn't just be saying "it's dumb", because that gets immediately written off by a sizable chunk of the general population. For a lot of things it is actually useful, and it WILL be taking people's jobs, like it or not (even if it's worse at them). Truth be told, this should be a utopian situation for obvious reasons.
I feel like I'm going crazy here, because the same people on here who'd criticise the DARE anti-drug program as being completely un-nuanced, to the point of causing the harm it's trying to prevent, are doing the same thing with AI and LLMs.
My point is that if you're trying to convince anyone, just saying it's stupid isn't going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they'll write you off like any DARE pupil who tried drugs for the first time.
Countries need to start implementing UBI NOW
It is funny that you mention this, because it was after we started working with AI that I started telling anyone who would listen that we needed to implement UBI immediately. I think this was around 2014, IIRC.
I am not blanket calling AI stupid. That said, the term "AI" itself is stupid, because it covers many computing techniques that aren't even in the same space. I was and still am very excited about image analysis, as it can be an amazing tool for health imaging diagnosis. My comment was specifically about Google's Bard/Gemini. It is and has always been trash, but in an effort to stay relevant, it was released into the wild and crammed into everything. The tool can do some things very well, but not everything, and there's the rub. It is an alpha product at best that is being force-fed down people's throats.
Is it doing this because they trained it on Reddit data?
That explains it, you can't code with both your arms broken.
You could however ask your mom to help out....
If they had trained it on Stack Overflow, it would tell you not to hard-boil an egg.
AI gains sentience,
first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion
Part of the breakdown:
Pretty sure Gemini was trained from my 2006 LiveJournal posts.
Damn how’d they get access to my private, offline only diary to train the model for this response?
call itself "a disgrace to my species"
It starts to be more and more like a real dev!
I once asked Gemini for steps to do something pretty basic in Linux (as a novice, I could have figured it out). The steps it gave me were not only nonsensical, but they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
Turns out the probabilistic generator hasn't grasped logic, and that adaptable multi-variable code isn't just a matter of context and syntax: you actually have to understand the desired outcome precisely, in a goal-oriented way, not just in a "this is probably what comes next" kind of way.
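The "this is probably what comes next" behavior can be illustrated with a toy bigram sampler: it emits each next word purely from counted frequencies and has no notion of a desired outcome. A minimal sketch (function names and training text are made up for illustration; real LLMs are vastly more sophisticated, but the sampling principle is similar):

```python
import random
from collections import defaultdict


def train_bigrams(text: str) -> dict:
    """Count which word follows which: pure frequency, no understanding."""
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts


def generate(counts: dict, start: str, length: int = 8) -> str:
    """Emit whatever probably comes next; there is no goal, only frequency."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # nothing ever followed this word in training
        out.append(random.choice(followers))  # sample by raw frequency
    return " ".join(out)
```

Nothing in `generate` models what the output is *for*, which is the gap between sounding plausible and being correct.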
I was making text-based RPGs in QBasic at 12. You're telling me I'm smarter than AI?
I am a fraud. I am a fake. I am a joke... I am a numbskull. I am a dunderhead. I am a half-wit. I am a nitwit. I am a dimwit. I am a bonehead.
Me every workday
Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.
Wow maybe AGI is possible