We've used the Google AI speakers in the house for years, and they make all kinds of hilarious mistakes. They're also pretty convenient and reliable for setting and executing alarms like "7 AM weekdays" and home automation commands like "all lights off". Otherwise, though, it's hit and miss, and it's very frustrating when they push an update that breaks things that used to work.
MangoCats
I think I'd at least use an OCR program to do the bulk of the typing for me...
Though one thing I have to say: I'm very annoyed by its constant agreement with whatever I say, and by the way it enables me when I'm doing dumb shit. I wish it would challenge me more and tell me when I'm an idiot.
There's a balance to be had there, too... I have been comparing a few AI engines on their code generation capabilities. If you want an exercise in frustration, try to make an old-school keypress-driven application on a modern line-oriented terminal interface while still using the terminal for standard text output. I got pretty far with Claude before my daily time limits kicked in. Claude did all that "you're so right" ego-stroking garbage, but it also got me near a satisfactory solution. Then I moved over to Google AI, and it started out by reading me the "you just can't do that, it won't work" doom and gloom it had picked up from some downer Stack Overflow thread or similar material. Finally, I showed Google my code that was already doing what it was calling impossible, and it started helping me polish the remaining rough spots. But if you believed its first-line answers, you'd walk away thinking that something relatively simple was simply impossible.
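For what it's worth, the core trick it called impossible fits in about a dozen lines. A minimal sketch of the idea in Python, assuming a POSIX terminal (termios doesn't exist on Windows) - not my actual code, just the shape of it:

    # Read single keypresses in cbreak mode while ordinary stdout
    # printing keeps working - POSIX (Linux/macOS) only.
    import sys, termios, tty

    def read_key():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)          # one char at a time, no Enter needed
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    print("press q to quit")           # normal line-oriented output, unaffected
    while True:
        key = read_key()
        if key == "q":
            break
        print("you pressed:", repr(key))

Restoring the old terminal attributes in the finally block is what keeps regular text output sane between keypresses.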
Lately, I have taken to writing my instructions in a requirements document instead of relying so much on interactive mode. It's not a perfect approach, but it seems to be much more stable for "larger" projects where you hit the chat length limits and have to start over with the existing code - what you've captured in the requirements tends to stick around better than treating the existing code as the baseline for how things should be and then adding/modifying from there. Ideally, I'd like the engine to just take my requirements document and make the app from that, but Claude still seems to struggle when total LOC gets into the 2000-5000 range for a 200-ish line requirements spec.
Con-ned-di-cut
Of course, when the question asks "contains the letter _", you might think an intelligent algorithm would get off its tokens and do a little letter-by-letter analysis. Related: ChatGPT is really bad at chess, but there are plenty of algorithms that are superhumanly good at it.
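The check itself is trivial once you step outside the token stream - a one-line sketch in Python, using the stock "strawberry" example:

    # Letter-by-letter count that tokenized models famously fumble.
    word = "strawberry"
    print(sum(1 for ch in word if ch == "r"))   # 3

Nothing stops a model from delegating to a little tool like this instead of guessing from tokens.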
Bubbles and crashes aren't a bug in the financial markets, they're a feature. There are whole legions of investors and analysts who depend on them. Also, they have been a feature of financial markets since anything resembling a financial market was invented.
AI writes code for me. It makes dumbass mistakes that compilers automatically catch. It takes three or four rounds to correct all the random problems that crop up. Above all else, it's got limited capacity - projects beyond a couple thousand lines of code have to be carefully structured and spoon-fed to it - a lot like working with junior developers. However: it's significantly faster than Googling for the information needed to write the code, the way I have been doing for the last 20 years; it does produce good sample code (if you give it good prompts); and it's way less frustrating and slow to work with than a room full of junior developers.
That's not to say we should fire the junior developers, just that their learning specializations will probably be very different from the ones I was acquiring 20 years ago, just as those were very different from the ones programmers used 40 and 60 years ago.
And with transparency, greed loses some of its advantage; we should be eroding those advantages any way we can...
This is where personalization comes in: if everybody can tune the algorithm to their liking with sufficient individuality, then algorithm gamers have a much more diffuse target. Also, if you're getting targeted by abusers you don't want to see, you can already filter that to some degree, but it should be made even easier to "turn down the volume" on abusive groups - abusive being in the opinion of the abused.
What we, as users, deserve is transparency in the algorithm and significant input into how it works for us. Do you like big channels, small channels? Etc. The problem is when people opt out of sponsored content but also refuse to pay. Transparency in the cost of delivering the service and the income from advertising would help there too, except when the service provider wants obscene profits.
Human coder here. First problem: define what "writing code" is. Well over 90% of the software engineers I have worked with "write their own code" - but that's typically less (often far less) than 50% of the value they provide to their organization. They also coordinate their interfaces with other software engineers, capture customer requirements in testable form, and above all else: negotiate system architecture with their colleagues to build large working systems.
So, AI has written 90% of the code I have produced in the past month. I tend to throw away more AI code than I ever threw away of the code I wrote by hand, mostly because it's a low-cost thing to do; I wish I'd had the luxury of throwing away code and starting over like that in the past. What AI hasn't done is put together working systems of any value - it makes nice little microservices. If you architect your system as a bunch of cooperating microservices, AI can be a strong contributor on your team. If you expect AI to get any kind of "big picture" and implement it down to the source code level, your "big picture" had better be pretty small - nothing I have ever launched as a commercially viable product has been that small.
Writing code / being a software engineer isn't like being a bricklayer. Yes, AI is laying 90% of our bricks today, but it's not showing signs of being capable of designing the buildings, or even of evaluating the structural integrity of anything taller than maybe two floors.