If you've ever seen what AI does when you ask it to write code for you, you'd understand why code quality is declining.
We just hired this new guy who uses AI for everything, which often leads him astray and into arguments he is definitely not prepared for.
Those are excellent use cases for AI, but it's not a magic bullet. It can't do everything for you, and it can leave you stranded, especially if you're not willing to fact-check it. It's well known that LLMs hallucinate, or straight up lie to you about what they know. So in niche cases, which is what I do and what we hired this guy to do, it's often not effective. For every silver-bullet answer it hands you, it's just as likely to be confidently wrong.
I have seen this dude use AI to assert things that are absolutely not true, like claiming that setting a very high UID can balloon a Docker image to an absurd size nearing 500 gigabytes.
He also tried to use it to lecture me on how the auditors don't audit our company correctly, how we're actually doing things completely wrong, and how he's the guy to fix it all, if we'd just give him a little time to whip everybody into shape.
LLM tools are excellent when treated with respect and when their limitations are understood. Unfortunately, far too many people believe it's a magic talking box that always knows everything and always has the right solution. 😮‍💨
I mean, this joker is so ridiculous that he can't even figure out how to use the AWS CLI correctly or how to set up GitHub deploy keys for a repo. We asked him if he was comfortable working with Puppet, or at least capable of figuring it out, and he looked like we'd asked him to touch a hot stove. Did I mention this joker has 15 years of experience doing stuff like this?
When I looked at his code, it reeked of AI, full of anti-patterns I normally only see in purely LLM-generated code.