[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (4 children)

And as I already pointed out above, the problem here isn't with automation but with capitalism. In a sane system, automation would mean more free time for people, and less tedium. People are doing these jobs not because they want to be doing them, but because it's a way to survive in this shitty system.

There are certainly bad programming jobs, but programming jobs in general are extreme labor aristocracy. Yes, people are chasing the bag, but these are certainly not "survival jobs". Within the system, until you reach senior level, there is no real discriminator between "bag chaser" and "person who is trying to learn"; both are going to get squad wiped.

There's certainly still going to be a path to being an SE. But it's going to be autodidact hobbyists who start extremely young. As a person who has been running Linux since 5th grade, who got a CCNA at 16, and who has only had programming or network jobs since high school, this is the worst path, because the reality of the career at scale murders your passion. If I don't age out, I'm betting my next 10 years are going to be uncomfortably close to Player Piano, and that's entirely dreadful. Instead of teaching juniors to program at scale while giving them boring CRUD tasks, I'll be communing with machine spirits so "they" can generate the basic CRUD endpoints and the component screens.

The reality of being a greybeard is that if you're close to retirement in this industry, like my dad is, you're gonna do the same shit jobs as the bag chasers. They'll stick you in the basement and steal your stapler if you even make it past the vibe-check interview. The only way to avoid this is to be a lifer somewhere, but that is a challenge in itself.

The difference between the previous developments and now is that it may improve productivity today, in your case and the case of the 1000 juniors, but tomorrow it's going to actually undercut demand for people. Building a system that builds and deploys applications has been the goal of several public and private projects I've been privy to. I agree that the exact use case you linked is an example of a way to not have to learn ANTLR or how an AST works, and to flip a coin on whether it works. In practice, though, this is step 1. Code generation has improved significantly in the last year alone across the whole LLM ecosystem. The goal isn't to write maintainable code or readable code; the goal is to write deployable code with 90% feature coverage, filling the last 10% with freelancers or in-house engineers depending on scale. To me that's a worse job than the job I have now; at least now I can teach others how to do what I do. If that's taken away from me, I'm not fucking doing this job anymore. I don't care about computers, because in reality this job at scale is about convincing morons to stop micromanaging how you build things.

[–] piggy@hexbear.net 34 points 1 day ago (3 children)

I'm so excited for this

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (6 children)

Yes and?

  1. They're getting paid.
  2. It's a job.
  3. They're humans who can choose to be better.
  4. They're humans who can choose to fight their bosses out of some idiotic love of the game to the detriment of their own mental health because they're crazy. (I'm describing myself).
  5. They're humans who can stall or break awful things from coming to pass by refusing to work on something or sabotaging it.

This is about a door to those possibilities closing, not about how many software developers are forced through it. I'm not going to cheer on an awful totalizing future dark age of technology simply because the current odds are bad.

And yeah, this won't actually kill higher-end devs in my understanding of the world; I'll be able to find a job. But it will kill the social reproduction of people like me. In the same way that the iPad killed broad user-focused technological literacy from zoomers to millennials, LLMs will ultimately destroy the current level of developer-focused technological literacy. There won't even be guys who can't code their way out of a paper bag using StackOverflow, or guys who memorize LeetCode solutions. It will just be old-heads powerful enough to avoid the cull and nobody else, until we die.

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (8 children)

Every large corporation uses this method because they want to have fungible devs. Since developers with actual skill don't want to be treated as fungible cogs, the selection pressures ensure that people who can't get jobs with better conditions end up working in these places. They're just doing it to get a paycheck, and they basically bang their heads against the keyboard till something resembling working code falls out. I'll also remind you of the whole outsourcing craze, which had basically the exact same goal corps want to accomplish with AI now.

Damn that's crazy, imagine working a coding job for a paycheck! Soon you won't even be able to!

[–] piggy@hexbear.net 4 points 1 day ago* (last edited 1 day ago)

Also, for devices like air conditioners or televisions that use IR remotes, and presumably some subset of a standard code set, is there a structured way to build an interface for them?

You can DIY with https://tasmota.github.io/docs/Tasmota-IR/

Some people recommend a Broadlink RM3/RM4 and just walling it off from the web.

If your PC/Raspberry Pi is in range, you can get a cheap Linux-compatible IR emitter and use LIRC: https://www.lirc.org/
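
As a rough sketch of what driving LIRC looks like in practice: once lircd has a config for your remote (e.g. one captured with irrecord or downloaded), each button press is one call to the irsend CLI. The remote and key names below are hypothetical placeholders for whatever your own config defines.

```python
import subprocess

def send_ir(remote: str, key: str) -> None:
    """Fire one IR code via LIRC's irsend CLI.
    Assumes lircd is running and `remote` matches a configured remote name."""
    subprocess.run(["irsend", "SEND_ONCE", remote, key], check=True)

# Hypothetical remote/key names -- substitute whatever your LIRC config uses.
send_ir("living_room_tv", "KEY_POWER")
send_ir("bedroom_ac", "KEY_TEMP_UP")
```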

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (10 children)

StackOverflow copypasta wasn't a productive process that was seeking to remove the developer from the equation, though.

This isn't about a tech scaling strategy of training high-quality, high-productivity engineers vs. "just throwing bodies at it" anymore. This is about the next level of "just throwing bodies at it": "just throwing compute at it".

This is something technically feasible within the next decade unless, inshallah, these models collapse from ingesting their own awful data rather than improving.

[–] piggy@hexbear.net 3 points 1 day ago (12 children)

For every one of you there's 1000 junior engineers running Copilot.

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (14 children)

The way that you're applying the tool "properly" is ultimately the same way that middle managers want to apply the tool; the only difference is that you know what you're doing as a quality filter, where the code goes, and how to run it. AI can't solve the former (quality), but there are people working on a wholesale solution for the latter two. And they're getting their data from people like you!

In terms of a productive process, there's not as much daylight between the two use cases as you seem to think there is.

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (16 children)

Nobody is arguing for using the AI for the problems you keep mentioning, and you keep ignoring that.

This is absolutely not true. Almost every programmer I know has had their company try to "AI" their documentation or "AI" some process, only to fail spectacularly because the data the AI works from is either missing or of too low quality. I have several friends at the Lead/EM level who take too much time out of their schedules to talk a middle manager down from sapping resources into AI boondoggles.

I've had to talk people off of this ledge, and a lead who works under me (I'm technically a platform architect across 5 platform teams) actually decided to try it anyway and burn a couple of days on a test run, and guess what: the results were garbage.

Beyond that, the problem is that AI is a useful tool for IGNORING the problems.

I've given you concrete examples of how this tool is useful for me; you've just ignored that and continued arguing about the straw man you want to argue about.

I started this entire comment thread with an actual critique, a point, that you have, in very debate-bro fashion, consistently called a strawman. If I were feeling less charitable, I could call the majority of your arguments non sequiturs to mine. I have never argued that AI isn't useful to somebody. In fact, I'm arguing that it's dangerously useful for decision makers in the software industry, based on how they WANT to make software.

Say a piece of software is a car, and a middle manager wants that car to have a wonderful proprietary light bar on it, and wants to use AI to build such a light bar for his wonderful car. The AI might actually build the light bar, in a narrow sense, to the basic specs the decision maker feels might sell well on the market. However, the light bar adds 500 lbs of weight, so when the driver gets in the car the front suspension is on the floor, and the wiring loom is also now a ball of yarn. But the car ends up being just shitty enough to sell, and that's the important thing.

And remember, the AI doesn't complain about resources or order of operations when you ask it to make a light bar at the same time as a cool roof rack, a kick-ass sound system, and a more powerful engine. And hey, if the car doesn't work after one of these, we can just ask it to regenerate the car design and then have another AI test it! And you know what, it might even be fine to have 1 or 2 nerds around, just in case we have to painfully take the car apart only to discover we're overloading the alternator from both ends.

[–] piggy@hexbear.net 1 points 1 day ago* (last edited 1 day ago) (18 children)

I've never said that AI is the cause of those problems; those are words you're putting in my mouth. I've said that AI is being used as a solution to those problems in the industry, when in reality using AI to solve those problems exacerbates them while allowing companies to reap "productive" output.

For some reason programmers can understand "AI Slop", but if the AI is generating code instead of stories, images, audio, and video, it's no longer "AI Slop", because we're exalted in our communion with the machine spirits! Our holy logical languages could never encode the heresy of slop!

[–] piggy@hexbear.net 1 points 1 day ago* (last edited 1 day ago)

This is a quantization function. It's a fairly "math brained" name, I agree, but the function is called qX_K_q8_K because it quantizes a value with a quantization index of X (unknown) to one with a quantization index of 8 (bits), which correlates with memory usage. The 0 vs. K portions are how it does rounding: 0 means it rounds by equal distribution (without offset), and K means it creates a distribution that is more fine-grained around more common values and rougher around the least common values. E.g., say I have a data set that has a lot of values between 4 and 5 but not a lot of 10s; I might have, let's say, 10 brackets between 4 and 5 but only 3 between 5 and 10.

Basically it's a lossy compression of a data set into a specific enumeration (which roughly correlates with size): given 1,000,000 numbers from 1-1,000,000, it's a way of putting their values into a range of buckets based on the q level. How using different quantization functions affects the output of models is more voodoo than anything else. You get better "quality" output from higher memory space, but quality is a complex metric and doesn't necessarily map to factual accuracy in the output, just statistical correlation with the model's data set.

An example of a common quantizer is an analog-to-digital converter. It takes continuous values from a wave ranging from 0 to 1 and transforms them into digital values of 0 and 1 at a specific sample rate.

Taking a 32-bit float and copying the value into a 32-bit float is an identity quantizer.
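
To make the type-0 "equal distribution" idea concrete, here's a toy block-wise 8-bit quantizer. This is a minimal sketch of the general technique, not the actual ggml/llama.cpp code; the function names and block size are invented for illustration. A K-style quantizer would instead spend its brackets non-uniformly, putting more of them where the values cluster.

```python
import numpy as np

def quantize_q8_0_toy(x: np.ndarray, block_size: int = 32):
    """Toy type-0-style quantizer: each block of floats is stored as
    one float32 scale plus int8 values (uniform brackets, no offset)."""
    blocks = x.reshape(-1, block_size)
    # One scale per block, chosen so the largest magnitude maps to 127.
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_q8_0_toy(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Undo the compression; the rounding error is what makes it lossy."""
    return (q.astype(np.float32) * scale).reshape(-1)

x = np.random.randn(64).astype(np.float32)
q, s = quantize_q8_0_toy(x)
print(np.max(np.abs(x - dequantize_q8_0_toy(q, s))))  # small but nonzero
```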

[–] piggy@hexbear.net 1 points 1 day ago* (last edited 1 day ago) (20 children)

You're making up a giant straw man of how you pretend software development works which is utterly divorced from what we see happening in the real world. The AI doesn't change this one bit.

Commenting this under a post where an AI has spit out a dot product function optimization for an existing dot product function that's already ~150-250 lines long, depending on the architectural implementation, of which there are about 6. The PR for which has an interaction of two devs finger-pointing about who is responsible for writing tests. The PR for which notes that the original and new functions often don't give the correct answer. Just an amazing response. Chef's kiss.

What a wonderful way to engage with my post. You win, bud. You're the smartest. This industry would never mystify a basic concept that's about 250 years old with a 716-line PR, through its inability to communicate, organize, and follow an academic discipline.
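
For contrast, the concept under all of that is a few lines; the real versions balloon into the hundreds of lines because of per-architecture SIMD intrinsics and the quantized formats discussed above. A naive sketch, purely to show the size of the underlying idea:

```python
def dot(a: list[float], b: list[float]) -> float:
    """The ~250-year-old concept: multiply pairwise, sum the products."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```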
