piggy

joined 1 week ago
[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (22 children)

I know what I want to do conceptually, and I have plenty of experience designing applications.

How does AI actually help you traverse the concepts of React (which you admit you don't know in nitty-gritty detail) when designing your application? React is a batteries-included framework with specific ways of doing things, and those conventions shape which designs and concepts are technically feasible within React itself.

For example, React isn't really optimized to crunch a ton of data performantly. If you're getting constant data updates over a WebSocket from multiple points and you want some or all of those changes reflected, you're gonna have a bad time compared to something with finer-grained change detection out of the box, such as Angular.

How does AI help you choose between functional and class-based React components? How much of your application is typical developer copy-pasta instead of creating HOCs (higher-order components) for shared functionality? How did AI help you with that? How is AI helping you apply concepts like SOLID to the design of your component tree? How does AI help you decide how to architect components and their children that need to have a lifecycle outside of the typical change-binding flow?

This, in my opinion, is the crux of the issue: AI cannot solve this problem for you, nor can it reasonably explain it in a technical way beyond parroting the vagaries of what I said above. It cannot confer understanding of complex, abstract concepts that are fuzzy and have grey areas. It can tell you explicitly that something may not work, but it cannot realistically educate you on the tradeoffs.

It seems to me that your answer boils down to "code monkey stuff". AI might help you swing a pickaxe, but it's not good at explaining where the mine is going to collapse based on the type of rock you're digging in. Another way of thinking about it: you could build a building to the "building code" and it could still collapse. AI can explain the building code and loosely verify that you built something to it, but it cannot validate that your building is going to stay standing, nor can it practically tell you what you need to change.

My problem with AI tools boils down to this. Software is a medium of communication. It communicates the base of a problem and the technical process of solving it. Software Engineering is a field that attempts to create strong patterns of communication and practice in order to efficiently organize the production of software. The software industry at large (where most programmers get exposed to the process of building software) often eschews this discipline because of scientific management (the idea that you can manage a process purely through fiduciary/managerial knowledge rather than domain knowledge) and the need for instant development to maintain fictional competitive advantage and fictional YoY growth. The industry welcomes AI for two reasons:

  1. It can code monkey...eventually. Why pay programmers when you can ask CahpGBT to do it?
  2. It can fix the problem of needing to deliver without knowing what you're doing... eventually. It fixes the problem of communication without relying on building up the knowledge and practice of Software Engineering. In essence why have people know this discipline and its practical application when you can continue to have the blind leading the blind because ChadGTP can see for us?

This is a disservice to programmers everywhere, especially younger ones, because it destroys the social reproduction of the capacity to build scalable software and replaces it with, you guessed it, machine rites. In practice it's the apotheosis of Conway's Law in the software industry. We build needlessly complex software that works coincidentally, and soon that software will be analyzed, modified, and ultimately created by a tool that is an overly complex statistical model that also works through the coincidence of statistical approximations.

[–] piggy@hexbear.net 3 points 1 day ago (24 children)

Okay let me ask this question:

Who is this useful for? Who is the target audience for this?

[–] piggy@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (26 children)

> That's just a straw man, because there's no reason why you wouldn't be looking through your code. What LLM does is help you find areas of the code that are worth looking at.

It's not a strawman, because classifying unperformant code is a different task than generating performant replacement code. An LLM can only generate code via its internal weights plus input; it doesn't guarantee that the code is compilable, performant, readable, understandable, self-documenting, or much of anything.

The performance gain here is coincidental, simply because the generated code uses functions that call processor features directly rather than being optimized into processor features by a compiler. LLM classifiers are also statistically analyzing the AST for performance; they aren't performing real static analysis of the AST or its compiled version. The model doesn't calculate a big-O or really know how to reason through this problem. It's just primed that when you write a for loop to sum, that's "slower" than using `_mm_add_ps`. It doesn't even know which cases of the for loop compile down to a `_mm_add_ps` instruction, on which compilers, at which optimization levels.

Lastly, you injected this line of reasoning when you basically said "why would I do this boring stuff as a programmer when I can get the LLM to do it". It's nice that there's a tool you can politely ask to parse your garbage and replace it with other garbage that happens to use a more performant function. But not only is this not Software Engineering, a performant dot product is a solved problem at EVERY level of abstraction. This is the programming equivalent of tech bros reinventing the train every 5 years.

The fact that this is needed is a problem in and of itself with how people are building this software. This is machine-spirit communion with technojargon. Instead of learning how to vectorize algorithms, you're feeding your garbage code through an LLM to produce garbage code with SIMD instructions in it. That is quite literally stunting your growth as a Software Engineer. You are choosing not to learn how things actually work because it's too hard to parse through the existing garbage. A SIMD dot product algo is literally a two-week college junior homework assignment.

> Understanding what good uses for it are and the limitations of the tech is far more productive than simply rejecting it entirely.

I quite literally pointed out several limitations, from a Software Engineering perspective, in the post you replied to and in this post.

[–] piggy@hexbear.net 4 points 1 day ago* (last edited 1 day ago) (28 children)

This type of tooling isn't new and doesn't require AI models. Performance linters exist in many languages: rubocop-performance for Ruby, perflint for Python, eslint perf rules, etc. For C++, clang-tidy and cpp-perf exist.
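As a sketch of what I mean by deterministic tooling, a minimal `.clang-tidy` config that turns on the published performance checks might look like this (check names as documented by LLVM):

```
Checks: '-*,performance-*'
WarningsAsErrors: 'performance-*'
```

Every check enabled here has documented semantics and known false-positive modes, which is exactly what an LLM classifier cannot give you.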

The only reason LLMs are in this space is because there is a lack of good modern tooling in many languages. Jumping straight to LLMs is boiling the ocean (literally and figuratively).

Not only that, but if we're really gonna argue that "most code is very boring", that already negates your premise: most boring code isn't highly perf-sensitive and unique enough to be treated individually through LLMs. Needing to directly point out SIMD instructions in your C++ code basically shows that your compiler toolchain sucks, or that you're writing your code in such a "clever" way that the optimization isn't getting picked up. This is an optimization scenario from 1999.

Likewise, if you're not looking through the code, you're not actually understanding what the performance gaps are, or whether the LLM is creating new ones by generating sub-optimal code. Sometimes the machine spirits react to the prayer protocols and sometimes they don't; that's the level of technology you're arguing at. These aren't functional performance translations being applied. Once your system is full of this kind of junk, you won't actually understand what's going on or how things practically work. Standard perf linters are already not side-effect-free in some cases, but they publish their side effects; LLMs cannot do this by comparison. That's software engineering: it's mostly risk management and organization. Yes, it's boring.

[–] piggy@hexbear.net 25 points 1 day ago (2 children)

You can distill your own models based on your own models, the fact that OpenAI isn't doing this is more evidence that they are "competing" via investment capital and not tech.

[–] piggy@hexbear.net 9 points 1 day ago* (last edited 1 day ago) (30 children)

I'm going to say 2 things that are going to be very unpopular but kinda need to be heard.

  1. DeepSeek is turning this place into /r/OpenAI but red, which is incredibly lame
  2. If LLMs are significantly helping your development workflow, you are doing grunt work, you're not improving your skills, and you're not working on problems that have any significant difficulty beyond multiplication-tables-style recall, but for tech.

This optimization is actually grunt work. It's not a new discovery; it's simply using SIMD instructions on matrices, something that should have been done in the first place, either by hand or by a compiler.

[–] piggy@hexbear.net 47 points 3 days ago (4 children)

"State Beverage" is midcentury marketing brain worms for large agribusinesses. It might be quaint but they sold a shit ton through idiotic reflexive reactionary nationalism.

[–] piggy@hexbear.net 21 points 3 days ago

By "retire" I mean, when I have aged out of software and I can just burn all my bridges.

[–] piggy@hexbear.net 27 points 3 days ago* (last edited 3 days ago) (6 children)

Haha..... Boy do I have stories..... I worked at a terrible, evil company (aren't they all, but this one was a bit egregious).

The CEO was an absolute moron whose only skills were being a contracts guy and a money-raising guy. We had an internal app for employees to do their field work in. He was adamant about getting it into the App Store after he took some meeting with another moron. We kept telling him there was no point, and that it was a ton of work because we have to get the app up to Apple's standards. He wouldn't take no for an answer, so we allocated the resources to go ahead, and some other projects got pushed way back for this.

A month goes by, we have another meeting, and he asks why X isn't done. We told him we had to deprioritize X to get the app into the App Store. He says, well, who decided that? We tell him that he did. You know how a normal person would be a bit ashamed of this, right? Well, guess what: he just had a little tantrum and still blamed everyone but himself.

Same guy fired a dude (VP level) because his nepo hire had it out for him. That dude documented all his work out in the open, and when that section of the business collapsed a day later, they had to hire him back as a contractor. The CEO still didn't trust him, kept trusting his nepo hire, and never saw that his own decision making was the inefficiency.

When I retire I swear to god I'm going to write "this is how capitalism actually works" books about my experiences working with these people.

[–] piggy@hexbear.net 19 points 3 days ago* (last edited 3 days ago)

> I'm confident a lot of startups will spring out of the ground that will be developing DeepSeek wrappers and offering the same service as your OpenAIs

This is true. But I don't think OpenAI is even cornering the tech market, really. The company I work for makes a lot of content for various things; a lot of our engineers are tech fetishists and a lot of our executives are IP-protectionist obsessives. We are banned from using publicly available AI offerings. We don't contract with OpenAI, but we do contract with Maia for creating models (because their offering specifically talks through the "steal your IP" problems). So OpenAI itself is not actually in many of these spaces.

But yeah, your average chat-girlfriend startup is going to remove the ChatGPT albatross from its neck, given its engineers/founders are just headlines guys. A lot of this ecosystem is really the "Uber but for " style guys.

[–] piggy@hexbear.net 48 points 3 days ago* (last edited 3 days ago) (14 children)

I agree with the majority of your comment.

> no one is gonna pay thousands of dollars for a corporate LLM that's only 10% better than the free one.

This is simply not true of how businesses actually work. It certainly limits your customer base organically, but there are plenty of businesses that, in "tech terms", overpay for things that are even free, because of things like liability and corruption. Enterprise sales is completely perverse in its logic and economics. In fact, most open-source giants (e.g. Red Hat) exist because corps do in fact overpay for free things for various reasons.

[–] piggy@hexbear.net 4 points 5 days ago

This is incredibly cool. I also like that you basically use an event bus for interrupt handlers rather than chaining them. That was always really annoying in other OSes.
