this post was submitted on 20 Oct 2025
884 points (99.2% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

founded 2 years ago
[–] rizzothesmall@sh.itjust.works 90 points 1 week ago* (last edited 1 week ago) (4 children)

If you properly divide your instances between providers and regions, and use load balancing with a quorum-of-3 availability model, then zero downtime is pretty much guaranteed.
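The quorum-of-3 idea boils down to: treat the service as up so long as a majority of independent availability checks pass, so any single provider or region can fail without downtime. A toy sketch in Python (the per-provider checks are illustrative stand-ins, not any real SDK):

```python
# Minimal quorum health check: healthy if at least `needed` of the
# independent availability checks succeed.
def quorum_healthy(checks, needed=2):
    """Return True if at least `needed` of the checks pass."""
    return sum(1 for check in checks if check()) >= needed

# Hypothetical per-provider probes; in practice these would hit
# real health endpoints in each provider/region.
aws_ok = lambda: True
gcp_ok = lambda: True
azure_ok = lambda: False  # one provider is down

print(quorum_healthy([aws_ok, gcp_ok, azure_ok]))  # True: 2 of 3 healthy
```

With three providers, the quorum tolerates any one of them failing; losing two takes the service down, which is the trade-off being paid for.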

People be cheap and easy tho, so πŸ€·β€β™‚οΈ

[–] dis_honestfamiliar@lemmy.sdf.org 83 points 1 week ago (1 children)

Yup. And I think I'll add:

What do you mean we've blown our yearly budget in the first month.

[–] douglasg14b@lemmy.world 4 points 1 week ago (1 children)

Screw the compute budget, the tripled team size without shipping any more features is a bigger problem here.

[–] figjam@midwest.social 1 points 1 week ago (1 children)

I've seen the opposite. "Oh, you moved your app to the cloud and rebuilt it to be full CI/CD and self-healing? Cool. Your team of 15 is now 3."

[–] douglasg14b@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

I'm not sure if you are referring to the same thread.

I'm talking about the effort to build multi-region and multi-cloud applications, which is incredibly difficult to pull off well and presents seemingly endless challenges.

Not the effort to move to the cloud.

[–] FishFace@piefed.social 20 points 1 week ago (1 children)

Dividing between providers is not something people would bother doing if cloud services were as resilient as the meme suggests.

Doing so is phenomenally expensive.

[–] rizzothesmall@sh.itjust.works 9 points 1 week ago (3 children)

> Doing so is phenomenally expensive.

It's demonstrably little more expensive than running more instances on the same provider. I only say *little* because there is a marginal administrative overhead.

[–] rainwall@piefed.social 25 points 1 week ago* (last edited 1 week ago) (1 children)

Only if you engineered your stack using vendor-neutral tools, which is not what each cloud provider encourages you to do.

Otherwise the administrative overhead of multi-cloud gets phenomenally painful.

[–] felbane@lemmy.world 4 points 1 week ago (2 children)
[–] rainwall@piefed.social 7 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah, Terraform or its FOSS fork would be ideal, but many of these infrastructures are set up by devs using the "immediately in front of them" tools that each cloud presents. Decoupling everything back to neutral is the same nightmare as migrating any stack to any other stack.

[–] felbane@lemmy.world 2 points 1 week ago

Definitely. I go through that same nightmare every time I have to onboard some new acquisition whose DevOps was the startup CFO's nephew.

[–] Lysergid@lemmy.ml -1 points 1 week ago (1 children)

Infrastructure is there to be used by apps/services. It doesn't matter how it's created if the infrastructure across providers doesn't expose the same API. You can't use the GCP storage SDK to call AWS S3. And even if the APIs were the same, nothing guarantees consistent behavior. Just like JPA provides an API, but the implementations and DB behavior are inconsistent.

[–] felbane@lemmy.world 2 points 1 week ago

You can use the S3 API to interop with basically every major provider. For most core components there are either interop APIs or libraries that translate into provider-native APIs.

It's 100% doable to build a provider-agnostic stack from the iac all the way up to the application itself.
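As a rough illustration of the storage-layer interop being described: several providers expose S3-compatible endpoints, so an S3 SDK mostly just needs a different endpoint URL per provider. The map below is a hand-written sketch, not an exhaustive or authoritative list:

```python
# Sketch: providers that speak the S3 wire protocol, keyed to their
# documented S3-compatible endpoints. Backblaze endpoints are
# region-specific; this is one example region.
S3_COMPATIBLE_ENDPOINTS = {
    "aws": "https://s3.amazonaws.com",
    "gcs": "https://storage.googleapis.com",  # GCS XML/interop API
    "backblaze": "https://s3.us-west-002.backblazeb2.com",
}

def endpoint_for(provider: str) -> str:
    """Resolve the S3-compatible endpoint for a provider."""
    try:
        return S3_COMPATIBLE_ENDPOINTS[provider]
    except KeyError:
        raise ValueError(f"no S3-compatible endpoint known for {provider!r}")
```

An S3 SDK like boto3 would then be constructed with `endpoint_url=endpoint_for("gcs")` instead of the AWS default, leaving the rest of the application code provider-agnostic.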

[–] douglasg14b@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (1 children)

It's phenomenally expensive from a practical standpoint; it takes an immense amount of engineering and DevOps effort to make this work for non-trivial production applications.

It's egregiously expensive from an engineering standpoint, and most definitely more expensive from a cloud-bill standpoint as well.

We're doing this right now with a non trivial production application built for this, and it's incredibly difficult to do right. It affects EVERYTHING, from the ground up. The level of standardization and governance that goes into just making things stable across many teams takes an entire team to make possible.

[–] rizzothesmall@sh.itjust.works 1 points 1 week ago (2 children)

In my experience, using containers removes the extra engineering cost of deploying between providers: a container is the same wherever it runs, all the providers offer container hosting, and most offer private cluster networking.

Deployment is simplified using something like Octopus Deploy, which can deploy to many destinations in a blue-green fashion with easy rollback.
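The blue-green mechanics are conceptually tiny, whatever tool drives them: keep two environments, flip traffic only after the new one passes a health check, and keep the old one around for rollback. A toy sketch (this `Router` class is hypothetical, not Octopus's API):

```python
# Toy blue-green cutover with rollback, assuming a router whose
# active target can be swapped atomically.
class Router:
    def __init__(self, active="blue"):
        self.active = active

    def cutover(self, target, healthy):
        """Point traffic at `target` only if it passes its health check."""
        previous = self.active
        if healthy(target):
            self.active = target
            return previous  # kept around for easy rollback
        return None  # unhealthy: keep serving from the current color

router = Router()
old = router.cutover("green", healthy=lambda t: True)
# traffic now goes to "green"; rolling back is just cutover(old, ...)
```

The point of the pattern is that a failed deploy never takes traffic: an unhealthy target is simply never switched to.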

[–] douglasg14b@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Yes, containers make your application logic work.

That's the lowest hanging fruit on the tree.

Let's talk about persistence logic, fail forwards, data synchronization, and write queues next.

Let's also talk about cloud provider network egress costs.

Let's also talk about specific service dependencies that may not be replicatable across clouds, or even regions.

Oh, and also provider-specific deployment nuances, IAM differences, networking differences... etc.

[–] zalgotext@sh.itjust.works 2 points 1 week ago

Containers are nice, but don't really cover things like firewalls, network configuration, identity management, and a whole host of other things, the configuration of which varies between providers.

[–] FishFace@piefed.social 1 points 1 week ago

The administrative overhead, and the overhead of engineering everything to work with multiple vendors, is what's massive.

[–] criss_cross@lemmy.world 14 points 1 week ago

Also requires AWS to do the same thing which they sometimes don’t …

[–] ICastFist@programming.dev 7 points 1 week ago

"But we have our load balancing with 3 different AWS buckets!!!!"