This is obviously slightly exaggerated, but I do feel like this whenever people dismiss Kubernetes as either too complicated or not needed.
The response I always got when suggesting Kubernetes is "you can do all those things without Kubernetes"
Sure, of course. There are a million different ways to do everything Kubernetes does, and some of them might be simpler or fit your use case better. You can make different decisions for each choice Kubernetes makes, and maybe your decisions are a better fit for your workload.
However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.
> However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful.
Yep! I am now using k8s even for small / 'single purpose' clusters just so I can keep renovate/argo/flux in the loop. Yes, I _could_ wire renovate up to some variables in a salt state or chef cookbook and merge that to `main` and then have the chef agent / salt minion pick up the new version(s) and roll them out gradually... but I don't need to, now!
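For context on what "keeping argo in the loop" looks like in practice, an Argo CD Application that watches a Git repo and syncs it into the cluster is roughly the manifest below. This is a minimal sketch; the repo URL, path, and names are placeholders, not taken from the comment above.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: myapp
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://gitlab.example.com/me/myapp-chart.git   # placeholder repo
        targetRevision: main
        path: chart
      destination:
        server: https://kubernetes.default.svc   # the same cluster Argo CD runs in
        namespace: myapp
      syncPolicy:
        automated:        # auto-sync, so merges to main roll out without clicking
          prune: true
          selfHeal: true

The idea being that Renovate only has to bump versions in that Git repo; Argo CD (or Flux) notices the commit and reconciles the cluster toward it.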
I just feel like "you can do this with Kubernetes" is a slippery slope.
"You can do X with Y, so use Y" is a great way to add a dependency, especially if it is "community vetted" already.
Sometimes simple is better - you don't need to add a dependency that implements some of your logic just to stay DRY or whatever you want to call it.
It really feels like we are drowning in self-imposed tech debt and keep adding layers to try and hold it for just a while longer.
Now that being said, there is no reason not to add Kubernetes once a sufficient overlap is achieved.
You can use k8s on $2/mo digital ocean projects. It probably even works on the free tier of a lot of providers.
And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed, which has the benefit of easy version control.
I don't get why people are so bent on hating Kubernetes. The mental cost to deploy a 6-line deployment yaml is less than futzing around with FTP and nginx.
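For reference, a minimal Deployment manifest is in the ballpark the comment describes, though closer to a dozen lines than six once the selector boilerplate is included. Names and image here are made up:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0.0   # placeholder image
              ports:
                - containerPort: 8080

`kubectl apply -f deployment.yaml`, and the file itself goes straight into version control.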
Kube is the new LAMP stack. It's easier too. And portable.
If you're talking managed kube vs one you're taking the responsibility of self-managing, sure. But that's no different than self-managing your stack in the old world. Suddenly you have to become Sysadmin/SRE.
This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. GKE, EKS, AKS, OCP, etc., portability between them is nowhere near guaranteed.
I don't think you made that argument, but could a valid conclusion of your comment be that, because Kubernetes is so ubiquitous, using it frees you from being a Sysadmin/SRE?
Agree. For years I had developed my own preferred way of deploying Rails apps large and small on VMs: haproxy, nginx, supervisord, ufw, the actual deploy tooling (capistrano and other alternatives) and so on... and if those tools are old or defunct now it's because my knowledge of that world basically halted 8 years ago because I've never had to configure anything but k8s since then.
I've used it every day since then so I have the luxury of knowing it well. So the frustrations that the new or casual user may have are not the same for me.
Honestly the main problem is people using k8s for something that's like... a database, and an app, and maybe a second app, that all could be containers or just a systemd service.
And then they hit all the things that make sense in a big company with like 40 services but very little in their context, and complain that a complex thing designed for complex interactions isn't simple.
But if you want some redundancy, k8s lets you just say run 4 of this, 6 of that on these 3 machines. At least I find it quite straightforward.
The database is more complex since there is storage affinity (I use CockroachDB with local persistent volumes for it) - but stateful is always complicated.
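To illustrate the storage-affinity part: a local PersistentVolume is pinned to a specific node, which is roughly what "local persistent volumes" means here. A sketch, with hypothetical node name, path, and size:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cockroach-data-node1
    spec:
      capacity:
        storage: 100Gi
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/cockroach        # hypothetical local disk path
      nodeAffinity:                       # the volume only exists on this node,
        required:                         # so pods bound to it are scheduled there
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values: ["node-1"]      # hypothetical node name

The "run 4 of this, 6 of that" part is just a `replicas:` field on each Deployment; the scheduler spreads the pods across the machines.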
Most of the time you don't need redundancy. You need regular backups for exceptional circumstances. And k8s gives you more complexity, and more problems through more moving parts, to give you the possibility of using a feature you'll never need - and if you do start to use it, it'll probably be instead of fixing performance problems downstream.
No argument there. The Toyota 5S-FE non-interference engine is a near-indestructible 4-cylinder engine that's well documented, popular, and you can purchase parts for pennies. It has powered 10 models of Camrys and Lexuses and is battle proven. You can expect any mechanic who has been a professional for the last 3 years to know exactly what to do when it starts acting up. 1 out of 4 cars on the road has this engine or a close clone of it.
It's not what any reasonable person would use for a weedwhacker, lawnmower, pool pump or an air compressor.
The saddest part about Kubernetes is… after you set it all up, you still need a hacky deploy.sh to sed in the image tag to deploy! And pretty soon you’re back to “my dear friend you have built a Helm”. And so the configuration clock continues ticking…
Claude Code has essentially fixed this perpetual annoyance for me. Doesn't matter if it's a hacked up deploy.sh that mixes sed, envsubst and god knows what or a non-idiomatic Helm chart that was perpetually on my backlog to fix... today I just say "make this do this thing and also fix any bash bugs along the way" and it just does it. Its effectiveness for these thousand-little-cuts type DevOps tasks is underrated IMO.
Now the actual CI/CD/thing-doers tools that all suck... I'm still stuck with those.
How do you handle cleanups and hooks? The best way to do helm, at least for me, seems to be about limiting its use to simple templating use cases; if you end up needing an if, you've probably done something terribly wrong.
Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.
Hooks: I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster (at places I've worked, the CI cluster and K8s cluster were completely separate).
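That hooks use case maps pretty directly onto a Helm hook: annotate a Job in the chart so Helm runs it before install/upgrade, and the migration runs from inside the cluster instead of from CI. A rough sketch of such a chart template (image and command are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: {{ .Release.Name }}-db-migrate
      annotations:
        "helm.sh/hook": pre-install,pre-upgrade
        "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: registry.example.com/myapp:1.0.0    # placeholder image
              command: ["./manage.py", "migrate"]        # placeholder migration command

And `helm uninstall` covers the cleanup half, since everything the release created goes away together.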
And if you want your Helm to run on certain deploys, and maintain a declarative set of the variables given to charts over time, thinking you can use Helmfile and some custom GitHub Actions… “my dear friend you have built a GitOps.”
(I tend to think this one is acceptable in the beginning, but certainly doesn’t scale.)
Or if your colleagues are "smarter" than you, they make it in Clojure instead, with an EDN-but-with-subroutines config language, so that not only are yaml-aware editors useless, but EDN-aware editors cannot make heads or tails of the macros.
IMO, Kubernetes isn't inevitable, and this seems to paint it as such.
K8s is well suited to dynamically scaling a SaaS product delivered over the web. When you get outside this scenario - for example, on-prem or single node "clusters" that are running K8s just for API compatibility, it seems like either overkill or a bad choice. Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.
There are also folks who understand the innards of K8s very well that have legitimate criticisms of it - for example, this one from the MetalLB developer: https://blog.dave.tf/post/new-kubernetes/
Before you deploy something, actually understand what the pros/cons are, and what problem it was made to solve, and if your problem isn't at least mostly a match, keep looking.
Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What's the problem with a single-node cluster? We use that for e.g. dev environments, as well as some small onprem deployments.
> Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.
Which batteries are not included? The "wrapper around the underlying cloud provider services and APIs" is enormously important. Why would you prefer to use a less well-designed, more vendor-specific set of APIs?
I seriously don't get these criticisms of k8s. K8s abstracts away, and standardizes, an enormous amount of system complexity. The people who object to it just don't have the requirements where it starts making sense, that's all.
> Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.
What surprises and gotchas did you have to deal with using k3s as a Kubernetes implementation?
Did you use an LB? Which one? I'm assuming all your onprem nodes were just linux servers with very basic equipment (the fanciest networking equipment you used were 10GbE PCIe cards, nothing more special than that?)
We sell to enterprise customers. All of them deploy our solution on internal cloud-style VM clusters. We use the Traefik ingress controller by default.
There really weren't any particular surprises or gotchas at that level.
In this context, I've never had to deal with anything at the level of the type of Ethernet card. That's kind of the point: platforms like k8s abstract away from that.
As someone rolling their self-hosted stuff via Compose and shell scripts instead of K8s specifically for the simplicity of the experience, this is 100% why you need to understand what Kubernetes solves before writing it off entirely.
I'm not doing overlay networks, I'm using a single bare-metal host, and I value the hands-on Linux administration experience versus the K8s cluster admin experience. All of these are reasons I specifically chose not to use Kubernetes.
The second I want HA, or want to shift from local VLANs to multi-cloud overlays, or I don't need the local Linux sysadmin experience anymore? Yeah, it's K8s at the top of the list. Until then, my solution works for exactly what I need.
I run K8s at home. I used to do docker-compose - and I'd still recommend that to most people - but even for my 1 little NUC with 4vcpu / 16Gi Homelab, I still love deploying with K8s. It's genuinely simpler for me.
If anyone's looking for inspiration, my setup:
* ArgoCD pointed to my GitLab repos
* GitLab repos contain Helm charts
* Most of the Helm charts contain open-source charts as subcharts, with versions set like (e.g.) `version: ~0` - meaning I automatically receive all updates up to major version `1` (see the Chart.yaml sketch after this list)
* Updating my apps usually consists of logging into the UI, reviewing the infrastructure and image tag updates, and manually clicking sync. I do this once every few months
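The subchart-with-a-version-range trick from the list above looks roughly like this in the wrapper chart's Chart.yaml (chart name and repository are hypothetical; the right constraint depends on how the upstream chart is versioned):

    apiVersion: v2
    name: someapp-wrapper
    version: 0.1.0
    dependencies:
      - name: someapp                           # hypothetical upstream chart
        version: "~0"                           # any 0.x release, nothing >= 1.0.0
        repository: https://charts.example.com  # hypothetical chart repository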
My next little side project: Autoscaling into the cloud (via a secure WireGuard tunnel) when I want to expand past my current hardware limitations
A reason not to run k8s is if you want your server to reach C10 idle states. The k8s control plane, with its polling and checking, is quite heavy on a mostly idle server.
I have reverted to just using NixOS and OCI podman containers. Everything is declarative and reproducible.
I can tell you how vendors deliver a software solution that runs on Kubernetes: very poorly.
The needed tweaks, the ability to customize things, basically goes to zero because the support staff is technical about the software, but NOT about Kubernetes.
I am not joking: a recent deployment required 3x VMs for Kubernetes, each VM having 256 gigabytes of RAM; then a separate 3x VMs for a different piece. 1.5TB of RAM to manage less than 1200 network devices (routers etc. that run BGP).
No one knew, for instance, how to lower the MongoDB (because of course you need it!) resource usage, despite the fact that the clustered VMware install is using a very fast SSD storage solution and thus MongoDB caching is unlikely to accelerate anything; so over 128GB of RAM is being burned on caching results coming back from SSDs that are running at many-GB/s throughput.
I've experienced something like this at work but with data warehouse instead, and it happened multiple times (to be fair, data engineering is still fairly new where I'm from).
One example was an engineer who wanted to build an API that accepts large CSVs (GBs of credit reports) to extract some data and perform some aggregations. He was in the process of discussing with SREs the best way to process the huge CSV file without using a k8s stateful set, and the solution he was about to build was basically writing to S3 and having a worker asynchronously load and process the CSV in chunks, then finally writing the aggregation to the db.
I stepped in and told him he was about to build a data warehouse. :P
If it was less than 100 gb, he probably should have just loaded the whole thing in RAM on a single machine, and processed it all in a single shot. No S3, no network round trips, no chunking, no data warehouse.
Kubernetes was overkill (I do that all day, 5 days a week); Kamal was too restrictive, so I found myself rolling out Yoink. Just what I need from k8s, but simple enough I can point it to a bare-metal machine on Hetzner that can easily run all my workloads.
- using Tailscale SSH is brilliant
- using caddy-docker-proxy for ingress is brilliant
What do you use for:
- service discovery
- secret store (EDIT: Crap you use Infisical. No shade, I just have this horrible foreboding it will end up like Hashicorp. I use Conjur Secretless Broker but am tracking: https://news.ycombinator.com/item?id=47903690)
- backing up and restoring state like in a DB
PS: Have you been having issues with Hetzner the last few weeks?
Service discovery is basically just Docker's internal DNS. Caddy-docker-proxy can use it to find healthy upstreams.
For secrets, I self-host Infisical on the box -- easy to plug in whatever secret manager, should make it pair nicely with https://github.com/tellerops/teller or something similar
Had no problems with Hetzner so far, just enjoying the raw CPU power of bare metal. The plan is to roll out more boxes across different providers, using Tailscale for the backplane network and Cloudflare to load-balance between them. All in due time. What issues have you been having?
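For the curious, the caddy-docker-proxy pattern mentioned above is label-driven: Caddy watches the Docker socket and builds its config from container labels, and Docker's embedded DNS handles the service discovery. A compose sketch under those assumptions (domain, image, and tag are just examples):

    services:
      caddy:
        image: lucaslorentz/caddy-docker-proxy:ci-alpine   # example tag; pin whatever release you use
        ports: ["80:80", "443:443"]
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock      # Caddy reads labels from here
      myapp:
        image: registry.example.com/myapp:1.0.0            # placeholder app image
        labels:
          caddy: myapp.example.com                         # placeholder domain
          caddy.reverse_proxy: "{{upstreams 8080}}"        # proxy to this container's port 8080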
Why do both posts mention docker compose without mentioning docker swarm? I've been using it for my projects for a long time, and it's so nice. Similar syntax, easy networking, rollout strategy, easy to add nodes to the cluster.
You can have one template docker-compose.yaml file and separate deployment files for different envs, like: docker-compose.dev.yaml, docker-compose.prod.yaml
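A minimal sketch of that override pattern, assuming a base docker-compose.yaml with a `web` service; the file contents here are made up:

    # docker-compose.prod.yaml -- layered on top of the base file
    services:
      web:
        image: registry.example.com/myapp:1.0.0   # pin the prod image
        environment:
          APP_ENV: production
        deploy:
          replicas: 3                             # honored by swarm via docker stack deploy

With swarm that's `docker stack deploy -c docker-compose.yaml -c docker-compose.prod.yaml myapp`; plain compose accepts the same stacked `-f` files.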
I've been there. We still ended up with messy deploy scripts written in Ruby and the only debugging solution was "just comment out everything then run line by line".
Unless you’re in Erlang world (Elixir, Gleam..) and all that is already baked into OTP and the BEAM. You can go on holiday knowing it will be a while longer before you need to break out the pods (and at that scale, you will be able to afford a colleague or two to help you).
After reading this and remembering an old hobby project, I decided to switch the deploy from a systemd service to PM2, which apparently has rolling deployments without needing Docker engine (for those of us minmaxing instance RAM).
I'm just about to give OP's premise another go. Compose just feels so much better as an abstraction, especially with small and medium setups, coming close to the optimum of expressiveness without boilerplate to describe what is needed. The missing pieces seem to also be in the compose-compatible "docker stack", aka the new docker swarm, which I ignored for probably too long as I assumed it was the discontinued old swarm. Even if new swarm mode sucks, how hard can it be to make something compose-shaped vs running k8s?
I see docker as a way to avoid having a standard dev platform for everyone in the company so that the infra team don't have to worry about patch xyz for library abc, only run docker.
But, with all the effort put in place to coordinate docker, k8s and all the shebang, isn't it finally easier to force a platform and let it slowly evolve over time?
Is docker another technical tool that tries to solve a non-technical problem?
I do not follow you. Every app has different needs. Containers encode them in a shareable way. You can evolve the image over time. So what more do you want?
Criticisms of Kubernetes generally come from a few places:
- People who would prefer their way of doing this, whether that's deployments on VMs, or use some sort of simpler cloud provider.
I had the same opinion a few years ago, but have kind of come to like it, because I can cleanly deploy multiple applications on a cluster in a declarative fashion. I still don't buy the "everything on K8s", and my personal setup is to have a set of VMs bought from a infrastructure provider, setup a primary/replica database on two of them, and use the rest as Kubernetes nodes.
- People who run Kubernetes at larger scales and have had issues with them.
This usually needs some custom scaling work; the best way to work around this if you're managing your own infra[1] is to split the cluster into many small independent clusters, akin to "cellular deployments"[2]/"bulkhead pattern"[3]. Alternatively, if you are at the point where you have a 500+ node cluster, it may not be a bad idea to start using a hyperscaler's service as they have typically done some of the scaling work for you, typically in form of replacing etcd and the RPC layer through something more stable.
- People who need a deep level of orchestration
Examples of such use cases may be to run a CI system or a container service like fly.io; for such use cases, I agree that K8s is often overkill, as you need to keep the two datastores in sync and generate huge loads on the kube-apiserver and the cluster datastore in the process, and it might be often better to just bring up Firecracker MicroVMs or similar yourself.
Although, I should say that teams writing their first orchestration process almost always run to Kubernetes without realizing this pitfall, though I have learned to keep my mouth shut as I started a small religious war recently at my current workplace by raising this exact point.
[1] Notice how I don't say "on-prem", because the hyperscaler marketing teams would rather have you believe in two extremes of either using their service or running around in a datacenter with racks, whereas you can often get bog-standard VMs from Hetzner or Vultr or DigitalOcean and build around that.
[2] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
[3] https://learn.microsoft.com/en-us/azure/architecture/pattern...
Another case: People who want to run workloads that are inherently incompatible with Kubernetes networking model.
For example:
* For some cursed reason you want to make sure every single instance of a large batch job sees just one NIC in its container, they all have the same IP, and you NAT to the outside world. Ingress? What ingress? This is a batch job!
* Like the previous point, except that your "batch job" somehow has multiple containers in one instance now, and they should be able to reach each other by domain.
That is indeed a weirdly cursed requirement. Why? A black box of legacy stuff? A system that was never designed to run as multiple instances, which only works if all the nodes think they're the same machine? Defeating a license restriction?
Shit just gets really weird when your network isn’t split for k8s in an equivalent way to what GCP/AWS expect. Like, if you have other services running on the nodes that you want things inside k8s to talk to, or if the nodes are in a flat subnet with other stuff in it, things get annoying. Those are worst practices for a reason, but pretty common in environments with home rolled k8s clusters.
> Ah, but wait! Inevitably, you find a reason to expand to a second server
>> The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
-- Donald Knuth, Computer Programming as an Art (1974)
EDIT:
> Except if you quit or go on vacation, who will maintain this custom pile of shell scripts?
Honestly? I don't care. There is a reason why I quit, and 99% of the time it's the pay. And if the company doesn't pay me enough to bother, then why should I? Why should I care about some company's future in the first place?
Not as well as they can reason (or others can google) something as standardized as kubernetes. There’s just less context (in both senses of the term) needed to understand something running on a common substrate versus something bespoke, even if the bespoke thing is itself comprised of standardized parts.
For a project set up by a qualified engineer, there would be little difference to the end user in practice. The LLM would work out a solution with a negligible difference in speed. Maybe debugging would also be faster for the LLM without the abstraction layers and low level access?
I am a big fan, which is why I am saying this: your dismissal of the kernel and ABI surface is a huge assumption that must hold true for your comment to hold.
stavros.
If you had said "unikernels" I would have had no arguments to make.
Same with build.sh, written in such a way that I can use all the build.sh scripts from my ci.yml for GitHub Actions.
Some Kustomize, a little bit of envsubst and we're good to go thank you very much.
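The Kustomize half of that is what usually replaces the sed-the-image-tag hack: declare the image override in kustomization.yaml and let CI bump only the tag. A sketch with placeholder names:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - deployment.yaml
    images:
      - name: registry.example.com/myapp   # must match the image name in deployment.yaml
        newTag: "1.0.1"                    # placeholder tag, typically set by CI

CI then only has to rewrite `newTag` (e.g. with `kustomize edit set image`), and `kubectl apply -k .` deploys it.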
Fun times.
It’s well suited to other things as well, people are just in denial about some of them.
“I need to run more than two containers and have a googleable way to manage their behavior” is a very common need.
But really this applies to any powerful tool. If you need to measure a voltage, a 4-channel oscilloscope also probably seems too complicated.
I think swarm is really underrated
Scaling is a side note; that it becomes easy is a result of hoisting everything else onto one control plane and a set of coherent APIs.
The people advocating for boring tech generally aren't interested in containers.
You can just run programs.
If your app is just a blob that can be run, it is fine, but many languages make it more complicated.
I wonder if just putting the app into an .appimage and using systemd for some of the separation would be a sweet spot?