The article assumes there are people who want clusters. But a single Linux VM in the cloud can scale pretty far. Separate VMs for different apps work well for isolation. Why do I need a cluster?
If you run Firecracker inside the rented cloud VM, run a few of them, and perhaps let them interact with each other, you have essentially created a cluster of microVMs hosted on a single machine.
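A rough sketch of what that looks like, talking to each Firecracker process over its API socket (the kernel/rootfs paths are placeholders, and the tap-device networking that would let the microVMs interact is omitted):

    import http.client, json, socket, subprocess, time

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Firecracker unix API socket."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    def api_put(sock_path, path, body):
        conn = UnixHTTPConnection(sock_path)
        conn.request("PUT", path, json.dumps(body),
                     {"Content-Type": "application/json"})
        resp = conn.getresponse()
        assert resp.status in (200, 204), resp.read()

    def boot_microvm(i):
        sock = f"/tmp/fc-{i}.sock"
        subprocess.Popen(["firecracker", "--api-sock", sock])
        time.sleep(0.2)  # give the API socket a moment to appear
        api_put(sock, "/boot-source", {
            "kernel_image_path": "vmlinux",            # placeholder image
            "boot_args": "console=ttyS0 reboot=k panic=1",
        })
        api_put(sock, "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": f"rootfs-{i}.ext4",        # placeholder rootfs
            "is_root_device": True, "is_read_only": False,
        })
        api_put(sock, "/actions", {"action_type": "InstanceStart"})

    for i in range(3):  # a three-node "cluster" on one rented VM
        boot_microvm(i)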
As argued by OP, you can see this happening with exe.dev, and less explicitly with sprites.dev.
Never understood the appeal of Kubernetes to developers, outside of massive deployments. Always felt like a poor man's Linux for those that insist on using an Apple or Windows desktop.
I am not sure I understand this argument. Kubernetes typically runs on Linux. I use an Apple laptop, work mostly with headless Linux VMs and Kubernetes. What is a “poor man’s Linux”?
Yeah, I've been doing this with Tailscale and a single VPS and it's been wonderful. Unless you're planning to have millions of users, I don't think there's any reason to have a cluster.
Maybe they're assuming some massive amount of compute will be necessary for future tasks? Self-hosted LLMs? I'm currently finding it difficult to come up with more uses for my VPS beyond hosting trillium and some personal applications I've made.
Isn't there a meaningful sense in which "separate VMs for different apps" constitutes a cluster?
The "cooperative task" they're engaged in is just, broadly, meeting your needs, whatever they are.
The isolation is a desirable property, and I agree this is much preferable to a highly inter-coupled bunch of machines, and also that this stretches the typical sense in which we refer to a "compute cluster", but I don't think it's an entirely invalid framing of the term.
> Isn't there a meaningful sense in which "separate VMs for different apps" constitutes a cluster?
Not really. In my experience clustering implies multiple compute elements serving the same function with a coordination mechanism to provide redundancy and/or enhanced capacity.
As far as I can tell, and from some quick research into the guy's previous experience, that's all it is. I think the implication is that LLMs will be architecting and deploying the cluster setups at some point? Which sounds horrific, so I'm assuming I'm interpreting it wrong.
The article itself reminds me of the enthusiasm I felt for plan9 when I first heard about it back in uni. I also thought everyone should have their own compute grids and that clustered computing was the future; of course now I realize there's a lot of reasons why that doesn't actually work. Considering this appears to be a start-up ad, I hope the author knows something I don't.
Wouldn't it be cheaper / less complex to scale vertically (e.g. a large workstation or a medium-size bare-metal server) instead of using clusters? My understanding is that clusters are primarily useful when you want to share a resource from a pool across unpredictable usage, which becomes a moot point once the cluster is personal.
Scale isn’t the only reason. Sometimes you want resource isolation and self-healing, something that is useful if you want a personal swarm of AI agents.
I’m not sure quite what this is trying to say. My laptop is already a personal cluster — it has 16 cores, lots of storage, a fast network, I run VMs on it. It’s been the case for a long time that you can run bursty jobs in the cloud if you need more power for a brief period than whatever is currently locally affordable. That’s kind of what the cloud is for, really. So what’s new?
It's pretty fun to throw a thousand cores at a problem, but I guess it won't be that long before you can get that in a two-socket AMD workstation or whatever.
You're drawing an incorrect conclusion from that site. Aside from the fact that "fitting in RAM" is not the only criterion for needing a cluster, the fact that it's possible to fit data into RAM on a single machine doesn't mean that's the most cost-effective, practical, or sensible solution.
A big advantage of clusters, and horizontal scaling in general, is the ability to easily dynamically scale to meet demand.
If you're running a system on a single machine that has N GB of memory and you need to scale to N+1, what do you do? Provision a new machine and migrate everything over?
No one operates online real-time systems like this. Clusters make it much easier and less expensive to handle this.
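To put toy numbers on it (all figures made up): with a cluster, crossing the memory ceiling is just a change in node count, not a migration.

    import math

    NODE_RAM_GB = 64     # assumed per-node capacity
    demand_gb = 1_900    # hypothetical current working set

    # Single big machine: exceeding its RAM means provisioning a bigger box
    # and migrating everything. Cluster: the scheduler raises the node count.
    nodes = max(1, math.ceil(demand_gb / NODE_RAM_GB))
    print(f"{demand_gb} GB of state -> {nodes} x {NODE_RAM_GB} GB nodes")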
On top of that, it's probably true that in some pure numerical problem-count sense, "most problems" don't need a cluster, but that's misleading. It's like saying "most businesses are mom-and-pop shops." Perhaps true, but it ignores hundreds of thousands of larger businesses, or even small businesses that have big data needs.
There are plenty of problems that involve large amounts of data, and that's increasingly true with ML applications.
I'm at a company of ~100 people which you've probably never heard of (classified as a "small" company in government stats, so not included in the hundreds of thousands figure I mentioned above). We have 1.9 PB of data for our main environment. When we run processes that deal with it all, the clusters scale to thousands of vCPUs and tens of terabytes of RAM.
Several processes that run daily scale to 500+ vCPUs and many TB of RAM. For the latter, the data itself could probably fit in RAM on a humongous machine, but the CPUs wouldn't fit on a single machine. And we'd have to size the machines carefully every time we start them up. Clusters can scale up dynamically according to the demands of the jobs they're executing.
Even in a physical-hardware, on-premises scenario, it's still easier to scale horizontally than vertically in almost all cases, for all the reasons I mentioned. That's a big reason why Kubernetes was adopted at an unprecedented pace at medium to large organizations: it helps manage that approach.
That's... kind of not true. They weren't elastic in the sense that you never had to think about how big they were. But you had, say, 64k nodes, and people would launch jobs with 1,000 of them, or 10,000, or, if they could clear the decks, all of them. Or if they were just debugging, maybe 5 of them.
No idea about ClusterOS, but I would recommend IncusOS if you're looking for a nice clustering solution. Incus has become indispensable in my homelab over the past few months. It's what I put on my bare metal machines and then spin up Talos Linux VMs for day job practice.
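The spin-up part really is about one command per instance; here's a sketch shelling out to the incus CLI (the image alias and resource limits are arbitrary assumptions, and Talos needs its own image/ISO setup rather than the stock image used here):

    import subprocess

    def launch_vm(name: str) -> None:
        # One Incus-managed KVM instance; tweak limits per workload.
        subprocess.run(
            ["incus", "launch", "images:ubuntu/24.04", name, "--vm",
             "-c", "limits.cpu=4", "-c", "limits.memory=8GiB"],
            check=True,
        )

    for i in range(3):
        launch_vm(f"lab-{i}")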
I really liked IncusOS but it still felt quite primitive compared to Proxmox. I also didn’t really like the way it bundles VMs and containers into an ‘instance’ concept, it made the UI and management via Terraform confusing. Had a lot of problems with the TF provider too.
How does the IncusOS API compare to Talos? When I first looked at it, it seemed very minimal and I didn't see a lot of options for more complex installs (e.g. network bonding, disk partitioning).
One could argue that multiple cores are already not seamless, especially if you have NUMA (now available in high-end desktops, by the way, and in every multi-socket system that's ever existed). The distinction between RAM and disk is very much not seamless either, as is any number of other things you'd hope the OS would magically handwave away for you, but it doesn't.
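You can see one of those seams from userspace: on Linux you end up pinning work to the right cores yourself (the CPU IDs below are assumptions; check /sys/devices/system/node/ for your actual layout):

    import os

    node0_cpus = {0, 1, 2, 3}            # hypothetical: cores on NUMA node 0
    os.sched_setaffinity(0, node0_cpus)  # pid 0 = the current process
    print("now restricted to:", os.sched_getaffinity(0))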
10Gbps is now very cheap and 100Gbps is viable at hobby scale. That's Ethernet. I don't know anything about CXL and so on.
I have an irrational soft spot for Apache Mesos. I loved the separation of the resource management from the scheduling. Note to self: do not rabbit hole on this. Hm. Maybe mesos is the manager for my agent sandboxes. No! Bad lowbloodsugar!
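That separation is worth a sketch: the resource manager only makes offers, and each framework brings its own scheduling policy (toy model, all names invented):

    from dataclasses import dataclass

    @dataclass
    class Offer:
        node: str
        cpus: int
        mem_gb: int

    class Framework:
        """Owns its scheduling policy; the resource manager never sees it."""
        def __init__(self, name: str, cpus_needed: int):
            self.name, self.cpus_needed = name, cpus_needed
        def consider(self, offer: Offer) -> bool:
            return offer.cpus >= self.cpus_needed

    offers = [Offer("node-1", 4, 16), Offer("node-2", 32, 128)]
    frameworks = [Framework("batch-jobs", 16), Framework("agent-sandboxes", 2)]
    for offer in offers:              # manager: offer resources, nothing more
        for fw in frameworks:
            if fw.consider(offer):    # framework: accept or decline
                print(f"{fw.name} takes {offer.node}")
                break

The point of the split is that adding a new workload type means writing a new framework, not changing the resource manager.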
Uptime, self-healing, reproducibility, separating the system from the app. There are probably a half dozen more.
K8s certainly comes with a resource-consumption tax, but for anything beyond the trivial it's usually justified.
> Separate VMs for different apps work well for isolation
Sounds inefficient, and a lot more work doing the plumbing than simply writing 100 lines of YAML.
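For reference, that plumbing-in-yaml boils down to one Deployment object; the same thing via the official kubernetes Python client (image and names are placeholders):

    from kubernetes import client, config

    config.load_kube_config()  # reads your local ~/.kube/config

    labels = {"app": "myapp"}  # placeholder name/labels
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=2,        # the cluster keeps two copies alive for you
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="myapp", image="myapp:latest"),
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment("default", deployment)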
https://commaok.xyz/ai/just-in-time-software/
I mean, I don't do that, but I'll type a prompt.
JBOD vs. RAID.
It sits on top of Kubernetes and seems very hand wavy about how you create and manage those clusters.
> see CEO of Tailscale apenwarr's vibe-researched thread
“Vibe-research” is now a core part of my vocabulary.
So I guess I don't know what you mean by 'elastic' here.
You can have more than one CPU and more than one storage device connected to one mainboard, and that works because the interconnect fabric is very fast.
We don't have the ability to connect different computers at the kind of speed that would let them work together seamlessly.
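Ballpark figures make the gap concrete (every number here is a rough assumption):

    # Order-of-magnitude comparison of on-board fabric vs. commodity Ethernet.
    fabric_GBps, fabric_ns = 100, 100          # DDR5-class memory, one board
    tengig_GBps, tengig_ns = 1.25, 10_000      # 10GbE through a kernel stack

    print(f"bandwidth gap: ~{fabric_GBps / tengig_GBps:.0f}x")
    print(f"latency gap:   ~{tengig_ns / fabric_ns:.0f}x")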