I love Caddy for its easy configuration, and that's why I created Novanode. It makes it easy to launch hosted Caddy instances (with Nginx coming soon) and puts you fully in control of how you manage the configuration.
No more being restricted by enterprise tiers or limited configuration options - just simple, powerful, and flexible global load balancing.
Check it out here: https://novanode.sh
The fixed cost per region seems like a barrier to experimenters and large development teams alike. It's not much in the grand scheme, but it's enough to deter an individual from standing something up on a whim and leaving it running. Likewise, for a large development team, giving every developer their own stack would be costly. In each case I'm not talking about "production" workloads, but the semi-idle stacks that run for long periods, are critical, need to mirror the production setup, and don't generate revenue.
Your LBs are quick to deploy, which is super important for a fluid CI/CD experience, but they miss the mark without usage-based pricing.
Do others see this the same way?
For intra-region redundancy, we deploy 2 nodes per region in HA mode. If one node fails, traffic is seamlessly routed to the other node in the same region through Fly.io's internal load balancing. This provides N+1 redundancy within each region, ensuring service continuity even during single-node failures.
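For anyone wanting to replicate that setup themselves, a hypothetical sketch with flyctl (region names are examples) is just pinning two Machines per region and letting fly-proxy route around a failed one:

    # Hypothetical: two Machines in each region, so fly-proxy can
    # fail over to the survivor if one goes down (N+1 per region).
    fly scale count 2 --region ord
    fly scale count 2 --region ams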
Why did you choose Fly.io? Traffic there is very expensive, which is an issue for people who want to run an LB.
I would be interested in a hosted Caddy cluster that lets me configure everything Caddy provides without needing to fiddle with Caddyfiles or its API directly.
Pretty much the only thing we add is a storage layer for your certs, so you avoid ACME rate limits in multi-region deployments.
Slightly off topic, but something nice about Caddy is that it automatically falls back to ZeroSSL (if you have an email address defined) when you hit Let's Encrypt rate limits. I have a case where more certificates for a root domain are needed than LE is capable of providing, and this fallback solves the rate-limit problem seamlessly.
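For anyone who hasn't set this up: the fallback only needs an email in the Caddyfile's global options (the address below is a placeholder), since Caddy uses it to register an ACME account with ZeroSSL:

    {
        # Placeholder address; with an email configured, Caddy can
        # fall back to ZeroSSL when Let's Encrypt rate limits hit.
        email admin@example.com
    }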
Thanks for sharing your project!
Edit: Ah, thanks @evanjrowley! I'm glad it was a typo, because otherwise the name would have doomed this baby :)
EDIT: see the other reply, appears that it handles both given it leverages Fly's Anycast setup.
The managed bits are the certs/configs/failover, so you don't need to be concerned with those.
Though for a single VPS instance it could make sense to just host your own Caddy on that node. If you need global distribution, Novanode is a good answer.
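For the single-VPS case, the whole "stack" can be as small as this Caddyfile (domain and upstream port are placeholders):

    example.com {
        # Caddy obtains and renews the certificate automatically,
        # then proxies to the app listening locally.
        reverse_proxy localhost:8080
    }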
Finally, make each VPS check the health of the other, so its DNS server stops handing out the other VPS's address when that VPS is down: you will already have them checking on each other for the load checks.
It's a fun and practical exercise (you may have to write your own DNS servers), after which you can think about how to do the same for more than 2 VPSes and the algorithms that entails.
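As a minimal sketch of that exercise, assuming the Go library github.com/miekg/dns (the IPs and the /healthz path below are made up), a health-checked A-record server could look like:

    package main

    import (
        "log"
        "net"
        "net/http"
        "sync/atomic"
        "time"

        "github.com/miekg/dns"
    )

    // current holds the net.IP we currently answer A queries with.
    var current atomic.Value

    // probe polls the primary's health endpoint and flips the served
    // address to the backup while the primary is unreachable.
    func probe(primary, backup string) {
        client := http.Client{Timeout: 2 * time.Second}
        for {
            resp, err := client.Get("http://" + primary + "/healthz")
            healthy := err == nil && resp.StatusCode == http.StatusOK
            if err == nil {
                resp.Body.Close()
            }
            if healthy {
                current.Store(net.ParseIP(primary))
            } else {
                current.Store(net.ParseIP(backup))
            }
            time.Sleep(5 * time.Second)
        }
    }

    func handle(w dns.ResponseWriter, r *dns.Msg) {
        m := new(dns.Msg)
        m.SetReply(r)
        for _, q := range r.Question {
            if q.Qtype == dns.TypeA {
                m.Answer = append(m.Answer, &dns.A{
                    // Short TTL so clients re-resolve quickly on failover.
                    Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA,
                        Class: dns.ClassINET, Ttl: 30},
                    A: current.Load().(net.IP),
                })
            }
        }
        w.WriteMsg(m)
    }

    func main() {
        current.Store(net.ParseIP("203.0.113.10")) // primary VPS (placeholder)
        go probe("203.0.113.10", "203.0.113.20")   // backup VPS (placeholder)
        dns.HandleFunc(".", handle)
        log.Fatal((&dns.Server{Addr: ":53", Net: "udp"}).ListenAndServe())
    }

Run one copy on each VPS, point your domain's NS records at both, and each nameserver stops handing out the other box's address while it's down.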
That would use AWS and insulate you from the details.
The fun part is learning how to do that, which gives you a better idea of how it works and full control of the solution.
You can then think about anycast or getting your own IP blocks.
Before using an existing solution, I like to understand how it works to make sure I won't get bad surprises: being able to reverse-engineer and debug at the assembly level can be a helpful skill, and so can understanding DNS.
Yesterday I vibe-coded a DNS server from scratch in half a day, because I wanted to test something very specific bridging DNS and mDNS. Doing the same for health checks and geo-routing might take, what, another half day?
The experience and understanding gained can help decide if it's worth using a service like route53 or not, or even better: just doing without the feature, because if you have 1 VPS, "YAGNI" is the likely answer!
If the poster is seriously thinking about scaling to 2 VPSes or more, the experience gained will expose the various ways this can fail, and may prompt them to reconsider the decision (maybe get beefier hardware instead?).
In my case, I saw the DNS-mDNS bridging isn't much of a problem, so I don't have to reconsider adding the feature I want.