This is very interesting! Would love to see it in play in Wasmer at some point.
I was aware of TinyGo, which allows compiling Go programs via LLVM (and targeting Wasm, for example). The resulting programs have a very tiny footprint and can even run in the browser: https://tinygo.org/
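For anyone curious, the two Wasm routes look roughly like this (a sketch; it assumes a `main.go` in the current directory, and TinyGo's target names have shifted between releases, so check `tinygo targets`):

```sh
# Standard Go toolchain targeting wasm (runs in the browser with the
# wasm_exec.js glue file that ships with Go):
GOOS=js GOARCH=wasm go build -o main.wasm ./main.go

# TinyGo equivalent; the output is typically far smaller:
tinygo build -o main.wasm -target wasm ./main.go
```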
Given that Go can already be compiled to WebAssembly (with the ability to use TinyGo if you want to trade off some language features for efficiency), is there anything that would make this more attractive than the alternatives? That it's written in Rust and can be used as a library by Rust code?
The Go-in-Go compiler was significantly slower than the Go-in-C compiler that it replaced, although most users didn't notice it because the new compiler contained many algorithmic improvements that were judiciously not backported to the old compiler in order to make the transition smoother. A compiler written in Rust could conceivably be faster than the current Go compiler.
If the Go compiler was twice as fast, I wouldn't really notice.
If the Go linker was twice as fast, that would be a minor convenience, sometimes.
I wouldn't expect much more than twice, maybe thrice at the very outside. And it'd be a long journey to get there, with bugs and such to work through. The blow-your-socks-off improvements come when you start with scripting languages. Go may be among the slower compiled languages, but it's still a compiled language with performance in the compiled-language class; there's not a factor of 10 or 20 sitting on the table.
But having another implementation could be useful on its own merits. I haven't heard much about gccgo lately, though the project [1] still seems to be getting commits. A highly compatible Go compiler that also did a lot of compile-time optimizations could be valuable, and that's the sort of code that may be more fun and somewhat safer to write in Rust (though I would argue the challenge of such code is for the optimizations themselves to be correct, rather than for the optimization process not to crash, and Rust's ability to help with that is marginal). The resulting compiler would be slower but might be able to create much faster executables.

[1]: https://github.com/golang/gofrontend
It really puzzles me that people complain about compilation speed in Rust these days: I've worked on pretty big Rust code bases with lots of dependencies, and cargo check has always been pretty much instant for me, including when I'm traveling and using my mid-range laptop from 2012! (My main desktop is from 2018; I bought it because my previous desktop, from 2009, struggled to compile Servo, mostly due to having too little RAM.)
Debug builds take a bit longer (a few seconds) on the desktop, while still staying below a minute on the laptop (remember, I'm talking about a 12-year-old Clevo laptop, not a recent MacBook). It's definitely not worse than TypeScript compilation or even JavaScript bundling, yet we pretty much never hear complaints about TypeScript's compile times being too long.
Yes, it could be faster with a different compiler architecture, especially on clean release builds and that would be nice, but it's a very minor annoyance (I don't do a full release build unless I've updated my compiler version, which only happens a few times a year).
The contrast between the discourse and my day-to-day experience on near obsolete hardware is very striking.
(Compilation artifacts eating up hundreds of GB of my hard drive are a much, much bigger nuisance in practice, yet nobody seems to talk about that here on HN.)
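For reference, a quick way to see how much the `target` directories are eating, and to reclaim the space (plain shell; assumes your checkouts live under one parent directory):

```sh
# Sum up the size of every Cargo target directory under the current
# directory, without descending into them:
find . -maxdepth 3 -type d -name target -prune -exec du -sh {} +

# Reclaim the space for a single project (run from its root):
cargo clean

# Or drop only the release artifacts and keep the debug ones:
cargo clean --release
```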
> I don't do a full release build unless I've updated my compiler version, which only happens a few times a year
That's probably part of the difference. I do tens of these every single day.
GUI apps can be quite slow in debug mode, and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.
Tens of clean builds? I'm very curious: why? (because obviously that puts you in a completely different situation compared to someone who can rely on incremental builds)
> GUI apps can be quite slow in debug mode
Full debug mode, definitely, but in that case I've always found that building the dependencies in release mode was enough, YMMV. And then that's what incremental rebuilds are about.
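For anyone who hasn't set that up: the usual way to get "dependencies optimized, own code fast to rebuild" is a dev-profile override in Cargo.toml, something like this sketch:

```sh
# Append a profile override to Cargo.toml: all dependencies get
# compiled with optimizations even in dev builds, while your own
# crate stays unoptimized for fast incremental rebuilds.
cat >> Cargo.toml <<'EOF'
[profile.dev.package."*"]
opt-level = 3
EOF
```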
> and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.
I've mostly experienced the PITA when working with multiple code bases over time or in parallel, but surely it doesn't happen every day, let alone multiple times per day, does it?
It's partly a privilege of being able to. I have a MacBook M1 Pro machine with 10 cores, so clean release builds are tolerable. The slowest project I work on regularly is Servo, and I can do a clean release build of that in 3-4 minutes. Most of the other projects I work on are more like 30s to 2m max.
It's also a disk space thing. Between working on multiple different projects (I have 200 projects in total in my "open source repos" directory; most of those I only interact with very occasionally, but 5-10 in a day wouldn't be particularly unusual for me) and switching between branches within projects, I can build up tens of GBs of data in the target dir within a few hours. And I don't have the largest SSD, so that can be a problem! So it's become habit to cargo clean reasonably regularly.
Finally, sometimes I am explicitly testing compile time performance (which requires a clean build each time) or binary size (which involves using additional cargo profiles, exacerbating the disk space issues).
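A sketch of both of those workflows (the `min-size` profile name here is made up; custom named profiles need a reasonably recent Cargo):

```sh
# Measuring compile-time performance means starting from scratch:
cargo clean && time cargo build --release

# A dedicated profile for binary-size experiments; each profile keeps
# its own artifacts under target/, hence the extra disk usage.
cat >> Cargo.toml <<'EOF'
[profile.min-size]
inherits = "release"
opt-level = "z"
lto = true
EOF
cargo build --profile min-size
```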
It's still my number one complaint about Rust, even though it has definitely gotten better over time. Partly my fault - I'm stuck on a slightly underpowered Windows machine at work. My Macs at home compile significantly faster. But as soon as I add certain crates like serde, tokio, windows, and some others, the compile times grow quickly. It also means that tasks Rust isn't necessarily designed for but can be used for (like web backends) become frustrating enough to dissuade me from using it as a do-it-all language despite certain aspects of the language being really nice. Even a 30-45 second tweak-test loop becomes annoying after a while. Again more of a personal problem than anything, but the point is I personally am constantly frustrated with the compile times.
> This sample code took 12 minutes on a clean build on my travel netbook, now dead.
Clean builds are slow indeed. But they also happen once every six weeks at most, if you switch to the latest compiler at every release.
> Get the community editions of Delphi, FreePascal, or D and see what a fast build means.
Honestly, who cares about the difference between 1s vs 100ms vs 10ms for a build, though? Rust compilation isn't optimal by any means, and it wouldn't have been workable at all in the 90s, but computers are so fast today (even 13-year-old computers) that it rarely matters in practice IMHO.
The Roc team does; that was one of the reasons they dropped Rust for Zig, even though Zig has yet to reach 1.0.
As do many of us, since we know how fast builds can be with complex languages; e.g. add OCaml to the list of compiler toolchains faster than Rust's, while having an ML type system.
> But because of the later stage when it becomes 600ms vs 60s.
What later stage, though? As I said, I've worked with big code bases on old hardware without issues.
I'm simply not convinced that there exists a situation where an incremental rebuild of the crate you're working on takes 60s, at all, especially if you're using hardware from this decade.
I must be doing something wrong because incremental builds regularly take 30-60 seconds for me. Much more if I add a dependency. And I try to keep my crates small.
As a sibling comment points out, it's likely to be mostly link time, not compilation time.
The most recent Rust version ships with `lld` so it shouldn't be the case anymore (afaik `lld` is a bit slower than the `mold` linker, but it's close, much closer than the system linker that was previously being used by default).
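For anyone who wants to try `mold` anyway, the opt-in is small; a sketch assuming Linux/x86-64 with clang and mold installed (this mirrors the setup described in mold's README):

```sh
# Tell Cargo to link through clang, and have clang invoke mold:
cat >> .cargo/config.toml <<'EOF'
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
EOF
```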
On a MacBook M2 Pro, on a project with loads of services (210k LOC), a full rebuild takes 70 seconds. An incremental build takes 36s.
For one service, full rebuild in 16s and incremental 0.6s.
It's not blazing fast but considering the scale of the project, it's not that bad, especially since I rarely rebuild every service at the same time.
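For others with multi-service workspaces: Cargo can scope work to a single member, which is usually where the iteration loop lives (the crate name below is hypothetical):

```sh
# Build only the crate you're iterating on, not the whole workspace:
cargo build -p my-service

# Or just type-check it, which is faster still:
cargo check -p my-service
```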
That's strange. Humongous k8s Go projects (>500k LOC) build in a third of that time. Do you have lots of deps in your `go.mod`? Is the project abusing codegen?
Consider upgrading your hardware if/when you get a chance to (obviously this is expensive). My builds (Rust not Go, but it might well be similar?) got 10x faster when I upgraded from a 2015 MBP to an M1. I suspect 2019 to M4 might be similar.
> This sample code took 12 minutes on a clean build on my travel netbook, now dead.
> Maybe nowadays it is faster, I have not bothered since I made the RIR exercise.
Took me 18 seconds on an M4 Pro.
Please stop spreading FUD about Rust. Compile times are much better now than they were, and they're constantly improving. Maybe it will never be as fast as one of those old languages that you like that nobody uses anymore, but it's plenty usable.
> Do you have 10-year-old netbooks to give to everyone? Because that seems to be what's required to have slow compile times in Rust.
Unfortunately, not all of us are in an economic situation that allows us to sponsor Trump gifts every couple of years.
How many of those thousands of software projects that do use Rust can be shown as counterexamples to slow compilation times on hardware that common people usually buy and keep around?
Especially in those countries outside tier 1 of the world economy, where people get computers from whatever parts the West no longer considers usable for its daily tasks.
A 10-year-old netbook is also not the average computer, and yet we are to believe that 12-minute compile times for some small hobby project are the norm and Rust sucks.
It is when people have more important things to spend money on.
It is also not normal to expect people to spend 2,000 euros to enjoy fast compilation times, when other programming languages have offered faster compilation times on cheaper budgets since MS-DOS, on hardware that is lousy by today's standards.
You don't care; other people do, and whoever cares most drives adoption.
> A compiler written in Rust could conceivably be faster than the current Go compiler.
Is that really relevant, though? A compiler written in Rust is unlikely to be that much faster than a compiler written in Go. Most users might not notice a tiny difference in build times.
The Go compiler is already ridiculously fast. As far as I know the garbage collector usually doesn't even activate for short-lived programs, which compilation usually is. Turning garbage collection off entirely doesn't have much of an impact on build times.
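That claim is easy to poke at yourself; a rough sketch, not a rigorous benchmark (Go's build cache has to be cleared so the compiler actually does work both times):

```sh
# Baseline: clear Go's build cache, then time a full build.
go clean -cache
time go build ./...

# Same again with the runtime GC disabled inside the toolchain.
go clean -cache
time GOGC=off go build ./...
```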
What significant opportunities exist for performance with a Rust implementation that aren't possible in Go?
The original port was slower because it was a near-straight transpilation of the original C compiler. It didn't do anything to try to speed things up; they went for correctness first. Then in subsequent releases they worked on speed improvements.
While it's a cool experiment, is there some purpose I'm missing? Go can already do this natively, and compilation speed is already its selling point, so I'm not sure how Rust could help there.
Seems like the effort would be better spent towards improving Rust compilation speed. Unless you just wanted to create a compiler for learning or HN points, in which case: here ya go.
While interesting for the author as a learning exercise, the existing reference compiler is a much better proposition, being bootstrapped and proving the point that Go is usable for systems programming.
Unless writing compilers, linkers, assemblers, and a GC runtime is no longer considered systems programming.
But this approach is very interesting. I wonder how compatible Goiaba is with Go vs TinyGo: https://tinygo.org/docs/reference/lang-support/stdlib/
Compilation speed is not something I worry about in Go, unlike in Rust, which I seldom bother with nowadays, compilation speed being one of the reasons.
https://github.com/pjmlp/gwc-rs
Maybe nowadays it is faster, I have not bothered since I made the RIR exercise.
Get the community editions of Delphi, FreePascal, or D and see what a fast build means.
Better yet, take the latest version of Turbo Pascal for MS-DOS, meaning 7, and try it out on FreeDOS.
I definitely do. Not necessarily because of the 10ms vs 1s. But because of the later stage when it becomes 600ms vs 60s.
(Not affiliated with the project. Just switched to it and never looked back.)
I'd be thrilled to have it build in 300ms.
(Using a 2019 MacBook Pro)
Wait, aren't Go builds supposed to be fast?
There's no “big tutorial” though. There's a section about compilation time performance[1], but it's arguably not “big”, and the most impactful parts of it are about linking time, not compilation time. And half of the section is now obsolete since Rust uses `lld` by default.
[1] https://bevy.org/learn/quick-start/getting-started/setup/#en...
Edit: oh, I get it, you probably meant “where lld is set as default”, which is currently Linux only.
`lld` is supported on the other platforms though, so you can just copy-paste the three lines of configuration given on the Bevy page and call it a day.
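The shape of that configuration, for illustration only (copy the exact, current per-platform snippet from the Bevy page; target triples and linker names differ across platforms):

```sh
# e.g. on Windows (MSVC), point Cargo at the rust-lld that ships
# with the Rust toolchain, via .cargo/config.toml:
cat >> .cargo/config.toml <<'EOF'
[target.x86_64-pc-windows-msvc]
linker = "rust-lld.exe"
EOF
```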
I would gladly take one.
And the Roc team as well; maybe they would revert their decision to move away from Rust to Zig due to compile times.
> I would gladly take one.
Do you have 10-year-old netbooks to give to everyone? Because that seems to be what's required to have slow compile times in Rust.
> And the Roc team as well; maybe they would revert their decision to move away from Rust to Zig due to compile times.
More cherry-picked examples; you sure love those.
Like, what's the point of bringing this up? Do you want me to show you the thousands of software projects that do use Rust as a counterexample?
Obviously no programming language is one size fits all.
Maybe they can afford to wait.
An M4 Pro isn't your average computer though.
But as I said, clean builds aren't the most common experience either.
The production (Clang backend) parallel build of the V language takes about 3.2 seconds, all on an M1 Mac. Even the Go compiler seems slow in comparison.
https://github.com/golang/go/issues/73608
Sounds like they want to maybe include https://github.com/usbarmory/tamago in the compiler.