I love that it goes both ways, covering the pluses and minuses of both languages, including rewriting back into C++ when it made sense, and the side joke about rewriting existing CLI tools in Rust.
While C++ isn't perfect, has the warts of a 50-year-old language, and will probably never match Rust's safety, we would already be in a much better place if everyone at least used the tools at their disposal from the last 30 years.
While I would advise using Rust for some security-critical scenarios, there are many others where it is still getting there, and there are other requirements to take into account besides affine types.
Coming from C++, my favorite take on Rust is that it is fundamentally about productivity.
Avoiding UB is a serious drain on productivity in C++, and every new language or library feature comes with additional pitfalls, increasing the mental load.
This is to say: The benefit of Rust is not actually about "security critical scenarios", but much more generally about delivering the same quality of code in a fraction of the time.
I agree with the general sentiment. While it doesn't sort out all safety issues, actually programming in C++ instead of C-with-a-C++-compiler, and making use of the plethora of analysers, would already prevent many issues.
It's not only about avoiding UB in one area; it's about hidden footguns you may not know about. I have been using C++ for a decade now, and there are still UB causes I learn about every week or so.
What I dislike about C++ is that it grew to become a monster of a language, containing all programming paradigms and ideas, good or bad, known to mankind.
It's so monstrously huge that no human can hold its entire complexity in their head.
C++ allows you to do things in 10000 different ways, and developers will do just that. Often in the same code base.
That being said, I would use a sane subset of C++ every day over Rust. It's not that I hate Rust or that I don't think it's good, technically sound, and capable. It just doesn't fit the way I think and the way I like to work.
I like to keep a simple model in mind. For me, memory is just a huge array from which we copy data to the CPU cache, move some to CPU registers, execute instructions, fetch data from the registers, and put it back into some part of that huge array, to be used later. Rust adds a lot of complexity on top of this simple mental model of mine.
> C++ can be safe enough if you proceed with care.
The problem with this is that if you have a team working on a C++ product, you will need some people who can catch memory bugs to review every change before merging. Even with this approach, it is still possible to miss some memory bugs, since the reviewer needs to fully understand each object's lifetime, which is time-consuming during code review.
I work at a company that runs a server application written in C/C++. The code base is very large, and we constantly have memory bugs that require ASAN in production to track down. We started migrating parts to Rust a year ago, and we have never had a single crash from Rust code. The reason we chose Rust is that it is a server application that is computation-intensive, latency-sensitive, and handles a large number of active connections.
Keep using Rust until you are comfortable with it, and you will like it. It fits your simple mental model. I can say this because I was a C++ user my whole life and switched to Rust recently.
> The problem with this is that if you have a team working on a C++ product, you will need some people who can catch memory bugs to review every change before merging. Even with this approach, it is still possible to miss some memory bugs, since the reviewer needs to fully understand each object's lifetime, which is time-consuming during code review.
Nah, if you're trying to match every "new" with a "delete" during the code review, you've already lost the battle. You can probably succeed when the code is added, but then the edits start to flow and sooner or later it's gone. Reviews are mostly good to catch design problems, not bugs.
The only reliable approach I know is a strict rule of never mixing memory management with business logic. Nothing else works well enough, but this one works remarkably well.
Business logic should rely on containers, starting with simple unique_ptrs and vectors and going deeper into custom territory when appropriate. If you can't find a suitable standard container, you build a custom one. The principal difference between "writing a custom container when you need it" and "integrating custom memory management into the business logic when you need it" is that containers are:
* well understood
* well tested
* relatively small code-wise
* almost never change once implemented
None of the above applies to the business logic; it's the complete opposite.
Think of it kind of like programming in Java: someone has to write the memory management, and it's a hell of a job. However, once this is done, programming the ever-changing business logic is easy and safe.
You can live the same life in C++ AND also have the ability to put on the "Doomguy of memory management" shoes whenever you feel like it. Just don't forget to take off the "business logic guy" shoes when you do it; you can't wear both at the same time.
I was pretty hyped on C++ during the (early?) 2010s, hoping that eventually I'd get to work on a project with really serious number-crunching needs, and ... and then ... the big guns. Now reading this feels like the best argument for Rust/Scala. :)
Yes, but that is _incredibly_ time consuming. You have to set up asan, msan, tsan, and valgrind. If you want linting you need to do shenanigans to wire up clang-tidy.
I also like simple mental models. I like not having to figure out the cmake modifications to pull in a new library. I like having a search engine when I need a new library for x. I like when libraries return Result<Ok, Err> instead of ping-ponging between C libraries that indicate errors using retval flags and C++ libraries that throw std::runtime_error(). I like not dealing with void* pointer casting.
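The Result point is concrete enough to sketch. Note that parse_port below is a made-up helper, not from any library discussed here; the point is that the error is a value in the signature rather than a retval flag or a thrown exception:

```rust
use std::num::ParseIntError;

// Hypothetical helper: failure is part of the return type, so the
// caller is forced to look at it; nothing is signaled via flags or
// thrown past the call site.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```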
Yeah, I’m leaning towards zig but I’m a bit on the fence still.
For Rust, I kind of got tired of writing unsafe rust for embedded, but that’s addressable afaik. The real dealbreaker was that after 10k+ lines of code I still will pop open the source of a library that solves a simple problem and the code looks indecipherable. I also don’t really agree with the dependency explosion that cargo encourages.
Zig is very nice in that it has the most ergonomic struct usage I’ve encountered. The stdlib could really use some improvement though. Comptime is very cool, but I also worry if the community will get undisciplined with it.
Zig is fantastic, however there are a few issues (mostly related to its immature/wip status):
- the build system is constantly changing in a breaking way (between releases some of the repos I have on GH no longer build and need their build.zig to be updated).
- the comptime section of the docs needs to be heavily expanded, I'd love to see them take common Go interfaces and redo them in Zig (like io.Writer, io.Reader), breaking down the process step-by-step. It took me a little bit longer than it should've to efficiently use comptime.
- a whole section dedicated to things like using WaitGroup and multithreading, for those coming from langs like Go. Also, higher-level concurrency primitives like channels would be fantastic.
- a better import system for external zig libraries, the zig fetch => .dependency => .root_module.addImport stuff is not as straightforward as it should be although for someone coming from C it definitely does feel like using meson
None of these are critical and again, all signs of Zig's "youth".
When I say "Rust alternative" it's precisely because it competes in the same space: very low-level, no GC, extremely high performance constraints, safety guarantees.
Re: safety guarantees, much digital ink has been spilled on how Zig can give Rust a run for its money when it comes to safety.
When people say: Rust <=> C++, Zig <=> C; they forget that C++ was precisely meant to be an enhanced C, which is what Zig is trying to accomplish. They simply eschewed chasing complexity as the holiest of holy grails, which in turn leads to the cognitive load of writing/reading Zig code to be MUCH smaller than C++ or other langs in that space.
All that said, I'd never recommend a company build their product on Zig just yet, at least not without some kind of red telephone to the Zig team or a dedicated Zig developer, given it's still not fully mature.
Re: managing memory yourself, Zig's defer makes this much, much more straightforward than you would think, and feeding in your own allocators can simplify this in many cases (and make testing for leaks much easier).
It has the same safety guarantees as Modula-2 and Object Pascal have been offering for decades, but apparently curly brackets and @ everywhere are better than begin/end.
I can think of three things off the top of my head:
- Rust doesn’t let you pretend that memory is a flat array of bytes
- Single ownership of data can be annoying in some cases
- The borrow checker pointing out that you’re trying to do something stupid with pointers (again) can be annoying
Of course, I’m of the opinion that the hassles are worth it, especially the borrow checker. Almost every time I have to fight the borrow checker, it’s because I haven’t thought properly about the pointers involved and tried to do something stupid.
How does single ownership conflict with the idea that memory is a flat array of bytes?
Further, the borrow checker does not care about pointers, only references. With pointers, you are on your own. It is true that using pointers in Rust is more cumbersome than it could be. But it is much easier to compartmentalise the pointer parts into separate functions and expose references instead.
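As a sketch of that compartmentalisation, first_byte below is a made-up helper: the raw-pointer work stays inside one small function, and callers only ever see a checked reference.

```rust
// Keep the raw-pointer work inside one function and expose only a
// reference to callers, who never need `unsafe` themselves.
fn first_byte(buf: &[u8]) -> Option<&u8> {
    if buf.is_empty() {
        return None;
    }
    let ptr = buf.as_ptr();
    // SAFETY: buf is non-empty, so ptr points at a valid, initialized
    // u8, and the returned reference inherits buf's lifetime.
    Some(unsafe { &*ptr })
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(&b'a'));
    assert_eq!(first_byte(b""), None);
    println!("pointer work stayed inside first_byte");
}
```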
I agree that some paradigms and patterns are genuinely difficult to use, e.g. any intrusive data structure, but I do not see the contentious link between simple memory models and the borrow checker and the like.
Single ownership and memory not being flat are separate points.
And I guess I'm imprecise saying pointers where I mean borrowed values, but my point is that a borrow is just a pointer with additional type checking. More formally: C pointers are a complete but unsound formal system, whereas Rust borrows are sound but incomplete. And every time I get in a fight with the borrow checker, it's because I'm doing something unsound, not because the system is incomplete.
I love this. We are a rust shop and we use clickhouse a fair bit. We’ve been quite impressed with its speed and flexibility. I’m glad to see this kind of direct, real-world feedback around both the benefits and difficulties of mixing rust and C++, which we have also had to do a bit of (albeit in the opposite direction: a smattering of C++ in a sea of rust).
I’m not sure if the poster here is the post author, but it would be great if the author would consider filling out this survey that was recently released asking for feedback on the future of rust’s vision: https://blog.rust-lang.org/2025/04/04/vision-doc-survey.html
I’d love to see rust become the de facto standard for cross-language portable code by virtue of its ease of use, but as this and our experience highlights, there’s some way to go yet!
Oh one more note, regarding hermetic builds: I have tried to package clickhouse in nix for our dev environment and CI, but its build is pretty complicated, so I resorted to just pulling the precompiled binaries.
Nix, via the standard rust integration or via something like crane, is actually quite nice for building rust/C++ combo projects, so it’d be awesome if the team might consider this as a means of achieving reproducibility. I’d imagine they’d have an easier time of it than I did, given they are more familiar with their own build process.
> If you do an experiment and say "C++" anywhere on the Internet, in a minute someone will chime in and educate you about the existence of Rust.
Many people see this as a problem. The response to TypeScript choosing Go over Rust was pretty gross imho, no one should be abused for choosing a language.
The actual issue wasn't Go over Rust, but rather that key people responsible for C# design, on a Microsoft project, went for a Google language.
Meanwhile, the .NET team routinely talks about .NET's image problem outside traditional Microsoft shops, and decisions like this naturally aren't helping one bit.
Yeah, I loved the language and the IDE, but only boring enterprise stuff was built with it. Also, C# got so complex, since it had to absorb every idea from F# rather than making F# a viable programming language on its own and improving interop...
Even after being a C# programmer for years, I still encountered patterns that were completely unreadable to me.
That's what happens when you have a high-level VM that wants to support high-level concepts from multiple high-level languages, and all your languages need to be able to talk to each other.
Same thing will happen to Wasm as it decides to add more and more high level stuff “to avoid shipping multiple GCs” and “to get different languages to talk to each other.” As soon as you want to abstract over more than “a portable CPU and memory” you get into that mess.
This never worked out any better in the past with the JVM and CLR, but let's keep trying.
Do you have a specific snippet in mind that demonstrates the issue? When it comes to writing unreadable code, it is likely more of a team or community issue than a language one, since it tends to happen in every sufficiently powerful language.
> rather than making F# a viable programming language
F# is a viable language aside from using specific few libraries that don’t play with it nicely or around writing ref struct heavy code. I’m not sure what makes you think it is not. In comparison, it is probably more viable for shipping products than Scala, Clojure, OCaml and Haskell.
I quit working on .NET solutions a few years ago, but I built a feature in F# about 7 years ago. While the language was wonderful, the IDE support was very minimal, and documentation and examples were really hard to find. So I don't feel it's really pushed as an alternative.
C#: a lot of LINQ-style code was really hard for me to grok, like a language within a language. The language got really huge in general, while it was fine as a "better Java" for most purposes.
I think it speaks to an incredibly pragmatic viewpoint though. When you take a look at your flagship language/ecosystem and you say, "hey, this is great for building entire systems for doctor/patient data... or perhaps even banking software"... but recognize that "it's probably not the best thing for building a compiler/tooling". Google themselves show the same pragmatism when NOT using Go for Android. They prefer Kotlin these days.
Completely different reasoning: choosing Go instead of Kotlin would mean rewriting 100% of Android userspace from scratch, minus the C++ libraries for Treble drivers, graphics, and the ART toolchain.
And Google did do exactly that with Fuchsia, which doesn't seem to be going to power anything beyond Nest screens.
Do you want more portable than bytecode with a dynamic compiler? Apparently the greatest thing on Earth, per the WebAssembly folks.
As for AOT compilation, there have been multiple approaches since the early days, and the latest, Native AOT is good enough for everything required to write a TypeScript compiler, including better WebAssembly support than the Go compiler, thanks to Blazor infrastructure.
A Rustacean implied Go was not memory safe and that Microsoft couldn't understand the power of Rust. Steve Klabnik & others told them off. But other Rustaceans, like Patrick Walton, argued that Go has memory safety issues in theory.
Rustacean, Gopher... this is an embarrassing way of looking at it.
And, speaking of, Go is not a memory safe language when you reach for its concurrency primitives as it very easily lets you violate memory safety (as opposed to Rust, .NET and JVM, where instead you get logic bugs but not memory safety ones).
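A minimal sketch of the difference: in Rust, ownership of the data has to move into the spawned thread, so the unsynchronized sharing that would compile fine in Go is a compile error here.

```rust
use std::thread;

fn main() {
    let mut data = vec![1, 2, 3];
    // `data` must be moved into the thread; the compiler refuses any
    // further use of it from this scope, so two threads can never
    // mutate it without synchronization.
    let handle = thread::spawn(move || {
        data.push(4);
        data
    });
    // data.push(5); // error[E0382]: borrow of moved value: `data`
    let data = handle.join().unwrap();
    println!("{data:?}");
}
```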
That's really dumb, but it's hard to call one instance abuse. Now, I do believe it turns into abuse if a bunch of people do it, especially if they bring their insults specifically into your space (repo forge, mailing lists, flooding your replies on social media, etc.).
The Rust community is unfortunately plagued by this subset of devs who are zealous (and downright toxic) in their shilling for their favorite language.
Before anyone gets triggered and starts typing up a reply: "SUBSET" is the word I used.
So is any big enough community. Look at the "Why Go?" discussions, and you'll see a lot more loud, obnoxious C# devs, but somehow Rust is the worst.
There was also a fan of a "Rewrite it in LISP" post[1]. Where is the
"The Lisp community is unfortunately plagued by this subset of devs who are zealous (and downright toxic) in their shilling for their favorite language."
Somehow in the past 3-4 years it's only been that subset of Rust devs that have been wailing about: "WHY NOT REWRITE IT IN RUST?".
Often they're mostly the same type: anime pfp, walls of text of pompous technobabble as if they were some elite caste of arcane cyberpriests preaching the gospel of Rust, etc.
There's a reason "just rewrite it in Rust" has become such a meme.
Sure, you got me. If someone says the earth is actually a cube, I have to go defend the spherical chads.
In this case, it's the sheer disconnect between the number of actual RIIR askers and the number of people painting Rust devs with a continent-wide brush.
It's very in-group vs out-group reasoning.
Let's demonstrate it. For example: I'm a Java dev, and I see a Java dev, being a moron, so I'll say "What a moron". But if that guy was a C# dev, I'm going to say "C# devs are morons". See the error committed here?
> There's a reason "just rewrite it in Rust" has become such a meme.
Just because something is a meme doesn't make it true, either. Maybe it was true around the time of Rust 1.0, but that was like 10 years ago.
It isn't even the worst part about Rust community.
Ah, so no abuse took place then? Interesting how it works. One could look at it as well-deserved frustration. No one would've batted an eye were Rust to be chosen, but opting into Go over C# or F# is an unquestionably poor long-term decision.
This guy seems to be both very positive about Rust and unfairly cynical about it at the same time...
Rust is a really fantastic language but having worked on a mixed C++/Rust codebase I can see why they had so many issues. Rust just wasn't really designed with C++ interop in mind so it's kind of painful to use them together. Impressive that they made it work.
"Unfairly"? A lot of the issues are obvious deficiencies of Rust, including the immaturity of the ecosystem, integration issues, complexity, monomorphization bloat, and supply chain issues. Now, all languages have issues, and Rust is certainly a nice language overall. The main issue with Rust is that it has been oversold as a panacea for safety using exaggerated arguments. So a bit of cynicism seems entirely fair.
> The main issue with Rust is that is has been oversold as a panacea for safety using exaggerated arguments.
I know you know this, but Rust does provide essentially complete memory and lifetime safety if you stay within the bounds of safe. Standard C/C++ tooling has no way to even reliably detect memory safety violations, let alone fix them. It's trivial to write buffer overflows that escape ASAN, and missing a single violation invalidates the semantic meaning of the entire program (particularly in C++), which means virtually all nontrivial programs in C/C++ have UB somewhere (a point we disagree on).
Safe rust doesn't guarantee all the other possible definitions of safety, but neither does any other mainstream language. I don't think it serves any useful purpose to complain that the rust folks have oversold their safety arguments by "only" eliminating the biggest cause of safety issues. Stroustrup harps on this a lot and it comes across as very disingenuous given the state of C++.
Let's start with "... Rust does provide essentially complete memory and lifetime safety if you stay within the bounds of safe." Technically true, except for the fine print. In many scenarios you have to use "unsafe", e.g. to get performance equal to C in something as trivial as a matrix transpose, or when interfacing with other code (cf. this article).

Now compare this to "It's trivial to write buffer overflows that escape ASAN." The question is how difficult it is to avoid writing such overflows. By turning this around, you already made a biased and misleading statement.

Or "and missing a single violation invalidates the semantic meaning of the entire program (particularly in C++)": again technically true, but a meaningless talking point. The only practical questions are whether a violation leads to an exploitable bug and how difficult or costly it is to exploit. This "no semantic meaning" comment completely disregards the reality of mitigations, which often are very effective.

Continuing with "which means virtually all nontrivial programs in C/C++ have UB somewhere (a point we disagree on)": I actually agree with this, but most UB is irrelevant and easily mitigated. For example, signed overflow is not an issue at all, because you just tell your compiler to turn them into traps.

And "only eliminating the biggest cause of safety issues": memory safety is nowhere even close to the biggest cause of safety issues in IT. For me personally, it is completely irrelevant. I never got hacked via a memory safety issue. There are entirely different things I worry about, which Rust makes harder to deal with, e.g. supply chain security.
We have very different priorities. Because I work on safety critical systems, language semantics are very important in order to enable things like formal executable semantics and certified compilers. When I sign my name to an inspection, the inspection is relative to the language semantics. Things like exploitability and even performance are pretty far down the list of concerns, unless they happen to impact functional safety.
> In many scenarios you have to use "unsafe"
I don't agree, because unsafe is a part of the language. It's widely accepted practice in C & C++ to only use carefully chosen subsets of each language and this is enforced with linters and coding guidelines. You can straightforwardly ban unsafe the same way, or review uses more carefully, etc.
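Banning unsafe really is a one-line, compiler-enforced rule rather than a linter convention; a minimal sketch:

```rust
// Crate-level attribute: any `unsafe` block or function anywhere in
// this crate becomes a hard compile error, much like a MISRA-style
// subset rule, but enforced by rustc itself.
#![forbid(unsafe_code)]

fn main() {
    // let x = unsafe { *(8 as *const i32) };
    // ^ uncommenting this fails with: usage of an `unsafe` block
    let xs = vec![1, 2, 3];
    println!("sum of safe code: {}", xs.iter().sum::<i32>());
}
```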
> The question is how difficult it is to avoid writing such overflows.
It's nigh-impossible as far as I can tell. I already use formal methods, sanitizers, testing, static analysis, careful design, valgrind, intensive reviews, MISRA, etc. I can still quickly find new issues by firing up the fuzzer or looking at another team's code. Other large projects like Chrome and Linux have thousands of competent eyes on them and still deal with these issues too. What is everyone missing?
> e.g. signed overflow is not an issue at all because you just tell your compiler to turn them into traps.
Leaving aside the unnecessarily hyperbolic point that enabling traps on my systems might literally kill someone, traps usually aren't a well-supported operational mode. GCC's -ftrapv is broken, for example; UBSan isn't recommended for production; and GCC doesn't implement ubsan-minimal. MSVC doesn't support overflow traps at all, nor do most certified compilers. "Just use clang" obviously isn't what you're intending here, so I'm unsure how to interpret this.
In regards to memory safety being the biggest issue, I'm referencing the "70% of high severity bugs" numbers that have been put out by Microsoft and the Chrome teams and repeated by CISA in their memory safety roadmaps.
It's great that you haven't experienced large numbers of memory safety issues, but I can only speak to the lived experience of heartbleed and others. I see memory safety issues daily. I don't see supply chain attacks frequently and given how much publicity accompanied the discovery of the XZ attack, I suspect that's true for others.
So you’re using formal methods, sanitizers, the whole gamut of verification and yet you can “quickly” find new issues just by fuzzing. Sounds like there’s some significant problem there that you’re not mentioning.
Microsoft and Google have a ton of legacy code, they need to have high performance because they’re pushing everything to the web in order to spy better on people, they always churn their software and they are a very juicy target. As far as I’m concerned, they should rewrite everything in Rust and stop telling other people what to do.
But of course, they also need to sell Rust to the public, otherwise they would run out of developers or would have to maintain everything themselves. Hence the cheerleading.
This blog post is much closer to the reality of using Rust in production. In fact I’d add a couple of pitfalls myself:
* original cheerleader gets bored of the Rust rewrite/moves on and the project dies.
* original cheerleader moves on and the project lives under maintenance with non-Rust programmers which do not enjoy working on it and delay and reject changes and/or feature requests.
> So you’re using formal methods, sanitizers, the whole gamut of verification and yet you can “quickly” find new issues just by fuzzing. Sounds like there’s some significant problem there that you’re not mentioning.
Or perhaps you're missing the super-text, that all those things were insufficient to make C safe.
The "significant problem" is the same that every other organization faces: the testing and validation isn't quite as good as it could be and I know where to poke.
Maybe adding to the last point: I worry far more that some employee or student in my lab will cause a disaster by downloading a compromised Python package than that I'll get hacked via a memory safety issue. We do not use Rust, but Cargo would also be a massive concern. Rustup helps destroy decades of user education that says you do not download and run scripts from the internet.
I am fully able to appreciate that memory safety is important and Rust stepped up the game in mainstream programming. I think this is cool. But the exclusive and exaggerated focus on this does more harm than good. Memory safety is certainly much more important for advertisement companies such as Google to secure their mobile spying platforms than it is for me. The religious drive to push Rust everywhere to achieve a relatively modest[1] practical improvement in memory safety clearly shows that some part of the community rather naively adopted the priorities of certain tech companies at the cost of other - maybe more relevant - things.
1. I fully understand that the 100% guarantees Rust can provide when sticking to safe Rust are conceptually a fundamental step forward compared to what C provides out of the box. But this should not be misrepresented as a huge practical step forward over what can already be achieved in memory safety in C/C++ if one cares about it.
You seem to consider convenient dependency management a security hazard, which I have always found to be a pretty weird take. It logically follows that the severe difficulty of managing dependencies in C and C++ projects is actually a security feature, or how else are we supposed to understand this opinion?
Let's not pretend that anything is better on the traditional C/C++ side, where the approach is usually one or more of:
1. Vendoring dependencies in-tree. This can result in security problems from missing out on bugfixes upstream.
2. Reinventing functionality that would otherwise be served by a dependency. This can result in security problems from much less battle-tested, buggy in-house implementation. In closed-source code, this is effectively security by obscurity.
I've seen both of these cause issues in large C++ projects.
For reference, the Rust/Cargo ecosystem contains a lot of tools and infrastructure to address supply-chain security, but it will always be a difficult problem to solve.
Regardless of the programming language, it is a security hazard; that is why we now have SBOMs in the industry, and many corporations have procedures in place before adding that cool dependency to the project.
Regardless of whether it is cargo, vcpkg/conan, nuget, maven, npm, ...
If it isn't validated by legal and IT for upload into internal repos, it doesn't get used.
The Rust language is not well specified, and if you take Rust as the language specified by the compiler, then it has many soundness bugs. So even if you stay within "safe Rust", you can segfault.
The "memory safety" of rust is oversold since "safety" is not formally proven for the rust language. While anecdotally memory-related bugs seem less likely, rust without unsafe is not absolutely safe.
> If you do an experiment and say "C++" anywhere on the Internet, in a minute someone will chime in and educate you about the existence of Rust.
> I know examples when engineers rewrite code from Rust in Rust if they like to rewrite everything in Rust.
> our engineers become too nauseous from Rust poisoning
> So now they [Rust devs] can write something other than new versions of old terminal applications.
> someone shows PRQL, everyone else thinks "What a wonderful idea, and, also, Rust" and gives this project a star on GitHub. This is, by the way, how most of Rust projects get their stars on GitHub. It doesn't look like someone wants to use this language, but what we want is to ride the hype.
> we started to understand that it would be hard to get rid of Rust, and we could tolerate it.
It's a very shitty attitude and not even accurate. You see this attitude from old C/C++ devs quite a lot, it's just very weird that he has that attitude and then also seems to be simultaneously quite keen to use Rust. Very weird!
Anyway those are just the non-technical things. On the technical side:
> Fully offline builds
They solved it by vendoring but this is the obvious solution and also applies to C++.
> Segfault in Rust
They tried to do a null-terminated read of a string that wasn't null-terminated. Nothing to do with Rust. That would be an error in C++ too. In fact this is a strong argument for Rust.
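For reference, safe Rust makes the terminator an explicit invariant of the type; a minimal stdlib sketch:

```rust
use std::ffi::CString;

fn main() {
    // CString::new appends the NUL terminator itself and refuses bytes
    // containing an interior NUL, so a non-terminated read can only
    // happen across an explicit `unsafe` FFI boundary.
    let ok = CString::new("hello").unwrap();
    assert_eq!(ok.as_bytes_with_nul(), b"hello\0");
    assert!(CString::new("he\0llo").is_err());
    println!("terminator handled by the type");
}
```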
> Panic
C/C++ code aborts. Or more commonly it crashes in a very difficult to debug way. I'll take panics any day.
> Sanitizers require nightly
Ok fair enough but this seems relatively minor.
> Rust's OpenSSL links with the system library by default and you have to set an environment variable to statically link it.
They set the environment variable. Frankly this is a million times easier than doing the same thing in C++.
I'll stop there, but overall it seems like a lot of "this is a problem we had with Rust" where it should really be more like "this is something we had to do when using C++ with Rust".
If I try to read between the lines, I think the vibe comes from feeling / being pressured to use Rust without really seeing the point and it causing frustration in this context.
I think your vibe is more weird. If people have issues with Rust, it is a "shitty attitude". While, of course, C/C++ just objectively suck, right?
I think the comment you responded to gives pretty clear reasons why the take is weird or a "shitty attitude".
Competent C++ developers are the first to admit that C++ objectively sucks. It's a bad language, but that doesn't mean there aren't good reasons to use it. Claiming that C++ is great is a weird hill to die on.
> As a downside, Rust libraries typically have a large fan-out of dependencies, much like Node.js. This requires taking care to avoid the blow-up of dependencies, and to deal with annoyances of dependabot.
In the linked situation, they were using the library of a binary. This gets into the tension between "make it easy for `cargo install`" (and have a `cli` feature on by default) and "make it easy for `cargo add`" (and make `cli` opt-in).
This is not a great experience and we should improve it. There was an RFC to auto-enable features when a build-target is built (allowing `cli` to be opt-in but `cargo install` to auto-opt-in) rather than skip it, but the dev experience needed work. The maintainer can split the package, which helps with semver for the two sides, but needs to break one side to do so, and if it's the bin, people need to discover the suffix (`-bin`, `-cli`, etc.).
Current workarounds:
- `cargo add skim` will show the `cli` feature is enabled and you can re-run with `--no-default-features`
- if `cli` wasn't a default, `cargo install skim` would suggest adding `--features cli`
Re: panics: If you have a single long lived process that must do multiple short-lived things (web requests, say) and a panic in one of them MUST NOT take down the whole process, is that extremely difficult to pull off in Rust? I thought you could set up panic boundaries much like you would use catch-all exception handlers around e.g. each web request or similar, in other languages?
You can install a global panic handler to avoid bringing the whole process down. Instead of aborting, take the stack trace, print it, perhaps raise to Sentry, and kill the specific "work unit" that caused it. This "work unit" can be a thread or a task, depending on how the application is architected.
This is precisely what Tokio does: by default, a panic in async code will only bring down the task that panicked instead of the whole application. In the context of a server, where you'll spawn a task for each request, you have no way to bring down the whole application (*), only your current scope.
(*): there could be other issues, like mutex poisoning, which is why nobody uses the stdlib's mutexes. But the general point still stands.
In the context of Tokio, use Tokio's native mutexes / locking primitives. For sync code, parking_lot is the de facto replacement for the stdlib's ones.
I don't remember where I read it, but it has been admitted that having synchronization primitives with poisoning in the stdlib was a mistake, and that "simpler" ones without it would have been the better choice.
For context: a mutex is poisoned if a panic occurs while the mutex is held. The guarded data is then assumed to be broken or in an unknown state, thus poisoned.
Since Rust is not a managed/high-level language, panics are unrecoverable crashes, so they need to be dealt with at a higher level, i.e. the OS, with appropriate supervisor systems like systemd, or by having a master Rust process that spawns subprocesses and reacts with regular POSIX APIs when one of them terminates abnormally.
On a platform like Elixir, for example, you can deal with process crashes because everything runs on top of a VM, which is to all intents and purposes your OS, and provides process supervision APIs.
Rust can be optionally compiled in a panic=abort mode, but by default panics are recoverable. From implementation perspective Rust panics are almost identical to C++ exceptions.
In very early pre-1.0 prototypes Rust was meant to have isolated tasks that are killed on panic. As Rust became more low-level, it turned into terminating a whole OS thread on panic, and since Rust 1.9.0, it's basically just a try/catch with usage guidelines.
But few would write a process-per-request web server today, for example. And if a single-process web server handles 100 requests, you would then accept that one bad request tore down the handling of the 99 others. Even with a watchdog that restarts the service after the one request choked, you wouldn't save the 99 requests that were in-flight in the same process.
Can't you catch_unwind around each request handler? If one chokes, you just ignore that request. If you worry about that messing anything up, you can tear down and restart your process afterwards, so the 99 other requests get a chance to complete.
This is factually incorrect. The behavior you describe with Elixir (sic) is precisely what most Rust async runtimes do. (sic because it's Erlang that's to thank)
IMHO that is the sensible thing to do for pretty much any green thread or highly concurrent application. e.g. Golang does the same: panicking will only bring down the goroutine and not the whole process.
Love this approach — Rust in the right places.
I’ve been wondering if using Wasm modules (e.g. from MoonBit) for isolated components might offer a similar balance: memory safety without full rewrite.
One thing I often see pop up in larger projects, and which in the article is likely the cause of the way-too-large symbols, is overuse of generics/type state/etc.
Or you could formulate this as needless obsession with not using `dyn`.
And sure generics are more powerful, dyn has limitations, etc. etc.
It's one of those "Misconceptions Programmers Believe about Monomorphisation vs. Virtual Calls" things, as in:
TL;DR: dyn isn't as bad as some people make it out to be; whether for perf or convenience, it can be the better choice. Any absolute recommendation to always use this or that is wrong.
- wrong: monomorphisation is always faster; reason: monomorphisation pollutes the instruction cache far more, so in some situations switching some parts (not all parts) to virtual calls and similar approaches can lead to major performance improvements. Good examples here are the various experiments on how to implement something like serde but faster and with less binary size.
- wrong: monomorphisation was picked in Rust because it's better for Rust; right: it was picked because it is reasonably good and was viable to implement with the available resources. (For low-level languages it's still better than only using vtables, but technically transparent hybrid solutions are even more desirable.)
- wrong: virtual calls are always slow in microbenchmarks; right: while they are more work, modern CPUs have gotten very, very good at optimizing them; under the right conditions they might be literally as fast as normal function calls (but most times they are slightly slower, until monomorphisation trashes the icache too much).
- wrong: monomorphisation is always better for the optimizer; right: monomorphisation gives the optimizer more choices, but not always relevant or useful ones, and they always add more work for it to do, so slower compile times, and if you are unlucky it will miss more useful optimizations due to the noise.
- wrong: in Rust generics are always more convenient to use; right: adding a generic (e.g. to accommodate a return-position impl trait) in the wrong place can lead to having to write generic parameters all through the code base. But `dyn` has many more limitations/constraints, so for both convenience and performance it's a trade-off, one which more often favors monomorphisation, but not as much as many seem to believe.
- wrong: always using dyn works; right: dyn doesn't work for all code, and even if it did, using it everywhere can put too much burden on the branch predictor and co., making vcalls potentially as slow as some people think they are (it's kinda similar to how too much monomorphisation is bad for the icache and its predictors, if we gloss over a ton of technical details).
So all in all understand what your tools entail, instead of just blindly using them.
And yes that's not easy.
It's one of the main differences between a junior and a senior skill level.
As a junior you follow rules and guidelines (or imitate others) on when to use which tool. As a senior you deeply understand why the rules, guidelines, and actions of other people are the way they are, and in turn know when to diverge from them.
My understanding of why Rust does monomorphization by default is that it wanted maximum performance, since it was meant to replace C++. Excellent post!
Those GitHub PRs linked from the blog don't give me much confidence:
The "Better C++" link is a PR removing C++ templates for the sake of build time.
Unwinding the stack in a "funny" way.
A PR comment saying something shouldn't be public, and merging it anyway.
I love that it goes both ways, about plus and minus of both languages, including rewriting back into C++ when it made sense, and the side joke about rewriting existing CLI tools in Rust.
While C++ isn't perfect, has the warts of a 50-year-old language, and probably will never match Rust's safety, we would already be in a much better place if at least everyone used the tools at their disposal from the last 30 years.
While I would advise to use Rust for some security critical scenarios, there are many others where it is still getting there, and there are other requirements to take into account other than affine types.
Coming from C++, my favorite take on Rust is that it is fundamentally about productivity.
Avoiding UB is a serious drain on productivity in C++, and every new language or library feature comes with additional pitfalls, increasing the mental load.
This is to say: The benefit of Rust is not actually about "security critical scenarios", but much more generally about delivering the same quality of code in a fraction of the time.
While I agree with the general sentiment, and while it doesn't sort out all safety issues, actually programming in C++ instead of C with a C++ compiler, and making use of the plethora of analysers, would already prevent many issues.
Better be 80% safer than none at all.
It's not only about avoiding UB in one area, it's about hidden footguns which you may not know about. I have been using C++ for a decade now and there are still UB causes I learn about every week or so.
C++ can be safe enough if you proceed with care.
What I dislike about C++ is that it grew to become a monster of a language, containing all programming paradigms and ideas, good or bad, known to mankind.
It's so monstrously huge no human can hold its entire complexity in his head.
C++ allows you to do things in 10000 different ways and developers will do just that. Often in the same code base.
That being said, I would use a sane subset of C++ every day over Rust. It's not that I hate Rust or that I don't think it's good, technically sound, and capable. It just doesn't fit the way I think and the way I like to work.
I like to keep a simple model in mind. For me, the memory is just a huge array from which we copy data to CPU cache, move some to CPU registers, execute instructions and fetch data from the registers and put it again in some part of that huge array, to be used later. Rust adds a lot of complexity over this simple mental model of mine.
> C++ can be safe enough if you proceed with care.
The problem with this is that if you have a team working on a C++ product, you will need some people who can catch memory bugs to review every change before merging. Even with this approach it's still possible to miss some memory bugs, since the reviewer needs to fully understand each object's lifetime, which is time consuming during code review.
I'm working at a company that runs a server application written in C/C++. The code base is very large and we always have memory bugs that require ASAN in production to fix. We started migrating each part to Rust one year ago and we have never had a single crash from Rust code. The reason we chose Rust is that it is a server application that is computation intensive, latency sensitive, and handles a large number of active connections.
Try to keep using Rust until you're comfortable with it and you will like it. It fits your simple mental model. I can say this because I was a C++ user my whole life and switched to Rust recently.
> The problem with this is that if you have a team working on a C++ product, you will need some people who can catch memory bugs to review every change before merging. Even with this approach it's still possible to miss some memory bugs, since the reviewer needs to fully understand each object's lifetime, which is time consuming during code review.
Nah, if you're trying to match every "new" with a "delete" during the code review, you've already lost the battle. You can probably succeed when the code is added, but then the edits start to flow and sooner or later it's gone. Reviews are mostly good to catch design problems, not bugs.
The only reliable approach I know is to have a strict rule of never mixing memory management with business logic. Nothing else works well enough, but this one works remarkably well.
Business logic should rely on containers, starting with simple unique_ptrs and vectors and going deeper and deeper into the custom land when appropriate. If you can't find a suitable standard container, you build a custom one. The principal difference of "writing a custom container when you need it" compared to "integrate custom memory-management into the business logic when you need it" is that containers are:
* well understood
* well tested
* relatively small code-wise
* almost never change once implemented
None of the above applies to the business logic, it's the complete opposite.
Think of it kind of like programming in Java: someone has to write the memory management, and it's a hell of a job. However, once this is done, programming the ever-changing business logic is easy and safe.
You can live the same life in C++ AND also have the ability to put on the "Doomguy of memory management" shoes whenever you feel like it. Just don't forget to take off the shoes of the "business logic guy" when you do it; you can't wear both at the same time.
I was pretty hyped on C++ during the (early?) 2010s, hoping that eventually I'd get to work on a project that has really serious number crunching needs, and ... and then ... the big guns. Now reading this feels like the best argument for Rust/Scala. :)
> The code base is very large and we always have memory bugs that required ASAN on production to fix the bugs
This is a big part of why Rust works. We also never have errors that we can't reproduce in development.
> if you proceed with care
Yes, but that is _incredibly_ time consuming. You have to set up asan, msan, tsan, and valgrind. If you want linting you need to do shenanigans to wire up clang-tidy.
I also like simple mental models. I like not having to figure out the cmake modifications to pull in a new library. I like having a search engine when I need a new library for x. I like when libraries return Result<Ok, Err> instead of ping ponging between C libraries which indicate errors using retval flags or C++ libraries that throw std::runtime_error(). I like not dealing with void* pointer casting .
I find Zig to be a saner Rust alternative, with the caveat that it is still immature (but getting there).
Give it a few years and it will be a very strong contender.
The true C successor.
Yeah, I’m leaning towards zig but I’m a bit on the fence still.
For Rust, I kind of got tired of writing unsafe rust for embedded, but that’s addressable afaik. The real dealbreaker was that after 10k+ lines of code I still will pop open the source of a library that solves a simple problem and the code looks indecipherable. I also don’t really agree with the dependency explosion that cargo encourages.
Zig is very nice in that it has the most ergonomic struct usage I’ve encountered. The stdlib could really use some improvement though. Comptime is very cool, but I also worry if the community will get undisciplined with it.
Zig is fantastic, however there are a few issues (mostly related to its immature/wip status):
- the build system is constantly changing in a breaking way (between releases some of the repos I have on GH no longer build and need their build.zig to be updated).
- the comptime section of the docs needs to be heavily expanded, I'd love to see them take common Go interfaces and redo them in Zig (like io.Writer, io.Reader), breaking down the process step-by-step. It took me a little bit longer than it should've to efficiently use comptime.
- a whole section dedicated to things like using WaitGroup and multithreading, for those coming from langs like Go. Also, higher-level concurrency primitives like channels would be fantastic.
- a better import system for external Zig libraries; the zig fetch => .dependency => .root_module.addImport flow is not as straightforward as it should be, although for someone coming from C it definitely does feel like using meson
None of these are critical and again, all signs of Zig's "youth".
>Also, higher-level concurrency primitives like channels would be fantastic.
That can be done through a library.
Agreed! There is actually a nice C library (libdill: https://libdill.org).
Would still like it as a first-class language construct.
I like Zig, too. I'm just not sure it's a Rust alternative, since you still have to manage the memory yourself.
But is much simpler, easier to read, easier to understand, easier to follow and easier to reason about. It's less verbose and more productive.
It feels like what C would look like had it been invented today.
When I say "Rust alternative" it's precisely because it competes in the same space: very low-level, no GC, extremely high performance constraints, safety guarantees.
Re: safety guarantees, much digital ink has been spilled on how Zig can give Rust a run for its money when it comes to safety.
When people say: Rust <=> C++, Zig <=> C; they forget that C++ was precisely meant to be an enhanced C, which is what Zig is trying to accomplish. They simply eschewed chasing complexity as the holiest of holy grails, which in turn leads to the cognitive load of writing/reading Zig code to be MUCH smaller than C++ or other langs in that space.
All that said, I'd never recommend a company build their product on Zig just yet, at least not without some kind of red telephone to the Zig team or a dedicated Zig developer, given it's still not fully mature.
Re: managing memory yourself, Zig's defer makes this much, much more straightforward than you would think, and feeding in your own allocators can simplify this in many cases (and make testing for leaks much easier).
It has the same safety guarantees that Modula-2 and Object Pascal have been offering for decades, but apparently curly brackets and @ everywhere are better than begin/end.
Curly braces have taken over the world (let's ignore Python for now...).
If you present Zig like a C successor (which C++ was at the moment of its inception), I totally agree.
Zig is decent as a systems programming language. It's good they don't add lots of features and keep it simple.
The only downside I see is that companies aren't investing in it much.
What complexity specifically does Rust add to that model?
I can think of three things off the top of my head:
- Rust doesn’t let you pretend that memory is a flat array of bytes
- Single ownership of data can be annoying in some cases
- The borrow checker pointing out that you’re trying to do something stupid with pointers (again) can be annoying
Of course, I’m of the opinion that the hassles are worth it, especially the borrow checker. Almost every time I have to fight the borrow checker, it’s because I haven’t thought properly about the pointers involved and tried to do something stupid.
How does single ownership conflict with the idea that memory is a flat array of bytes?
Further, the borrow checker does not care about pointers, only references. With pointers, you are on your own. It is true that using pointers in Rust is more cumbersome than it could be. But it is much easier to compartmentalise the pointer parts into separate functions and expose references instead.
I agree that some paradigms and patterns are genuinely difficult to use, e.g. any intrusive data structure, but I do not see the contentious link between simple memory models and the borrow checker and the like.
Single ownership and memory not being flat are separate points.
And I guess I'm imprecise saying pointers where I mean borrowed values, but my point is that a borrow is just a pointer with additional type checking. More formally: C pointers are a complete but unsound formal system, whereas Rust borrows are sound but incomplete. And every time I get in a fight with the borrow checker, it's because I'm doing something unsound, not because the system is incomplete.
Even in C, you can't assume memory is a flat array of bytes. Pointers have provenance, and compilers exploit this: https://godbolt.org/z/ondGh4Ynn
I love this. We are a rust shop and we use clickhouse a fair bit. We’ve been quite impressed with its speed and flexibility. I’m glad to see this kind of direct, real-world feedback around both the benefits and difficulties of mixing rust and C++, which we have also had to do a bit of (albeit in the opposite direction: a smattering of C++ in a sea of rust).
I’m not sure if the poster here is the post author, but it would be great if the author would consider filling out this survey that was recently released asking for feedback on the future of rust’s vision: https://blog.rust-lang.org/2025/04/04/vision-doc-survey.html
I’d love to see rust become the de facto standard for cross-language portable code by virtue of its ease of use, but as this and our experience highlights, there’s some way to go yet!
Oh one more note, regarding hermetic builds: I have tried to package clickhouse in nix for our dev environment and CI, but its build is pretty complicated, so I resorted to just pulling the precompiled binaries.
Nix, via the standard rust integration or via something like crane, is actually quite nice for building rust/C++ combo projects, so it’d be awesome if the team might consider this as a means of achieving reproducibility. I’d imagine they’d have an easier time of it than I did, given they are more familiar with their own build process.
> If you do an experiment and say "C++" anywhere on the Internet, in a minute someone will chime in and educate you about the existence of Rust.
Many people see this as a problem. The response to TypeScript choosing Go over Rust was pretty gross imho, no one should be abused for choosing a language.
The actual issue wasn't Go over Rust, rather having key people responsible for C# design, on a Microsoft project, going for a Google language.
While at the same time, the .NET team routinely talks about .NET image problem outside traditional Microsoft shops, which naturally decisions like this aren't helping a tiny bit.
Yeah, loved the language and the IDE, but only boring Enterprise stuff was built with it. Also, C# got so complex, since it had to absorb every idea from F# rather than making F# a viable programming language on its own and improving interop...
At a given point, after being a C# programmer for years, I still encountered patterns that were completely unreadable to me.
That's what happens when you have a high-level VM that wants to support high-level concepts from multiple high-level languages, and all your languages need to be able to talk to each other.
Same thing will happen to Wasm as it decides to add more and more high level stuff “to avoid shipping multiple GCs” and “to get different languages to talk to each other.” As soon as you want to abstract over more than “a portable CPU and memory” you get into that mess.
It never worked out better than the JVM and CLR in the past, but let's keep trying.
Do you have a specific snippet in mind which demonstrates the issue? It is likely more of a team or a community issue when it comes to writing unreadable code than a language one since it tends to happen in every sufficiently powerful language.
C# did not “have to absorb every idea from F#”. This is not how programming language development works. You can read LDM notes at https://github.com/dotnet/csharplang/discussions?discussions... and specs are documented in the repo.
> rather than making F# a viable programming language
F# is a viable language aside from using specific few libraries that don’t play with it nicely or around writing ref struct heavy code. I’m not sure what makes you think it is not. In comparison, it is probably more viable for shipping products than Scala, Clojure, OCaml and Haskell.
I quit working on .NET solutions a few years ago, but I built a feature in F# about 7 years ago. While the language was wonderful, the IDE support was very minimal. Documentation and examples were really hard to find. So I don't feel it's really pushed as an alternative.
C#: A lot of LINQ style code was really hard to grok for me. Like a language in a language. The language got really huge in general. While it was fine as a "better Java" for most purposes.
I think it speaks to an incredibly pragmatic viewpoint though. When you take a look at your flagship language/ecosystem and you say, "hey, this is great for building entire systems for doctor/patient data... or perhaps even banking software"... but recognize that "it's probably not the best thing for building a compiler/tooling". Google themselves show the same pragmatism when NOT using Go for Android. They prefer Kotlin these days.
Completely different reasoning: choosing Go instead of Kotlin would have meant rewriting from scratch 100% of the Android userspace, minus the C++ libraries for Treble drivers, graphics, and the ART toolchain.
And Google did do exactly that with Fuchsia, which doesn't seem to be going to power anything beyond Nest screens.
.NET isn't famous for producing portable binaries and I don't think it's being any better now other than some experimental modes that break most code.
Do you want more portable than bytecode with a dynamic compiler? Apparently the greatest thing on Earth, as per the WebAssembly folks.
As for AOT compilation, there have been multiple approaches since the early days, and the latest, Native AOT is good enough for everything required to write a TypeScript compiler, including better WebAssembly support than the Go compiler, thanks to Blazor infrastructure.
> no one should be abused for choosing a language
Can you link to the abuse?
https://news.ycombinator.com/item?id=43413702 is one example.
A Rustacean implied Go was not memory safe and that Microsoft couldn't understand the power of Rust. Steve Klabnik & others told them off. But other Rustaceans, like Patrick Walton, argued that Go has memory safety issues in theory.
https://dictionary.cambridge.org/dictionary/english/abuse
Rustacean, Gopher... this is an embarrassing way of looking at it.
And, speaking of, Go is not a memory safe language when you reach for its concurrency primitives as it very easily lets you violate memory safety (as opposed to Rust, .NET and JVM, where instead you get logic bugs but not memory safety ones).
https://github.com/microsoft/typescript-go/discussions/411
Some of the worst comments have been scrubbed but they might be in one of the internet archival sites.
I recall a thread on Twitter where someone called the TypeScript developers "brain dead *tards" for using Go over Rust.
You'll get loud, obnoxious idiots in any big enough crowd. Also [source needed].
That's really dumb, but it's hard to call one instance abuse. Now, I do believe it turns into abuse if a bunch of people do it. Especially if they are bringing their insults specifically into your space (repo forge, mailing lists, flooding your replies on social media, etc.).
The Rust community is unfortunately plagued by this subset of devs who are zealous (and downright toxic) in their shilling for their favorite language.
Before anyone gets triggered and starts typing up a reply: "SUBSET" is the word I used.
So is any big enough community. Look at the "Why Go?" discussions, and you'll see a lot more loud, obnoxious C# devs, but somehow Rust is the worst.
There was also a fan of a "Rewrite it in LISP" post[1]. Where is the outrage about that?
[1] https://github.com/microsoft/typescript-go/discussions/411#d...

FWIW, Go used to have that issue of "I'm a 10x dev writing type-safe compiled microservices in Go (not golang) wowzers".
But that subset of devs has largely disappeared/moved on to other langs.
.....and you got triggered.
Somehow in the past 3-4 years it's only been that subset of Rust devs that have been wailing about: "WHY NOT REWRITE IT IN RUST?".
Often they're mostly the same type: anime pfp, walls of text of pompous technobabble as if they were some elite caste of arcane cyberpriests preaching the gospel of Rust, etc.
There's a reason "just rewrite it in Rust" has become such a meme.
> .....and you got triggered.
Sure, you got me. If someone says the earth is actually a cube, I have to go defend the spherical chads.
In this case, it's the sheer disconnect between the number of actual RIIR askers and the number of people painting Rust devs with a continent-wide brush.
It's very in-group vs out-group reasoning.
Let's demonstrate it. For example: I'm a Java dev, and I see a Java dev, being a moron, so I'll say "What a moron". But if that guy was a C# dev, I'm going to say "C# devs are morons". See the error committed here?
> There's a reason "just rewrite it in Rust" has become such a meme.
Just because something is a meme, doesn't make it true, either. Like it was true around the time of Rust 1.0. But that's been like 10 years ago.
It isn't even the worst part about the Rust community.
>In this case, it's the sheer disconnect between RIIR askers, and the sheer number of people painting Rust devs with a continent wide brush.
I have no idea how you can accuse me of this when I made a big disclaimer in my original comment:
>Before anyone gets triggered and starts typing up a reply: "SUBSET" is the word I used.
> "SUBSET" is the word I used.
So? There is a subset of devs in every language X (communities both bigger and smaller than Rust's) complaining "why not rewrite it in X".
But only Rust devs ever get the flak. Because it's a played out meme, or something. Or it triggers the Rust dev, whatever.
> caste of arcane cyberpriests preaching the gospel of
Hey, that's a productive attitude when attempting to fix CI!
(or figuring out why smaller compiler output performs worse)
Ah, so no abuse took place then? Interesting how it works. One could look at it as well-deserved frustration. No one would've batted an eye were Rust to be chosen, but opting into Go over C# or F# is an unquestionably poor long-term decision.
This guy seems to be both very positive about Rust and unfairly cynical about it at the same time...
Rust is a really fantastic language but having worked on a mixed C++/Rust codebase I can see why they had so many issues. Rust just wasn't really designed with C++ interop in mind so it's kind of painful to use them together. Impressive that they made it work.
"unfairly"? A lot of the issues are obvious deficiencies of Rust, including immaturity of the ecosystem, integration issues, complexity, monomorphization bloat, and supply chain issues. Now, all languages have issues, and Rust is certainly a nice language overall. The main issue with Rust is that it has been oversold as a panacea for safety using exaggerated arguments. So a bit of cynicism seems entirely fair.
> monomorphization bloat
Especially because the fix is so easy, it could just be fixed by the compiler on the fly.
If you have a generic function and call it with two different types, the compiler will generate two separate copies of the entire function body. This can be fixed by proxying the duplicated call through a small non-generic inner function.

> The main issue with Rust is that it has been oversold as a panacea for safety using exaggerated arguments.
I know you know this, but Rust does provide essentially complete memory and lifetime safety if you stay within the bounds of safe. Standard C/C++ tooling has no way to even reliably detect memory safety violations, let alone fix them. It's trivial to write buffer overflows that escape ASAN, and missing a single violation invalidates the semantic meaning of the entire program (particularly in C++), which means virtually all nontrivial programs in C/C++ have UB somewhere (a point we disagree on).
Safe rust doesn't guarantee all the other possible definitions of safety, but neither does any other mainstream language. I don't think it serves any useful purpose to complain that the rust folks have oversold their safety arguments by "only" eliminating the biggest cause of safety issues. Stroustrup harps on this a lot and it comes across as very disingenuous given the state of C++.
Your comment is a perfect example for those exaggerated claims.
Which claims do you think are exaggerated?
Let's start with this: "... Rust does provide essentially complete memory and lifetime safety if you stay within the bounds of safe." Technically true, except for the fine print. In many scenarios you have to use "unsafe", e.g. to get performance equal to C in something as trivial as a matrix transpose, or when interfacing with other code (cf. this article).

Let's compare this to "It's trivial to write buffer overflows that escape ASAN." The question is how difficult it is to avoid writing such overflows. By turning this around, you already made a biased and misleading statement.

Or "and missing a single violation invalidates the semantic meaning of the entire program (particularly in C++)": again technically true, but a meaningless talking point. The only practical question is whether a violation leads to an exploitable bug and how difficult / costly it is to exploit. This "no semantic meaning" comment completely disregards the reality of mitigations, which are often very effective.

Continuing with "which means virtually all nontrivial programs in C/C++ have UB somewhere (a point we disagree on)": I actually agree with this, but most UB is irrelevant and easily mitigated, e.g. signed overflow is not an issue at all because you can just tell your compiler to turn it into a trap.

And "only eliminating the biggest cause of safety issues": memory safety is nowhere close to being the biggest cause of safety issues in IT. For me personally, it is completely irrelevant; I have never been hacked through a memory safety issue. There are entirely different things I worry about, which Rust makes harder to deal with, e.g. supply chain security.
We have very different priorities. Because I work on safety critical systems, language semantics are very important in order to enable things like formal executable semantics and certified compilers. When I sign my name to an inspection, the inspection is relative to the language semantics. Things like exploitability and even performance are pretty far down the list of concerns, unless they happen to impact functional safety.
I don't agree, because unsafe is a part of the language. It's widely accepted practice in C & C++ to only use carefully chosen subsets of each language, and this is enforced with linters and coding guidelines. You can straightforwardly ban unsafe the same way, or review its uses more carefully, etc.

> The question is how difficult it is to avoid writing such overflows.

It's nigh-impossible as far as I can tell. I already use formal methods, sanitizers, testing, static analysis, careful design, valgrind, intensive reviews, MISRA, etc. I can still quickly find new issues by firing up the fuzzer or looking at another team's code. Other large projects like Chrome and Linux have thousands of competent eyes on them and still deal with these issues too. What is everyone missing?

Leaving aside the unnecessarily hyperbolic point that enabling traps for my systems might literally kill someone, traps usually aren't a well supported operational mode. GCC's -ftrapv is broken, for example; UBSan isn't recommended for production, and GCC doesn't implement the minimal UBSan runtime. MSVC doesn't support overflow traps at all, nor do most certified compilers. "Just use clang" obviously isn't what you're intending here, so I'm unsure how to interpret this.

In regards to memory safety being the biggest issue, I'm referencing the "70% of high severity bugs" numbers that have been put out by Microsoft and the Chrome teams and repeated by CISA in their memory safety roadmaps.
It's great that you haven't experienced large numbers of memory safety issues, but I can only speak to the lived experience of heartbleed and others. I see memory safety issues daily. I don't see supply chain attacks frequently and given how much publicity accompanied the discovery of the XZ attack, I suspect that's true for others.
So you’re using formal methods, sanitizers, the whole gamut of verification and yet you can “quickly” find new issues just by fuzzing. Sounds like there’s some significant problem there that you’re not mentioning.
Microsoft and Google have a ton of legacy code, they need to have high performance because they’re pushing everything to the web in order to spy better on people, they always churn their software and they are a very juicy target. As far as I’m concerned, they should rewrite everything in Rust and stop telling other people what to do.
But of course, they also need to sell Rust to the public, otherwise they would run out of developers or would have to maintain everything themselves. Hence the cheerleading.
This blog post is much closer to the reality of using Rust in production. In fact I’d add a couple of pitfalls myself:
* original cheerleader gets bored of the Rust rewrite/moves on and the project dies.
* original cheerleader moves on and the project lives under maintenance with non-Rust programmers which do not enjoy working on it and delay and reject changes and/or feature requests.
> So you’re using formal methods, sanitizers, the whole gamut of verification and yet you can “quickly” find new issues just by fuzzing. Sounds like there’s some significant problem there that you’re not mentioning.
Or perhaps you're missing the super-text, that all those things were insufficient to make C safe.
The "significant problem" is the same that every other organization faces: the testing and validation isn't quite as good as it could be and I know where to poke.
Maybe adding to the last point: I worry far more that some employee or student in my lab will cause a disaster by downloading a compromised Python package than that I will get hacked through a memory safety issue. We do not use Rust, but Cargo would also be a massive concern. Rustup helps to destroy decades of user education that you do not download and run scripts from the internet.
I am fully able to appreciate that memory safety is important and Rust stepped up the game in mainstream programming. I think this is cool. But the exclusive and exaggerated focus on this does more harm than good. Memory safety is certainly much more important for advertisement companies such as Google to secure their mobile spying platforms than it is for me. The religious drive to push Rust everywhere to achieve a relatively modest[1] practical improvement in memory safety clearly shows that some part of the community rather naively adopted the priorities of certain tech companies at the cost of other - maybe more relevant - things.
1. I am fully able to understand that the 100% guarantees Rust can provide when sticking to safe Rust are conceptually a fundamental step forward compared to what C provides out of the box. But this should not be misrepresented as a huge practical step forward over what can already be achieved in memory safety in C/C++ if one cares about it.
You seem to consider convenient dependency management a security hazard, which I have always found to be a pretty weird take. It logically follows that the severe difficulty of managing dependencies in C and C++ projects is actually a security feature, or how else are we supposed to understand this opinion?
Let's not pretend that anything is better on the traditional C/C++ side, where the approach is usually one or more of:
1. Vendoring dependencies in-tree. This can result in security problems from missing out on bugfixes upstream.
2. Reinventing functionality that would otherwise be served by a dependency. This can result in security problems from much less battle-tested, buggy in-house implementation. In closed-source code, this is effectively security by obscurity.
I've seen both of these cause issues in large C++ projects.
For reference, the Rust/Cargo ecosystem contains a lot of tools and infrastructure to address supply-chain security, but it will always be a difficult problem to solve.
Regardless of the programming language, it is a security hazard; that is why we now have SBOMs in the industry, and many corporations have procedures in place before adding that cool dependency into a project.
Regardless of whether it is cargo, vcpkg/conan, NuGet, Maven, npm, ...
If it isn't validated by legal and IT for upload into internal repos, it doesn't get used.
Those same practices can be applied here where it matters.
The Rust language is not well-specified, and if you take Rust as the language specified by the compiler, then it has many soundness bugs. So even if you stay within "safe Rust", you can segfault.
The "memory safety" of Rust is oversold, since "safety" is not formally proven for the Rust language. While anecdotally memory-related bugs seem less likely, Rust without unsafe is not absolutely safe.
iirc the formal correctness of Rust's memory model was proven by Ralf Jung: https://research.ralfj.de/thesis.html
Yes. I was thinking of this:
> If you do an experiment and say "C++" anywhere on the Internet, in a minute someone will chime in and educate you about the existence of Rust.
> I know examples when engineers rewrite code from Rust in Rust if they like to rewrite everything in Rust.
> our engineers become too nauseous from Rust poisoning
> So now they [Rust devs] can write something other than new versions of old terminal applications.
> someone shows PRQL, everyone else thinks "What a wonderful idea, and, also, Rust" and gives this project a star on GitHub. This is, by the way, how most of Rust projects get their stars on GitHub. It doesn't look like someone wants to use this language, but what we want is to ride the hype.
> we started to understand that it would be hard to get rid of Rust, and we could tolerate it.
It's a very shitty attitude and not even accurate. You see this attitude from old C/C++ devs quite a lot, it's just very weird that he has that attitude and then also seems to be simultaneously quite keen to use Rust. Very weird!
Anyway those are just the non-technical things. On the technical side:
> Fully offline builds
They solved it by vendoring but this is the obvious solution and also applies to C++.
> Segfault in Rust
They tried to do a null-terminated read of a string that wasn't null-terminated. Nothing to do with Rust. That would be an error in C++ too. In fact this is a strong argument for Rust.
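To illustrate why this is an argument for Rust rather than against it: the safe API makes a missing terminator an explicit, checkable error instead of a silent out-of-bounds read. A small sketch using only the standard library:

```rust
use std::ffi::CStr;

fn main() {
    // A properly null-terminated byte string parses fine.
    let ok = CStr::from_bytes_with_nul(b"hello\0");
    assert!(ok.is_ok());

    // Without the terminator, safe Rust reports an error up front --
    // the equivalent C read would walk off the end of the buffer.
    let missing = CStr::from_bytes_with_nul(b"hello");
    assert!(missing.is_err());

    // The unsafe escape hatch (CStr::from_ptr) exists for FFI, and there
    // the caller inherits the C-style obligation to guarantee termination.
    println!("both cases checked");
}
```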
> Panic
C/C++ code aborts. Or more commonly it crashes in a very difficult to debug way. I'll take panics any day.
> Sanitizers require nightly
Ok fair enough but this seems relatively minor.
> Rust's OpenSSL links with the system library by default and you have to set an environment variable to statically link it.
They set the environment variable. Frankly this is a million times easier than doing the same thing in C++.
I'll stop there, but overall it seems like a lot of "this is a problem we had with Rust" where it should really be more like "this is something we had to do when using C++ with Rust".
Weird vibe anyway.
If I try to read between the lines, I think the vibe comes from feeling / being pressured to use Rust without really seeing the point and it causing frustration in this context.
I think your vibe is more weird. If people have issues with Rust, it is a "shitty attitude". While, of course, C/C++ just objectively suck, right?
I think the comment you responded to gives pretty clear reasons why the take is weird or a "shitty attitude".
Competent C++ developers are the first to admit that C++ objectively sucks. It's a bad language, but that doesn't mean there aren't good reasons to use it. Claiming that C++ is great is a weird hill to die on.
Check the post date. It was published on April first
They link to actual issues in their bug tracker, so if it was a joke, it was an impressive long con.
> As a downside, Rust libraries typically have a large fan-out of dependencies, much like Node.js. This requires taking care to avoid the blow-up of dependencies, and to deal with annoyances of dependabot.
In the linked situation, they were using the library of a binary. This gets into the tension between "make it easy for `cargo install`" (and have a `cli` feature be default) and "make it easy for `cargo add`" (and make `cli` opt-in).
This is not a great experience and we should improve it. There was an RFC to auto-enable features when a build-target is built (allowing `cli` to be opt-in but `cargo install` to auto-opt-in rather than skip it), but the dev experience needed work. The maintainer can split the package, which helps with semver for the two sides, but they need to break one side to do so, and if it's the bin, people need to discover the suffix (`-bin`, `-cli`, etc.).
Current workarounds:
- `cargo add skim` will show the `cli` feature is enabled and you can re-run with `--no-default-features`
- if `cli` wasn't a default, `cargo install skim` would suggest adding `--features cli`
This website is impossible to scroll. Sadly, when I see that, I don't think I need to read about any of the technical stuff they did anymore.
Re: panics: If you have a single long lived process that must do multiple short-lived things (web requests, say) and a panic in one of them MUST NOT take down the whole process, is that extremely difficult to pull off in Rust? I thought you could set up panic boundaries much like you would use catch-all exception handlers around e.g. each web request or similar, in other languages?
You can install a global panic handler to avoid bringing the whole process down. Instead of aborting, take the stack trace, print it, perhaps raise it to Sentry, and kill the specific "work unit" that caused it. This "work unit" can be a thread or a task, depending on how the application is architected.
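A small sketch of the thread-as-work-unit variant, using only the standard library: the panic stays confined to the worker thread, and the supervising code observes it as an `Err` from `join` instead of the process dying.

```rust
use std::thread;

fn main() {
    // Spawn a "work unit" that panics partway through.
    let bad = thread::spawn(|| {
        panic!("request handler blew up");
    });

    // Spawn a healthy work unit alongside it.
    let good = thread::spawn(|| 21 * 2);

    // The panic is confined to its thread: join() reports it as an Err
    // instead of aborting the whole process.
    assert!(bad.join().is_err());

    // Meanwhile the other work unit completes normally.
    assert_eq!(good.join().unwrap(), 42);

    println!("process survived the panic");
}
```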
This is precisely what Tokio does: by default, a panic in async code will only bring down the task that panicked instead of the whole application. In the context of a server, where you'll spawn a task for each request, you have no way to bring down the whole application (*), only your current scope.
(*): there could be other issues, like mutex poisoning, which is why nobody uses the stdlib's mutexes. But the general point still stands.
> there could be other issues, like mutex poisoning, which is why nobody uses the stdlib's mutexes.
What does everyone use instead?
In the context of Tokio, the tokio's native mutexes / locking primitives. For sync code, parking_lot is the de facto replacement for the stdlib's ones.
I don't remember where I read it, but it has been admitted that having synchronization primitives with poisoning in the stdlib was a mistake, and that "simpler" ones without it would have been preferable.
For context: a mutex is poisoned should a panic occur while the mutex is held. The guarded data is then assumed to be broken or in an unknown state, thus "poisoned".
parking_lot
Since Rust is not a managed/high-level language, panics are unrecoverable crashes, so they need to be dealt with at a higher level, i.e. the OS, with appropriate supervisor systems like systemd, or by having a master Rust process that spawns subprocesses and reacts when one of them terminates abnormally, using regular POSIX APIs.
On a platform like Elixir, for example, you can deal with process crashes because everything runs on top of a VM, which is to all intents and purposes your OS, and provides process supervision APIs.
Rust can be optionally compiled in a panic=abort mode, but by default panics are recoverable. From implementation perspective Rust panics are almost identical to C++ exceptions.
For servers that must not suddenly die, it's wise to use panic=unwind and catch_unwind at task/request boundaries (https://doc.rust-lang.org/stable/std/panic/fn.catch_unwind.h...)
In very early pre-1.0 prototypes Rust was meant to have isolated tasks that are killed on panic. As Rust became more low-level, it turned into terminating a whole OS thread on panic, and since Rust 1.9.0, it's basically just a try/catch with usage guidelines.
But few would write a process-per-request web server today, for example. And if a single-process web server handles 100 requests, would you then accept that one bad request tore down the handling of the 99 others? Even if you have a watchdog that restarts the service after the one request choked, you wouldn't save the 99 requests that were in-flight on the same process. Can't you catch_unwind in each request handler, so that if one chokes you just ignore that request? If you worry about that messing anything up, you can tear down and restart your process afterwards, so the 99 other requests get a chance to complete.
This is factually incorrect. The behavior you describe with Elixir (sic) is precisely what most Rust async runtimes do. (sic because it's Erlang that's to thank)
IMHO that is the sensible thing to do for pretty much any green thread or highly concurrent application. e.g. Golang does the same: panicking will only bring down the goroutine and not the whole process.
https://github.com/UoCCS/project-GROS
Love this approach — Rust in the right places. I’ve been wondering if using Wasm modules (e.g. from MoonBit) for isolated components might offer a similar balance: memory safety without full rewrite.
Given the date on the post, I can’t tell if this is real.
One thing I often see pop up in larger projects, and which is likely the cause of the way-too-large symbols in the article, is overuse of generics/type state/etc.
Or you could formulate this as needless obsession with not using `dyn`.
And sure generics are more powerful, dyn has limitations, etc. etc.
It's one of those "Misconceptions Programmers Believe about Monomorphisation vs. Virtual Calls" things, as in:
TL;DR: dyn isn't as bad as some people make it out to be; whether for perf or convenience, it can be the better choice. Any absolute recommendation to always use this or that is wrong.
- wrong: monomorphisation is always faster; right: monomorphisation pollutes the instruction cache far worse, so in some situations switching some parts (not all parts) to virtual calls and similar approaches can lead to major performance improvements. Good examples here are the various experiments on how to implement something like serde but faster and with less binary size.
- wrong: monomorphisation was picked in Rust because it's better for Rust; right: it was picked because it is reasonably good and was viable to implement with the available resources. (For low-level languages it's still better than only using vtables, but technically transparent hybrid solutions are even more desirable.)
- wrong: virtual calls are always slow in microbenchmarks; right: while they are more work, modern CPUs have gotten very good at optimizing them; under the right conditions they might be literally as fast as normal function calls (but most of the time they are slightly slower, until monomorphisation trashes the icache too much).
- wrong: monomorphisation is always better for the optimizer; right: monomorphisation gives the optimizer more choices, but not always relevant or useful ones, and they always add more work it has to do, so slower compile times, and if you are unlucky it will miss more useful optimizations due to noise.
- wrong: in Rust, generics are always more convenient to use; right: adding a generic (e.g. to accommodate a return-position impl trait) in the wrong place can force you to write generic parameters all through the code base. But `dyn` has many more limitations/constraints, so for both convenience and performance it's a trade-off, which more often favors monomorphisation, but not as much as many seem to believe.
- wrong: always using dyn works; right: dyn doesn't work for all code, and even if it did, using it everywhere can put too much burden on the branch predictor and co., making vcalls potentially as slow as some people think they are (it's kind of similar to how too much monomorphisation is bad for the icache and its predictors, if we gloss over a ton of technical details).
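For concreteness, a minimal sketch of the two shapes being compared (names are illustrative; neither one is "the right choice"):

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits one copy of this function per
// concrete T it is called with (direct calls, larger binary, more to optimize).
fn describe_generic<T: Display>(value: T) -> String {
    format!("value = {}", value)
}

// Dynamic dispatch: exactly one copy is emitted, and the call goes
// through a vtable (one indirection, smaller binary, opaque to inlining).
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {}", value)
}

fn main() {
    // Two instantiations of describe_generic: <i32> and <&str>.
    assert_eq!(describe_generic(1), "value = 1");
    assert_eq!(describe_generic("hi"), "value = hi");

    // One single function serves both types through &dyn Display.
    assert_eq!(describe_dyn(&1), "value = 1");
    assert_eq!(describe_dyn(&"hi"), "value = hi");
}
```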
So all in all understand what your tools entail, instead of just blindly using them.
And yes that's not easy.
It's one of the main differences between a junior and a senior skill level.
As a junior you follow rules and guidelines (or imitate others) about when to use which tool. As a senior you deeply understand why the rules, guidelines, and actions of other people are the way they are, and in turn know when to diverge from them.
My understanding of why Rust does monomorphization by default is that it wanted maximum performance, since it was meant to replace C++. Excellent post!
Those GitHub PRs linked from the blog don't give me much confidence:
The "Better C++" link is a PR removing C++ templates to improve build time. Unwinding the stack in a "funny" way. A PR comment saying something shouldn't be public, and then merging it anyway.