This is one of the reasons I find it so silly when people disregard Zig «because it’s just another memory unsafe language»: There’s plenty of innovation within Zig, especially related to comptime and metaprogramming. I really hope other languages are paying attention and steal some of these ideas.
«inline else» is also a very powerful tool to easily abstract away code with no runtime cost.
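For anyone who hasn't played with it, here's a rough sketch of my own (not from the article) of what «inline else» buys you: one generic dispatcher over a tagged union, where each prong is stamped out at compile time, so nothing is dispatched at runtime:

const std = @import("std");

const Shape = union(enum) {
    circle: f64, // radius
    square: f64, // side length

    // `inline else` generates one prong per tag, so `v` and `tag` are
    // comptime-known in each branch; the inner switch is resolved at
    // compile time and nothing is dispatched at runtime.
    pub fn area(self: Shape) f64 {
        return switch (self) {
            inline else => |v, tag| switch (tag) {
                .circle => std.math.pi * v * v,
                .square => v * v,
            },
        };
    }
};

pub fn main() void {
    const s = Shape{ .circle = 2.0 };
    std.debug.print("area = {d}\n", .{s.area()});
}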
What I’ve seen isn’t people disregarding Zig because it’s just another memory-unsafe language, but rather disqualifying Zig because it’s memory-unsafe, and they don’t want to deal with that, even if some other aspects of the language are rather interesting and compelling. But once you’re sold on memory safety, it’s hard to go back.
This is really the crux of the argument. I absolutely love the Rust compiler, for example; going back to Zig would feel like a regression to me. There is a whole class of bugs that my brain now assumes the compiler will handle for me.
Problem is, like they say the stock market has predicted nine of the last five recessions, the Rust compiler stops nine of every five memory safety issues. Put another way, while both Rust and Zig prevent memory safety issues, Zig does it with false negatives while Rust does it with false positives. This is by necessity when using the type system for that job, but it does come at a cost that disqualifies Rust for others...
Nobody knows whether Rust and/or Zig themselves are the future of low-level programming, but I think it's likely that the future of low-level programming is that programmers who prefer one approach would use a Rust-like language, while those who prefer the other approach would use a Zig-like language. It will be interesting to see whether the preferences are evenly split, though, or one of them has a clear majority support.
C++ already illustrates this idea you're talking about, and we know exactly where it goes. Rust's false positives are annoying, so programmers are encouraged to further improve the borrow checker and language features to reduce them. But the C++ or Zig false negatives just mean your program malfunctions in unspecified ways and you may not even notice, so nothing stops more and more such cases from being introduced into the language.
The drift over time is predictable: compared to ten years ago, Rust has fewer false positives and C++ has more false negatives.
You are correct to observe that there is no middle choice here; that's Rice's Theorem: non-trivial semantic correctness is Undecidable. But I would argue we already know that what you're calling the "false negative" scenario is also not useful, we're just not at the point where people stop doing it anyway.
> C++ already illustrates this idea you're talking about and we know exactly where this goes.
No, it doesn't. Zig is safer than C++ (and it's much simpler, which also has an effect on correctness).
You're making up some binary distinction and then deciding that, because C++ falls on the same side of it as Zig (except it doesn't, because Zig eliminates out-of-bounds access to the same degree as Rust, not C++), what applies to one must apply to the other. There is simply no justification for that equivalence.
> There is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable.
That has nothing to do with Rice's theorem. Proving some properties with the type system isn't a general algorithm; it's a proof you have to work for in every program you write individually. There are languages (Idris, ATS) that allow you to prove any correctness property using the type system, with no false positives. It's a matter of the effort required, and there's nothing binary about that.
To get a sense of the theoretical effort (the practical effort is something to be measured empirically, over time) consider the set of all C programs and the effort it would take to rewrite an arbitrary selection of them in Rust (while maintaining similar performance and footprint characteristics). I believe the effort is larger than doing the same to translate a JS program to a Haskell program.
> There is simply no justification to make that equivalence.
I explained in some detail exactly why this equivalence exists. I actually have a small hope that this time there are enough people who think it's a bad idea that we don't have to watch this play out for decades before the realisation, as we did with C and C++.
Yes it's exactly Rice's Theorem, it's that simple and that drastic. You can choose what to do when you're not sure, but you can't choose (no matter how much effort you imagine applying) to always be sure†, that Undecidability is what Henry Rice proved. The languages you mention choose to treat "not sure" the same as "nope", like Rust does, you apparently prefer languages like Zig or C++ which instead treat "not sure" as "it's fine". I have explained why that's a terrible idea already.
The underlying fault, which is why I'm confident this reproduces, is in humans. To err is human. We are going to make mistakes and under the Rust model we will curse, perhaps blame the compiler, or the machine, and fix our mistake. In C++ or Zig our mistake compiles just fine and now the software is worse.
† For general purpose languages. One clever trick here is that you can just not be a general purpose language. Trivial semantic properties are easily decided, so if your language can make the desired properties trivial then there's no checking and Rice's Theorem doesn't apply. The easy example is, if my language has no looping type features, no recursive calls, nothing like that, all its programs trivially halt - a property we obviously can't decidably check in a general purpose language.
> I explained in some detail exactly why this equivalence exists.
No, you assumed that Zig and C++ are equivalent and concluded that they'll follow a similar trajectory. It's your premise that's unjustified.
A problem you'd have to contend with is that Rust is much more similar to C++ than Zig in multiple respects, which may matter more or less than the level of safety when predicting the language trajectory.
> But you can't choose (no matter how much effort you imagine applying) to always be sure
That is not Rice's theorem. You can certainly choose to prove every program correct. What you cannot do is have a general mechanism that would prove all programs in a certain language correct.
> One clever trick here is that you can just not be a general purpose language.
That's not so much a clever trick as the core of all simple (i.e. non-dependent) type systems. Type-safety in those languages then trivially implies some property, which is an inductive invariant (or composable invariant) that's stronger than some desired property. E.g. in Rust, "borrow/lifetime-safety" is stronger than UAF-safety.
However, because some effort to establish any property must exist, we can gauge it, for a language that offers the property trivially, by looking at the cost of translating a correct program from some other language that doesn't guarantee the property into one that does. The reason this is more of a theoretical point than a practical one is that it could reasonably be argued that writing a memory-safe program in C is harder than doing it in Rust in the first place, but either way, there's some effort there that isn't there when writing the program in, say, Java.
I've been hearing about how I'll inevitably write all this unsafe Rust for... four years now.
Some time back I checked and I had written exactly one unsafe block, and so I inspected it again and I realised two things:
1. It was no longer necessary, Rust could now just do this safely. I rewrote it in safe Rust.
2. It was technically Undefined Behaviour; predictably, given the chance to shoot myself in the foot, that's exactly what I had done. Like a lot of C and C++, it likely wouldn't in fact blow my foot off in any real scenario, but who knows? Not me, that's for sure.
Which is why there is an effort to formally verify the unsafe use in the Rust standard library.
I would also say that unsafe causes a very different human reaction.
When, as in Zig, C, or C++, everything is potentially unsafe, you can't scrutinize everything.
When submitting a PR in Rust containing unsafe code, everyone wants to understand what happens, because it is both rare and everyone is cautious about the dangers posed. The first question on everyone's mind is always: does this need unsafe?
Suppose I have a self-contained Zig project and it has a nasty memory safety bug - how can I identify where the cause might be? What parts of my project source are potentially unsafe?
You've said it's not everything, so, what's excluded? What can I rule out?
The same useless claim could be made for C and with the same effect.
The trick Rust is doing here that Zig is not is that Rust's safe contracts are always what we would call wide contracts. As a safe Rust programmer it's never your fault because you were "holding it wrong". For example, if you insist on sorting a Vec<Foozle> even though Foozles all claim they're greater even than themselves, Rust doesn't say (as C and C++ do) "too bad, you broke it, so now all bets are off". Sorting won't be useful in Rust because Foozles don't have a coherent ordering, but your program is fine. In fact, today it's quite fast to uselessly "sort" that container.
Zig has numerous narrow contracts, which means that when you write Zig touching any of those contracts it is your responsibility as a Zig programmer to ensure all their requirements are upheld, and when you in turn create code or types you will likely find you add yet further narrowness - so you can be, and in practice often are, "holding it wrong".
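To make "narrow contract" concrete with a small sketch of my own (not from the thread): `@intCast` requires the value to fit in the destination type; safe build modes trap a violation, ReleaseFast assumes it never happens, and upholding the precondition is entirely the caller's job:

const std = @import("std");

// Narrow contract: the caller must guarantee that x fits in a u8.
// Debug/ReleaseSafe panic on a violation; ReleaseFast/ReleaseSmall treat
// it as undefined behaviour.
fn toByte(x: u32) u8 {
    return @intCast(x);
}

pub fn main() void {
    std.debug.print("{}\n", .{toByte(200)});
    // toByte(300) would trip the safety check in safe build modes,
    // and is undefined behaviour in the unsafe ones.
}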
> The same useless claim could be made for C and with the same effect
It really can't be.
Memory safety is problematic because it's a common cause of some dangerous bugs. Of the two main kinds of memory safety, Rust generally eliminates both, leaving only unsafe Rust and foreign code as possible sites of memory unsafety. Zig, on the other hand, generally eliminates only the more dangerous kind, leaving only unsafe Zig and foreign code as possible sites of that.
Mind you, the vast majority of horrific, catastrophic bugs are not due to UAF. So if we get a horrific, catastrophic bug in Rust, we can eliminate UAF as a cause, leaving us with most of the possible causes, just as in most programming languages used to write most of the software in the world already.
This point of "ha-ha, you also got a segfault while I only got all the other bugs" doesn't make sense from a software correctness perspective.
There is no binary line you can draw between Rust and Zig (one that erases Zig's superior safety over C) that couldn't also be drawn between Rust and languages that make far stronger guarantees, putting Rust in the same bucket as C. If you think that the argument, "Rust, just like C, is unable to guarantee the vast majority of correctness properties that ATS can, therefore it is equally useless", is silly, then so is trying to put Zig and C in the same bucket.
If you believe that eliminating certain classes of bugs is important for correctness even when you don't eliminate most bugs, then I don't see how a language that eliminates the more dangerous class of the two that Rust eliminates is "just as useless" as a language that eliminates neither.
I have been programming in both C++ and Java for a very long time, and while I appreciate Java's safety, the main difference between the two languages for me hasn't been a difference in correctness but in productivity. That productivity comes from Java's superior abstraction - I can make many different kinds of local changes without affecting other code at all, and that is not the case in a low-level language, be it C, C++, Zig, or Rust. I think it's good that Zig and Rust offer bounds ("spatial") safety. I also think it's good that Rust offers UAF ("temporal") safety, but I find the price of that too high for my liking.
Of course, my experience is not universal because I use C++ only for really low-level stuff (mostly when working on the HotSpot VM these days) where both Zig and Rust would have been used in their unsafe flavours anyway, because I'm more than happy to pay the increased memory footprint for higher productivity in other cases.
Bounds safety by default, nullability is opt-in and checks are enforced by the type-system, far less "undefined behaviour", less implicit integer casting (the ergonomics could still use some work here), etc.
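A quick sketch of my own of what those defaults look like in practice (nothing here is from the article):

const std = @import("std");

// Slice indexing is bounds-checked in Debug/ReleaseSafe builds, and a
// value that may be absent has to be an optional (?u8) that the caller
// unwraps explicitly.
fn firstByte(bytes: []const u8) ?u8 {
    if (bytes.len == 0) return null;
    // An out-of-range index here would panic rather than read garbage.
    return bytes[0];
}

pub fn main() void {
    if (firstByte("zig")) |b| {
        std.debug.print("first byte: {}\n", .{b});
    } else {
        std.debug.print("empty input\n", .{});
    }
}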
This is on top of the cultural part, which has led to idiomatic Zig being less likely to heap allocate in the first place, and more likely to consider ownership in advance. This part shouldn't be underestimated.
You presumably intend "shouldn't be underestimated" rather than "can't be". I agree that culture is crucial, but the technology needs to support that culture and in this respect Zig's technology is lacking. I would love to imagine that the culture drives technology such that Zig will fix the problem before 1.0, but Zig is very much an auteur language like Jai or Odin, Andrew decides and he does not seem to have quite the same outlook so I do not expect that.
> Maybe if someone bends over backwards to rationalize it, but not in any real sense.
In a simple, real sense. Zig prevents out-of-bounds access just as Rust does; C++ doesn't. Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).
> You can't build RAII and moves into zig.
So RAII is part of the definition of memory safety now?
Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
We could, of course, argue over which of Rust, Zig, and C++ offers the best contribution to correctness beyond the sound guarantees they make, except these are empirical arguments with little empirical data to make any determination, which is part of my point.
Software correctness is such a complicated topic and, if anything, it's become more, not less, mysterious over the decades (see Tony Hoare's astonishment that unsound methods have proven more effective than sound methods in many regards). It's now understood to be a complicated game of confidence vs cost that depends on a great many factors. Those who claim to have definitive solutions don't know what they're talking about (or are making unfounded extrapolations).
Then why do my data structures detect if I go out of bounds?
> Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety
I didn't say anything about rust.
> So RAII is part of the definition of memory safety now?
Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
> Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
Why are you talking about rust here? Focus on what I'm saying.
> We could, of course, argue over which of Rust, Zig, and C++
> if anything, it's become more, not less, mysterious over the decades
Says who?
I don't care about rust or zig, I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
> Then why do my data structures detect if I go out of bounds?
I didn't mean you can't write C++ code that enforces that, I said C++ itself doesn't enforce it.
> Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
Surely there are other ways to do that. E.g. Zig has defer. You can say that you may forget to write defer, which is true, but the implicitness of RAII has caused (me, at least) many problems over the years. It's a pros-and-cons thing, and Zig chooses the side of explicitness.
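For readers who haven't written Zig, a minimal sketch of the defer/errdefer style (my own example, assuming the std.heap.GeneralPurposeAllocator API); the point is that cleanup is spelled out right next to the acquisition and runs on every exit path:

const std = @import("std");

fn copyGreeting(allocator: std.mem.Allocator) ![]u8 {
    const buf = try allocator.alloc(u8, 5);
    errdefer allocator.free(buf); // runs only if a later step fails
    @memcpy(buf, "hello");
    return buf;
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // explicit, visible teardown instead of a destructor
    const allocator = gpa.allocator();

    const msg = try copyGreeting(allocator);
    defer allocator.free(msg); // freed when main's scope exits
    std.debug.print("{s}\n", .{msg});
}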
> Why are you talking about rust here? Focus on what I'm saying.
You're right, sorry :)
> Says who?
Says most people in the field of software correctness (and me https://pron.github.io). In the seventies, the prevalent opinion was that proofs of correctness would be the only viable approach to correctness. Since then, we've learnt two things, both of which were surprising.
The first was new results in the computational complexity of model checking (not to be confused with the computational complexity of model checkers; we're talking about the intrinsic computational complexity of the model checking problem, i.e. the problem of knowing whether a program satisfies some correctness property, regardless of how we learn that). This included results (e.g. by Philippe Schnoebelen) showing that, even though one might reasonably expect language abstractions to make the problem easier, they don't, even in the worst case.
The second was that unsound techniques, including engineering best practices, have proven far more effective than was thought possible in the seventies. This came as quite a shock to formal methods people (most famously, Tony Hoare, who wrote a famous paper about it).
As a result, the field of software correctness has shifted its main focus from proving programs correct to finding interesting confidence/cost tradeoffs to reduce the number of bugs, realising that there's no single best path to more correctness (as far as we know today).
> I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
That's true, but these are not memory safety guarantees. These are mechanisms that could mitigate bugs (though perhaps cause others), and Zig has other, different mechanisms to mitigate bugs (though perhaps cause others). E.g. see how easy it is to write a type-safe printf in Zig compared to C++, or how Zig handles various numeric overflow issues compared to C++. So while it's true that C++ has some features we may find helpful that Zig doesn't, and vice versa, we can't judge which of them leads to more correct programs. All I said was that Zig offers more safety guarantees than C++, which it does.
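To illustrate the printf point with a tiny sketch of my own (the `log` wrapper is hypothetical): because the format string is a comptime parameter, a specifier/argument mismatch is a compile error rather than a runtime surprise, which is the kind of guarantee that's hard to get from C++'s printf:

const std = @import("std");

// The format string must be comptime-known, so std.fmt can check every
// specifier against the type of its argument during compilation.
fn log(comptime fmt: []const u8, args: anytype) void {
    std.debug.print("[log] " ++ fmt ++ "\n", args);
}

pub fn main() void {
    log("x = {d}, name = {s}", .{ 42, "zig" });
    // log("x = {d}", .{"oops"}); // compile error: {d} needs a numeric argument
}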
And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
> Says most people in the field of software correctness
Not true; the last 30 years have had much safer languages than before: Java, scripting languages, modern C++, and Rust.
> That's true, but these are not memory safety guarantees.
Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem. Zig doesn't have this and it doesn't have safety guarantees either.
> And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
Tangential, but memory leaks are not considered a safety issue, especially by those who do like to contrast with Rust (as they aren't prevented in Rust).
If we're talking about features that help (though not completely avoid) some bugs, you can't just consider the features C++ has and Zig doesn't, but also consider the relevant features Zig has and C++ doesn't.
Like I said, I don't know which of those two languages results in more correct programs (just as I don't know the answer for Zig vs Rust), but I do know that Zig offers more safety guarantees than C++, and Rust offers more safety guarantees than Zig. I certainly don't claim that more safety guarantees always equals more correctness at a lower cost.
Even more tangentially, in the Java world we have this thing called "integrity" (https://openjdk.org/jeps/8305968) which is the ability of Java code to locally establish inviolate invariants that are guaranteed to hold globally (unless the application author - importantly not any library code - explicitly allows them to be violated). C++ scores quite low on the integrity front, as virtually all intended invariants can be violated without a global flag, sometimes in ways that are hard to detect. In both Rust and Zig, integrity violations are generally easier to at least detect (although in Zig they're sometimes harder to establish in the first place; this is intentional, and I don't entirely agree with the justification for that, although I can see its merits in a low-level language).
> Not true, the last 30 years have had much safer languages than before java, scripting languages, modern C++ and rust.
I don't see how that contradicts what I said, especially since languages that offer even more correctness - such as Idris or ATS - have had effectively zero adoption. The languages that have succeeded are safer than C or FORTRAN, but also clearly compromise on what they offer (compared to Idris/ATS) because of costs. They very much embody an acceptance of tradeoffs, and much of the memory safety in most safe languages is offered through GCs, which come at the cost of a higher memory footprint. If anything, their growing popularity has come due to advancements in GCs.
Rust (you brought it up this time) is particularly interesting, because it offers something different than before to prevent UAF but at a higher cost than previous popular safe languages. While I don't know how popular Rust will be in the future, its current adoption is quite significantly lower than any language that's ever become popular at the same age.
> Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem
I haven't noticed that either one of these has "stopped being a problem", and I think that those who either sell or buy Rust do so because they believe these are still significant problems in C++ (and I would agree, except I think there are worse problems in C++ - that Rust, unfortunately, adopted - even with respect to correctness, that Zig attempts to solve).
> Zig doesn't have this and it doesn't have safety guarantees either
Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
> in the Java world we have this thing called "integrity"
Your claim was that zig is 'safer' than C++
> Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
This can be built into a class too, if someone really wants a bunch of branching in their math.
It seems like safety is now being redefined to say that memory leaks don't count and numeric overflow needs to be handled the way zig does it. If your program leaks memory and runs indefinitely, it eventually crashes, which means you need to free memory, which means you need to free it at the right time, exactly once.
There is no one definitive definition of memory safety, but it generally refers to things that can lead to undefined behaviour (in the C and C++ sense), usually due to "type confusion" (or sometimes "heap pollution"), i.e. referencing an address of memory that contains data of one type as if it were another, which can happen due to both bounds or UAF violations. Memory leaks don't cause undefined behaviour.
> This can be built in to a class too if someone really wants a bunch of branching in their math.
Let me say this again: The Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
That is not to say that the lack of this guarantee in C++ means you can't write correct programs in C++ as easily as in Zig or in Rust, but it is, nevertheless, a difference in the guarantees made by the language.
> It seems like now safety is being redefined to say that memory leaks don't count and numeric overflow needs to be done like zig
Memory unsafety is generally considered to be some subset of undefined behaviour (possibly including all undefined behaviour). Out-of-memory and stack overflow errors are definitely problems, but as they don't cause undefined behaviour (well, depending on stack protection) they're not usually regarded in the class of properties called memory safety.
Numeric overflows, on the other hand, might also not be regarded as memory safety, but they are very much undefined behaviour in both C and C++ (for signed integers, at least).
Memory leaks are also a safety issue. In particular, not running destructors can be a safety issue, but even a plain resource leak is at least a DoS. IIRC Rust's definition of memory safety also included the absence of memory leaks early on, but that was dropped later.
The vast majority of catastrophic problems - nearly all of them, in the grand scheme of things - including those that can cause total system failure or theft of all data, are not considered memory safety issues (which is one of the reasons that memory safety is overestimated or at least misunderstood, IMO, and why I prefer to talk about correctness in general). Memory safety refers to a specific kind of problem that corresponds to undefined behaviour in C or C++. Memory safety issues are not necessarily any more or less severe than any other program weakness; it's just that for a long time they've been associated with low-level programming.
I'm not aware of any popular language - even a high level one - that prevents memory leaks with any kind of guarantee (although these come in different flavours too, and some kinds are prevented in Java). C/C++/Rust/Zig certainly don't.
Memory safety - as now being popularized by Rust in its current form - mostly corresponds to not having UB in C or C++. My point is that this not the only definition and not even the definition Rust started with.
Memory leaks are often a part of the definition of memory safety because otherwise it is trivial to fix use-after-free, i.e. simply never free the memory. Rust dropped this part because it was too hard. So in some sense they cheated a little bit.
Well, when Rust came out I had only been programming in C and C++ for about 15, maybe 20 years, but I think that even then we generally used memory safety to refer to problems that can cause "type confusion". In any event, given that none of the languages mentioned here - C, C++, Zig, or Rust - prevent memory leaks, I don't think that the question of whether or not we include it under the umbrella of memory safety could offer insight on the interesting distinctions between these languages.
> the Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
You said that already, but when saying zig is safer than C++, pragmatically it isn't, because C++ bounds-checks in the standard library, but zig can never have the automatic resource management that C++ has, and that's what people use all day, every day.
We keep talking about completely different things. If we're talking about "features that can help reduce some bug" then C++ or Rust have some that Zig doesn't and Zig has some that C++ or Rust don't. Which ends up more pragmatic is an empirical question that's hard to answer without data, but certainly focusing only on what C++ has and Zig doesn't while ignoring what Zig has that C++ doesn't is a strange way to compare things (BTW, I've been programming in C++ for almost 3 decades, and I really dislike RAII and try to avoid it).
But if we're talking about memory safety - which is something very specific - then, for whatever it's worth, Zig is more memory-safe than C++ and Rust is more memory-safe than Zig.
> We keep talking about completely different things.
You said zig is safer than C++, then to make that argument you keep trying to redefine what safety means to include only features in the language syntax but not those done in libraries, while saying memory leaks don't matter and automatically freeing memory correctly doesn't matter.
I am not redefining what safety means. I am using the same definition of safety used in this entire thread by those debating the pros and cons of Rust being safer than Zig.
I definitely didn't say that memory leaks don't matter. They could possibly matter more than memory safety. They are just not called memory safety bugs, or code injection bugs, or off-by-one bugs. Memory safety is a name given to a class of bugs that lead to undefined behaviour in C or C++. It's not necessarily the most important class of bugs, but it is one, and when we're talking about preventing code injection or memory safety issues, we're not talking about preventing memory leaks - even if they're worse.
Now, if you want to talk about memory leaks and not memory safety (again, it's just a name given to some bugs and not others), then C, C++, Zig, and Rust do not prevent them. Java prevents the "I forgot to free this object" kind, but not the "I forgot about this object" kind.
Now, because unlike memory safety, none of these languages prevents memory leaks, it's really hard to say which of them leads to the fewest memory leaks. You really like C++'s destructors and find them useful, I really hate C++'s destructors and find them harmful, and we all have different opinions on why our way is better when it comes to memory leaks. What we don't have is data. So you can say having destructors helps and I can say no they don't until the end of time, but there's no way of knowing which is really better. So all we can do now, is to use the things we find useful to us without making broad generalisations about software correctness that we can't actually support with any evidence.
Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer. The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
> Unless you actually use the simplicity to apply formal methods I don't think simplicity make a language safer.
That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods guy, although formal methods have come a long way since the seventies, when we thought the point was to prove programs correct).
> The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).
> That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods person)
I agree that tests and reviews are somewhat effective. That's not the point. The point is that, if you look at the history of programming languages, simplicity in general goes against safety. Simplicity also goes against human understanding of code. C and assembly are extremely simple compared to Java, Python, C#, TypeScript, etc., yet programs written in C and assembly are much harder for humans to understand. This isn't just a PL thing either. Simplicity is not the same as easy; it is often the opposite.
> I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all)
It's the greatest example of taking a simple language, adding a ton of complexity, and it becoming safer. You are right that zig is simpler and safer, but it's a greenfield language. Otherwise I might as well say rust is safer than zig and also more complex. The point is to isolate simplicity as the factor as much as possible.
I would even say that zig willingly sacrifices safety on the altar of simplicity.
> The point is that if you look at the history of programming languages simplicity in general goes against safety... C and assembly are extremely simple compared to java, python, C#, typescript
But Java and Python are simpler yet safer than C++, so I don't understand what trend you can draw if there are examples in both directions.
> It's the greatest example of you take a simple language, you add a ton of complexity and it becomes more safe.
But I didn't mean to imply that's not possible to add safety with complexity. I meant that when the sound guarantees are the same in two languages, then there's an argument to be made that the simpler one would be easier to write more correct programs in. Of course, in this case Zig is not only simpler than C++, but actually offers more sound safety guarantees.
So far I think the adoption in critical infrastructure (Linux, AWS, Windows, etc.) is clearly in Rust's favor, but I agree that something at some point will replace Rust. My belief is that more guardrails will end up winning no matter the language, since the last 50 years of programming have shown us we can't rely on humans to write bug-free code, and it's even worse with LLMs.
In practice, almost all memory safety related bugs caught by the Rust compiler are caught by the Zig safe build modes at run time. This is strictly worse in isolation, but when you factor in the fact that the rest of the language is much easier to reason about, the better C interop, the simple yet powerful metaprogramming, and the great built in testing tools, the tradeoffs start to become a lot more interesting.
catching at compile time is much better, though. there are plenty of strange situations that you'll never reach at runtime (for example, odds of running into a tripwire increase over time, things that can only happen after a certain amount of memory fragmentation -- maybe you forgot an errdefer somewhere, etc.)
I think the problem with this attitude is the compiler becomes a middle manager you have to appease rather than a collaborator. Certainly there are advantages to having a manager, but if you go off the beaten track with Rust, you will not have a good time. I write most of my code in Zig these days and I think being able to segfault is a small price to pay to never have to see `Arc<RefCell<Foo<Bar<Whatever>>>` again.
I view it as a wonderful collaborator: it tells me automatically where my code is wrong, and it gets better with every release, so I can't complain really. I think a segfault is a big price to pay, but it depends on the criticality of it I guess.
You can write rust without over-using traits. Regrettably, many rust libs and domains encourage patterns like that. One of the two biggest drawbacks of the rust ecosystem.
I can't imagine writing c++ or c these days without static analysis or the various llvm sanitizers. I would think the same applies to zig. Rather than need these additional tools, rust gives you most of their benefits in the compiler. Being able to write bugs and have the code run isn't really something to boast about.
I would rather rely on a bunch of sanitizers and static analysis because it is more representative of the core problem I am solving: Producing machine code. If I want Rust to solve these problems for me I now have to write code in the Rust model, which is a layer of indirection that I have found more trouble than it's worth.
would you be satisfied if there was a static safety checker? (or if it were a compiler plugin that you trigger by running a slightly different command?). Note that zig compiles as a single object, so if you import a library and the library author does not do safety checking, your program would still do the safety checking if it doesn't cross a C abi boundary.
As someone who uses D and has been doing things like what you see in the post for a long time, I wonder why other languages would pay attention to these tricks and steal them when they have been completely ignored forever when done in D. Perhaps Zig will make these features more popular, but I'm skeptical.
I was trying to implement this trick in D using basic enum, but couldn't find a solution that works at compile-time, like in Zig. Could you show how to do that?
import std.meta: AliasSeq;

enum E { a, b, c }

void handle(E e)
{
    // Need label to break out of 'static foreach'
    Lswitch: final switch (e)
    {
        static foreach (ab; AliasSeq!(E.a, E.b))
        {
        case ab:
            handleAB();
            // No comptime switch in D
            static if (ab == E.a)
                handleA();
            else static if (ab == E.b)
                handleB();
            else
                static assert(false, "unreachable");
            break Lswitch;
        }
    case E.c:
        handleC();
        break;
    }
}
Thanks! That indeed does the equivalent of the Zig code... but it feels a bit pointless to do that in D, I think?
Could've done this and be as safe, but perhaps it loses the point of the article:
enum U { A, B, C }

void handle(U e)
{
    with (U)
    final switch (e) {
    case A, B:
        handleAB();
        if (e == A) handleA(); else handleB();
        break;
    case C:
        handleC();
        break;
    }
}
This perspective that many people take on the memory-safety of Rust seems really "interesting".
Unfortunately for all fanatics, language really doesn't matter that much.
I have been using KDE for years now and it works perfectly well for me. It has no issues/crashes, it has many features in terms of the desktop environment and also many programs that come with it, like a music player, video player, text editor, terminal, etc., and they all work perfectly well for me. Almost all of this is written in C++. No need to mention the classics like Linux, Chromium, etc., which are all written in C++/C.
I use Ghostty which is written in zig, it is amazingly polished and works super well as well.
I have built and used a lot of software written in Rust as well and they worked really well too.
At some point you have to admit that what matters is the people writing the software, the amount of effort that goes into it, etc.; it is not the language.
As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security-critical stuff. Even then, just using Rust isn't as good as you might think; I encountered a decent amount of segfaults, random crashes, etc. using very popular Rust libraries as well. In the end you just need to put in the effort.
I'm not saying language doesn't matter but it isn't even close to being the most important thing.
> As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff.
Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.
The language semantics force you to write programs in a way that is most convenient for the optimizing compiler.
Not always, but in many cases, it's likely that a program written in Rust will be highly and deeply optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you'd always have to think about what the compiler is doing under the hood.
It's true that neither safety nor performance are critical for many applications, but from this perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.
Also, treating all languages that don't ensure full memory safety as if they're equally problematic is silly. The reason not ensuring memory safety is bad is that memory unsafety is at the root of some bugs that are common, dangerous, and hard to catch. But not all kinds of memory unsafety are equally problematic: Zig does ensure the lack of the most dangerous kind of unsafety (out-of-bounds access) while making the other kind (use-after-free) easier to find.
That the distinction between "fully memory safe" and "not fully memory safe" is binary is also silly, not just because of the above, but because no language, not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory-safe languages.
Furthermore, Zig has (or intends to have) novel features (among low-level languages) that help reduce bugs beyond those caused by memory unsafety.
If you one day write a blog, I would want to subscribe.
Your writing feels accessible. I find it makes complex topics approachable. Or at least, it gives me a feel of concepts that I would otherwise have no grasp on. Other online writing tends to be permeated by a thick lattice of ideology or hyper-technical arcanery that inhibits understanding.
> Any rhetorical device that equates Java/C# (any memory safe Turing language ) safety with C is most likely a fallacy.
I agree, but I didn't do any of that. If anything my point was that 1. safety is clearly not a binary thing and no one really treats it as such (even those who claim it is a binary distinction) and 2. that trying to extrapolate from one language to another based on choosing some property that we think is the most relevant one may be assuming that which we seek to prove.
Saying that C, C++, and Zig are "the same" because they all make fewer guarantees than Rust is as silly as saying C, C++, Zig, and Rust are the same because they all offer fewer guarantees than ATS, or that Rust and Java are the same because they offer similar guarantees but with very different complexity costs.
Also, the focus on memory safety is justified because of the security bugs it causes, but the two major kinds of unsafety (out-of-bounds access and use-after free) aren't equally dangerous, and Rust pays most of its complexity cost to prevent the less dangerous of the two (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html). There's even more nuance here, because some techniques focus on reducing the risk of exploitable use-after-free bugs without preventing it or even making it easier to detect at all (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...).
It's all a matter of degree, both when it comes to the risk as well as to the cost of avoiding it. Not much here, beyond the very basics, is simple or obvious.
If you want to read some more even nuanced things I've written about software correctness, you can find some old stuff here: https://pron.github.io
I interpreted his post as saying it's not binary safe/unsafe, but rather a spectrum, with Java safer than C because of particular features that have pros and cons, not because of a magic free safe/unsafe switch. He's advocating for more nuance, not less.
Yeah, it's not binary; it's just a step function. /s
No, it's as close to binary as you can get. Is your only source of Undefined Behavior FFI and specially marked functions and/or packages? Have you checked data races for violating thread safety invariants? If yes, you're safe.
Is Go mostly safer than C++? Maybe. But you can never prove that about either of them. So while you may pretend one is safer than the other, it's a bit like picking which boat is taking on more water.
Can you prove Rust code is safe? Well, there is the simple way - no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. If the unsafe code block is written carefully, it will note the safety invariants and why they are preserved by the block. Can this be practically done? Depends on the crate, but with enough effort, yes.
Can you show RCE using this? Because, to this day, no one has been able to show me a reasonable program that someone would write and that would result in RCE from "Go memory unsafety" presented in this article. Meanwhile, I can show you thousands of examples and CVEs of how you can easily get RCE using C++.
> Can you prove Rust code is safe? Well there is the simple way - no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. If the unsafe code block is it will note safety invariants and why are they preserved by unsafe block. Can this be practically done? Depends on the crate, but with enough effort, yes.
You can’t prove Rust code "safe" in the absolute. Safety guarantees apply to safe Rust under the language’s (still evolving) rules, and even then the compiler/backend must uphold them. We still hit unsoundness[1] and miscompiles in safe code (equal pointers comparing unequal... [2]), and the official unsafe code guidelines are not a finalized spec. So documenting invariants in unsafe helps a lot, but it’s not a formal proof, especially across crates and compiler versions.
Right but I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards. Even if it is much much better than C (clearly), it's hard to get excited about a language that "unsolves" a longstanding problem.
> not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
> I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards
Memory safety (like soundly ensuring any non-trivial property) must come at a cost (that's just complexity theory). You can pay for it with added footprint (Java) or with added effort (Rust). Some people are disappointed that Zig offers more safety than C++ but less than Rust in exchange for other important benefits, while others are disappointed that the price you have to pay for even more safety in Rust is not a price they're happy to pay.
BTW, many Rust programs do use GC (that's what Rc/Arc are), it's just one that optimises for footprint rather than speed (which is definitely okay when you don't use the GC as much as in Java, but it's not really "without GC", either, when many programs do rely on GC to some extent).
> This is a silly point.
Why? It shows that even those who wish to make the distinction seem binary themselves accept that it isn't, and really believe that it matters just how much risk you take and how much you pay to reduce it.
(You could even point out that memory corruption can occur at the hardware level, so not only is the promise of zero memory corruption not necessarily worth any price, but it is also unattainable, even in principle, and if that were truly the binary line, then all of software is on the same side of it.)
> You can pay for it with added footprint (Java) or with added effort (Rust)
... or runtime errors (C, Zig presumably).
Ok Zig is clearly better than C in that regard but I think it remains to be seen if it is better enough.
> many Rust programs do use GC (that's what Rc/Arc are)
This is not what most people mean when they say GC.
> Why?
Because when we're talking about the memory safety of a language we're talking about the code you write in that language (and excluding explicit opt-in to memory unsafe behaviour, e.g. `unsafe` or Python's `ctypes`).
Saying "Java isn't memory safe because you can call C" is like saying "bicycles can fly because you can put them on a plane".
I've seen a few new languages come along that were inspired by zig's comptime/metaprogramming in the same language concept.
Zig I think has potential, but it hasn't stabilized enough yet for broad adoption. That means it'll be a while before it's built an ecosystem (libraries, engines, etc.) that is useful to developers who don't care about language design.
This isn’t intended as flamebait. I’m trying to understand Zig’s long-term positioning and design philosophy. I have serious confusion about the type of problems Zig is aiming to solve. In my view, Zig is not solving the actual hard problems in systems programming and it doesn't have the foundation to either.
Memory safety? Still entirely manual. Race conditions? Nothing in the language prevents them. There’s no ownership model, no lifetime analysis, no way to tie resource management to the type system. Compare that to Rust’s borrow checker or modern C++’s RAII and concepts. Zig’s type system is shallow. comptime is nice for generating code, but it doesn’t give you formal guarantees or expressive power for invariants, safety, or correctness.
The type system itself has no serious formal grounding. It can’t encode complex invariants, can’t track aliasing, can’t enforce concurrency safety and can’t model safe resource lifetimes. These aren’t academic extras — they’re exactly what decades of research in programming languages, operating systems and concurrent computing tell us you need to scale safety and correctness. Zig ignores them. Performance? When the policy is in the type (allocator choice, borrowing/ownership, fusion shape), Rust/C++ compilers can specialize, inline, and eliminate overhead. In Zig, the same policies are usually runtime values or conventions, which means more indirect calls, more defensive copies and fewer whole-program optimizations.
Concurrency is another major gap and in a real systems language, it cannot be an afterthought. Even if Zig isn’t currently aiming to solve concurrency or safety, a “serious” systems language inevitably has to, because these are the problems that determine scalability, maintainability and security over decades. The async model in Zig is little more than manual coroutine lowering: the compiler rewrites your function into a state machine and leaves correctness entirely to the programmer. There’s no structured concurrency, no safe cancellation, no prevention of shared-state hazards. Without a concurrency model that integrates into the type system, you can’t make guarantees about thread safety or race freedom and you end up relying entirely on discipline (which doesn’t scale).
Even in its most-touted features, Zig seems to be solving syntactic sugar problems, not the important systems problems. defer and errdefer? They’re effectively cleaner syntax for patterns C has had for decades through GNU’s __attribute__((cleanup)) or macro-based scope guards. Error unions? A nice alternative to out-parameters but just syntactic polish over an old idea. comptime? A more integrated macro system but still aimed at reducing boilerplate rather than providing deeper correctness guarantees.
The allocator interface? Another missed opportunity. Zig could have made it type-aware, preventing allocator misuse and catching entire classes of errors at compile time. Instead, it’s basically malloc/free with slightly cleaner function signatures. No safety net, no policy enforcement.
Zig discards decades of research in type systems, concurrency models, safety guarantees, and memory management, then reimplements C with a few ergonomic wins and leaves the hard problems untouched. It’s a restart without the research and not systems language evolution.
I am not a Rust fanatic but by contrast if you’re moving away from C++ or C, Rust actually tackles the big issues. It enforces memory safety without a garbage collector, prevents data races in safe code through its ownership and type system, offers structured concurrency with async/await and has been battle-tested in production for everything from browser engines to operating systems to databases. It is built on decades of progress and integrates those lessons into a language designed to scale correctness and performance together.
In my own code (primarily C++ and Rust), Zig wouldn’t solve a single core problem I face. Memory safety would still be my responsibility, concurrency would still be entirely manual, performance tuning would remain just as challenging and the type system wouldn’t catch the subtle bugs that matter most. The net result would be cosmetic changes paired with fewer correctness guarantees. Even C, for all its flaws, is better defined than Zig (both in having a detailed, standardized specification and in benefiting from partial formalization).
I am eager and optimistic that Zig starts taking itself seriously as a systems language. With new talent, deeper engagement with existing research and a focus on solving the actual hard problems, not just smoothing over C’s syntax, Zig could grow into something much more than it is today. But until then, the question remains: what problems is Zig actually solving that make it worth adopting over Rust or even modern C++? What concrete systems programming problems has Zig’s development team personally run into that shaped its design and are those really the most critical issues worth addressing in a new systems language?
If all it offers is nicer syntax over the same old pitfalls, I don’t see it and I don’t see why anyone betting on long-term systems software should.
`comptime unreachable` isn't the underlying cause and not the place you should fix in this case. A good error tells you what caused it, and where to fix it.
I can't take zig as seriously as rust due to lack of data race safety. There are just too many bugs that can happen when you have threads, share state between those threads and manually manage memory. There are so many bugs I've written because I did this wrong for many years but didn't realize until I wrote rust. I don't trust myself or anyone to get this right.
Note that only one or two lines of this file are needed here; the rest is for other experiments... And a user would simply use the "literal" macro. I also do not think the Zig version is really that much clearer or more obvious. I would not recommend writing such code anyway, neither in C nor in Zig. The "unreachable" version is far better.
This post shows how versatile Zig's comptime is not only in terms of expressing what to pre-compute before the program ever runs, but also for doing arbitrary compile time bug-checks like these. At least to me, the former is a really obvious use-case and I have no problem using that to my advantage like that. But I often seem to overlook the latter, even though it could prove really valuable.
It's not an optimization. What gets evaluated via the lazy evaluation is well defined. Control flow which has a value defined at comptime will only evaluate the path taken. In the op example, the block is evaluated twice, once for each enum value, and the inner switch is followed at comptime so only one prong is evaluated.
Nope, this is not relying on optimization, it's just how compile time evaluation works. The language guarantees "folding" here regardless of optimization level in use. The inline keyword used in the original post is not an optimization hint, it does a specific thing. It forces the switch prong to be evaluated for all possible values. This makes the value comptime, which makes it possible to have a comptime unreachable prong when switching on it.
There are similarities here to C++ if constexpr and static_assert, if those are familiar to you.
Well, for example you may have some functions which accept types and return types, which are not compatible with some input types, and indicate their incompatibility by raising an error so that compilation fails. If the program actually does not pass some type to such a function that leads to this sort of error, it would seem like a bug for the compiler to choose to evaluate that function with that argument anyway, in the same way that it would be a bug if I had said "template" throughout this comment. And it is not generally regarded as a deficiency in C++ that if the compiler suddenly chose to instantiate every template with every value or type, some of the resulting instantiations would not compile.
Is there a reason the Zig compiler can't perform type-narrowing for `u` within the `U::A(_) | U::B(_)` "guard", rendering just the set of 2 cases entirely necessary and sufficient (obviating the need for any of the solutions in the blog post)?
I'm not familiar with Zig, but also ready to find out I'm not as familiar with type systems as I thought.
and this is in a situation where this level of performance optimization is actually valuable to spend time on. it's nice that Zig lets you achieve it while reusing the logic.
Just having a comptime unreachable feature seems pretty cool. Common C++ compilers have the worst version of this with __builtin_unreachable() -- they don't do any verification the site is unreachable, and just let the optimizer go to town. (I use/recommend runtime assert/fatal/abort over that behavior most days of the week.)
The code example will work even if `u` is only known at runtime. That's because the inner switch is not matching on `u`, it's matching on `ab`, which is known at compile time due to the use of `inline`.
That may be confusing, but basically `inline` is generating different code for the branches .a and .b, so in those cases the value of `ab` is known at compile time. So, the inner switch is running at compile time too. In the .a branch it just turns into a call to handle_a(), and in the .b branch it turns into a call to handle_b().
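A small reconstruction of that pattern (my own sketch, not the article's exact code): the outer prong is `inline`, so `ab` is comptime-known inside each generated branch, the inner switch is resolved at compile time, and the `comptime unreachable` prong is never analysed for .a or .b; if some refactor ever let another tag reach that inner switch, the build would fail right there:

const std = @import("std");

const U = enum { a, b, c };

fn handle(u: U) void {
    switch (u) {
        // One branch is generated per tag, with `ab` comptime-known in each.
        inline .a, .b => |ab| {
            handleAB();
            switch (ab) {
                .a => handleA(),
                .b => handleB(),
                // Only analysed if a tag other than .a/.b could get here,
                // in which case compilation fails.
                else => comptime unreachable,
            }
        },
        .c => handleC(),
    }
}

fn handleAB() void { std.debug.print("a or b\n", .{}); }
fn handleA() void { std.debug.print("a\n", .{}); }
fn handleB() void { std.debug.print("b\n", .{}); }
fn handleC() void { std.debug.print("c\n", .{}); }

pub fn main() void {
    handle(.a);
    handle(.c);
}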
The problem this is meant to solve is that sometimes a human thinking about the logic of the program can see that it is impossible to reach some code (i.e. it is statically certain), but the language syntax and type system alone would not see the impossibility. So you can help the compiler along.
It is not meant for asserting dynamic “unreachability” (which is more like an assertion than a proof).
Sure, because it's compile-time code inside a (semantically) run-time check. In recent Rust versions you can do
fn main() {
    const {
        if false {
            let _: () = panic!();
        }
    }
}
which compiles as expected. (Note that if the binding were `const` instead of `let`, it'd still have failed to compile, because the semantics don't change.)
It's fine that we want a constant, and it's fine that this constant would, when computed at compile time, panic if `false` were true - because it isn't.
I have no idea what that's trying to do. A demonstration that rust is a large language with different dialects! A terse statement with multiple things I don't understand:
- Assigning a const conditionally?
- Naming a const _ ?
- () as a type?
- Assigning a panic to a constant (or variable) ?
To me it might as well be:
fn main() {
    match let {
        if ()::<>unimplemented!() -> else;
    }
}
To get a sense of the theoretical effort (the practical effort is something to be measured empirically, over time) consider the set of all C programs and the effort it would take to rewrite an arbitrary selection of them in Rust (while maintaining similar performance and footprint characteristics). I believe the effort is larger than doing the same to translate a JS program to a Haskell program.
> There is simply no justification to make that equivalence.
I explained in some detail exactly why this equivalence exists. I actually have a small hope that this time there are enough people who think it's a bad idea that we don't have to watch this play out for decades before the realisation sets in, as we did with C and C++.
Yes, it's exactly Rice's Theorem; it's that simple and that drastic. You can choose what to do when you're not sure, but you can't choose (no matter how much effort you imagine applying) to always be sure† - that undecidability is what Henry Rice proved. The languages you mention choose to treat "not sure" the same as "nope", like Rust does; you apparently prefer languages like Zig or C++ which instead treat "not sure" as "it's fine". I have explained why that's a terrible idea already.
The underlying fault, which is why I'm confident this reproduces, is in humans. To err is human. We are going to make mistakes and under the Rust model we will curse, perhaps blame the compiler, or the machine, and fix our mistake. In C++ or Zig our mistake compiles just fine and now the software is worse.
† For general purpose languages. One clever trick here is that you can just not be a general purpose language. Trivial semantic properties are easily decided, so if your language can make the desired properties trivial then there's no checking and Rice's Theorem doesn't apply. The easy example is, if my language has no looping type features, no recursive calls, nothing like that, all its programs trivially halt - a property we obviously can't decidably check in a general purpose language.
> I explained in some detail exactly why this equivalence exists.
No, you assumed that Zig and C++ are equivalent and concluded that they'll follow a similar trajectory. It's your premise that's unjustified.
A problem you'd have to contend with is that Rust is much more similar to C++ than Zig in multiple respects, which may matter more or less than the level of safety when predicting the language trajectory.
> But you can't choose (no matter how much effort you imagine applying) to always be sure
That is not Rice's theorem. You can certainly choose to prove every program correct. What you cannot do is have a general mechanism that would prove all programs in a certain language correct.
> One clever trick here is that you can just not be a general purpose language.
That's not so much a clever trick as the core of all simple (i.e. non-dependent) type systems. Type-safety in those languages then trivially implies some property, which is an inductive invariant (or composable invariant) that's stronger than some desired property. E.g. in Rust, "borrow/lifetime-safety" is stronger than UAF-safety.
However, because an effort to prove any property must exist, we can gauge it for a language that offers the property trivially by looking at the cost of translating a correct program written in some other language that doesn't guarantee the property into one that does. The reason why it's more of a theoretical point than a practical one is that it could reasonably be argued that writing a memory-safe program in C is harder than doing it in Rust in the first place, but either way, there's some effort there that isn't there when writing the program in, say, Java.
And yet, in reality, Rust is also on the "if I am not sure I simply attest that it is fine" side on the fence.
I've been hearing about how I'll inevitably write all this unsafe Rust for... four years now.
Some time back I checked and I had written exactly one unsafe block, and so I inspected it again and I realised two things:
1. It was no longer necessary, Rust could now just do this safely. I rewrote it in safe Rust.
2. It was technically Undefined Behaviour - predictably, given the chance to shoot myself in the foot, that's exactly what I had done. Like a lot of C and C++ it likely wouldn't in fact blow my foot off in any real scenario, but who knows? Not me, that's for sure.
You are already narrowing this down to only memory safety, which is part one of the Rust fallacies.
Which is why there is an effort to formally verify the unsafe use in the Rust standard library.
I would also say that unsafe causes a very different human reaction.
When like Zig, C or C++ everything is potentially unsafe then you can't scrutinize everything.
When submitting a PR in Rust containing unsafe code, everyone wants to understand what happens, because it is both rare and everyone is cautious about the dangers posed. The first question on everyone's mind is always: does this need unsafe?
> When like Zig, C or C++ everything is potentially unsafe
It is not true that in Zig "everything is potentially unsafe". Zig offers bounds safety, which, BTW, eliminates the most dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).
Suppose I have a self-contained Zig project and it has a nasty memory safety bug - how can I identify where the cause might be? What parts of my project source are potentially unsafe?
You've said it's not everything, so, what's excluded? What can I rule out?
You can rule out bounds violations in all but specifically marked unsafe code.
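For example (a deliberately buggy toy program of mine, not from this thread), the off-by-one below still exists, but in the safe build modes it shows up as a guaranteed panic rather than silent memory corruption:

const std = @import("std");

pub fn main() void {
    const xs = [_]u32{ 1, 2, 3 };
    var i: usize = 0;
    // Deliberate off-by-one: `<=` walks one element past the end.
    while (i <= xs.len) : (i += 1) {
        // In Debug/ReleaseSafe builds this index is checked, so the bug
        // surfaces as an "index out of bounds" panic rather than a silent
        // out-of-bounds read.
        std.debug.print("{d}\n", .{xs[i]});
    }
}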
The same useless claim could be made for C and with the same effect.
The trick Rust is doing here that Zig is not is that Rust's safe contracts are always what we would call wide contracts. As a safe Rust programmer it's never your fault because you were "holding it wrong". For example, if you insist on sorting a Vec<Foozle> even though Foozles all claim they're greater even than themselves, Rust doesn't say (as C and C++ do) "too bad, you broke it, so now all bets are off". Sorting won't be useful, because Foozles don't have a coherent ordering, but your program is fine. In fact today it's quite fast to uselessly "sort" that container.
Zig has numerous narrow contracts, which means when you write Zig touching any of those contracts it is your responsibility as a Zig programmer to ensure all their requirements are upheld, and when you in turn create code or types you will likely find you add yet further narrowness - so you can be, and in practice often are, "holding it wrong".
> The same useless claim could be made for C and with the same effect
It really can't be.
Memory safety is problematic because it's a common cause of some dangerous bugs. Of the two main kinds of memory unsafety, Rust generally eliminates both, leaving only unsafe Rust and foreign code as possible sites of memory unsafety. Zig, on the other hand, generally eliminates only the more dangerous kind, leaving only unsafe Zig and foreign code as possible sites of that kind.
Mind you, the vast majority of horrific, catastrophic bugs are not due to UAF. So if we get a horrific, catastrophic bug in Rust, we can eliminate UAF as a cause, which still leaves us with most possible causes, just as in most programming languages already used to write most of the software in the world.
This point of "ha-ha, you also got a segfault while I only got all the other bugs" doesn't make sense from a software correctness perspective.
There is no binary line you can draw between Rust and Zig, dismissing Zig's superior safety over C, that couldn't also be drawn between Rust and languages that make far stronger guarantees, putting Rust in the same bucket as C. If you think that the argument "Rust, just like C, is unable to guarantee the vast majority of correctness properties that ATS can, therefore it is equally useless" is silly, then so is trying to put Zig and C in the same bucket.
If you believe that eliminating certain classes of bugs is important for correctness even when you don't eliminate most bugs, then I don't see how a language that eliminates the more dangerous class of the two that Rust eliminates is "just as useless" as a language that eliminates neither.
I have been programming in both C++ and Java for a very long time, and while I appreciate Java's safety, the main difference between the two languages for me hasn't been a difference in correctness but in productivity. That productivity comes from Java's superior abstraction - I can make many different kinds of local changes without affecting other code at all, and that is not the case in a low-level language, be it C, C++, Zig, or Rust. I think it's good that Zig and Rust offer bounds ("spatial") safety. I also think it's good that Rust offers UAF ("temporal") safety, but I find the price of that too high for my liking.
Of course, my experience is not universal because I use C++ only for really low-level stuff (mostly when working on the HotSpot VM these days) where both Zig and Rust would have been used in their unsafe flavours anyway, because I'm more than happy to pay the increased memory footprint for higher productivity in other cases.
And then we forget about all other types of undefined behavior?
I would say that a cursory search on ”segfault” in the Bun repo tells a different story.
https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
What is your reason to claim zig is safer than c++?
Bounds safety by default, nullability is opt-in and checks are enforced by the type-system, far less "undefined behaviour", less implicit integer casting (the ergonomics could still use some work here), etc.
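A small sketch of the nullability point (the function and its names here are made up):

fn displayNameLen(name: ?[]const u8) usize {
    // `name` is an optional; using it as a plain slice is a compile error,
    // so the null case has to be handled before the value can be used.
    if (name) |n| {
        return n.len;
    }
    return 0;
}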
This is on top of the cultural part, which has led to idiomatic Zig being less likely to heap allocate in the first place, and more likely to consider ownership in advance. This part shouldn't be underestimated.
> This part can't be underestimated.
You presumably intend "shouldn't be underestimated" rather than "can't be". I agree that culture is crucial, but the technology needs to support that culture and in this respect Zig's technology is lacking. I would love to imagine that the culture drives technology such that Zig will fix the problem before 1.0, but Zig is very much an auteur language like Jai or Odin, Andrew decides and he does not seem to have quite the same outlook so I do not expect that.
> You presumably intend "shouldn't be underestimated" rather than "can't be".
Good call, I've fixed that.
> Zig is safer than C++
Maybe if someone bends over backwards to rationalize it, but not in any real sense. Zig doesn't have automatic memory management or move semantics.
In C++ you can put bounds checking in your data structures and it is already in the standard data structures. You can't build RAII and moves into zig.
> Maybe if someone bends over backwards to rationalize it, but not in any real sense.
In a simple, real sense. Zig prevents out-of-bounds access just as Rust does; C++ doesn't. Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).
> You can't build RAII and moves into zig.
So RAII is part of the definition of memory safety now?
Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
We could, of course, argue over which of Rust, Zig, and C++ offers the best contribution to correctness beyond the sound guarantees they make, except these are empirical arguments with little empirical data to make any determination, which is part of my point.
Software correctness is such a complicated topic and, if anything, it's become more, not less, mysterious over the decades (see Tony Hoare's astonishment that unsound methods have proven more effective than sound methods in many regards). It's now understood to be a complicated game of confidence vs cost that depends on a great many factors. Those who claim to have definitive solutions don't know what they're talking about (or are making unfounded extrapolations).
> C++ doesn't.
Then why do my data structures detect if I go out of bounds?
> Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety
I didn't say anything about rust.
> So RAII is part of the definition of memory safety now?
Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
> Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
Why are you talking about rust here? Focus on what I'm saying.
> We could, of course, argue over which of Rust, Zig, and C++
> if anything, it's become more, not less, mysterious over the decades
Says who?
I don't care about rust or zig, I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
> Then why do my data structures detect if I go out of bounds?
I didn't mean you can't write C++ code that enforces that, I said C++ itself doesn't enforce it.
> Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
Surely there are other ways to do that. E.g. Zig has defer. You can say that you may forget to write defer, which is true, but the implicitness of RAII has caused (me, at least) many problems over the years. It's a pros-and-cons thing, and Zig chooses the side of explicitness.
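A rough sketch of what that explicitness looks like (the helper and the sizes are arbitrary, not from the thread):

const std = @import("std");

fn fill(buf: []u8) !void {
    if (buf.len == 0) return error.Empty;
    @memset(buf, 0);
}

fn makeBuffer(allocator: std.mem.Allocator) ![]u8 {
    const buf = try allocator.alloc(u8, 64);
    errdefer allocator.free(buf); // runs only if a later step fails
    try fill(buf);
    return buf; // on success, ownership passes to the caller
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;
    const buf = try makeBuffer(allocator);
    defer allocator.free(buf); // cleanup is spelled out, not implied by a type
    std.debug.print("{d} bytes\n", .{buf.len});
}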
> Why are you talking about rust here? Focus on what I'm saying.
You're right, sorry :)
> Says who?
Says most people in the field of software correctness (and me https://pron.github.io). In the seventies, the prevalent opinion was that proofs of correctness would be the only viable approach to correctness. Since then, we've learnt two things, both of which were surprising.
The first was new results in the computational complexity of model checking (not to be confused with the computational complexity of model checkers; we're talking about the intrinsic computational complexity of the model checking problem, i.e. the problem of knowing whether a program satisfies some correctness property, regardless of how we learn that). This included results (e.g. by Philippe Schnoebelen) showing that, even though one might reasonably expect language abstractions to make the problem easier, they don't - not even in the worst case.
The second was that unsound techniques, including engineering best practices, have proven far more effective than was thought possible in the seventies. This came as quite a shock to formal methods people (most famously, Tony Hoare, who wrote a famous paper about it).
As a result, the field of software correctness has shifted its main focus from proving programs correct to finding interesting confidence/cost tradeoffs to reduce the number of bugs, realising that there's no single best path to more correctness (as far as we know today).
> I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
That's true, but these are not memory safety guarantees. These are mechanisms that could mitigate bugs (though perhaps cause others), and Zig has other, different mechanisms to mitigate bugs (though perhaps cause others). E.g. see how easy it is to write a type-safe printf in Zig compared to C++, or how Zig handles various numeric overflow issues compared to C++. So while it's true that C++ has some features we may find helpful that Zig doesn't, and vice-versa, we can't judge which of them leads to more correct programs. All I said was that Zig offers more safety guarantees than C++, which it does.
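To make those two examples concrete (a toy sketch of mine, assuming current standard-library names):

const std = @import("std");

pub fn main() void {
    // The format string is checked against the arguments at compile time;
    // a mismatch is a compile error rather than garbage output.
    std.debug.print("{s}: {d}\n", .{ "answer", 42 });
    // std.debug.print("{s}: {d}\n", .{42}); // would not compile

    // Overflow is not silently wrapped: plain `+` is a checked panic in the
    // safe build modes, `+%` wraps explicitly, and std.math.add reports an
    // error value you have to handle.
    var a: u8 = 250;
    a +%= 10; // wraps to 4, and says so in the source
    const checked = std.math.add(u8, 250, 10) catch 255; // error.Overflow -> 255
    std.debug.print("{d} {d}\n", .{ a, checked });
}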
> Zig has defer.
And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
> Says most people in the field of software correctness
Not true, the last 30 years have had much safer languages than before: Java, scripting languages, modern C++ and Rust.
> That's true, but these are not memory safety guarantees.
Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem. Zig doesn't have this and it doesn't have safety guarantees either.
>C has free; zig has defer
But even with a C++ destructor, if you forget to dealloc a private heap allocation ... then you're in the same darn place.
> And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
Tangential, but memory leaks are not considered a safety issue, especially by those who do like to contrast with Rust (as it isn't prevented in Rust).
If we're talking about features that help avoid (though not completely prevent) some bugs, you can't just consider the features C++ has and Zig doesn't; you also have to consider the relevant features Zig has and C++ doesn't.
Like I said, I don't know which of those two languages results in more correct programs (just as I don't know the answer for Zig vs Rust), but I do know that Zig offers more safety guarantees than C++, and Rust offers more safety guarantees than Zig. I certainly don't claim that more safety guarantees always equals more correctness at a lower cost.
Even more tangentially, in the Java world we have this thing called "integrity" (https://openjdk.org/jeps/8305968) which is the ability of Java code to locally establish inviolate invariants that are guaranteed to hold globally (unless the application author - importantly not any library code - explicitly allows them to be violated). C++ scores quite low on the integrity front, as virtually all intended invariants can be violated without a global flag, sometimes in ways that are hard to detect. In both Rust and Zig, integrity violations are generally easier to at least detect (although in Zig they're sometimes harder to establish in the first place; this is intentional, and I don't entirely agree with the justification for that, although I can see its merits in a low-level language).
> Not true, the last 30 years have had much safer languages than before: Java, scripting languages, modern C++ and Rust.
I don't see how that contradicts what I said, especially since languages that offer even more correctness - such as Idris or ATS - have had effectively zero adoption. The languages that have succeeded are safer than C or FORTRAN, but also clearly compromise on what they offer (compared to Idris/ATS) because of costs. They very much embody an acceptance of tradeoffs, and much of the memory safety in most safe languages is offered through GCs, which come with the cost of higher memory footprint. If anything, their growing popularity has come from advancements in GCs.
Rust (you brought it up this time) is particularly interesting, because it offers something different than before to prevent UAF, but at a higher cost than previous popular safe languages. While I don't know how popular Rust will be in the future, its current adoption is significantly lower than that of any language that's ever become popular at the same age.
> Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem
I haven't noticed that either one of these has "stopped being a problem", and I think that those who either sell or buy Rust do so because they believe these are still significant problems in C++ (and I would agree, except I think there are worse problems in C++ - that Rust, unfortunately, adopted - even with respect to correctness, that Zig attempts to solve).
> Zig doesn't have this and it doesn't have safety guarantees either
Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
> memory leaks are not considered a safety issue
Who told you that?
> in the Java world we have this thing called "integrity"
Your claim was that zig is 'safer' than C++
> Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
This can be built in to a class too if someone really wants a bunch of branching in their math.
It seems like now safety is being redefined to say that memory leaks don't count and numeric overflow needs to be done like zig. If your program leaks memory, it eventually crashes if it runs indefinitely, and that means you need to free memory, which means you need to free it at the right time, exactly once.
> Who told you that?
There is no one definitive definition of memory safety, but it generally refers to things that can lead to undefined behaviour (in the C and C++ sense), usually due to "type confusion" (or sometimes "heap pollution"), i.e. referencing an address of memory that contains data of one type as if it were another, which can happen due to either bounds or UAF violations. Memory leaks don't cause undefined behaviour.
> This can be built in to a class too if someone really wants a bunch of branching in their math.
Let me say this again: The Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
That is not to say that the lack of this guarantee in C++ means you can't write correct programs in C++ as easily as in Zig or in Rust, but it is, nevertheless, a difference in the guarantees made by the language.
> It seems like now safety is being redefined to say that memory leaks don't count and numeric overflow needs to be done like zig
Memory unsafety is generally considered to be some subset of undefined behaviour (possibly including all undefined behaviour). Out-of-memory and stack overflow errors are definitely problems, but as they don't cause undefined behaviour (well, depending on stack protection) they're not usually regarded in the class of properties called memory safety.
Numeric overflows, on the other hand, might also not be regarded as memory safety, but signed overflow is very much undefined behaviour in both C and C++.
Memory leaks are also a safety issue. Not running destructors, especially, can be a safety issue, and a resource leak is at least a DoS vector. IIRC Rust also included not having memory leaks in its earlier definition of memory safety, but dropped it later.
The vast majority of catastrophic problems - nearly all of them, in the grand scheme of things - including those that can cause total system failure or theft of all data are not considered memory safety issues (which is one of the reasons that memory safety is overestimated or at least misunderstood, IMO, and why I prefer to talk about correctness in general). Memory safety refers to a specific kind of problem that corresponds to undefined behaviour in C or C++. Memory safety issues are not necessarily any more or less severe than any other program weakness; it's just that for a long time they've been associated with low-level programming.
I'm not aware of any popular language - even a high level one - that prevents memory leaks with any kind of guarantee (although these come in different flavours too, and some kinds are prevented in Java). C/C++/Rust/Zig certainly don't.
Memory safety - as now being popularized by Rust in its current form - mostly corresponds to not having UB in C or C++. My point is that this not the only definition and not even the definition Rust started with.
Memory leaks are often a part of the definition of memory safety because otherwise it is trivial to fix use-after-free: simply never free the memory. Rust dropped this part because it was too hard. So in some sense they cheated a little bit.
Well, when Rust came out I had only been programming in C and C++ for about 15, maybe 20 years, but I think that even then we generally used memory safety to refer to problems that can cause "type confusion". In any event, given that none of the languages mentioned here - C, C++, Zig, or Rust - prevent memory leaks, I don't think that the question of whether or not we include it under the umbrella of memory safety could offer insight on the interesting distinctions between these languages.
> the Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
You said that already, but when saying Zig is safer than C++, pragmatically it isn't, because C++ bounds-checks in the standard library, while Zig can never have the automatic resource management that C++ has, and that's what people use all day, every day.
We keep talking about completely different things. If we're talking about "features that can help reduce some bug" then C++ or Rust have some that Zig doesn't and Zig has some that C++ or Rust don't. Which ends up more pragmatic is an empirical question that's hard to answer without data, but certainly focusing only on what C++ has and Zig doesn't while ignoring what Zig has that C++ doesn't is a strange way to compare things (BTW, I've been programming in C++ for almost 3 decades, and I really dislike RAII and try to avoid it).
But if we're talking about memory safety - which is something very specific - then, for whatever it's worth, Zig is more memory-safe than C++ and Rust is more memory-safe than Zig.
> We keep talking about completely different things.
You said zig is safer than C++, then to make that argument you keep trying to redefine what safety means to include only features in the language syntax but not done in libraries while saying memory leaks don't matter and automatically freeing memory correctly doesn't matter.
I am not redefining what safety means. I am using the same definition of safety used in this entire thread by those debating the pros and cons of Rust being safer than Zig.
I definitely didn't say that memory leaks don't matter. They could possibly matter more than memory safety. They are just not called memory safety bugs, or code injection bugs, or off-by-one bugs. Memory safety is a name given to a class of bugs that lead to undefined behaviour in C or C++. It's not necessarily the most important class of bugs, but it is one, and when we're talking about preventing code injection or memory safety issues, we're not talking about preventing memory leaks - even if they're worse.
Now, if you want to talk about memory leaks and not memory safety (again, it's just a name given to some bugs and not others), then C, C++, Zig, and Rust do not prevent them. Java prevents the "I forgot to free this object" kind, but not the "I forgot about this object" kind.
Now, because unlike memory safety, none of these languages prevents memory leaks, it's really hard to say which of them leads to the fewest memory leaks. You really like C++'s destructors and find them useful, I really hate C++'s destructors and find them harmful, and we all have different opinions on why our way is better when it comes to memory leaks. What we don't have is data. So you can say having destructors helps and I can say no they don't until the end of time, but there's no way of knowing which is really better. So all we can do now, is to use the things we find useful to us without making broad generalisations about software correctness that we can't actually support with any evidence.
Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer. The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C, but I trust modern C++ much more in terms of memory safety.
> Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer.
That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods guy, although formal methods have come a long way since the seventies, when we thought the point was to prove programs correct).
> The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).
> That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods person)
I agree that tests and reviews are somewhat effective. That's not the point. The point is that if you look at the history of programming languages simplicity in general goes against safety. Simplicity also goes against human understanding of code. C and assembly are extremely simple compared to java, python, C#, typescript etc. yet programs written in C and assembly are much harder to understand for humans. This isn't just a PL thing either. Simplicity is not the same as easy, it often is the opposite.
> I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).
It's the greatest example of you take a simple language, you add a ton of complexity and it becomes more safe. You are right that Zig is simpler and safer, but it's a greenfield language. Otherwise I might as well say Rust is safer than Zig and also more complex. The point is to isolate simplicity as the factor as much as possible.
I would even say that Zig willingly sacrifices safety on the altar of simplicity.
> The point is that if you look at the history of programming languages simplicity in general goes against safety... C and assembly are extremely simple compared to java, python, C#, typescript
But Java and Python are simpler yet safer than C++, so I don't understand what trend you can draw if there are examples in both directions.
> It's the greatest example of you take a simple language, you add a ton of complexity and it becomes more safe.
But I didn't mean to imply that's not possible to add safety with complexity. I meant that when the sound guarantees are the same in two languages, then there's an argument to be made that the simpler one would be easier to write more correct programs in. Of course, in this case Zig is not only simpler than C++, but actually offers more sound safety guarantees.
I do not find C code harder to understand than C++ - quite the opposite.
So far I think the adoption in critical infrastructure (Linux, AWS, Windows, etc.) is clearly in Rust's favor, but I agree that something at some point will replace Rust. My belief is that more guardrails will end up winning no matter the language, since the last 50 years of programming have shown us we can't rely on humans to write bug-free code, and it's even worse with LLMs.
In practice, almost all memory safety related bugs caught by the Rust compiler are caught by the Zig safe build modes at run time. This is strictly worse in isolation, but when you factor in the fact that the rest of the language is much easier to reason about, the better C interop, the simple yet powerful metaprogramming, and the great built in testing tools, the tradeoffs start to become a lot more interesting.
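As a toy illustration of the run-time side (my sketch; the exact detection behaviour depends on the allocator options and the build mode):

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // reports any leaked allocations on exit

    const allocator = gpa.allocator();
    const p = try allocator.create(u32);
    p.* = 123;
    allocator.destroy(p);
    // allocator.destroy(p); // double free: detected and reported at run time
    // p.* = 456;            // use-after-free: may also be caught, depending
    //                       //   on the allocator's safety configuration
}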
Catching at compile time is much better, though. There are plenty of strange situations that you'll never reach at runtime (for example, odds of running into a tripwire increase over time, some things can only happen after a certain amount of memory fragmentation -- maybe you forgot an errdefer somewhere, etc.).
I think the problem with this attitude is the compiler becomes a middle manager you have to appease rather than a collaborator. Certainly there are advantages to having a manager, but if you go off the beaten track with Rust, you will not have a good time. I write most of my code in Zig these days and I think being able to segfault is a small price to pay to never have to see `Arc<RefCell<Foo<Bar<Whatever>>>>` again.
I view it as a wonderful collaborator, it tells me automatically where my code is wrong and it gets better with every release, I can't complain really. I think a segfault is a big price to pay, but it depends on the criticality of it I guess.
Not to mention that `Arc` uses a GC (and not a stellar one, at that)...
You can use alternative GC such as crossbeam if you want. You're not locked into using an Arc.
Lol, what are you even trying to say here?
Is Zig such an amazing language that while using it you won't ever need reference-counted pointers?
You can write rust without over-using traits. Regrettably, many rust libs and domains encourage patterns like that. One of the two biggest drawbacks of the rust ecosystem.
I can't imagine writing C++ or C these days without static analysis or the various LLVM sanitizers. I would think the same applies to Zig. Rather than needing these additional tools, Rust gives you most of their benefits in the compiler. Being able to write bugs and have the code run isn't really something to boast about.
I would rather rely on a bunch of sanitizers and static analysis because it is more representative of the core problem I am solving: Producing machine code. If I want Rust to solve these problems for me I now have to write code in the Rust model, which is a layer of indirection that I have found more trouble than it's worth.
How do you guard concurrent access in your multithreaded code?
Due diligence every single time after the tenth refactor?
Nit: I think you want crux in that phrase, not crust.
Thanks! Can't edit anymore, I guess I was feeling hungry this morning
Would you be satisfied if there were a static safety checker? (Or if it were a compiler plugin that you trigger by running a slightly different command?) Note that Zig compiles as a single object, so if you import a library and the library author does not do safety checking, your program would still do the safety checking as long as it doesn't cross a C ABI boundary.
https://www.youtube.com/watch?v=ZY_Z-aGbYm8
As someone who uses D and has been doing things like what you see in the post for a long time, I wonder why other languages would pay attention to these tricks and steal them when they have been completely ignored forever when done in D. Perhaps Zig will make these features more popular, but I'm skeptical.
I was trying to implement this trick in D using basic enum, but couldn't find a solution that works at compile-time, like in Zig. Could you show how to do that?
Thanks! That indeed does the equivalent as the Zig code... but feels a bit pointless to do that in D, I think?
Could've done this and be as safe, but perhaps it loses the point of the article:
This perspective that many people take on memory-safety of Rust seems really "interesting".
Unfortunately for all fanatics, language really doesn't matter that much.
I have been using KDE for years now and it works perfectly well for me. It has no issues/crashes, it has many features in terms of desktop environment, and also many programs that come with it, like a music player, video player, text editor, terminal, etc., and they all work perfectly well for me. Almost all of this is written in C++. No need to mention the classics like Linux, Chromium, etc., which are all written in C++/C.
I use Ghostty which is written in zig, it is amazingly polished and works super well as well.
I have built and used a lot of software written in Rust as well and they worked really well too.
At some point you have to admit, what matters is the people writing software, the amount of effort that goes into it, etc.; it is not the language.
As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security-critical stuff. Even then, just using Rust isn't as good as you might think; I encountered a decent amount of segfaults, random crashes, etc. using very popular Rust libraries as well. In the end you just need to put in the effort.
I'm not saying language doesn't matter but it isn't even close to being the most important thing.
> As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff.
Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.
The language semantics force you to write programs in a way that is most convenient for the optimizing compiler.
Not always, but in many cases, it's likely that a program written in Rust will be highly and deeply optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you'd always have to think about what the compiler is doing under the hood.
It's true that neither safety nor performance are critical for many applications, but from this perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.
> just another memory unsafe language
Also, treating all languages that don't ensure full memory safety as if they're equally problematic is silly. The reason not ensuring memory safety is bad is because memory unsafety is at the root of some bugs that are at once common, dangerous, and hard to catch. But not all kinds of memory unsafety are equally problematic: Zig does ensure the lack of the most dangerous kind of unsafety (out-of-bounds access) while making the other kind (use-after-free) easier to find.
That the distinction between "fully memory safe" and "not fully memory safe" is binary is also silly, not just because of the above, but because no language, not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
Furthermore, Zig has (or intends to have) novel features (among low-level languages) that help reduce bugs beyond those caused by memory unsafety.
If you one day write a blog, I would want to subscribe.
Your writing feels accessible. I find it makes complex topics approachable. Or at least, it gives me a feel of concepts that I would otherwise have no grasp on. Other online writing tends to be permeated by a thick lattice of ideology or hyper-technical arcanery that inhibits understanding.
Thank you!
I did have one once (https://pron.github.io) but I don't know how accessible it is :) (two post series are book-length)
> Your writing feels accessible. I find it makes complex topics approachable
Yeah. By omitting a large swath of nuance. It reeks of "you can approximate a cow with a sphere the size of Jupiter". It's bafflingly ludicrous.
Any rhetorical device that equates Java/C# (any memory-safe Turing-complete language) safety with C is most likely a fallacy.
> Any rhetorical device that equates Java/C# (any memory safe Turing language ) safety with C is most likely a fallacy.
I agree, but I didn't do any of that. If anything my point was that 1. safety is clearly not a binary thing and no one really treats it as such (even those who claim it is a binary distinction) and 2. that trying to extrapolate from one language to another based on choosing some property that we think is the most relevant one may be assuming that which we seek to prove.
Saying that C, C++, and Zig are "the same" because they all make fewer guarantees than Rust is as silly as saying C, C++, Zig, and Rust are the same because they all offer fewer guarantees than ATS, or that Rust and Java are the same because they offer similar guarantees but with very different complexity costs.
Also, the focus on memory safety is justified because of the security bugs it causes, but the two major kinds of unsafety (out-of-bounds access and use-after free) aren't equally dangerous, and Rust pays most of its complexity cost to prevent the less dangerous of the two (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html). There's even more nuance here, because some techniques focus on reducing the risk of exploitable use-after-free bugs without preventing it or even making it easier to detect at all (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...).
It's all a matter of degree, both when it comes to the risk as well as to the cost of avoiding it. Not much here, beyond the very basics, is simple or obvious.
If you want to read some more even nuanced things I've written about software correctness, you can find some old stuff here: https://pron.github.io
I interpreted his post as saying it's not binary safe/unsafe, but rather a spectrum, with Java safer than C because of particular features that have pros and cons, not because of a magic free safe/unsafe switch. He's advocating for more nuance, not less.
Yeah, it's not binary; it's just a step function. /s
No, it's as close to binary as you can get. Is your only source of Undefined Behavior FFI, specially marked functions, and/or packages? Have you checked data races for violating thread safety invariants? If yes, you're safe.
Allow a bit of unsafety into the system, like Go, and the unsafety can creep into your ecosystem. See https://www.ralfj.de/blog/2025/07/24/memory-safety.html
Is Go mostly safer than C++? Maybe. But you can never prove that about either of them. So while you may pretend one is safer than the other, it's a bit like picking which boat is taking on more water.
Can you prove Rust code is safe? Well, there is the simple way - no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. A well-written unsafe block will note its safety invariants and why they are preserved by the block. Can this be practically done? Depends on the crate, but with enough effort, yes.
> Is Go mostly safer than C++? Maybe
Maybe? You forgot /s there? Asking if Go is mostly safer than C++ is like asking if child proof caps are mostly safer than mason jars for medicine.
> https://www.ralfj.de/blog/2025/07/24/memory-safety.html
Can you show RCE using this? Because, to this day, no one has been able to show me a reasonable program that someone would write and that would result in RCE from "Go memory unsafety" presented in this article. Meanwhile, I can show you thousands of examples and CVEs of how you can easily get RCE using C++.
> Can you prove Rust code is safe? Well, there is the simple way - no unsafe. But what about unsafe blocks? Yes, you can prove it for them as well. A well-written unsafe block will note its safety invariants and why they are preserved by the block. Can this be practically done? Depends on the crate, but with enough effort, yes.
You can’t prove Rust code "safe" in the absolute. Safety guarantees apply to safe Rust under the language’s (still evolving) rules, and even then the compiler/backend must uphold them. We still hit unsoundness[1] and miscompiles in safe code (equal pointers comparing unequal... [2]), and the official unsafe code guidelines are not a finalized spec. So documenting invariants in unsafe helps a lot, but it’s not a formal proof, especially across crates and compiler versions.
1. https://github.com/rust-lang/rust/issues/107975
2. https://github.com/rust-lang/rust/labels/I-unsound
On the safety spectrum: C/C++ -> Zig -> Go -> Rust
Right but I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards. Even if it is much much better than C (clearly), it's hard to get excited about a language that "unsolves" a longstanding problem.
> not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.
This is a silly point.
> I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards
Memory safety (like soundly ensuring any non-trivial property) must come at a cost (that's just complexity theory). You can pay for it with added footprint (Java) or with added effort (Rust). Some people are disappointed that Zig offers more safety than C++ but less than Rust in exchange for other important benefits, while others are disappointed that the price you have to pay for even more safety in Rust is not a price they're happy to pay.
BTW, many Rust programs do use GC (that's what Rc/Arc are), it's just one that optimises for footprint rather than speed (which is definitely okay when you don't use the GC as much as in Java, but it's not really "without GC", either, when many programs do rely on GC to some extent).
> This is a silly point.
Why? It shows that even those who wish to make the distinction seem binary themselves accept that it isn't, and really believe that it matters just how much risk you take and how much you pay to reduce it.
(You could even point out that memory corruption can occur at the hardware level, so not only is the promise of zero memory corruption not necessarily worth any price, but it is also unattainable, even in principle, and if that were truly the binary line, then all of software is on the same side of it.)
> You can pay for it with added footprint (Java) or with added effort (Rust)
... or runtime errors (C, Zig presumably).
Ok Zig is clearly better than C in that regard but I think it remains to be seen if it is better enough.
> many Rust programs do use GC (that's what Rc/Arc are)
This is not what most people mean when they say GC.
> Why?
Because when we're talking about the memory safety of a language we're talking about the code you write in that language (and excluding explicit opt-in to memory unsafe behaviour, e.g. `unsafe` or Python's `ctypes`).
Saying "Java isn't memory safe because you can call C" is like saying "bicycles can fly because you can put them on a plane".
Concur. This is a great feature I wish rust had. I've been bitten by the unpleasant syntax this article laments.
I've seen a few new languages come along that were inspired by zig's comptime/metaprogramming in the same language concept.
Zig I think has potential, but it hasn't stabilized enough yet for broad adoption. That means it'll be a while before it's built an ecosystem (libraries, engines, etc.) that is useful to developers who don't care about language design.
This isn’t intended as flamebait. I’m trying to understand Zig’s long-term positioning and design philosophy. I have serious confusion about the type of problems Zig is aiming to solve. In my view, Zig is not solving the actual hard problems in systems programming and it doesn't have the foundation to either.
Memory safety? Still entirely manual. Race conditions? Nothing in the language prevents them. There’s no ownership model, no lifetime analysis, no way to tie resource management to the type system. Compare that to Rust’s borrow checker or modern C++’s RAII and concepts. Zig’s type system is shallow. comptime is nice for generating code, but it doesn’t give you formal guarantees or expressive power for invariants, safety, or correctness.
The type system itself has no serious formal grounding. It can’t encode complex invariants, can’t track aliasing, can’t enforce concurrency safety and can’t model safe resource lifetimes. These aren’t academic extras — they’re exactly what decades of research in programming languages, operating systems and concurrent computing tell us you need to scale safety and correctness. Zig ignores them. Performance? When the policy is in the type (allocator choice, borrowing/ownership, fusion shape), Rust/C++ compilers can specialize, inline, and eliminate overhead. In Zig, the same policies are usually runtime values or conventions, which means more indirect calls, more defensive copies and fewer whole-program optimizations.
Concurrency is another major gap and in a real systems language, it cannot be an afterthought. Even if Zig isn’t currently aiming to solve concurrency or safety, a “serious” systems language inevitably has to, because these are the problems that determine scalability, maintainability and security over decades. The async model in Zig is little more than manual coroutine lowering: the compiler rewrites your function into a state machine and leaves correctness entirely to the programmer. There’s no structured concurrency, no safe cancellation, no prevention of shared-state hazards. Without a concurrency model that integrates into the type system, you can’t make guarantees about thread safety or race freedom and you end up relying entirely on discipline (which doesn’t scale).
Even in its most-touted features, Zig seems to be solving syntactic sugar problems, not the important systems problems. defer and errdefer? They’re effectively cleaner syntax for patterns C has had for decades through GNU’s __attribute__((cleanup)) or macro-based scope guards. Error unions? A nice alternative to out-parameters but just syntactic polish over an old idea. comptime? A more integrated macro system but still aimed at reducing boilerplate rather than providing deeper correctness guarantees.
The allocator interface? Another missed opportunity. Zig could have made it type-aware, preventing allocator misuse and catching entire classes of errors at compile time. Instead, it’s basically malloc/free with slightly cleaner function signatures. No safety net, no policy enforcement.
Zig discards decades of research in type systems, concurrency models, safety guarantees, and memory management, then reimplements C with a few ergonomic wins and leaves the hard problems untouched. It’s a restart without the research and not systems language evolution.
I am not a Rust fanatic but by contrast if you’re moving away from C++ or C, Rust actually tackles the big issues. It enforces memory safety without a garbage collector, prevents data races in safe code through its ownership and type system, offers structured concurrency with async/await and has been battle-tested in production for everything from browser engines to operating systems to databases. It is built on decades of progress and integrates those lessons into a language designed to scale correctness and performance together.
In my own code (primarily C++ and Rust), Zig wouldn’t solve a single core problem I face. Memory safety would still be my responsibility, concurrency would still be entirely manual, performance tuning would remain just as challenging and the type system wouldn’t catch the subtle bugs that matter most. The net result would be cosmetic changes paired with fewer correctness guarantees. Even C, for all its flaws, is better defined than Zig (both in having a detailed, standardized specification and in benefiting from partial formalization).
I am eager and optimistic that Zig starts taking itself seriously as a systems language. With new talent, deeper engagement with existing research and a focus on solving the actual hard problems, not just smoothing over C’s syntax, Zig could grow into something much more than it is today. But until then, the question remains: what problems is Zig actually solving that make it worth adopting over Rust or even modern C++? What concrete systems programming problems has Zig’s development team personally run into that shaped its design and are those really the most critical issues worth addressing in a new systems language?
If all it offers is nicer syntax over the same old pitfalls, I don’t see it and I don’t see why anyone betting on long-term systems software should.
If you make advancements but disregard the advancements that came before you, you have a research language, not a modern usable language.
By this definition, every major programming language in use today (C, C++, Java, Python, ...) is merely a research language.
All of your examples were created three decades ago or more.
> «inline else» is also very powerful tool to easily abstract away code with no runtime cost.
Sure, but you lose the clarity of errors. The error wasn't in `comptime unreachable` but in `inline .a .b .c`.
I disagree, I would say the error is in "comptime unreachable" or maybe the whole "switch (ab)".
`comptime unreachable` isn't the underlying cause and not the place you should fix in this case. A good error tells you what caused it, and where to fix it.
Adding a new case is legitimate, failing to handle it (by reaching unreachable) is an error.
I can't take zig as seriously as rust due to lack of data race safety. There are just too many bugs that can happen when you have threads, share state between those threads and manually manage memory. There are so many bugs I've written because I did this wrong for many years but didn't realize until I wrote rust. I don't trust myself or anyone to get this right.
It is great to see other languages getting the same compile-time meta programming features as C ;-)
https://godbolt.org/z/P1r49nTWo
I didn't know godbolt let you include URLs. Fascinating: https://raw.githubusercontent.com/uecker/noplate/main/src/co...
I think Zig gets some points here for legibility ;-).
Note that only one or two lines of this file are needed here; the rest is for other experiments. A user would simply use the "literal" macro. I also do not think the Zig version is really that much clearer or more obvious. I would not recommend writing such code anyway, neither in C nor in Zig. The "unreachable" version is far better.
This post shows how versatile Zig's comptime is, not only for expressing what to pre-compute before the program ever runs, but also for doing arbitrary compile-time bug checks like these. At least to me, the former is a really obvious use case and I have no problem using it to my advantage. But I often seem to overlook the latter, even though it could prove really valuable.
I love the idea, but something being "provable" in this way feels like relying on optimisations.
If a dead-code-elimination pass doesn't remove the 'comptime unreachable' statement, you'll now fail to compile (I expect?)
It's inherently an incomplete heuristic. Cf. the halting problem.
Doesn't mean it's not useful.
A lot of Zig relies on compilation being lazy in the same sort of way.
For the validity of the program? As in, a program will fail to compile (or compile but be incorrect) if an optimisation misbehaves?
That sounds as bad as relying on undefined behaviour in C.
It's not an optimization. What gets evaluated via the lazy evaluation is well defined. Control flow which has a value defined at comptime will only evaluate the path taken. In the op example, the block is evaluated twice, once for each enum value, and the inner switch is followed at comptime so only one prong is evaluated.
Nope, this is not relying on optimization, it's just how compile time evaluation works. The language guarantees "folding" here regardless of optimization level in use. The inline keyword used in the original post is not an optimization hint, it does a specific thing. It forces the switch prong to be evaluated for all possible values. This makes the value comptime, which makes it possible to have a comptime unreachable prong when switching on it.
There are similarities here to C++ if constexpr and static_assert, if those are familiar to you.
Well, for example you may have some functions which accept types and return types, which are not compatible with some input types, and indicate their incompatibility by raising an error so that compilation fails. If the program actually does not pass some type to such a function that leads to this sort of error, it would seem like a bug for the compiler to choose to evaluate that function with that argument anyway, in the same way that it would be a bug if I had said "template" throughout this comment. And it is not generally regarded as a deficiency in C++ that if the compiler suddenly chose to instantiate every template with every value or type, some of the resulting instantiations would not compile.
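For instance (a made-up `Vec` type function, not from any real codebase):

    const std = @import("std");

    // A made-up function from types to types that rejects unsupported
    // element types by failing compilation.
    fn Vec(comptime T: type) type {
        if (T != f32 and T != f64)
            @compileError("Vec only supports f32 or f64, got " ++ @typeName(T));
        return struct {
            items: []const T,
        };
    }

    pub fn main() void {
        const v = Vec(f32){ .items = &.{} };
        std.debug.print("{d} items\n", .{v.items.len});
        // Laziness is what makes this workable: the @compileError fires only
        // for instantiations the program actually names. Uncommenting the
        // next line fails the build; leaving it out means Vec(bool) is never
        // analyzed at all.
        // _ = Vec(bool);
    }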
To take an extreme example, what if I asserted the Riemann hypothesis in comptime? It's relying on comptime execution to act as a proof checker.
Which is fine for small inputs and uses, but it's not something that would scale well.
Is there a reason the Zig compiler can't perform type-narrowing for `u` within the `U::A(_) | U::B(_)` "guard", rendering just the set of 2 cases entirely necessary and sufficient (obviating the need for any of the solutions in the blog post)?
I'm not familiar with Zig, but also ready to find out I'm not as familiar with type systems as I thought.
it can narrow the payload: https://zigbin.io/7cb79d
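i.e. something along these lines (a made-up union `U`; with `inline` the two payloads don't even need the same type):

    const std = @import("std");

    const U = union(enum) {
        a: i32,
        b: f64,
        c: void,
    };

    fn show(u: U) void {
        switch (u) {
            // With `inline`, one copy of the prong is generated per variant,
            // so `payload` has each variant's own payload type (i32, then f64).
            inline .a, .b => |payload| std.debug.print("{any}\n", .{payload}),
            .c => {},
        }
    }

    pub fn main() void {
        show(.{ .a = 1 });
        show(.{ .b = 2.5 });
        show(.{ .c = {} });
    }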
I think the post would be more helpful if it had a concrete use case. Let's say a contrived bytecode VM:
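Sketching one possible shape of it (made-up opcodes and a hypothetical `exec` helper, dispatched through `inline else`):

    const std = @import("std");

    const Op = enum { push_one, add, print };

    const Vm = struct {
        stack: [16]i64 = undefined,
        sp: usize = 0,
    };

    // `op` is comptime-known, so this whole function specializes per opcode.
    fn exec(comptime op: Op, vm: *Vm) void {
        switch (op) {
            .push_one => {
                vm.stack[vm.sp] = 1;
                vm.sp += 1;
            },
            .add => {
                vm.sp -= 1;
                vm.stack[vm.sp - 1] += vm.stack[vm.sp];
            },
            .print => std.debug.print("{d}\n", .{vm.stack[vm.sp - 1]}),
        }
    }

    fn run(code: []const Op, vm: *Vm) void {
        for (code) |op| {
            switch (op) {
                // One prong is generated per opcode, so `o` is comptime-known
                // and each branch becomes a direct, specialized call: shared
                // dispatch logic, no function-pointer table.
                inline else => |o| exec(o, vm),
            }
        }
    }

    pub fn main() void {
        var vm = Vm{};
        run(&.{ .push_one, .push_one, .add, .print }, &vm);
    }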
"because comptime", this is effectively the same runtime performance as the common: and this is in a situation where this level of performance optimization is actually valuable to spend time on. it's nice that Zig lets you achieve it while reusing the logic.Just having a comptime unreachable feature seems pretty cool. Common C++ compilers have the worst version of this with __builtin_unreachable() -- they don't do any verification the site is unreachable, and just let the optimizer go to town. (I use/recommend runtime assert/fatal/abort over that behavior most days of the week.)
I did not realize you could inline anything other than an `else` branch! This is a very cool use for that.
I love how this opens with the acknowledgement we've made a mess of choice-like data structure terminology!
I don't understand. Isn't this only useful if the value you match on is known at compile time?
The code example will work even if `u` is only known at runtime. That's because the inner switch is not matching on `u`, it's matching on `ab`, which is known at compile time due to the use of `inline`.
That may be confusing, but basically `inline` is generating different code for the branches .a and .b, so in those cases the value of `ab` is known at compile time. So, the inner switch is running at compile time too. In the .a branch it just turns into a call to handle_a(), and in the .b branch it turns into a call to handle_b().
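Roughly like this (not the post's exact code; `U` here is just a made-up two-variant union):

    const std = @import("std");

    const U = union(enum) {
        a: i32,
        b: i32,
    };

    fn handle_a(x: i32) void {
        std.debug.print("a: {d}\n", .{x});
    }
    fn handle_b(x: i32) void {
        std.debug.print("b: {d}\n", .{x});
    }

    fn dispatch(u: U) void {
        switch (u) {
            // `u` can be a purely runtime value. `inline` stamps out one copy
            // of this prong per variant, and inside each copy `ab` is
            // comptime-known, so the inner switch collapses to a single
            // direct call.
            inline .a, .b => |payload, ab| switch (ab) {
                .a => handle_a(payload),
                .b => handle_b(payload),
            },
        }
    }

    pub fn main() void {
        dispatch(.{ .a = 1 });
        dispatch(.{ .b = 2 });
    }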
The problem this is meant to solve is that sometimes a human thinking about the logic of the program can see it is impossible to reach some code (ie it is statically certain) but the language syntax and type system alone would not see the impossibility. So you can help the compiler along.
It is not meant for asserting dynamic “unreachability” (which is more like an assertion than a proof).
    fn main() {
        // ...
    }

Fails to compile in Rust.
Sure, because it's compile-time code inside a (semantically) run-time check. In recent Rust versions you can do [...] which compiles as expected. (Note that if the binding were `const` instead of `let`, it'd still have failed to compile, because the semantics don't change.)

Perhaps more succinctly: [...] It's fine that we want a constant, and it's fine that this constant would, when computed at compile time, panic if false were true, because it is not.

Not sure that is equivalent to Zig. In Zig only one branch is const; in your Rust example the whole control flow is const, which is not equivalent to Zig. So how do you get non-const branches?
I have no idea what that's trying to do. A demonstration that Rust is a large language with different dialects! A terse statement with multiple things I don't understand; to me it might as well be [...].

Why would it? If I recall correctly, const and static stuff basically gets inlined at the beginning of the program.