Date bug in Rust-based coreutils affects Ubuntu 25.10 automatic updates (lwn.net)
253 points by blueflow 1 day ago | 373 comments




I'm okay with this. This is how we find issues. As long as these are sorted out before the LTS release, no problem.

Not sure if I (as an unsuspecting Ubuntu user) am really ok with this. I'm not saying they should wait until version 1.0 (currently uutils/coreutils is at 0.2.2), but at least until the green line reaches the blue line in this graph: https://github.com/uutils/coreutils?tab=readme-ov-file#gnu-t... Of course, "exotic" bugs might happen (not sure how exotic the bug that caused this issue was), but can software really be considered production-ready if it still fails part of the test suite of the software it is supposed to replace? I don't think so...

If you look at "date" specifically on https://uutils.github.io/coreutils/docs/test_coverage.html, it looks much worse than the overall graph suggests: 2 tests passing, 3 tests skipped, 3 with errors. Not really reassuring, right?


> If you look at "date" specifically on https://uutils.github.io/coreutils/docs/test_coverage.html, it looks much worse than the overall graph suggests: 2 tests passing, 3 tests skipped, 3 with errors. Not really reassuring, right?

That's because they added new tests to catch these cases. I recall seeing someone mention in a comment here that coreutils didn't have a test for this either.

So it is reassuring that these things actually get documented and tested.


So did they have no tests until people put the code into production and sent bug reports?

If I understood correctly, the test suite they are using is the one used for the original coreutils. Apparently, the behavior that led to this bug wasn't covered by the original test suite (and it looks like the original coreutils just got the behavior "right" without it being tested), so the uutils guys added new tests to cover this after the bug was found.

That makes sense. However, I am generally biased against making significant changes in software (especially rewrites) without also beefing up the test suite.

> So did they have no tests until people put the code into production and sent bug reports?

They tested what original coreutils tested. Until other people put uutils into production, neither had a test for this case.

https://github.com/coreutils/coreutils/blob/master/tests/dat...


I think it's a bit naive to believe that the original coreutils developers only used what is now in the public test suite. Over that length of development, a lot of people probably tested a lot of things even if those tests didn't make it into the official CI suite. If you're doing a rewrite, just writing to the existing tests is really not enough.

This is going to be a problem, considering Ubuntu's (more and more unfortunate) popularity. Scripts will continue to be written against Ubuntu, and if they do not work on your Debian or whatever, it's your problem.

Same thing that happens with Alpine's shell, or macOS, or the BSDs — I work with shell all the time and often run into scripts that should work on non-bash shells, with non-GNU coreutils, but don't, because nobody cared to test them anywhere besides Ubuntu, which until now at least had the same environment as most other Linux distributions.

More pain incoming, for no technical reason at all. Canonical used to feel like a force for the good, but doesn't anymore.


This is just "change is bad" FUD.

Working on macOS for years taught me that most people are going to write code that supports what they can test, even if there's a better way that works across more systems. Heck, even using 'sed -i' breaks if you go from macOS to Linux or vice versa, but if you don't have a Mac you wouldn't know.
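
For the record, that incompatibility is concrete and easy to trip over. A minimal illustration (same edit on both implementations):

  # GNU sed (Linux): -i takes an optional suffix, none needed
  sed -i 's/foo/bar/' file.txt
  # BSD sed (macOS): -i requires a suffix argument, '' for "no backup"
  sed -i '' 's/foo/bar/' file.txt
  # idiom that works on both: always give a suffix, then remove the backup
  sed -i.bak 's/foo/bar/' file.txt && rm file.txt.bak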

Meanwhile, this is a rewrite of `date` (and other coreutils) with the goal of being perfectly compatible with GNU coreutils (even using the coreutils test cases), which means that differences between the two are going to reduce, not expand.

What you're complaining about here is "people only test on one platform" and your solution is that everything should stay the same and never change, and we should have homogeneity across all platforms forever so that you don't have to deal with a shell script that doesn't work. The actual solution is for more people to become aware of these differences and why best practices exist.

Note that Ubuntu and Debian switched /bin/sh to dash instead of bash, which forced a lot of people to fix their /bin/sh scripts to remove bashisms, and that improved things for everyone across all platforms. Now Ubuntu switches to uutils and we find they have a bug in `date` because GNU coreutils didn't have a test for that either; now coreutils has a test for it too, so they don't regress in the future, and everyone's software gets better.
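
To illustrate the kind of fix people had to make, here's a typical bashism and its POSIX equivalent (a minimal example, not from any particular package):

  #!/bin/sh
  # bashism: [[ ]] is a bash extension; dash errors out on it
  if [[ $1 == foo ]]; then
    echo match
  fi
  # POSIX equivalent, works in dash, bash, and busybox sh
  if [ "$1" = foo ]; then
    echo match
  fi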


I'm not, because while operating a fleet of systems, you assume some of the parts are so reliable that you don't look into them when problems arise.

These kinds of bugs might not bug end users much, but when one becomes a fleet-wide problem, it becomes crippling.

I've been debugging a problem on a platform since this morning. At the end of the day, it turned out that the platform is sending things to somewhere it's explicitly told not to.

Result? Everything froze, without any errors. System management is hard to begin with. It becomes really hard when the tools you think you can depend on break.

Also, consider what the uproar would be if the programming language were something other than Rust. The developers would be crucified, burned with flamethrowers, reincarnated, and tortured again until they got fed up, left computers behind, and started raising chickens at an off-grid location.


You are OK with the coreutils being replaced by a rewrite for no obvious reason, and said rewrite being so broken that an allegedly stable distribution actually can't properly update?

I mean, all good then.


I have to say I'm rather more worried about the apparent lack of testing of whether their auto-update mechanism is actually updating anything (given how long it took them to notice that symptom) than about them replacing some software with a not-yet-quite-complete rewrite in their less-stable non-LTS edition.

There is no such thing as a less-stable non-LTS edition. That's the stable version. The LTS version is just a stable version which gets updated for longer. Non-LTS absolutely shouldn't mean unstable.

It seems less stable in the sense that

1. It literally remains stable for less time. Nine months instead of 5+ years, up to 12 if you pay them.

2. They apparently have a history of testing changes in it.

3. They appear to only sell things like livepatch and extended support for LTS editions, and products you pay for are implicitly more stable than products you do not.


Historically, they've also pushed things out of an LTS release that could have gone in, making people wait for the next non-LTS release because those things were too new or experimental. If something is good, it'll be in the next LTS; if not, it won't, and it can be removed from the next non-LTS without impacting too much.

Or to use Ubuntu's own terminology: "Interim releases will introduce new capabilities from Canonical and upstream open source projects, they serve as a proving ground for these new capabilities." They also call LTS 'enterprise grade' while interims are merely production-quality. Personally I see these as different levels of stability.


> It literally remains stable for less time. Nine months instead of 5+ years, up to 12 if you pay them.

Isn't "stability" in this context a direct reference to feature set which stays stable? When a version is designated stable it stays stable. You're talking about support which can be longer or shorter regardless of feature set.

When they stop adding features, it's stable. Every old xx.04 and xx.10 version of Ubuntu is stable even today, no more features getting added to 12.10. When they stop offering support, it's unsupported. 14.04 LTS became unsupported last year but not less stable.

These are orthogonal. You can offer long-term support for any possible feature combination (if you have the resources), and you can be stable with no support. In reality it's easier to freeze a feature set and support that snapshot for a long time than to chase a moving target.


I can see where you're coming from, but I think I'd prefer to describe practically all stable software as living in an unstable equilibrium in the usable region of state-space. When the stabilizing force of security patches, certificate updates, updates for new hardware requirements, and so on disappears, the software falls out of the usable region of space into the (I suppose stable) equilibrium of unusable software. And this fall happens quite rapidly in the case of a Linux distribution.

Applying the word "stable" to things in the unusable region of state space seems technically, but only technically, correct.


Not meant as a jab at Ubuntu, but I don't think people choose Ubuntu for engineering rigor. If you want something which is dull, predictable, and known for rigor, OpenBSD, illumos, FreeBSD, etc. seem like more likely choices.

Or Debian and Red Hat, which have the added bonus of being "boring technology."

If you have a problem with them, 20 other people have had that same problem before you did, two of them have posted on Stackoverflow and one wrote a blog post.

OpenBSD and Illumos may be cool, but you really need to know what you're doing to use them.


For me, it's been more about the online help: you're most likely to find an Ubuntu-centric answer when you have issues. Of course you also have to consider the date of a Q/A and the version in question. Since perma-switching my desktop in the past few years, I've mostly used Pop, because I like most of their UI changes, including COSMIC, despite a handful of now mostly corrected issues... They tend to push features and kernel versions ahead of Ubuntu LTS.

That said, the underlying structure is still Ubuntu-centered. I also like Ubuntu server, even though I don't use snaps, mostly because the install pre-configures most of the initial changes I make to Debian anyway. Sudo is configured, you get an option to import your public key and preconfigure non-pwd ssh, etc. I mostly install ufw and Docker, and almost everything I run goes under Docker in practice.


Historically, Ubuntu was a good choice if you were releasing a licensed OS, with minimal customization, that needed CUDA more than, say, Vixie cron.

Officially you are right, they release it as a stable OS after a few weeks of betas.

Unofficially, any serious user knows to stick to LTS for any production environment. Those are by far the most common versions I encounter in the wild and on customer deployments, in my experience.

In fact I don't think I ever saw someone using a non-LTS version.

Canonical certainly has these stats? Or someone operating an update mirror could infer them? I'd be curious what the real-world usage of different Ubuntu versions actually is.


> replaced by a rewrite for no obvious reason

The obvious reason is to have fewer bugs in the long run. A temporary increase during the transition is expected and not ideal, but after that there should be fewer of them.

It's not like the C version didn't have any bugs:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?archive=both;d...


> The obvious reason is to have fewer bugs in the long run.

The highest sounds are hardest to hear. Going forward is a way to retreat. Great talent shows itself late in life. Even a perfect program still has bugs.


They’re okay with it because the rewrite is in their preferred language.

I have no opinion whatsoever on the rewrite. It might be the best thing since sliced bread for all I know. I have trouble with integrators recklessly shipping untested dependencies however.

> I have trouble with integrators recklessly shipping untested dependencies however.

Isn't this testing what they're doing now, which is what is exposing the bugs that need to be fixed?


> It might be the best thing since sliced bread

Can someone explain this analogy to me? Because I consider sliced bread a decline. But maybe that is a cultural thing.


It combined well with the toaster, made sandwiches easy, was taken away for a bit in WWII, and then came back. It was one of those advancements that "stuck".

There you go: https://time.com/3946461/sliced-bread-history/

(yes, I don't really get it either)


A knife that both slices and toasts the bread at the same time would be even better!


Knew what absolute disaster of a video this was going to be before clicking. Highly recommend watching Colin's videos, this one included, for the sheer level of "this is clearly a bad idea, let's do it" that he gives off and the things learned along the way.

The sliced bread might not be the best quality, but it is rather consistent and much less crummy when making yourself toast or just butter+jam. No danger of a kid cutting themselves while making their own sandwich either.

Middle class people who think of cooking for themselves as a hobby maybe lose the ability to understand labor-saving technical advances. People who cook as a duty think of cutting bread as more work, which it quite obviously is.

If cooking is a hobby for you, you're seeking labor. Maybe that makes the obvious unintelligible. If you're poor and have a bunch of hungry kids waiting, you don't want the cutting board covering up half your counter space while you're carefully trying not to screw up eight slices of bread before something on the stove burns.


So? Someone has to maintain software. The Unix world is already in a crisis because it can't find maintainers.

Aside from there being no crisis, rewriting a set of utilities with nearly no bug reports for years, and for which no new features are needed, accomplishes what exactly? Aside from new bugs, that is.

There surely would be a more beneficial undertaking somewhere else. If you'd then argue that they may do as they please with their time, fair, but let's not pretend this rewrite has any objective value aside from scratching personal itches and learning how cat and co are implemented.


This effort has produced new bug reports and test cases for upstream, clarifying their desired behavior. That's one positive side effect that helps everyone.

That's a really post-hoc rationalization for breaking Ubuntu.

I'm replying to the general case of why. Obviously breakage is unfortunate. But it's not like there's no benefit.

Unfortunately, this will leave a stink that will take a lot of bathing and deodorant to mask.

> Sudo has released a security update to address a critical vulnerability (CVE-2025-32463) in its command-line utility. This vulnerability allows an attacker to leverage sudo's -R (--chroot) option to run arbitrary commands as root, even if they are not listed in the sudoers file.

People start making sudo more secure by replacing it with sudo-rs

You: "why are we rewriting old utilities?"


>CVE-2025-32463

Looks like a logic bug to me? So Rust wouldn't have helped.

Those are exactly the kind of bugs you might introduce when you do a rewrite.


One great way you can make things more secure is by reducing attack surface. sudo is huge and old, and has tons of functionality that almost no one uses (like --chroot). A from-scratch rewrite with a focus on the 5% of features that 99% of users use means less code to test and audit. Also a newer codebase that hasn't grown and mutated over the course of 35 years is going to be a lot more focused and easier to reason about.

> People start making sudo more secure by replacing it with sudo-rs

I would have much preferred if Ubuntu went with run0 as the default instead of trying to rewrite sudo in Rust. I like Rust, but the approach seems wrong from the beginning to me. The vast majority of sudo use cases are covered by run0 in a much simpler way, and many of the sudo bugs come from the complex configurations it supports (not to mention a poorly configured sudo, which is also a security hazard and quite easy to end up with). Let people who need sudo install and configure it for themselves, but make something simple the default, especially for a beginner distro like Ubuntu.
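
For anyone who hasn't tried it: run0 (shipped with systemd 256 and later) is not a setuid binary and asks polkit for authorization instead of parsing /etc/sudoers, per the design notes linked downthread. For the everyday case it's a near drop-in:

  $ sudo systemctl restart ssh    # classic setuid sudo, consults /etc/sudoers
  $ run0 systemctl restart ssh    # run0: no setuid bit, authorization via polkit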


run0 can f off along with the rest of the systemd abominations. sudo worked for decades perfectly well and didn't call for any replacement. run0, like much of the systemd projects and rust rewrites, is a solution in search of a problem.

> sudo worked for decades perfectly well

Yes, if you ignore all the bugs resulting from features that almost nobody uses.

> along with the rest of the systemd abominations

Not too interested in engaging systemd debates. I have enjoyed using systems with and without systemd, and while I understand the arguments against feature creep, I think you'd be throwing the banana out with the peel to overlook the idea behind run0.

For such a security sensitive piece of software like sudo, reducing complexity is one of the best ways to prevent logic bugs (which, as you mentioned in the sibling, is what the above bug was). If run0 can remove a bunch of unused features that are increasing complexity without any benefit, that's a win to me. Or if you don't like systemd, doas on OpenBSD is probably right up your alley with a similar(ish) philosophy.

For anyone who wants to read more about Lennart's reasoning behind run0: https://mastodon.social/@pid_eins/112353324518585654


It's a logic bug and has nothing to do with the language it's written in.

Rewriting old utilities is fine but they have to be backwards compatible.

This is not the same as fixing a bug.


The "old" version didn't have a test for the feature... the "new" version started with the tests for the "old" version... it was an easy thing to miss as a result.

As other threads have mentioned, a more advanced argument parser with detection of parsed-but-unused arguments could have caught this. Of course, there are already complaints about the increase in size for the Rust versions of uutils, mostly offset by a merged binary with separate symlinks. It's a mixed bag.
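
For context on the "merged binary with separate symlinks" point: like busybox, the uutils multi-call binary can dispatch on the name it was invoked as, so the per-tool names are just symlinks. A rough sketch, using the in-tree build path seen elsewhere in this thread:

  $ cd target/release
  $ ./coreutils date +%Y    # select the applet explicitly
  $ ln -s coreutils date
  $ ./date +%Y              # same binary, applet picked from argv[0]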

But, I'm sure you'll be reverting back to XFree86 now.


This is a short-sighted opinion, riddled with prejudice

My expectation would be that every bug in a Rust replacement is going to receive the brightest spotlight that can be found.

If the rest of coreutils is bug-free, cast the first stone.

I do not think reimplementing stuff in Rust is a bad thing. Why? Because reimplementing stuff is a very good way to thoroughly check the original. It is always good to have as many eyeballs on the code as possible.


Replacing battle-tested software with an untested rewrite is always a bad idea, even if the rewrite is written in a trendy language; key word being untested.

I’m still shocked by the number of people who seem to believe that the borrow checker is some kind of magic. I can assure you that the coreutils have all already gone through static analysers doing more checks than the Rust compiler.


> I can assure you that the coreutils have all already gone through static analysers doing more checks than the Rust compiler.

Some checks are pretty much impossible to do statically for C programs because of the lack of object lifetime annotations, so no, this statement can't be right.

It is true that the borrow checker doesn't prevent ALL bugs though.

Furthermore, the "bug" in this case is due to an unimplemented feature causing a flag to be silently ignored... It's not exactly something that any static analyser (or runtime ones for that matter) can prevent, unless an explicit assert/todo is added to the codepath.


Well, you can annotate C code to do a lot more than lifetime annotations today. The tooling for C analysis is best in class.

And even without annotations, you can prove safe a lot of constructs by being conservative in your analysis especially if there is no concurrency involved.

Note that I wasn't commenting on this specific issue. It's more about my general fatigue with people implying that rewrites in Rust are always better or should always be done. I like Rust, but the trendiness surrounding it is annoying.


You can do a lot of things. Yes, there are formally verified programs and libraries written in C. But most C programs are not, including the GNU coreutils (although they are battle-tested). It's just that the effort involved is higher and the learning curve for verifying C code correctly is staggering. Rust provides a pretty good degree of verification out of the box for free.

Like any trendy language, you've got some people exaggerating the powers of the borrow checker, but I believe Rust did generally bring about a lot of good outcomes. If you're writing a new piece of systems software, Rust is pretty much a no-brainer. You could argue for a language like Zig (or Go where you're fine with a GC and a bit more boilerplate), but that puts even more spotlight on the fact that C is just not a viable choice for most new programs anymore.

The rewrites-in-Rust are more controversial, and they are criticized here on HN just as much as they are hyped, but I think many of them brought a lot of good to the table. It's not (just?) because the C versions were insecure, but mostly because a lot of these new Rust tools replaced C programs that had become quite stagnant. Think of ripgrep, exa/eza, sd, nushell, delta and difft, dua/dust, the various top clones. And these are just command-line utilities. Rewriting something in Rust is not an inherently bad idea if what you are replacing clearly needs a modern makeover, or if the component is security-critical and the code that you are replacing has a history of security issues.

I was always more skeptical about the coreutils rewrite project because the only practical advantage they can bring to the table is more theoretical safety. But I'm not convinced it's enough. The Rust versions are guaranteed to not have memory or concurrency related bugs (unless someone used unverified unsafe code or someone did something very silly like allocating a huge array and creating their own Von Neumann Architecture emulator just to prove you can write unsafe code in Rust). That's great, but they are also more likely to have compatibility bugs with the original tools. The value proposition here is quite mixed.

On the other hand, I think that if Ubuntu and other distros persist in trying to integrate these tools the long-term result will be good. We will get a more maintainable codebase for coreutils in the future.


> Well, you can annotate C code to do a lot more than lifetime annotations today. The tooling for C analysis is best in class.

Where can I see these annotations for coreutils?


> It is true that the borrow checker doesn't prevent ALL bugs though.

True, but "prevents all bugs" is that what the debate pretty much digests to in the "rust is better" debate. So you end up with rewrites of code which introduce errors any programmer in any language can make and since you do a full rewrite that WILL happen no matter what you do.

If that's acceptable fine, otherwise not. But you cannot hide from it.


> I’m still shocked by the number of people who seem to believe that the borrow checker is some kind of magic.

It shouldn't be magic. It should be routine and boring, like no-nulls, pure functions, static typing and unit tests.


> I can assure you that the coreutils have all already gone through static analysers doing more checks than the Rust compiler.

Yet, this effort has still found bugs in the upstream project! No codebase is perfect.


> I can assure you that the coreutils have all already gone through static analysers doing more checks than the Rust compiler.

I'd be very interested in reading more about this. Could you please explain what are these checks and how they are qualitatively and quantitatively better than the default rustc checks?

Please note that this is about checks on the codebase itself - not system/integration tests which are of course already applicable against alternative implementations.


If you read my comment with a little care, you may realize that I said nothing about replacing the software, I only spoke about writing it. In my own Distro I wouldn't replace coreutils with a Rust rewrite either at this point.

On the borrow checker: It doesn't prevent logic errors as is commonly understood. These errors are what careful use of Rusts type system could potentially prevent in many cases, but you can write Rust without leveraging it successfully. The Rust compiler is an impressive problem-avoidance tool, but there are classes of problems even it can't prevent from happening.

Let us not fall into the trap of thinking that just because the Rust compiler fails to prevent all issues, we should therefore abandon it. We shouldn't forget our shared history of mistakes rustc would have prevented (excerpt):

- CVE-2025-5278, sort: heap buffer under-read in begfield(), Out-of-bounds heap read in traditional key syntax

- CVE-2024-0684, split: heap overflow in line_bytes_split(), Out-of-bounds heap write due to unchecked buffer handling

- CVE-2015-4042, sort: integer overflow in keycompare_mb(), Overflow leading to potential out-of-bounds and DoS

If we were civil engineers with multiple bridge collapses in our past, and we finally developed a tool that reliably prevents an especially dangerous and common type of bridge collapse, we would be in the wrong profession if we scoffed at the use of the tool. Whether it is Rust or some C checker isn't really the point here. The point is building stable, secure, and performant software others can rely on.

Any new method to achieve this more reliably has to be tested. Ideally in an environment where harm is low.


It is worth noting that all three CVEs could have been prevented by simple bounds checking at runtime. Preventing them does not require the borrow checker or any other fancy Rust features.

It is also worth noting that theoreticals don't help such discussions either.

Yes, C programmers can do much more checks. The reality on the ground is -- they do not.

Forcing checks by the compiler seems to be the only historically proven method of making programmers pay more attention.

If you can go out there and make _all_ C code utilize best-in-class static checkers, by all means, go and do so. The world would be a much better place.


If it was simple to prevent, then why was it not prevented?

This bug was missed because there wasn't a test case in the original.

> key word being untested.

That's why they (still) have users. If it works so well for Microsoft, why wouldn't it work for Ubuntu?


Is it really battle tested?

Claiming without evidence that something is battle-tested while also claiming the competition is "trendy" does not help any argument you might be attempting to make.

I am trying to read your comments charitably but I am mostly seeing generalizations which makes it difficult to extract useful info from your commentary.

We can start by dropping dismissive language like "trendy" and "magic", "fashion" and "Rust kids". We can also continue by saying that "believing the borrow checker is some kind of magic" is not an interesting thing to say as it does not present any facts or advance any discussion.

What you "assure" us of is also inconsequential.

One fact remains: there is a multitude of CVEs that, if the program were written in Rust, would not have happened. I don't think anyone serious ever claimed that logic bugs are prevented by Rust. People are simply saying: "Can we just have fewer possible bugs by virtue of the program compiling, please?" -- and Rust gives them that.

What's your objection to that? And let us leave aside your seeming personal annoyance of whatever imaginary fandom you might be seeing. Let us stick to technical facts.


The objections I see against Rust and Rust rewrites of things remind me a lot of the objections I saw against Linux and Linux users by Windows users, and against macOS and macOS users by Linux users. Dismissive language and denigrating comments without any technical backing; assertions of self-superiority. "It's a toy", "it's not mature", "it's a worse version of blah blah", "my thing does stuff it doesn't do and that's important, but it does things my thing doesn't do and that's irrelevant".

Honestly it's at the point where I see someone complaining about a Rust rewrite and I just go ahead and assume that they're mouthing off about something because they think it's trendy and they think it's cool to hate things people like. I hate being prejudicial about comments but I don't have the energy to spend trying to figure out if someone is debating in good faith or not when it seems to so rarely be the case.


My impression is exactly the same. For multiple years now I keep seeing grandiose claims about "Rust fandom" and all I ever see in those threads are... the C people who complain about that Rust fandom that I cannot for the life of me find in a 300+ comments thread.

It's really weird, at one point I started asking myself if many comments are just hidden from me.

Then I just shrugged it off and concluded that it's plain old human bias and "mine is good, yours is bad" tribe mentality and figured it's indeed not worth my time and energy to do further analysis on tribal instinctive behaviour that's been well-explained in literature for like a century at this point.

I have no super strong feelings for or against Rust, by the way. I have used it to crushing success exactly where it shines and for that it got my approval. But I also work a lot with Elixir and I would rarely try to make a web app with Rust; multiple PLs have the frameworks that make this much better and faster and more pleasant to do.

But it does make me wonder: what stake do these people have in the whole thing? Why do they keep mouthing off about some imaginary zealots that are nowhere to be found?


Do zealots usually know that they're zealots? Food for thought.

I define somebody as a zealot by their expression. Fanaticism, generalizations, editorial practices like misconstruing with the goal of tearing down a straw man, and others.

If you show me Rust advocates with comments like these I would be happy to agree that there are in fact Rust zealots in this thread.


Generally, they don't. Zealotry is not specific to Rust, but you've reminded me of some moments in the 2020's edition of Programming Language Holy Wars™.

Like, one zealot stabbing at another HN commenter saying "Biased people like yourself don't belong in tech", because the other person simply did not like the Rust community. Or another zealot trying to start a cancel campaign on HN against a vocal anti-Rust person. Yet another vigorously denied the existence of Rust supremacism, while simultaneously raging on Twitter about Microsoft not choosing Rust for the Typescript compiler.

IMO, the sad part is watching zealots forget. Reality becomes a story in their head; much kinder, much softer to who they are. In their heads, they are an unbiased and objective person, whereas a "zealot" is just a bad word for a bad, faraway person. Evidence can't change that view because the zealot refuses to look & see; they want to talk. Hence, they fail the mirror test of self-awareness.

Well, most of them fail. The ones who don't forget & don't deny their zealotry, I have more respect for.


> reimplementing stuff is a very good way to thoroughly check the original

Problem is, they unironically want to replace coreutils with their toy. And they just did.


Yes I also think that bugs in a Rust replacement will receive more attention than other bugs. Why?

- the cult-like evangelism from the Rust community that everything written in Rust would be better

- the general notion that rewriting tools should bring clear and tangible benefits. Rewriting something mostly because the new language is safer will provoke irritation and frustration in affected end users when the end product turns out to introduce new issues


So, reducing CVEs is not a tangible benefit?

Rewriting old known good code from scratch is going to create more CVEs.

This rewrite project is about corporations escaping from GPL code. It's got nothing to do with security.


Somebody linked a comment from an Ubuntu maintainer where they said they want more resilient tools.

If license was the only concern then I'd think that they wouldn't switch the programming language?

And yeah, obviously using Rust will not eliminate all CVEs. It does eliminate buffer overflows and underflows though. Not a small thing.

Also I would not uncritically accept the code of the previous coreutils as good. It got the job done (and has memory safety problems here and there). But is it really good? We can't know for sure.


C is a bad language in many respects, and Rust greatly improves on the situation. Replacing code written in C with code written in Rust is good in and of itself, even if there are some costs associated with the transition.

I also don't think that Rust itself is the only possible good language to use to write software - someone might invent a language in the future that is even better than Rust, and maybe at some point it will make sense to port rust-coreutils to something written in that yet-undesigned language. It would be good to design software and software deployment ecosystems in such a way that it is simply possible to do rewrites like this, rather than rely so much on the emergent behavior of one C source code collection + build process for correctness that people are afraid to change it. Indeed I would argue that one of the flaws of C, a reason to want to avoid having any code written in it at all, is precisely that the C language and build ecosystem make it unnecessarily difficult to do a rewrite.


> C is a bad language in many respects, and Rust greatly improves on the situation. Replacing code written in C with code written in Rust is good in and of itself

That's empty dogma.

C's issue is that C compilers provide very little in terms of safety analysis by default. That doesn't magically turn Rust into a panacea. I will take proven C, or even statically analysed C, over what the borrow checker adds to Rust any day of the week.

I like the semantic niceties Rust adds when doing new development, but that doesn't in any way justify treating all rewrites as improvements by default.


> C's issue is that C compilers provide very little in terms of safety analysis by default.

Yes this is precisely a respect in which C is bad. Another respect is that C allows omitting curly braces after an if-statement, which makes bugs like https://www.codecentric.de/en/knowledge-hub/blog/curly-brace... possible. Rust does not allow this. This is not an exhaustive list of ways in which Rust is better than C.

> I will take proven C, or even statically analysed C, over what the borrow checker adds to Rust any day of the week.

Was coreutils using proven or statically analyzed C? If not, why not?


> This is not an exhaustive list of ways in which Rust is better than C.

Which is why your first and only example is a bug from over a decade ago, caused by an indentation error that C compilers can trivially detect as well.


Can detect, but how many are forced? Have you tried using Gentoo with "-Wall -Werror" everywhere?

You have some theoretical guardrails that aren't used widely in practice, many times even can't be used. If they could just be introduced like that, they'd likely be added to the standard in the first place.

The fact that the previous commenter can even ask the question if someone has analyzed or proven coreutils shows how little this "can detect" really guarantees.

In the end, your "can trivially detect" is pretty useless compared to Rust enforcing these guarantees for everyone, all the time.


> Another respect is that C allows omitting curly braces after an if-statement, which makes bugs like https://www.codecentric.de/en/knowledge-hub/blog/curly-brace... possible.

This is a silly thing to point to, and the very article you linked to argues that the lack of curly braces is not the actual problem in that situation.

In any case, both gcc and clang will give a warning about code like that[1] with just "-Wall" (gcc since 2016 and clang since 2020). Complaining about this in 2025 smells of cargo cult programming, much like people who still use Yoda conditions[2] in C and C++.

C does have problems that make it hard to write safe code with it, but this is not one of them.

[1] https://godbolt.org/z/W74TsoGhr

[2] https://en.wikipedia.org/wiki/Yoda_conditions
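
To see the warning the parent describes, here's a self-contained repro (the exact diagnostic text varies between compiler versions):

  $ cat > misindent.c <<'EOF'
  int check(int a) {
      if (a)
          a += 1;
          a += 2;   /* indented as if guarded, but always executes */
      return a;
  }
  EOF
  $ gcc -Wall -c misindent.c   # warns via -Wmisleading-indentation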


> That's empty dogma.

This dogma is statistically verifiable. We could also replace them with Go counterparts.

> I will take proven C, or even statically analysed C

This just means you don't understand static analysis as well as you think you do. A rejection of invalid programs by a strict compiler will always net more safety by default than a completely optional step after the fact.


> Replacing code written in C with code written in Rust is good in and of itself, even if there are some costs associated with the transition.

No it isn't. In fact, "Replacing code written in <X> with code written in <Y> is good in and of itself" is a falsehood, for any pair of <X> and <Y>. That kind of unqualified assertion is what the deluded say to themselves, or propagandists (usually <Y> hype merchants) say out loud.

This is what reality looks like: https://www.bankofengland.co.uk/news/2022/december/tsb-fined...

Furthermore, "designing for a future rewrite" is absolute madness. There is already a lot of YAGNI waste work going on. It's fine to design software to be modular, reusable, easily comprehensible, and so on, but designing it so its future rewrite will be easier - WTF? You haven't even built the first version yet, and you're already putting work into designing the second version.

Fashions are fickle. You can't even know what will be popular in the future. Don't try to anticipate it and design for it now.


> Furthermore, "designing for a future rewrite" is absolute madness. There is already a lot of YAGNI waste work going on. It's fine to design software to be modular, reusable, easily comprehensible, and so on, but designing it so its future rewrite will be easier - WTF? You haven't even built the first version yet, and you're already putting work into designing the second version.

If software is in fact designed to be modular, reusable, and easily-comprehensible, then it should be pretty easy to rewrite it in another language later. The fact that many people are arguing that programmers should not even attempt to rewrite C coreutils, for fear of breaking some poorly-understood emergent behavior of the software, is evidence that C coreutils is not in fact modular, reuseable, and easily-comprehensible. This is true regardless of whether or not the Rust rewrite (or another language rewrite) actually happens or not.


> C coreutils is not in fact modular, reuseable, and easily-comprehensible

It's not. I never said it was. Nor are my bank's systems; I don't want them to fuck them up either. My bank's job is not to rewrite their codebase in shinier, newer languages that look nice on their staff's CVs, their job is to continue to provide reliable banking services. The simplest, cheapest way for them to do that is to not rewrite their software at all.

What I was addressing was two different approaches to "design[ing] software [...] in such a way that it is simply possible to do rewrites"

* One way is evergreen: think about modularity, reusability, and good documentation in the code you're writing today. That will help with any mooted future rewrite.

* The other way, which you implied, is to imagine what the future rewrite might look like, and design for that now. That way lies madness.


Give it 15 years and all the Rust kids will be coming back to C like vinyl.

'the two things that really drew me to vinyl were the expense and the inconvenience.'

https://imgur.com/gallery/vinyl-meme-7jeZtVJ


I've been using Rust for 13 years, I'll let you know in two if I go back to C.

Given I'm about to turn 40, I do appreciate being referred to as a kid though ;)


I'm 50 and prefer Rust... though tbh I haven't worked much with C or Rust. I just never liked C, preferring to stick to higher-level languages, even C# over it. I do like Rust though, even if I feel like I'm pulling my hair out sometimes trying to grok ownership symbols. Most Rust I understand by looking at it... I cannot say the same of C.

This is an apt comparison since nobody except fringe enthusiasts came back to vinyl.

> I also don't think that Rust itself is the only possible good language to use to write software

you don't really believe this because you said that the only possibly better language would be one that doesn't yet exist


I don't think there is one programming language that is best suited for all types of programs. I think that Rust is probably the best language currently in use for specifically implementing Unix coreutils, but I don't think that this implies that (say) Zig or Odin or Go or Haskell would necessarily be terrible choices (although I really would pick Rust rather than any of those).

But my point was that there's no reason to think that the specific package of design decisions that Rust made as a language is the best possible one; and there's no reason why people shouldn't continue to create new programming languages including ones intended to be good at writing basic PC OS utils, and it's certainly possible that one such language might turn out to do enough things better than Rust does that a rewrite is justified.


> I'm okay with this. This is how we find issues

at Microsoft. /s


Was there something wrong with the old coreutils that needed improvement?

I have a suspicion it's about the license, like this commenter [0] did a year ago.

[0]: https://news.ycombinator.com/item?id=38853429


Agreed. Since GNU Coreutils is GPLv3 but uutils is MIT, my guess is eventually Canonical will start using "works like the GNU software except you don't have to comply with GPLv3" as a selling point for Ubuntu Core (their IoT focused distro). This would let them sell to companies who want to only permit signed firmware images to run on their devices, which isn't allowed under GPLv3.

There are F500 companies shipping Ubuntu Core on devices that will only permit signed firmware, so I'm not sure your assessment is correct.

https://buildings.honeywell.com/au/en/products/by-category/b...


Depending on the product, this might be OK! If you've ever had cause to closely read the GPLv3, the anti-tivoisation clause for some reason is only really aimed at "User products" (defined as "(1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling"). This one looks like it's a potential grey area, since it's not obvious if it's intended for buildings that anyone would live in.

I worked on an embedded product that was leased to customers (not sold). The system included GPLv3 portions (e.g. bash 5.x), but they concluded that we did not need to offer source code to their customers.

The reasoning was that the users didn’t own the device. While I personally believe this is not consistent with recent interpretations of the license by the courts, I think they concluded that it was worth the risk of a customer suing to get the source code, as the company could then pull the hardware and leave that customer high and dry. It is unlikely any of their users would risk that outcome.


Take a look at their customer testimonials [0] and ask yourself if they have recently made anticompetitive or user-hostile moves. Now, ask yourself: do you think they like being beholden to a license that makes it harder for them to keep their monopolies?

[0]: https://ubuntu.com/pro/

Edited to add: it would be cool if, instead of the top-most wealth-concentrators F[500:], there was an index of the top-most wealth-spreaders F[:500]. What would that look like? A list of cooperatives?


As long as nobody sues them everything is fine

If that's really the case, I wish they would just come out and say it and spare the rest of us the burden of trying to debate such a decision on its technical merits. (Of course, I am aware that they owe me nothing here.)

Assuming this theory is true then, what other GPLv3-licensed "core" software in the distro could be next on their list?


I doubt GPL version 3 is the motivation here.

https://packages.ubuntu.com/plucky/rust-coreutils

The dependencies of rust-coreutils list libgcc-s1, which is GPL version 3.


This isn't anything specific to uutils. When you build a Rust program that links with glibc, it needs to use libgcc to do the stack unwinding. If you look at other packaged Rust programs on Ubuntu, they all depend on libgcc for this reason. For example, Eza https://packages.ubuntu.com/plucky/eza and Ripgrep https://packages.ubuntu.com/plucky/ripgrep . If Ubuntu moves to some safe, permissively licensed glibc replacement in the future, this requirement will drop off all their Rust packages. I'm not saying this uutils change alone will let Ubuntu get out of GPLv3 compliance, I'm saying they likely view GPLv3 software in the base install as undesirable due to their IoT customers and will replace it with a permissively licensed alternative given the opportunity.
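
You can see this linkage directly on any distro-packaged Rust binary; for example (path and load address will differ per system):

  $ ldd /usr/bin/rg | grep libgcc
          libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f...)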

The dependency of glibc on the unwinder (for backtrace, pthread_exit and pthread_cancel) is a glibc packaging problem. You need to plan for replacing glibc anyway because its licensing could switch to (L)GPLv3+ (including for existing stable release branches).

However, it would be a fairly straightforward project to replace the unwinder used directly by Rust binaries with the one from libunwind. Given that this hasn't happened, I'd be surprised if Canonical is actually investing into a migration. Of course there are much bigger tasks for avoiding GPLv3 software, such as porting the distribution (including LLVM itself and its users) from libstdc++ (GCC's C++ standard library that requires GCC to build, but provides support for Clang as well) to libc++ (LLVM's C++ standard library).


In this hypothetical situation are Canonical also replacing the GPL Linux kernel? If they’re not replacing the Kernel, how does anything change for the end user?

Linux is GPLv2, there is no tivoization protection. In fact most tivoized devices run Linux.


Basically every IOT/router/phone/whatever which is advanced enough runs Linux and almost every one of them enforces firmware signing. They'd have to fight the whole world at this point.

If it was only for that, they could use/improve busybox, which has the same license as the kernel (GPLv2).

Perhaps it is also so they can be used in closed source systems (I have uutils installed on my Windows system which works nicely).


Busybox is frankly a horrible user experience, and will never be a good one. Its niche is to be as small as possible, as a single static executable, while providing most tools you need to get the job done in an embedded system. Bells and whistles like a shell that's nice to use, or a vi implementation with working undo/redo, or extensive built-in documentation in the form of --help output, are non-features which would make busybox worse for its primary use case.

> This would let them sell to companies who want to only permit signed firmware images to run on their devices, which isn't allowed under GPLv3.
How is this not allowed under GPLv3?

Search for "Tivoization" and the GPLv3

Isn't preventing "tivoization" the whole point of the GPLv3?


I like how the first comment is asking "is anyone actually going to switch to this version?" and here we are with one of the major Linux distributions using it already, and already managed to ship a bug via it.

Brave of them to ship a Rust port of sudo as well.


The authors have specifically said that it’s not. They just chose Rust community licensing norms, they don’t really care about licenses.

At best this just makes them a patsy, which isn't actually better; it also becomes pretty clear, if you pay more attention and dig into this (watch some of their interviews, etc.), that they actually DO care about the license and are splitting hairs on what that means: if they don't care, but they have users who do and who would be disappointed if they went in a different direction, then they not only do care, they have chosen to actively align with those specific users. But regardless, and most importantly: this is about why they have a niche and why Canonical is pushing this, and if you write software in such an environment and truly don't care about the license and just YOLO it, then that level of cavalier negligence cannot be rewarded with immunity from guilt or culpability for the outcomes.

That might be Canonical’s motive though.

Seriously, how on earth are you coming up with this? Time and again they debunk those silly claims but people just keep bringing this up on and on. Is it some sort of conspiracy theory?

It could be a conspiracy on the part of Canonical, sure. People have hidden motives all the time. Sometimes you have to deduce their motives from their actions, while ignoring their words.

I don't think there’s any serious evidence of it being true though. All we can see right now is that there are a surprising number of MIT-licensed packages replacing GPL-licensed packages. It could be a coincidence.


That's fair!

"Licensing norms"? Are people really choosing software licenses without considering the implications just because it's a "norm"?

This is gonna cause a lot of disappointment down the road.


If most of an ecosystem chooses a specific license (dual licensed in Rust's case), the simplest thing to do is choose the same license as everyone else.

Regardless of what others do, the best thing to do is to choose the best license for one’s own software. One which preserves the freedom of one’s users and the openness of one’s code.

Sadly people don’t always do what’s best. We sometimes do what other people are doing on the theory that maybe someone else has thought it through and already decided that it _is_ the best thing to do. It’s not perfect, but then heuristics rarely are. But it’s cheap to implement.

This may be reasonable if you're writing a library but not for applications.

Considering how often MIT is chosen over the slightly simpler ISC version... yeah.

In the end, a lot of people are willing to write open source just for the sake of having it as it scratches their own need and isn't otherwise monetizable or they just think it should exist. I would never even consider touching a GPLv3 licensed UI library component, for example.

It's not always the most appropriate license and if a developer wants to use a permissive license, they are allowed to. This isn't an authoritarian, communist dictatorship, at least it isn't where I live and to my dying breath won't be.


Of course it's allowed. People can do whatever they want. If they think it over, consider the implications of what they are doing and decide that this is what they want, then by all means.

Choosing licenses due to peer pressure is completely stupid though. If you're not sure, you can just not pick a license at all. Copyright 2025 all rights reserved. If you must pick a license just because, then the reasonable choice is the strongest copyleft license available, simply because it maximizes leverage. The less you give away, the more conditions, the more leverage. It's that simple.

That people are actually feeling "pressure" to pick permissive licenses leads me to conclude this is a psyop. It's a wealth transfer, from well meaning developers straight into the pockets of corporations. It's being actively normalized so that people choose it "by default" without thinking. Yeah, just give it all away! Who cares, right?

I urge people to think about what they are doing.


It looks like we have three major open source implementations:

- GNU coreutils (GPLv3)

- uutils coreutils (MIT)

- busybox (GPLv2)


There's the BSD coreutils too.


Agreed. Proprietary tools could then rely on those coreutils without any license fears.

Yeah, if this is not upstreamed eventually, it will have to be rewritten again.

I've had the same suspicion since I read about it the first time.

Is it not just yet another Rust rewrite?

If you're the maintainer of OpenBSD, then implementing coreutils in a given language is a necessary requirement for it to be considered a viable systems language: https://marc.info/?l=openbsd-misc&m=151233345723889&w=2


"Denial of service"

In sort command

Is this the best they could come up with?


In some cases it was possible to crash (overflow) sort.c, not just DoS. I did try to look more into the issue - it was not handled for quite some time; however, I did not find any real-world impact.

Minor correction, but that bug was never in any "official" coreutils release. The bug was in a multi-byte character patch that many distributions use (and still use). There have been other CVEs in that patch [1].

But the worst you can do is crash 'sort' with that. Note that uutils also has crashes. Here is one due to unbounded recursion:

  $ ./target/release/coreutils mkdir -p `python3 -c 'print("./" + "a/" * 32768)'`
  Segmentation fault (core dumped)
Not saying that both issues don't deserve fixing. But I wouldn't really panic over either of them.

[1] https://lwn.net/Articles/535735/


Didn't that bug get fixed before it went public?

They weren't written in Rust. But I wonder why the borrow checker wouldn't catch the date bug...

> where date ignores the -r/--reference=file argument

This has nothing to do with memory ownership, so the borrow checker is irrelevant. Ubuntu just shipped before that argument's handling was implemented.
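
For the curious: GNU date -r FILE prints FILE's last modification time, and Ubuntu's periodic update machinery reportedly leans on that to compute stamp-file ages. A paraphrased sketch of the failure mode (illustrative, not the actual script):

  stamp=/var/lib/apt/periodic/update-success-stamp
  now=$(date +%s)
  stamp_ts=$(date -r "$stamp" +%s)  # buggy date ignored -r and printed the current time
  age=$(( now - stamp_ts ))         # so the age was always ~0 and updates never looked due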



To summarize, a Jon Seager from Canonical says it’s for safety and resilience.

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.


It wasn’t… “safe”

It seems like I'm probably preaching to the choir, but what really is the attack surface with coreutils? I can't imagine there have been a lot of pwns as a result of the `date` command.

Untrusted input is often stored in files. Coreutils tools are often used to operate on those files.

As an obvious example, I sometimes download files from the Internet, then run coreutils sha256sum or the like on those files to verify that they're trustworthy. That means they're untrusted at the time when I use them as input to sha256sum.

If there's an RCE in sha256sum (unlikely, but this is a thought experiment to demonstrate an attack vector), then that untrusted file can just exploit that RCE directly.

If there's a bug in sha256sum which allows a malicious file to manipulate the result, then a malicious file could potentially make itself look like a trusted file and therefore get past a security barrier.

Maybe there's no bug in sha256sum, but I need to base64 decode the file before running sha256sum on it, using the base64 tool from coreutils.

If you use your imagination, I'm sure you yourself can think up plenty more use cases where you might run a program from GNU coreutils against untrusted user input. If it helps, here's a Wikipedia article which lists all commands from GNU coreutils: https://en.wikipedia.org/wiki/GNU_Core_Utilities#Commands
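
To make that flow concrete (the hash and URL are placeholders; the point is that sha256sum parses attacker-controlled bytes before any trust decision is made):

  # 1. untrusted bytes land on disk
  $ curl -LO https://example.com/release.tar.gz
  # 2. sha256sum reads those bytes before anything has vouched for them
  $ echo "EXPECTED_SHA256  release.tar.gz" | sha256sum -c -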

EDIT: To be clear, this comment is only intended to explain what the attack surface is, not to weigh in on whether rewriting the tools in Rust improves security. One could argue that it's more likely that the freshly rewritten sha256sum from uutils has a bug than that GNU sha256sum has a bug. The statement "tools from coreutils are sometimes used to operate on untrusted input and therefore have an attack surface worth exploring" is not the same as the statement "rewriting coreutils in Rust improves security". Personally, I'm excited for the uutils stuff, but not primarily because I believe it alone will directly result in significant security improvements in Ubuntu 25.10.


But if there is a bug in the date command that prevents security updates from being installed, you've got your vulnerability right there.

Rust is not a silver bullet.


It's not really a bug in uutils. The option was not implemented yet when Ubuntu decided to switch. It's known that there's no 100% compatibility and won't be for a while.

Can you show a post from an influential figure in the Rust community that literally said "Rust is a silver bullet", please?

Please read my edit.

To play devil's advocate, who knows what kind of madness people are handing off to subprocess.run(["date"]) et al. They shouldn't, but I'd bet my last dollar it's out there.
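For illustration, a sketch of that kind of caller (in Rust rather than Python, and with a made-up file path). If -r is silently ignored, it happily reports the current time as the file's mtime, with the same exit code and output format, so nothing ever errors out:

    use std::process::Command;

    fn main() {
        // Naively shell out to `date -r` and trust whatever comes back.
        let out = Command::new("date")
            .args(["-u", "-r", "/etc/hostname", "+%s"])
            .output()
            .expect("failed to run date");
        let mtime = String::from_utf8_lossy(&out.stdout).trim().to_string();
        println!("mtime (seconds since epoch): {mtime}");
    }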

I can certainly understand it for something like sudo, or for other tools where the attack surface is larger and security-critical interactions are happening. But in this case it really seems like a questionable tradeoff: the benefits are abstract (theoretically no more possibility of memory-safety bugs), while the costs are very concrete (incompatibility issues, and possibly other, new, non-memory-safety bugs being introduced with new code).

EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.


I view `uutils` as a good opportunity to get rid of legacy baggage that might be used by just 0.03% of the community but has to sit there anyway, impeding feature work and bug fixing.

For example, `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.

Less code leads to fewer bugs.


I'd prefer it if all software was written in languages that made it as easy as possible to avoid bugs, including memory-safety bugs, regardless of whether it seems like it has a large attack surface or not.

> "sudo"

Hence "doas".

OpenBSD has a lot of new stuff throughout the codebase.

No need to add a bloated dependency (e.g. Rust) just because you want to re-implement "yes" in a "memory-safe language" when you probably have no reason to.


A thousand badly written shell scripts might disagree.

You don't attack coreutils. You attack the scripts. In this case it was an update script that failed because of an incompatibility. It's not too hard at all to imagine one failing in an exploitable way.

Honestly, Rust-related hilarity aside, this project was a terrible, terrible idea. Unix shell environments have always been ad hoc and poorly tested, and anything that impacts compatibility is going to break historical code that may literally be decades old.

See also the recent insanity of GNU grep suddenly tossing an error when invoked as "fgrep". You just don't do that folks.


> See also the recent insanity of GNU grep suddenly tossing an error when invoked as "fgrep". You just don't do that folks.

'fgrep' and 'egrep' didn't throw errors; they just printed a warning to standard error before behaving as expected.

Those commands were never standardized, and everyone is better off using 'grep -F' and 'grep -E' respectively.


> didn't throw errors, it would just send a warning to standard error

Noted without comment. Except to say that I've had multiple scripts of my own break via "just" discovering garbage in the output streams.

> Those commands were never standardized

"Those commands" were present in v7 unix in 1979!


I think he means POSIX. I didn't check, but in some cases POSIX covers only some of the options a tool provides, not all of them. It's a hard lesson I learned while keeping shell scripts portable between Linux and macOS.

Yep. I was slightly incorrect in my original message, though. SUSv2 (1997) specified egrep and fgrep but marked them LEGACY. POSIX.1-2001 removed them.

The only place that doesn't support 'grep -E' and 'grep -F' nowadays is Solaris 10. But if you are still using that, you will certainly run into many other missing options.

[1] https://pubs.opengroup.org/onlinepubs/007908775/xcu/egrep.ht... [2] https://pubs.opengroup.org/onlinepubs/007908775/xcu/fgrep.ht...


"GNU grep implemented a change that breaks pre-existing scripts using a 46 year old API, but it's OK because the required workaround works everywhere but Solaris 10" seems like not a great statement of engineering design to me.

and yet coreutils continues to receive updates in ways that could break things.

This is not a rousing endorsement of the Unix shell environment. Maybe that should be rewritten in something else too (probably not Rust, Rust is probably not a good choice for this - but something that is designed in such a way that it is easy to test would be nice!).

There's nothing about Rust that makes things hard to test. Actually the built-in test framework makes it easier than C. But what really matters is the public interface of those tools, and that's got an extensive test suite available. It doesn't matter which language is used internally for those tests to run.

I meant that Rust is probably not a good choice for a new shell scripting environment, not that Rust is hard to test. I was responding to the claim "Unix shell environments have always been ad hoc and poorly tested", which is a bad thing and is worth fixing in and of itself.

> not a good choice for a new shell scripting environment

Why?


> This is not a rousing endorsement of the Unix shell environment.

It's surely not. The question wasn't how to rewrite the shell environment to be more "endorseable", though.

The point is that we have a half century (!) long history of writing code to this admittedly fragile environment, with no way to audit usage or even find all the existing code (literally many of the authors are retired or dead).

So... it's just not a good place to play games with "Look Ma, I rewrote /usr/bin/date and it's safe now!" Mess with your own new environments, not the ones that run the rest of the world please.


Maybe it's more important to rewrite the half century of poorly documented and specified shell scripts that are so embedded that their existence gets in the way of rewriting fundamental Unix command-line utilities, than it is to rewrite those utilities themselves. Any time someone makes the claim "we shouldn't touch this code, it's fragile", that state of affairs is itself bad. Our free software source code shouldn't be some poorly understood black box that we're afraid to touch for fear of breaking something, and if it is, that is something we should fix.

I reported a segfault in "tac" a number of years ago.

Safer threading for performance improvements was part of it, as I understand it.

   $ /usr/bin/time date
   Fri Oct 24 10:20:17 AM CDT 2025
   0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 2264maxresident)k
   0inputs+0outputs (0major+93minor)pagefaults 0swaps
Imagine how much faster it will be with threading!

when your bug is fully typed

Absolutely nothing.

But systemd projects and Rust rewrites have this one thing in common: being pure virtue signaling, they absolutely have to be noticed. And what better way to get noticed than going for something important and core?

To me, Rust rewrites look like "just stop oil" road blocks - the more people suffer, the better.

PS: Disclaimer: I love Rust. I hate fanboys.


no, just the usual... people want to rewrite stuff in Rust "just because". it's getting annoying.

Other people are allowed to do whatever they want.

Yes, people are allowed to do stupid things, but other people are also allowed to call them out on those stupid things. Especially if they make other people's lives harder, like in this case.

Whether it’s “stupid” remains to be seen. I personally would not have made this choice at this point in time, but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.

> but the way some people seem to consider “Rust program has a bug” to be newsworthy is… odd.

But the fact that program X was written in Rust is, on the other hand, newsworthy? And there is nothing odd in the fact that the first property of the software that is advertised is the fact that it was made in Rust.

Yeah, nothing odd there.


Having unimplemented features makes a thing stupid?

Rewriting already perfectly working and feature-complete software and advertising your version as a superior replacement while neglecting to implement existing features that users relied on is pretty stupid, yes.

Need I remind you what coreutils or Linux (re)implement?

certainly! I didn't say otherwise.

however, once software that was rewritten purely for the sake of being in Rust starts affecting large distributions like Ubuntu, that's a different issue...

however one could argue that Ubuntu picking up the brand-new Rust-based coreutils instead of the old one is a 2nd-order effect of "let's rewrite everything in Rust, whether it makes sense or not"


Nobody - least of all the authors of uutils - is forcing Ubuntu to adopt this change. I personally feel like it's a pretty radical step on Ubuntu's part, but they are free to make such choices, and you are free to not use Ubuntu if you believe it is detrimental.

There's no "however" here. Rewriting anything in Rust has no effect on anybody by itself.


So rewrites in Rust can happen as long as they don't have practical usage?

This isn't something that affected Ubuntu. It's something Ubuntu wanted to test in day to day usage.


ideally stupid rewrites never happen, but alas...

Iirc it started as a simple exercise. It just aimed at high compatibility with the original coreutils.

Which part of that is stupid? The license was chosen because Rust is more static-linkage friendly. Which leaves the exercise part, or the high compatibility.

You might as well say Linux is a stupid rewrite that will never achieve anything circa 1998.


> we’re releasing an s3 compatible self hosted service for free

> nice

> we’re releasing coreutils rewritten in a memory safe language for free

> how dare you!


How dare these software engineers working in their free time not do it in a way I agree with >:(

I think it's mainly that it's a fun project and Rust is a lot nicer to work with than C. You're way more likely to see modern niceties and UX improvements in these ones than the old ones.

> Rust is a lot nicer to work with than C

What? How??


Modern conveniences such as compiler support for

- Tagged unions so you can easily and correctly return "I have one of these things".

- Generics so you can easily and correctly reuse data structures other people wrote. And a modern toolchain with a package manager that makes it easy to do this correctly.

- Compile time reference counting so you don't have to worry about freeing things/unlocking mutexes/... (sometimes also called RAII + a borrow checker).

- Type inference

- Things that are changed are generally syntactically tagged as mutable which makes it a lot easier to quickly read code

- Iterators...

And so on and so forth. Rust is in large part "take all the good ideas that came before it and put it in a low level language". In the last 50 years there's been a lot of good ideas, and C doesn't really incorporate any of them.
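A minimal sketch showing a few of these together (every name here is invented for illustration):

    // Tagged union: callers must handle every case, checked at compile time.
    enum Lookup {
        Hit(String),
        Miss,
    }

    fn find(key: &str) -> Lookup {
        if key == "date" { Lookup::Hit("uutils".to_string()) } else { Lookup::Miss }
    }

    fn main() {
        let keys = ["date", "sort"]; // type inference: no annotations needed
        let mut hits = 0;            // mutation is syntactically tagged with `mut`
        for key in keys {            // iterators: no manual index bookkeeping
            match find(key) {
                Lookup::Hit(src) => {
                    hits += 1;
                    println!("{key} -> {src}");
                }
                Lookup::Miss => println!("{key} -> not found"),
            }
        }
        println!("{hits} hit(s)");
    } // everything is freed here automatically (RAII), no manual cleanup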


The borrow checker is better described as a compile-time rwlock, with all possible deadlocks caught as compiler errors.

It's that as well, but that part of the description doesn't capture how objects are automatically freed once the last reference to them (the owning one) is dropped.

Meanwhile my description doesn't fully capture how it guarantees unique access for writing, while yours does.
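To make the rwlock analogy concrete, a tiny sketch:

    fn main() {
        let mut v = vec![1, 2, 3];
        let r1 = &v;       // "read lock" #1
        let r2 = &v;       // "read lock" #2: shared borrows coexist fine
        // v.push(4);      // taking the "write lock" here: compile error E0502
        println!("{} {}", r1.len(), r2.len());
        v.push(4);         // fine: the read borrows ended on the line above
        println!("{v:?}");
    }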


> but that part of the description doesn't catch how objects are automatically freed once the last reference to them (the owning one) is dropped.

You're confusing the borrow checker with RAII.

Dropping the last reference to an object does nothing (and even the exclusive &mut is not an "owning" reference). Dropping the object itself is what automatically frees it. See also Box::leak.
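A sketch of that distinction. Drop runs when the owner goes out of scope, regardless of borrows, and Box::leak opts out of it entirely:

    struct Noisy(&'static str);

    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("dropping {}", self.0);
        }
    }

    fn main() {
        {
            let owned = Noisy("owned");
            let _borrow = &owned; // borrows come and go; nothing is freed here
        }                         // owner leaves scope: "dropping owned" prints

        // Box::leak discards ownership, so Drop never runs for this one:
        let leaked: &'static mut Noisy = Box::leak(Box::new(Noisy("leaked")));
        println!("{} outlives everything", leaked.0);
    }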


No, I'm rather explicitly considering the joint behavior of the borrow checker and RAII.

With only RAII you don't get the last reference part.

Yes, there are exceptions, it's a roughly correct analogy not a precise description.


I agree with your points, except that "compile time reference counting" is not a thing. I'm not sure what that would even mean. :-)

The borrow checker tracks whether there is one, more than one, or no reference to a value at any particular time, and Rust automatically drops it when that last reference (the owning one) goes away. Sounds like compile time reference counting to me :P

I didn't invent this way of referring to it, though I don't recall who I stole it from. It's not entirely accurate, but it's a close enough description to capture how rust's mostly automatic memory management works from a distance.

If you want a more literal interpretation of compile time reference counting see also: https://docs.rs/static-rc/0.7.0/static_rc/


So the problem here is that it is almost entirely wrong. There is no reference count anywhere in the borrow checker’s algorithm, and you can’t do the things with borrows that you can do with reference counting.

It’s just not a good mental model.

For example, with reference counting you can convert a shared reference to a unique reference when you can verify that the count is exactly 1. But converting a `&T` to a `&mut T` is always instantaneous UB, no exceptions. It doesn’t matter if it’s actually the only reference.

Borrows are also orthogonal to dropping/destructors. Borrows can extend the lifetime of a value for convenience reasons, but it is not a general rule that values are dropped when the last reference is gone.


There is a reference count in the algorithm in the sense that the algorithm must keep track of the number of live shared borrows derived from a unique borrow or owned value so that it knows when it becomes legal to mutate it again (i.e. to know when that number goes to zero) or if there are still outstanding ones.

Borrow checking is necessary for dropping and destructors in the sense that without borrows we could drop an owned value while we still have references to it and get a use-after-free. RAII in Rust only works safely because we have the borrow checker reference-counting for us, to tell us when it's again safe to mutate (including drop) owned values.

Yes, rust doesn't support going from an &T to an &mut T, but it does support going from an <currently immutable reference to T> to a <mutable reference to T> in the shape of going from an &mut T which is currently immutably borrowed to an &mut T which is not borrowed. It can do this because it keeps track of how many shared references there are derived from the mutable reference.
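Concretely, a compiling sketch of that last case:

    fn main() {
        let mut n = 0;
        let unique = &mut n;  // exclusive borrow
        {
            let a = &*unique; // shared reborrows derived from it
            let b = &*unique;
            // *unique += 1;  // compile error while `a` and `b` are live
            println!("{a} {b}");
        }                     // shared reborrows end here
        *unique += 1;         // exclusive access restored
        println!("{unique}");
    }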

You're right that it's possible to leak the owning reference so that the object isn't freed when the last reference is gone - but it's possible to leak a reference in a runtime reference counted language too.

But yes, it's not a perfect analogy, merely a good one. Most likely the implementation doesn't just keep a count of references, for instance, but a set of them, to enable better diagnostics and more efficient computation.


You are reiterating the same points, and they are still wrong, I’m sorry.

I think Rust speaks to people who don't "play" with their code during development. Moving stuff around, commenting things out, etc. When I try to do this in Rust, the borrow checker instantly complains because $something violates $some_rule. I can't say "yeah I know but just for now let's try it out this way and if it works I'll do it right".

I work this way and that's why I consider Rust to be a major impediment to my productivity. Same goes for Python with its significant whitespace which prevents freely moving code around and swapping code blocks, etc.

I guess there are people who plan everything in their mind and the coding part is just typing out their ideas (instead of developing their ideas during code editing).


That might be true. In my case, it is precisely because I do play a lot with my code, doing big 2-day refactors sometimes too. With Rust, when it finally compiles, it very often tends to run without crashing, and often correctly too, saving me a lot of debugging.

But it's also because of all the things I'm forced to fix while implementing or refactoring, that I would've been convinced were correct. And I was proven wrong by the compiler, so, many, times, that I've lost all confidence in my own ability to do it correctly without this kind of help. It helped me out of my naivety that "C is simple".


You eventually don't even think about the borrow checker, writing compiling code becomes second nature, and it also has the side effect of encouraging good habits in other languages.

> I guess there are people who plan everything in their mind and the coding part is just typing out their ideas (instead of developing their ideas during code editing).

I don't think there are, I think Gall's law that all complex systems evolve from simpler systems applies.

I play with code when I program with Rust. It just looks slightly different. I deliberately trigger errors and then read the error message. I copy code into scratch files. I'm not very clever; I can't plan out a nontrivial program without feedback from experiments.


I enjoy the ability to do massive refactors and once it builds it works and does the expected. There are so few odd things happening, no unexpected runtime errors.

I've written probably tens of thousands of lines each in languages like C, C++, Python, Java and a few others. None other has been as misery-free. I admit I haven't written Haskell, but it still doesn't seem very approachable to me.

I can flash a microcontroller with new firmware and it won't magically start spewing out garbage on random occasions because the compiler omitted a nullptr check or because there's an off-by-one error in some odd place. None. Of. That. Shit.


So many ways it's hard to list them. Better tooling, type system, libraries, language features, compile-time error checking.

I'm a bit surprised that you are surprised by this. I sometimes think Rust emphasizes memory safety too much - like some people hear it and just think Rust is C but with memory safety. Maybe that's why you're surprised?

Memory safety is a huge deal - not just for security but also because memory errors are the worst kind of bug to debug. If I never again have to deal with a memory-safety bug that corrupts some data, but only in release mode... Those bugs take an enormous amount of time to deal with.

But Rust is really a great modern language that takes all the best ideas from ML and C, and adds memory safety.

(Actually multithreading bugs might be slightly worse but Rust can help there too!)
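A small sketch of that help: the compiler simply refuses to send non-thread-safe state across threads:

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let shared = Arc::new(vec![1, 2, 3]);
        let worker = {
            let shared = Arc::clone(&shared);
            thread::spawn(move || println!("{shared:?}")) // Arc is Send: OK
        };
        worker.join().unwrap();

        let not_thread_safe = Rc::new(5);
        // thread::spawn(move || println!("{not_thread_safe}"));
        // ^ compile error: `Rc<i32>` cannot be sent between threads safely
        println!("{not_thread_safe}");
    }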


It wasn't rewritten in rust yet. Therefore it wasn't complete. /s

This joke is getting more worn out than ‘fizz buzz saas in rust’ hn post titles have ever been

Yeah. Except it isn't a joke. Rustafarians are dead serious.

Some actual links would be nice. I have not seen a Rust zealot in HN for like at least 3 years at this point. (On Reddit I've seen plenty, but who takes Reddit seriously?)

They hadn't implemented the `-r` flag of `date`... But worse than that, they didn't squeak about the unimplemented flag (because the interface was already accepting it...). This is an incompetent implementer (and project management?)

Not enough Rust.

The thought of rewriting anything as intricate, foundational, and battle-tested as GNU coreutils from scratch scares me. Maybe I'd try it with a mature automatic C-to-Rust translator, but I would still expect years of incompatibilities and reintroduced bugs.

See also the "cascade of attention-deficit teenagers" development model.


FWIW, GNU coreutils is itself a rewrite of stuff that existed before, and which has been rewritten multiple other times.

Eh. People have written replacements for glibc because they didn't like something or another about it, and that seems to me to be way more fraught with risk than coreutils.

Folks also run into compatibility issues with musl as well. The biggest I recall was an issue with DNS breaking because musl didn’t implement some piece.

TBF, glibc's DNS handling is crazy.

Fair enough. My gut sense is that C functions are simpler than shell commands, with a handful of parameters rather than a dozen or more flags, and this bug supports that -- they forgot to implement a flag in "date." But I haven't tried to do either, so I could be wrong.

> The thought of rewriting anything as intricate, foundational, and battle-tested as GNU coreutils from scratch scares me. Maybe I'd try it with a mature automatic C-to-Rust translator, but I would still expect years of incompatibilities and reintroduced bugs.

It is extremely bad that it's not a relatively straightforward process for any random programmer to rewrite coreutils from scratch as a several-week project. That means that the correct behavior of coreutils is not specified well enough, and it's not easy enough to understand by reading the source code.


Not to be too harsh, but if that’s your (fundamentalist) attitude to software, remind me to argue strenuously to never have you hired where I work. Fact is you can’t rewrite everything all the time, especially the bits that power the core of a business, and has for a decade or more. See banking and pension systems, for instance.

I think you're completely missing the point. The problem being solved is not that coreutils is bad and thus they should be rewritten, the problem is that coreutils is not specified well enough to make new implementations straight-forward. Thus a new implementation written from scratch is tremendously valuable for discovering bugs and poorly documented / unspecified behavior.

For a business it's often fine to stop at a local maximum, they can keep using old versions of coreutils however long they want, and they can still make lots of money there! However we are not talking about a business but a fundamental open source building block that will be around for a very long time. In this setting continuous long term improvement is much more valuable than short term stability. Obviously you don't want to knowingly break stability either, and in this regard I do think Ubuntu's timeline for actually replacing the default coreutils implementation is too ambitious, but that's beside the point—the rewrite itself is valuable regardless of what Ubuntu is doing!


So, what’s the state of the art in guided state-space exploration/fuzzing?

Seems if you have a reference implementation your fuzzer should be able to do some nice white-box validation to ensure you are behaving the same as the old implementation.


For this type of thing I think property testing would work well. It would take a fair amount of work to write a proptest for the entire input space of the tool. But it's achievable, and as durable as the CLI arguments (so for this specific case, very unlikely to change in a backwards-incompatible way). And this kind of rote work with good reference materials (namely the man pages) is amenable to being generated.

Whatever language you're working in there is probably a port of Hypothesis or quickcheck. For Rust I use the `proptest` crate, but for differential testing of a CLI I would probably use the Python Hypothesis package and invoke the commands externally.


Anyone have a link to the patch in uutils? Curious to see what the problem and solution were.

This comment[0] explains it.

The core bug seems to be that support for `date -r <file>` wasn't implemented at the time ubuntu integrated it [1, 2].

And the command silently accepted -r before and did nothing (!)

0: https://lwn.net/Articles/1043123/

1: https://github.com/uutils/coreutils/issues/8621

2: https://github.com/uutils/coreutils/pull/8630


Man, if I had a nickel for every time some old Linux utility ignored a command-line flag, I'd have a lot of nickels. I'd have even more nickels if I got one each time some utility parsed command-line flags wrong.

I have automated a lot of things executing other utilities as subprocesses, and it's absolutely crazy how many utilities handle CLI flags seemingly correctly, but not really.


This doesn't look like a bug, that is, something overlooked in the logic. This seems like a deliberately introduced regression. Accepting an option and ignoring it is a deliberate action, and not crashing with an error message when an unsupported option is passed must be a deliberate, and wrong, decision.

It certainly doesn't look intentional to me - it looks like at some point someone added "-r" as a valid option, but until this surfaced as a bug, no one actually implemented anything for it (and the logic happens to fall through to using the current date).

a `todo!()` away from something being way more obvious. Unfortunate!
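Something like this hypothetical sketch (not the real uutils code) would have failed loudly instead of quietly printing the current time:

    enum DateSource {
        Now,
    }

    fn pick_source(reference: Option<String>) -> DateSource {
        match reference {
            // a loud "not yet implemented" panic beats a silent fallback
            Some(_file) => todo!("-r/--reference"),
            None => DateSource::Now,
        }
    }

    fn main() {
        let _source = pick_source(std::env::args().nth(1));
        println!("fell through to 'now'");
    }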

It's wrong (and coreutils gets it right), but I don't see why it would have to be deliberate. It could easily just not occur to someone that the code needs to be tested with invalid options, or that it needs to handle invalid options by aborting rather than ignoring. (That in turn would depend on the crate they're using for argument parsing, I imagine.)

Could parsing the `-r` be added without noticing it somehow?

If it was added in bulk, with many other still unsupported option names, why does the program not crash loudly if any such option is used?

A fencepost error is a bug. A double-free is a bug. Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.


At least from what I can find, here's the original version of the changed snippet [0]:

    let date_source = if let Some(date) = matches.value_of(OPT_DATE) {
        DateSource::Custom(date.into())
    } else if let Some(file) = matches.value_of(OPT_FILE) {
        DateSource::File(file.into())
    } else {
        DateSource::Now
    };
And after `-r` support was added (among other changes) [1]:

    let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
        DateSource::Human(date.into())
    } else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
        match file.as_ref() {
            "-" => DateSource::Stdin,
            _ => DateSource::File(file.into()),
        }
    } else if let Some(file) = matches.get_one::<String>(OPT_REFERENCE) {
        DateSource::FileMtime(file.into())
    } else {
        DateSource::Now
    };
Still the same fallback. Not sure one can discern from just looking at the code (and without knowing more about the context, in my case) whether the choice of fallback was intentional and handling the flag was forgotten about.

[0]: https://github.com/yuankunzhang/coreutils/commit/850bd9c32d9...

[1]: https://github.com/yuankunzhang/coreutils/blob/88a7fa7adfa04...


> Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.

No, it doesn't. For example, you could have code that recognizes that something "is an option", and silently discards anything that isn't on the recognized list.
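A toy sketch of that pattern; nothing in it singles out `-r`, yet `-r` vanishes without a trace:

    fn main() {
        let known = ["-u", "-d"];
        let mut positional = Vec::new();
        for arg in std::env::args().skip(1) {
            if arg.starts_with('-') {
                if !known.contains(&arg.as_str()) {
                    continue; // unknown option: silently discarded
                }
                // ... handle the known option ...
            } else {
                positional.push(arg);
            }
        }
        println!("positional args: {positional:?}");
    }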


> deliberately introduced regression

> deliberate and wrong decision

Yeah... I hope "we" will not switch to it just because it is written in Rust. There is much more than just the damn language behind it.


I would say that Canonical is more at fault in this case.

I'm frankly appalled that an essential feature such as system updates didn't have an automated test that would catch this issue immediately after uutils was integrated.

Nevermind the fact that this entire replacement of coreutils is done purely out of financial and political rather than technical reasons, and that they're willing to treat their users as guinea pigs. Despicable.


What surprises me is that the job seems rushed. The implementation is incomplete. Testing seems patchy. Things are released seemingly in a hurry, as if meeting a particular deadline was more important to the engineers or managers of a particular department than the quality of the product as a whole.

This feels like a large corporation, in the bad sense.



It would be really nice if something said what the actual problem was.

The last commit[0] is a fix for date parsing to bring it in line with the GNU semantics, which seems like a pretty good candidate.

Edit: Or not, see evil-olive's comment[1] for a more likely candidate.

0: https://github.com/uutils/coreutils/commit/0047c7e66ffb57971...

1: https://news.ycombinator.com/item?id=45687743


The problem is the existence of the project of Rust rewrite itself.

The top comment is hilarious:

> The next Ubuntu release will be called Grateful Guinea-Pig


annoyingly, they don't link to the actual bug in question, just say:

> Systems with the rust-coreutils package version 0.2.2-0ubuntu2 or earlier have the bug, it is fixed in 0.2.2-0ubuntu2.1 or later.

based on the changelog [0] it seems to be:

> date: use reference file (LP: #2127970)

from there: [1]

> This is fixed upstream in 88a7fa7adfa048dabdffc99451d7aba1d9e6a9b6

which in turn leads to [2, 3]

> Display the date and time of the last modification of file, instead of the current date and time.

this is not the type of bug I was expecting, I assumed it would be something related to a subtle timezone edge case or whatever.

instead, `date -r` is supposed to print the modtime of a given file:

    > date --utc -Is -r ~/.ssh/id_ed25519.pub
    2025-04-29T19:25:01+00:00
    > date --utc -Is
    2025-10-23T21:46:47+00:00
and it seems like the Rust version just...silently ignored that expected behavior?

maybe I'm missing something? if not this seems really sloppy and not at all what I'd expect from a project aiming to replace coreutils with "safer" versions.

0: https://launchpad.net/ubuntu/questing/+source/rust-coreutils...

1: https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...

2: https://github.com/uutils/coreutils/issues/8621

3: https://github.com/uutils/coreutils/pull/8630


It's supposed to pass the coreutils upstream tests. If it does, then that would mean the upstream tests still need work.

It... doesn't though: https://uutils.github.io/coreutils/docs/test_coverage.html

Neither this issue, which doesn't appear to be a bug at all but merely an unimplemented feature, nor the fact that uutils doesn't (yet) pass the entire testsuite, seems to me to be an indictment of the uutils project; they're merely signs that it is incomplete. Which is hardly surprising when I get the impression it's primarily been a hobby project for a bunch of different developers. It does make me wonder about the wisdom of Ubuntu moving to it.


FWIW, the first test in the coreutils test suite covering the `date -r` case was added... 5 hours ago: https://github.com/coreutils/coreutils/blob/master/tests/dat...

I don't know what the code coverage of coreutils' test suite is, but my guess is that it's not spectacular.


This is good, the correct behavior of coreutils is now specified a little bit more thoroughly than it was previously.

If it's not passing the test suite, then why is it even considered for inclusion in a distribution like Ubuntu?

Ubuntu is likely used by 10s of millions of servers and desktops. I'm not sure why this kind of breakage is considered acceptable. Very confusing.


It's a part of Ubuntu 25.10 to get it ready for prime time for Ubuntu 26.04.

Users who need stability should use the LTS releases. The interim releases have always been more experimental, and have always been where Canonical introduces the big changes to ensure everything's mature by the time the LTS comes around.


The problem is that this isn't Canonical's own stance. From https://ubuntu.com/about/release-cycle

> Every six months between LTS versions, Canonical publishes an interim release of Ubuntu, with 25.10 being the latest example. These are production-quality releases and are supported for 9 months, with sufficient time provided for users to update, but these releases do not receive the long-term commitment of LTS releases.


Running production on non-LTS Ubuntu would be insane (unless it was a very short-term deployment on a more modern system).

Maybe the thought is that there will be more pressure now on getting all the tests to pass given the larger install base? It isn't a great way to push out software, but it's certainly a way to provide motivation. I'm personally more interested in whether the ultimate decision will be to leave these as the default coreutils implementation in the next Ubuntu LTS release version (26.04) or if they will switch back (and for what reason).

they have a tendency to try novel/different things, like upstart (init system), mir (desktop compositor (?))

and this is probably a net positive, there's now an early adopter for the project, the testsuite gets improved, and the next Ubuntu LTS will ship more modern tools


100% agree. Why would they adopt it if it doesn't pass the upstream test suite. I assumed that would be required before even considering it!

I was expecting that they would be concerned about bugs in the untested parts!


The test in question was added 5 hours ago..

Wow. Maybe I'm missing something but it seems really weird to replace a tool with a rewrite that doesn't pass the test suite!

The non-passing test was only added like 17 hours ago: https://github.com/coreutils/coreutils/commit/14d24f7a530f58...

So this is a good thing even for coreutils itself, they will slowly find all of these untested bits and specify behaviour more clearly and add tests (hopefully).


I mean, how long did it take them to realize that the more(1) they shipped had no equivalent in GNU coreutils at all? It's from util-linux: https://github.com/uutils/coreutils/issues/8975

Doesn't look like people who do their homework


yeah, based on some more digging, it looks like a test case for `date --reference` in GNU coreutils was only added a few hours ago [0] so I assume it was in response to this bug.

but I don't think that should let the uutils authors off the hook - if `--reference` wasn't implemented, that should have been an error rather than silently doing the wrong thing.

after even more Git spelunking, it looks like that problem goes all the way back to the initial "Partial implemantion of date" [sic] commit [1] from 2017 - it included support for `--reference` in the argument parsing, including the correct help text, but didn't do anything with it, not even a "TODO: Handle this option" comment like `--set` has.

0: https://github.com/coreutils/coreutils/commit/14d24f7a530f58...

1: https://github.com/uutils/coreutils/commit/41d1dfaf440eabba3...


https://github.com/coreutils/coreutils/commit/14d24f7a5

That brings GNU date(1) line coverage from 79.8% to 87.1%


There were no buffer overflows, though!

I also can't be hacked if I pull the power to my PC!


25.10 is unusable. I've never said that about a non-LTS Ubuntu release.

Can you elaborate?

> But seriously. Rewriting C utilities that have been battle-tested for decades in Rust might be a good idea in the long term, but anyone could have predicted short-term hiccups.

How "long term" are we talking about that rewriting battle-tested, mission-critical C utils (which, as other posters noted, in this case often have minimal attack surfaces) actually makes sense?

>> Which is why I'm glad they're doing it! It seems like the kind of thing that one can be understandably scared to ever do, and I say this as one of the folks involved with getting some Rust in the Linux kernel.

Total zealot.

Reminder that one of the uutils devs gave a talk at FOSDEM where he used spurious benchmarks to falsely claim uutils's sort was faster, only for /g/ users to discover it was only because it was locale-unaware, and in fact was much slower:

https://archive.fosdem.org/2025/schedule/event/fosdem-2025-6... (~15 min)

https://desuarchive.org/g/thread/104831348/#q104831479

https://desuarchive.org/g/thread/104831348/#104831809


Those mission critical tools are rewrites of rewrites, too. Don’t be a zealot yourself.

There are reasons for rewriting, but I can't imagine a technical one for coreutils.

> discover it was only because it was locale-unaware, and in fact was much slower:

That was a lot of noise about not much. Locale handling was added and performance got even better:

https://www.phoronix.com/news/Rust-Coreutils-0.2


> How "long term" are we talking about that rewriting battle-tested, mission-critical C utils (which, as other posters noted, in this case often have minimal attack surfaces) actually makes sense?

Makes me wonder if putting a similar amount of effort into building up a proof/formal verification system for coreutils would have yielded better results security-wise.


Of course! But the problem is much more severe. I can't comment on coreutils specifically, but there are not enough resources for high-quality maintenance of the core toolchain. It is astonishing that effort is wasted on creating new implementations when we do not even have enough resources to properly maintain the existing ones. It is based on the - completely wrong - idea that all the problems we have come from using the wrong language and will magically go away with Rust, when they are really a fundamental maintenance problem of free software. So we now make things substantially worse based on this incorrect analysis.

A rewrite in Rust may attract new contributors, thereby aiding maintenance in the coming years.

Especially if you look very long term, as in where the young developers are, you'll see a significant reduction in the number of people with the ability to write high-quality C. Rust has the benefit that low-quality Rust is fairly close to high-quality Rust, while low-quality C is a far cry from high-quality C.

Choosing Rust does not necessarily require Rust itself to be better for the task. It can also be the result of secondary factors.

I don't know if this applies to coreutils, but C being technically sufficient does not always mean it shouldn't be replaced.


I don't buy this story. It may attract some people during the hype phase. But in the end, Rust is more complex, so it will make it harder to maintain software. And then, this also only can work if the rewrites completely replace the original (rarely the case), and you do not lose more maintainers than you gain.

Rust isn’t more complex. It just codifies the things you need to know.

It is certainly a lot more complex than C. Whether it codifies the things one needs to know is a different question. I would even agree that it partially does, and there are aspects I really like about Rust, but I do not believe it matters nearly as much as some people might think. For example, no complicated super-smart type system will prevent you from adding a CLI option and then not implementing it, breaking automatic updates and backup scripts. But nerds like to believe that this super-smart type system is the solution to our security problems. I can understand this. This is also what I believed 20 years ago.

> A rewrite in Rust may attract new contributors, thereby aiding maintenance in the coming years.

Or they will get bored as soon as a New Awesome Language will be hyped on HN and elsewhere.


Grifters use public projects all the time for clout and security; like other public sector work, open source seems to attract a type of personality that creates their own justifications for existence. Replacing a core utility with a rewritten version is a major bonus to an individual's portfolio, even if only altruistic reasons are used to justify the task.

The rewrite has NOTHING to do with security and is all about licensing. coreutils is GPLv3; rust-coreutils is MIT.

So what? Standalone binaries don't infect other things with copyleft anyway.

Apple never upgraded to GPLv3 coreutils or bash, and stayed away from anything GPLv3…

Oh, you mean specifically GPL v3 license, not any GPL license.

Yeah, the broad tivoisation and patent clauses make it a problem, because any patent litigation, even on unrelated grounds, could potentially cost you the ability to ship the entire OS.


Is that true? If I make a product, and that product runs some embedded Linux system with GPLv3-licensed coreutils, are you confident that my product isn't infected by GPLv3?

Canonical is trying to position Ubuntu as a relevant player in the embedded space.


This hasn't stopped anybody from releasing a product that I'm aware of.

There is a lot of FUD spread about GPL so companies tend to just nope out entirely.

Can we just go back to the real version?

debian-stable welcomes you

I think this is where I'll be going after a good 15 years with Ubuntu.

They've lost the plot. I don't mind change if it has meaningful benefits, but forcing unstable and barely-tested coreutils that fail their own tests is madness.


I have been an Ubuntu user since inception (and Linux user since kernel 1.2.3).

This summer I have migrated all our production and development servers to Debian. Because absolutely and sincerely fuck rust coreutils, sudo-rs, systemd-* and the other virtue signaling projects.


I've been using Debian stable exclusively for the past three years on servers, since Canonical doubled down on "snaps" despite all of their customers telling them in no uncertain terms that "snaps" are horrible.

Also Canonical was named and shamed by people trying to get jobs as the poster child of everything wrong with tech recruiting in [current year].


Yes I discovered snaps in 24.04 when I tried to install Firefox.

Eventually installed from the PPA but it was an unexpected PITA.


Uh what's wrong with canonical recruitment?

They had me do an automated IQ test, specified I had to do it in my native language, and it turned out it had been machine translated with some tool that was decades old, so I didn't understand anything at all.

I am sure they've also blacklisted me because I get autorejected since then.

I'm also a Debian Developer so I don't have any relevant experience that could be useful in working at Canonical.

Where do you see anything wrong with their process?


[flagged]


Nobody promised that. Please don't make things up.

It would be silly to do so, for sure.

That's why it's called the bleeding edge. Rust dev culture is 99% bleeding edge. It is not a culture of stability. It is a culture of change and the latest and greatest. The language could be used in stable ways, but right now, it's not.

That's one heck of an extrapolation from one incident, or even one project, in a language that has been post-1.0 for a decade and has a wide variety of users with a wide variety of update/upgrade preferences and subcultures.


Have you EVER used GCC? I wish each release only had bugs with inferring the type of auto or something.

And rustc uses LLVM, and has had several bugs as well, whether related to LLVM or just due to itself. But what I linked was intentional breakage, and it caused some people a lot of pain.

Yea, I can think of a lot of intentional GCC breakages as well. Especially ones related to optimizations. If we wrote an article for every one you'd never hear the end of it.

So what's actually your point here?


> Especially ones related to optimizations.

Did they change the language? GCC is not meant to change the C or C++ languages (unless the user uses some flag to modify the language), there is an ISO standard that they seek to be compliant with. rustc, on the other hand, only somewhat recently got a specification or something from Ferrocene, and that specification looks lackluster and incomplete from when I last skimmed through it. And rustc does not seem to be developed against the official Rust specification.


> Did they change the language?

That's not what you asked though, these were intentional breakages. Language standard or not.

In any case though, bringing up language specification as an example for maturity is such a massive cop-out considering the amount of UB in C and C++. It's not like it gives you good stability or consistency.

> there is an ISO standard that they seek to be compliant with

You can buy RM 8048 from NIST, is that the "culture" of stability you have in mind?


> That's not what you asked though, these were intentional breakages. Language standard or not.

You are completely wrong, and you ought to be able to see that already.

It makes a world of difference if it is a language change or not. As shown in dtolnay's comment https://github.com/rust-lang/rust/issues/127343#issuecomment... .

If breakage is not due to a language change, and the program is fully compliant with the standard, and there is no issue in the standard, then the compiler has a bug and must fix that bug.

If breakage is due to a language change, then even if a program is fully compliant with the previous language version and the programmer did nothing wrong, the program is still the one that has a bug. In many language communities, language changes are therefore handled with care, and changing the language version is generally set up to be a deliberate action, at least if there would be breakage in backwards compatibility.

I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.

> In any case though, bringing up language specification as an example for maturity is such a massive cop-out considering the amount of UB in C and C++.

Rust is worse when unsafe is involved.

https://materialize.com/blog/rust-concurrency-bug-unbounded-...


> If breakage is not due to a language change, and the program is fully compliant with the standard, and there is no issue in the standard, then the compiler has a bug and must fix that bug.

There are almost no C programs without UB. So a lot of what you would call "compiler bugs" are entirely permitted by the standard. If you say "no true C program has UB" then of course, congrats, your argument might be in some aspects correct. But that's not really the case in practice, and your language standard provides shit in terms of practical stability and cross-compatibility in compilers.

> I do not see how it would be possible for you not to know that I am completely right about this and that you are completely wrong. For there is absolutely no doubt that that is the case.

Lol, lmao even.

> Rust is worse when unsafe is involved.

It's really not.


> There are almost no C programs without UB. So a lot of what you would call "compiler bugs" are entirely permitted by the standard. If you say "no true C program has UB" then of course, congrats, your argument might be in some aspects correct. But that's not really the case in practice and your language standard provides almost no practical stability nor good cross-compatibility in compilers.

If the compiler optimization is compliant with the standard, then it is not a compiler bug. rustc developers have the same expectation when Rust developers mess up using unsafe, though the rules might be less defined for Rust than for C and C++, worsening the issue for Rust.

I don't know where you got the idea that "almost no C programs [are] without UB". Did you get it from personal experience working with C and you having trouble avoiding UB? Unless you have a clear and reliable statistical source or other good source or argument for your claim, I encourage you to rescind that claim. C++ should in some cases be easier to avoid UB with than C.

> > Rust is worse when unsafe is involved.

> It's really not.

It definitely, very much is. As just some examples among many, consider aliasing and pinning https://lwn.net/Articles/1030517/ .


> I don't know where you got the idea that "almost no C programs [are] without UB". Did you get it from personal experience working with C and you having trouble avoiding UB? Unless you have a clear and reliable statistical source or other good source or argument for your claim, I encourage you to rescind that claim. C++ should in some cases be easier to avoid UB with than C.

From the fact that a lot of compilers can and do rely on UB to do certain optimizations. If UB wasn't widespread, they wouldn't have those optimization passes. You not knowing how widespread UB is in C and C++ codebases is very telling.

You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)

> It definitely, very much is. As just some examples among many, consider aliasing and pinning https://lwn.net/Articles/1030517/ .

Difficult to understand or being unsafe does not make unsafe Rust worse than C. It's an absurd claim.


> From the fact that a lot of compilers can and do rely on UB to do certain optimizations.

Your understanding of C, C++ and Rust alike appears severely flawed. As I already wrote, rustc also uses these kinds of optimizations. And the optimizations do not rely on UB being present in a program, but on UB being absent from a program, and it is the programmer's responsibility to ensure the absence of UB, also in Rust.

Do you truly believe that rustc does not rely on the absence of UB to do optimizations?

Are you a student? Have you graduated anything yet?

> You're however absolutely free to find me one large project that will not trigger "-fsanitize=undefined" for starters. (Generated codebases do not count though.)

You made the claim; the burden of proof is on you. Though your understanding appears severely flawed, and you need to fix that understanding first.

> Difficult to understand or being unsafe does not make unsafe Rust worse than C. It's an absurd claim.

Even the Rust community at large agrees that unsafe Rust is more difficult than C and C++. So you are completely wrong, and I am completely right, yet again.


> Do you truly believe that rustc does not rely on the absence of UB to do optimizations?

Relying on absence of UB is not the same as relying on existence of UB. I'm not surprised however that you find this difference difficult to grasp.

> You made the claim, the burden of proof is on you. Though your understanding appear severely flawed, and you need to fix that understanding first.

I gave you such a great opportunity to prove me wrong. Why not take it? Surely if what you say is true this should be easy.


It's been post-1.0 for a decade, but they keep on changing the definition of "nightly." "Stable" lacks too many quality of life features so most things don't target it.

Rust currently has a problem that too few people are using nightly which makes gathering experience and feedback on new features harder.

This is a stark difference to back in the early post 1.0 days where many high profile crates needed nightly and everyone was experimenting.


I agree, but the post does resonate - Rust still has a very "Ready to make breaking changes on a whim" reputation

Which makes sense, because in 2025 people have grown tired of forgoing improvement just so some esoteric ass compiler from the 90s still works or someone's 30-year-old bash script still functions.

Pros and Cons either way for better or worse depending on your perspective.


I've largely lost patience with the current culture of sacrificing any backwards compatibility that is slightly inconvenient in the name of “improvement”.

That’s an argument against creating uutils; it is a project that aims for 100% coreutils compatibility. eza, bat, ripgrep, etc. are more exciting for at least having different features than coreutils

I was more commenting on the Rust community being ready to make breaking changes.

Personally while I think Rust is a decent language it would not have caught on with younger devs if C/C++ didn't have such a shitty devex that is stuck 30 years in the past.

Younger people will always be more willing to break things, and messing around with ancient and unfriendly build/dev tooling does not attract that demographic, because why waste time fighting the build env instead of actually getting things done?

One day rust will be the same and the process will start again.


If you're on unix, I think the only thing you really need is cc and ld. The build system aims for flexibility, instead of each project being its own personal world where things are duplicated ad nauseam. Everyone is happy playing in their little sandbox instead of truly collaborating with each other to create great software.

Indeed. This is exactly the problem. It is no fun helping with maintenance of an existing project, fix some boring bug, deal with all the historical constraints, necessary support for old systems, etc.

It is so much more fun to cargo-download some stuff and build some new shiny Rust-xyz implementation of Z on your Apple Macbook, and even get some HN attention just for this. The problem with all the Rust hype is that people convince themselves that they are actually helping the world by being part of a worthwhile cause to rid the world of old languages, while the main effect is that it draws resources away from much more important efforts and places an even higher burden on the ecosystem - making our main problem - sustainable maintenance of free software - even harder.


It's an argument for/against doing anything. The question is how large of a change can you get away with. Ubuntu seems to think they can get away with a 1:1 replacement being acceptable by 26.04, I doubt they'd think the same about forcing alternative tooling options just because the impetus is the same.

Improvement to what? It's not like anyone is creating a new paradigm (or even ripping off an old one, like smalltalk or plan9). It's mostly coming up with a different defaults.

> Rust still has a very "Ready to make breaking changes on a whim" reputation

No it doesn't. What on earth are you talking about?


I like Rust, but almost all libraries I end up using are on some 0.x version...

I find this tends to stem from libraries refusing to declare 1.0 for fear it would lock them into bad decisions, not from being unstable. Chrono is a great example: v0.4 for EIGHT YEARS while they make sure the design and APIs are worthy of being set into stone as 1.0 (think: Stability Bit Versioning).

Sure, some pre-1.0 libraries in Rust land are actually wildly volatile, but I find that's not especially the norm, out of the crates I've used. That said... 0.4 for EIGHT YEARS is also a pretty darn good sign you've solidified the API by now, and should probably just tag a 1.0 finally...


That’s the whole point. Perfectionism and stability are mutually exclusive. What’s ‘worthy of being set in stone’ is very much not set in stone, in an insanely fashion-driven industry like software development.

Version numbers are bogus anyway. For all you care all those libraries could be YY.MM. Semantic versioning is a lie except for the smallest units.

Version numbers are communication. Version 0.x is the clearest way I can imagine to communicate, "do not expect a stable API".

You are conflating what actually _is_ with what you perceive version numbers communicate. The comments above that you are seemingly commenting in support of are talking about "Ready to make breaking changes on a whim," which is factually not true. What you are talking about is some libraries using a version number that you perceive as indicating a lack of stability, even when the actual library is extremely stable (e.g., log and libc).

So you seem to be saying "I don't like how Rust libraries communicate their stability." But that's something wholly different from "Ready to make breaking changes on a whim." Yet your commentary doesn't distinguish these concepts and instead conflates them.


Version 0.x communicates, "the API is not stabilized, and you must be prepared for breaking changes on a whim". If a library did not want to communicate that, it would release a version >=1.x.

And when most library authors communicate "the API is not stabilized, you must be prepared for breaking changes on a whim", then yeah, of course I am going to perceive that as a lack of stability.


That doesn't address my point, which is that you are conflating communication of a thing with the thing itself.

Moreover, it's unclear to me if you're aware that, in the Rust ecosystem, 0.x and 0.(x+1) are treated as semver incompatible releases, while 0.x.y and 0.x.(y+1) are treated as semver compatible releases. While the actual semver specification says "Anything MAY change at any time. The public API SHOULD NOT be considered stable." when the major version is 0, this isn't actually true in the Rust crate ecosystem. For example, if you have `log = "0.4"` in your `Cargo.toml`, then running a `cargo update` will only bump you to semver compatible releases without breaking changes (up to a human's ability to adhere to semver).

Stated more succinctly, in the Rust ecosystem, semver breaking changes are communicated by incrementing the leftmost non-zero version component. In other words, you cannot correctly interpret what version numbers mean in the Cargo crate ecosystem using only the official semver specification. You also need to read the Cargo documentation: https://doc.rust-lang.org/cargo/reference/semver.html#change...

(emphasis mine)

> This guide uses the terms “major” and “minor” assuming this relates to a “1.0.0” release or later. Initial development releases starting with “0.y.z” can treat changes in “y” as a major release, and “z” as a minor release. “0.0.z” releases are always major changes. This is because Cargo uses the convention that only changes in the left-most non-zero component are considered incompatible.
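To make this concrete, here's a minimal sketch of how Cargo interprets these requirements (the scenario is hypothetical; `log` is just the example crate from above):

    # Cargo.toml
    [dependencies]
    log = "0.4"   # caret requirement: allows >=0.4.0, <0.5.0

    # `cargo update` may move you from, say, 0.4.3 to a newer 0.4.x,
    # which Cargo treats as semver compatible. It will never jump to
    # 0.5.0: with a leading zero, 0.4 -> 0.5 is a breaking change.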

So I repeat: you are conflating perception with what actually is reality. This isn't to say that perception is meaningless or doesn't matter or isn't a problem in and of itself. But it is distinct from what is actually happening. That is, "the Rust crate ecosystem doesn't use semver version numbers in a way that can be interpreted using only the official semver specification" is a distinct problem from "the Rust crate ecosystem makes a habit of introducing breaking changes on a whim." They are two distinct concerns, and conflating them is extremely confusing and misleading.


When "the thing" is "API stability promises made by the author of a library", then communication of that thing is the thing itself.

I know about the semver exception Rust uses where 0.x is considered a different "major version" to 0.y for the purpose of automatic updates. That's not really relevant. I'm talking about communication between humans, not communication between human and machine. By releasing log 0.4.4 after 0.4.3, you're communicating to the machine that it should be safe to auto-update to the new release, but by keeping the version number 0.x, you're communicating to the human that you still don't promise any kind of API stability.


This is the context of the discussion I'm talking about:

    >>> Rust still has a very "Ready to make breaking changes on a whim"
    >>> reputation
    >>
    >> No it doesn't. What on earth are you talking about?
    >
    > I like Rust, but almost all libraries I end up using are on some 0.x version...
That initial complaint is talking about Rust being ready to "make breaking changes on a whim." But this is factually not true. That is something different from what you perceive the version numbers to mean. Just because several important ecosystem crates are still on 0.x doesn't actually mean that "Ready to make breaking changes on a whim" is a true statement.

> you're communicating to the human that you still don't promise any kind of API stability.

A 1.x version number doesn't communicate that either. Because nothing is stopping someone from releasing 2.x the next day. And so on. Of course, it may usually be the case that a 1.x means the cadence of semver incompatible releases has decreased in frequency. But in the Rust ecosystem, that is also true of 0.x too.

> I'm talking about communication between humans, not communication between human and machine.

Yes, but communication between humans is distinct from describing the Rust ecosystem as ready to make breaking changes on a whim. You are confusing that communication with a description of the actual stability of the Rust ecosystem.


As is the norm for HN and Rust commentary - any slight criticism is met with fury and downvotes.

No crying in the casino, to quote a classic.

I'm not "furious", but I do think your comment was bad and deserved to be downvoted. You're posting a random opinion with nothing to back it up, which is, to boot, factually wrong.

What breaking changes has Rust made "on a whim" ?


> What breaking changes has Rust made "on a whim" ?

I don't know about "on a whim", but this isn't far off in regards to breaking compatibility. And it caused some projects, like Nix, a lot of pain.

https://github.com/rust-lang/rust/issues/127343

https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...


> I don't know about "on a whim"

Probably not the best way to lead, considering that that phrase is the entire root of the disagreement you're chiming in on!

> but this isn't far off in regards to breaking compatibility.

I think it might be worth elaborating on why you think that change "isn't far off" being made "on a whim". At least to me, "on a whim" implies something about intent (or more specifically, the lack thereof) that the existence of negative downstream impacts says nothing about.

If anything, from what I can tell the evidence suggests precisely the opposite - that the breakage wasn't made "on a whim". The change itself [0] doesn't exactly scream "capricious" to me, and the issue was noticed before Rust 1.80.0 released [1]. The libs team discussed said issue before 1.80.0's release [2] and decided (however (un)wisely one may think) that that breakage was acceptable. That there was at least some consideration of the issue basically disqualifies it from being made "on a whim", in my view.

[0]: https://github.com/rust-lang/rust/pull/99969

[1]: https://github.com/rust-lang/rust/issues/127343

[2]: https://github.com/rust-lang/rust/issues/127343#issuecomment...


Your post strongly reinforces Rust's reputation as a language whose designers are willing to break compatibility on a whim. If Rust proponents argue like this, what breakage will not be forced upon Rust users in the future?

Your post itself reinforces the OP's claim.

Edit: Seriously. At this point, it seems clear that the culture around Rust, especially as driven by proponents like you, indirectly has a negative effect on both Rust software and software security & quality overall, as seen with the bug discussed in the OP. Without your kind of post, would Ubuntu have felt less pressured to make the technical management decisions that allowed the above bug?


> Your post strongly reinforces Rust's reputation as a language whose language designers are willing to break compatibility on a whim.

> Your post itself reinforces the OP's claim.

Again, I think it might be worth elaborating precisely what you think "on a whim" means. To me (and I would hope anyone else with a reasonable command of English), making a bad decision is not the same thing as making a decision on a whim, and you have provided no reason to believe the described change falls under the latter category instead of the former.


This new post of yours again reinforces the general notion that, yes, the Rust community is willing to break backwards compatibility closer to "on a whim" than many would like. It reflects extremely poorly on the Rust community in some people's eyes that you and other proponents appear not only unwilling to admit the issues, like the one above that caused some people a lot of pain, but even to talk directly around them.

In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc had made a dialect, and switch to Clang or MSVC or fork gcc. But for Rust, Rust developers just have to suck it up if rustc breaks backwards compatibility, as dtolnay's comment in the GitHub issue I linked indicates. If and once gccrs gets running, that might change.

Though I am beginning to worry, for the Rust specification obtained from Ferrocene might be both incomplete and basically fake, and that might make it easier for rustc and gccrs to drift into separate dialects of Rust. That would be horrible for Rust, and, since in my opinion there should preferably be more viable systems languages, arguably horrible for the software ecosystem as well. I hope there are plans for robust ways of preventing dialects of Rust.


> yes, closer to "on a whim" than many like

You're moving the goalposts. Neither the original claim nor your previous comment in this subthread used such vague and weakening qualifiers to "on a whim".

And even those still don't say anything about what exactly you mean by "on a whim" or how precisely that particular change can be described as such, though at this rate I suppose there's not much hope of actually getting an on-point answer.

> the Rust community is willing to break backwards compatibility

Again, the fact that Rust can and will break backwards compatibility is not in dispute. It's specifically the claim that it's done "on a whim" that was the seed of this subthread.

> appear to not only be unwilling to admit the issues

I suggest you read my comment more carefully.

I also challenge you to find anyone who claims that the changes in Rust 1.80.0 did not cause problems.

> but even directly talk around the issues.

Because once again, the existence of breaking changes and/or their negative downstream impact is not what the original comment you replied to was disputing! I'm not sure why this is so hard to understand.

> In C and C++ land, if gcc (as a thought experiment) tried breaking backwards compatibility by changing the language, people would be flabbergasted, complain that gcc made a dialect, and switch to Clang or MSVC or fork gcc.

No need for a thought experiment. Straight from the GCC docs [0]:

> By default, GCC provides some extensions to the C language that, on rare occasions conflict with the C standard.

> The default, if no C language dialect options are given, is -std=gnu23.

> By default, GCC also provides some additional extensions to the C++ language that on rare occasions conflict with the C++ standard.

> The default, if no C++ language dialect options are given, is -std=gnu++17.

Also from the GCC docs [1]:

> The compiler can accept several base standards, such as ‘c90’ or ‘c++98’, and GNU dialects of those standards, such as ‘gnu90’ or ‘gnu++98’.

So not only has GCC "chang[ed] the language" by implementing extensions that can conflict with the C/C++ standards, GCC has its own dialect and uses it by default. And yet there's no major GCC fork and no mass migration to Clang or MSVC specifically because of those extensions.

And it's not like those extensions go unused either; perhaps the most well-known example is Linux, which only officially supported compilation via GCC for a long time precisely because Linux made (and makes!) extensive use of GCC extensions. It was only after a concerted effort to remove some of those GNU-isms and add support for others into Clang that mainline Clang could compile mainline Linux [2].
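To make the dialect point concrete, here's a small sketch (the file and macro names are made up) using two GNU extensions, statement expressions and `__typeof__`:

    /* ext.c */
    #include <stdio.h>

    #define MAX(a, b) ({ __typeof__(a) _a = (a); \
                         __typeof__(b) _b = (b); \
                         _a > _b ? _a : _b; })

    int main(void) {
        printf("%d\n", MAX(2, 3));   /* prints 3 */
        return 0;
    }

    /* gcc ext.c                           -> compiles (default gnu dialect)   */
    /* gcc -std=c99 -pedantic-errors ext.c -> error: ISO C forbids             */
    /*                                        braced-groups within expressions */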

> I hope that there are plans for robust ways of preventing dialects of Rust.

This is not a realistic option for any language that anyone is free to implement, for what I hope are obvious reasons.

[0]: https://gcc.gnu.org/onlinedocs/gcc/Standards.html

[1]: https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html

[2]: https://www.phoronix.com/review/clang-linux-53


All day, every day, I see "random opinion with nothing to back it up" posts on Hacker News that are not voted down - discuss.

Rust users read more xkcd than the average hn poster. Or less. Take your pick.

Yeah, sweeping hot takes with very little to back them up do tend to get downvoted.

More than anything, the Rust community is hyper-fixated on stability and correctness. It is very much the antithesis to “move fast and break things”.


> More than anything, the Rust community is hyper-fixated on stability and correctness. It is very much the antithesis to “move fast and break things”.

This is incorrect.

https://devclass.com/2024/08/19/rust-1-80-0-breaks-existing-...


The high level of visibility that this incident received is a great example of the point I'm making. Can you name one more?

I'm going to throw in another one.

Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.

You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"

They finally addressed this bug -- optionally (the default is still to break the build at the slightest provocation) -- in January this year (which, of course, requires you to upgrade your compiler to at least that version)

https://blog.rust-lang.org/2025/01/09/Rust-1.84.0/#cargo-con...
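For reference, opting in looks roughly like this (the crate name and versions are made up):

    # Cargo.toml
    [package]
    name = "example"
    version = "0.1.0"
    rust-version = "1.70"   # declared minimum supported Rust version

    # .cargo/config.toml -- since Rust 1.84, tell the resolver to prefer
    # dependency versions whose declared rust-version your toolchain meets:
    [resolver]
    incompatible-rust-versions = "fallback"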

What a bunch of shiny-shiny chasing idiots with a brittle build system. It's designed to ratchet forward your dependencies and throw new bugs and less-well-tested code at you. That's absolutely exhausting. I'm not your guinea pig, I want to build reliable, working systems.

gcc -std=c89 for me please.


Let the one without sin throw the first stone. Please describe to us how you do dependency management with C.

Also picking C89 over any later iteration is bananas.


> Please describe to us how you do dependency management with C.

    dnl configure.ac: ask pkg-config for a minimum version of libfoo
    PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])
    dnl then verify the header and a known symbol are actually usable
    AC_CHECK_HEADER([foo.h], ,[AC_MSG_ERROR([Cannot find foo header])])
    AC_CHECK_LIB([foo],[foo_open], ,[AC_MSG_ERROR([Cannot find foo library])])
There are additionally versioning standards for shared objects, so you can have two incompatible versions of a library live side-by-side on a system, and binaries can link to the one they're compatible with.

> Cargo always picks the newest version of a dependency, even if that version is incompatible with the version of Rust you have installed.

> PKG_CHECK_MODULES([libfoo], [libfoo >= 1.2.3])

This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.

> You're like "build this please", and it's like "hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad"

Also possible in the case of your example.

> What a bunch of shiny-shiny chasing idiots with a brittle build system.

Autoconf as an example of a non-brittle build system? Laughable at best.


This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.

> This also picks the newest version that might be incompatible with your compiler, if the newer version uses a newer language standard.

It doesn't; it just verifies that what the user has already installed (with apt/yum/dnf) is suitable. It certainly doesn't connect to the network and go looking for trouble.

The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:

    #if __STDC_VERSION__ >= 199901L
        /* C99 definitions */
    #else
        /* pre-C99 definitions */
    #endif
For linking, shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).
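Roughly, the mechanism looks like this (libfoo is hypothetical):

    # Two incompatible major versions, each stamped with its own SONAME:
    gcc -shared -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.0 foo_v1.o
    gcc -shared -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo_v2.o
    ln -s libfoo.so.1.0.0 libfoo.so.1   # runtime symlinks, resolved by SONAME
    ln -s libfoo.so.2.0.0 libfoo.so.2
    ln -s libfoo.so.2 libfoo.so         # dev symlink: what -lfoo uses at link time

    gcc main.c -L. -lfoo                # binary records NEEDED: libfoo.so.2

Old binaries keep loading libfoo.so.1 while newly linked ones pick up libfoo.so.2.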

> This is whataboutism to deflect from Rust's basic ethos being to pull in the latest shiny-shiny.

No. You set a bar for Cargo that the solution you picked does not reach either.

> It doesn't, it just verifies what the user has already installed (with apt/yum/dnf) is suitable.

There's no guarantee that that is compatible with your project though. You might be extra unlucky and have to bring in your own copy of an older version, plus its dependencies.

Perfect example of the pile of flaming garbage that is C dependency "management". We haven't even mentioned cross-compiling! It multiplies all this C pain a hundredfold.

> The onus is on library authors to write standard-agnostic, compiler-agnostic headers, and that's what they do:

You're assuming that the used feature can be represented in older language standards. If it can't, you're forced to at least have that newer compiler on your system.

> [...] standard-agnostic, compiler-agnostic headers [...]

> For linking, shared objects have their [...]

Compiler-agnostic headers that get compiled to compiler-specific calling conventions. If I recall correctly, GCC basically dictates them on Linux. Anyway, I digress.

> shared objects have their own versioning to allow backwards-incompatible versions to exist simultaneously (libfoo.so.1, libfoo.so.2).

Oooh, that one is fun. Now you have to hope that nothing was altered when that old version got built for that new distro. No feature flag changed, no glibc-introduced functional change.

> hey I helpfully upgraded this module! oh and you can't build this at all, your compiler is too old granddad

If we look at your initial example again, Cargo followed your project's build instructions exactly and unfortunately pulled in a package that is for some reason incompatible with your current compiler version. To fix this you have the ability to just specify an older version of the crate and carry on.

Looking at your C example, well, I described what you might have to do and how much manual effort that can be. Being forced to use a newer compiler can be very tedious, be it due to bugs, stricter standards adherence, or just the fact that you have to do it.

In the end, it's not a fair fight comparing dependency management between Rust and C. C loses by all reasonable metrics.


You're just in attack mode now.

I listed a specific thing -- that Rust's ecosystem grinds people towards newness, even if it goes so far as to actually break things. It's baked into the design.

I don't care that it's hypothetically possible for that to happen with C; I care that, practically, I've never seen it happen.

Whereas, the single piece of software I build that uses Rust, _without changing anything_ (already built before, no source changes, no compiler changes, no system changes) -- cargo install goes off to the fucking internet, finds newer packages, downloads them, and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.

Show me a C environment that does that, and I'll advise you to throw it out the window and get something better.

There have been about 100 language versions of Rust in the past 10 years. There have been 7 language versions of C in the past 40. They are a world apart, and I far prefer the C world. C programmers see very little reason to adopt "newer" C language editions.

It's like a Python programmer asking how a Perl programmer copes. The Python programmer is on a permanent rewrite treadmill, because the Python team regularly abandons Python 3.<early version> and introduces Python 3.<new version> with new features that you can't use on earlier Python versions. The Perl programmer reminds them that the one Perl binary supports and runs every version of Perl from 5.8 onwards, simultaneously, and that making all the developers churn their code over and over again to keep up with the latest versions would be madness; the most important thing is to make sure old code keeps running without a single change, forever. The two people are simply on different planets.


> I don't care that it's hypothetically possible for that to happen with C, I care that practically, I've never seen it happen.

I don't think your anecdotal experience is enough to redeem the disarray that is C dependency management. It's nice to pretend though.

> and tells me the software it could build last week can't be built any more. What. The. Fuck. Cargo, I didn't ask you to fuck up my shit - but you did it anyway. Make has never done that to me, nor has autoconf.

If you didn't get my point in the previous comment, let me put it more frankly: it is a skill issue on your part if you aren't pinning your crates to specific versions while depending on them remaining constant. This is not Cargo's fault.
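Concretely, pinning is a one-line change (the crate and version here are just examples), and for binaries, `cargo install --locked` reuses the package's shipped Cargo.lock instead of re-resolving:

    # Cargo.toml
    [dependencies]
    log = "=0.4.20"   # '=' requires exactly this version, never auto-bumped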

> Make has never done that to me, nor has autoconf.

Yeah, because they basically guarantee nothing, nor do they let you work around any of the potential issues I've already described.

But you do get to wait, for the thousandth time, for it to check the size of some types. All those checks are literal proof of how horrible the ecosystem is.

> There have been about 100 language versions of Rust in the past 10 years

There are actually four editions (2015, 2018, 2021 and 2024), and they're all interoperable: crates on different editions can be mixed in the same build.
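(The edition is a per-crate setting, so a whole dependency graph never has to move at once; a sketch with made-up names:)

    # Cargo.toml -- this crate opts into one edition; its dependencies
    # may target the 2015/2018/2021/2024 editions and still link together.
    [package]
    name = "example"
    version = "0.1.0"
    edition = "2021"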

> C programmers see very little reason to adopt "newer" C language editions.

Should've stopped at the word "reason".


Most of your post completely falls apart when considering https://github.com/rust-lang/rust/issues/127343

It's not relevant to this thread.

I would use a newer version of C, and consider picking C++, if the choice were between C, C++, Ada, and Rust. (If pattern matching would be a large help, I might consider Rust.)

For C++, there are vcpkg and Conan. While, according to many, they are significantly or much worse options than what Rust offers, in large part due to C++'s cruft and backwards compatibility, they do exist.


> For C++, there is vcpkg and Conan

But I asked about C.


I looked it up, both vcpkg and Conan support C as well as C++, at least according to their own descriptions.

The way you've described both of those solutions demonstrates perfectly how C package management is an utter mess. You claim to be very familiar with the C ecosystem yet you describe them based on their own description. Not once have you seen them in use? Both of those are also (only) slightly younger than Rust by the way.

So after all these decades there's maybe something vaguely tolerable that's also certainly less mature than what even Rust has. Congrats.


> Please describe to us how you do dependency management with C

dnf or apt, depending on whether it's Fedora/EL or Debian...


You're always building for the same distribution and release?

A small number of slowly moving variants, but for a given deployment it's roughly stable.

I suppose I missed the important case of Yocto though


It received a lot of attention and "visibility" because it caused a lot of pain to some people. I am befuddled why you would wrongly attempt to dismiss this undeniable counter-example.

Sorry, but your argument is incorrect.


I suspect you miss the point.

Somebody is attempting to characterize the Rust community in general as being similar to other programming communities that value velocity over stability, such as the JS ecosystem.

I’m pointing out that incidents such as this are incredibly rare, and extremely controversial within the community, precisely because people care much more about stability than velocity.

Indeed, the design of the Rust language itself is in so many ways hyper-fixated on correctness and stability - it's the entire raison d'être of the language - and this is reflected in the culture.


Comparing with the JS ecosystem is very telling. Some early Rust developers came from the JS ecosystem (especially at Firefox), and Cargo takes inspiration from it, for example with lock files. But the JS ecosystem is a terrible baseline to compare with regarding stability; comparing a language's stability with the JS ecosystem says very little. You should have picked a systems language to compare with.

And your post is itself a part of the Rust community, and it is itself an argument against what you claim in it. If you cannot or will not own up to the 1.80 time-crate debacle, or proactively mention it as a black mark that weighs on Rust's conscience and acknowledge that it will take time to rebuild trust and confidence in Rust's stability because of it, well, your priorities, by which I mean the Rust community's priorities, are clear, and they do not, in practice, lie with stability, safety and security, nor with being forthcoming.


Ok, I'm going to call it here. I don't know what this comment (or account) is, and I'm not particularly interested in a bad faith flamewar.

It is not "bad faith", or insincere in any way. If you actually considered it or cared, you could use it as constructive criticism.

Which would be borne out with discourse, not hate - but you do you

> It is a culture of change and the latest and greatest

Good luck achieving anything of long-term value this way.



