Hacker News | orangeboats's comments

Clearly you have not visited China.

Try it someday. You _will_ be surprised by some of the technologies there.


Yeah technologies they obtained from hacking other countries and their corporations lol

why spend millions and a decade doing R&D when you can just hack American companies and steal it all for free!


Interesting. What surprising technologies do they have?

I find it very ironic that you are calling out sockpuppets when your name is literally "RustSupremacist" and your submission history is less than stellar.

Nothing says "intellectual honesty" like necrobumping a 9-day-old post because of moderation concerns.

This was posted less than 2 weeks ago. Why bother doing it again?

https://news.ycombinator.com/item?id=45849204


> I can assure you that the core utils have all already went through static analysers doing more checks than the Rust compiler.

Some checks are pretty much impossible to do statically for C programs because of the lack of object lifetime annotations, so no, this statement can't be right.

It is true that the borrow checker doesn't prevent ALL bugs though.

Furthermore, the "bug" in this case is due to an unimplemented feature causing a flag to be silently ignored... It's not exactly something that any static analyser (or runtime ones for that matter) can prevent, unless an explicit assert/todo is added to the codepath.
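To make that concrete, here is a minimal hypothetical sketch (not the actual uutils code) of the difference between a flag that is silently ignored and one that fails loudly with an explicit todo!():

    // Hypothetical sketch, not the actual uutils code: silently ignoring an
    // unimplemented flag versus failing loudly the moment the path is hit.
    fn handle_flag_silently(flag: &str) {
        match flag {
            "-u" => println!("using UTC"),
            "-r" => { /* not implemented yet -- accepted and ignored */ }
            _ => eprintln!("unknown flag {flag}"),
        }
    }

    fn handle_flag_loudly(flag: &str) {
        match flag {
            "-u" => println!("using UTC"),
            // Panics as soon as the untested path is actually exercised.
            "-r" => todo!("--reference is not implemented"),
            _ => eprintln!("unknown flag {flag}"),
        }
    }

    fn main() {
        handle_flag_silently("-r"); // does nothing, the bug goes unnoticed
        handle_flag_loudly("-u");   // fine
        // handle_flag_loudly("-r") would panic with the todo!() message.
    }

No static analyser will complain about the first version; the second at least turns the gap into a crash instead of silently wrong behaviour.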


Well, you can annotate C code to do a lot more than lifetime annotations today. The tooling for C analysis is best in class.

And even without annotations, you can prove safe a lot of constructs by being conservative in your analysis especially if there is no concurrency involved.

Note that I wasn't commenting on this specific issue in particular. It's more about my general fatigue with people implying that rewrites in Rust are always better or should always be done. I like Rust, but the trendiness surrounding it is annoying.


You can do a lot of things. Yes, there are formally verified programs and libraries written in C. But most C programs are not, including the GNU coreutils (although they are battle-tested). It's just that the effort involved is higher and the learning curve for verifying C code correctly is staggering. Rust provides a pretty good degree of verification out of the box, for free.

Like any trendy language, you've got some people exaggerating the powers of the borrow checker, but I believe Rust has generally brought about a lot of good outcomes. If you're writing a new piece of systems software, Rust is pretty much a no-brainer. You could argue for a language like Zig (or Go, where you're fine with a GC and a bit more boilerplate), but that puts even more spotlight on the fact that C is just not a viable choice for most new programs anymore.

The rewrites-in-Rust are more controversial, and they are criticized here on HN just as much as they are hyped, but I think many of them brought a lot of good to the table. It's not (just?) because the C versions were insecure, but mostly because a lot of these new Rust tools replaced C programs that had become quite stagnant. Think of ripgrep, exa/eza, sd, nushell, delta and difft, dua/dust, the various top clones. And these are just command line utilities. Rewriting something in Rust is not an inherently bad idea if what you are replacing clearly needs a modern makeover, or if the component is security critical and the code you are replacing has a history of security issues.

I was always more skeptical about the coreutils rewrite project, because the only practical advantage it can bring to the table is more theoretical safety. But I'm not convinced that's enough. The Rust versions are guaranteed not to have memory- or concurrency-related bugs (unless someone used unverified unsafe code, or did something very silly like allocating a huge array and creating their own von Neumann architecture emulator just to prove you can write unsafe code in Rust). That's great, but they are also more likely to have compatibility bugs with the original tools. The value proposition here is quite mixed.

On the other hand, I think that if Ubuntu and other distros persist in trying to integrate these tools the long-term result will be good. We will get a more maintainable codebase for coreutils in the future.


> Well, you can annotate C code to do a lot more than lifetime annotations today. The tooling for C analysis is best in class.

Where can I see these annotations for coreutils?


> It is true that the borrow checker doesn't prevent ALL bugs though.

True, but "prevents all bugs" is pretty much what the "rust is better" debate boils down to. So you end up with rewrites that introduce errors any programmer in any language can make, and since you are doing a full rewrite, that WILL happen no matter what you do.

If that's acceptable, fine; otherwise not. But you cannot hide from it.


The "rust is better" point is always "prevent memory-related bugs from happening".

Memory-related bugs are, like, 70% of the bugs found in programs written in C and C++.

So, by rewriting in Rust, you prevent 70% of new bugs from happening, because a whole class of bugs just cease to exist.


But that's hardly relevant to coreutils, is it? Do these utilities even manage memory?

These are command line utilities meant to be a human porcelain for libc. And last I checked, libc was C.

Ideally these should be developed in tandem, and so should the kernel. This is not the case in Linux for historical reasons, but some of the other utilities such as iputils and netfilter are. The kernel should be Rust long before these porcelain parts are.


~70% of bugs in NEW code, in companies that have mottos like "move fast and break things". The same study found that old C and C++ codebases tend to hit these once in a blue moon, and that other bug classes are more prevalent.


Only if you don't use unsafe though. If you look at almost any real-world project written in Rust, you'll find tons and tons of `unsafe` in its dependency tree.


In safe rust, yes. But in unsafe rust, no guarantees. And as long as rust itself is unsafe, you cannot call it safe.


Congratulations to you on being the 10000th? person [0] to miss the point of unsafe/safe.

1. Unsafe doesn't mean the code is actually unsafe. It only tells you that the compiler itself cannot guarantee the correctness of it.

2. Unsafety tells code reviewers to give a specific section of code more scrutiny. Clippy also has an option that requires the programmer to add a comment explaining why the unsafe code is actually safe in that context (a toy sketch follows at the end of this comment).

3. And IF a bug does occur, it minimizes the amount of code you need to audit.

[0]: https://xkcd.com/1053/
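A toy sketch of the pattern that Clippy lint enforces (illustrative only, not taken from any real project):

    // Toy sketch of the Clippy lint `clippy::undocumented_unsafe_blocks`:
    // every `unsafe` block must carry a SAFETY comment explaining why it
    // is sound in this particular context.
    #![warn(clippy::undocumented_unsafe_blocks)]

    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: we just checked that `bytes` is non-empty,
        // so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first_byte(b"hi"), Some(b'h'));
        assert_eq!(first_byte(b""), None);
    }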


You don't silently allow unhandled args.

You do coverage testing, which would have found the missing date -r path.

You do proper code review, which would have found the missing date -r path.

And many coreutils options will not be implemented at all. ENOFIX


>You do coverage testing, which would have found the missing date -r path.

The original coreutils test suite didn't cover the -r path. The same bug would not have been statically discovered in most programming languages, except perhaps the highly functional ones like Haskell.

>You do proper code review, which would have found the missing date -r path.

And in an ideal world there would be no bugs at all. This is pointless -- we all know that we need to do a proper code review, but humans make errors.


Then any replacement project should start with implementing a better test suite in order to know what you're doing. That has been the case with many other utilities such as ntp.

And it should most certainly not be possible to declare options and leave them as empty placeholders. That should be flagged just like an unused variable is flagged. That is a problem with whatever option library they chose to use.
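For instance, with a builder-style parser (a hypothetical sketch assuming the clap crate, which may not be the parser uutils actually uses), an option can be declared, parsed, and then never consulted, and nothing flags it:

    // Hypothetical sketch assuming clap 4's builder API: "reference" is
    // declared and accepted on the command line, but nothing ever reads it,
    // and neither rustc nor clap has anything to say about that.
    use clap::{Arg, ArgAction, Command};

    fn main() {
        let matches = Command::new("date")
            .arg(Arg::new("utc").short('u').long("utc").action(ArgAction::SetTrue))
            .arg(Arg::new("reference").short('r').long("reference"))
            .get_matches();

        if matches.get_flag("utc") {
            println!("(would print UTC time here)");
        }
        // `-r FILE` parses successfully and is then silently dropped.
    }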

That alone should disqualify it from being a replacement yet. We're talking about a stable operating system used in production here. This is simply wrong on so many levels.


>Then any replacement project should start with implementing a better test suite in order to know what you're doing

"Then any replacement project should not include bugs in their code."

Like I said before, broad statements like these are borderline pointless.

Of course we all know the "should"; the real problem is the "how" -- how can you realistically make a "better test suite" when your goal is to create a bug-for-bug compatible replacement project?

And given the size of the original project, how should a better test suite be created?

>That is a problem with whatever option library they chose to use.

Instead of being vague, why not show a precise example of what you are talking about?


Oh no, the Godwin's law equivalent for networking is proving itself once again.

>someone will again complain about the address format, without realizing that shoving in extra address bits on an IPv4 datagram is already a new protocol

So you are having all the pains of transitioning to a new Internet Protocol, but none of the benefits of having an actually huge address space.

https://news.ycombinator.com/item?id=39258290


No, it's very simple.

You set a bit, and if it's set, it's routed using 6 octets. If the bit isn't set, it's routed via 1.1.x.x.x.x.

Yes, legacy can only reach 1.1.x.x.x.x.

Done.


How is your proposal _materially_ different from today's NAT64 and the 64:ff9b::x.x.x.x prefix?
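(For reference, NAT64's well-known prefix from RFC 6052 simply embeds the IPv4 address in the low 32 bits of 64:ff9b::/96; a minimal sketch of the mapping:)

    // Minimal sketch of the RFC 6052 well-known prefix: the IPv4 address is
    // embedded in the low 32 bits of 64:ff9b::/96.
    use std::net::{Ipv4Addr, Ipv6Addr};

    fn nat64_map(v4: Ipv4Addr) -> Ipv6Addr {
        let [a, b, c, d] = v4.octets();
        Ipv6Addr::new(
            0x0064, 0xff9b, 0, 0, 0, 0,
            u16::from_be_bytes([a, b]),
            u16::from_be_bytes([c, d]),
        )
    }

    fn main() {
        // 192.0.2.1 -> 64:ff9b::c000:201
        println!("{}", nat64_map(Ipv4Addr::new(192, 0, 2, 1)));
    }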


It doesn't include IPv6, and therefore none of IPv6's broken everything. It's just 100% IPv4 plus two extra octets.


Do you really think your solution of "IPv4 with extra octets" will not introduce the so-called "broken everything"...?

Whether it's the addition of one octet or twelve octets, you are nevertheless introducing a new Internet Protocol, and therefore you are going to face the reality of introducing a new Internet Protocol.

To think otherwise is delusional and is the reason why the "Godwin's law of networking" has become sort of a meme.


Can't the decompressor still produce a malicious uncompressed file?


Any decompressor can produce a malicious file. Just feed a malicious file to the compressor.


Yes, but currently the decompressors we use (so things like zstd, zlib, 7z) come from a mostly-verifiable source -- either you downloaded it straight from the official site, or you got it from your distro repo.

However, we are talking about an arbitrary decompressor here. The decompressor WASM is sandboxed from the outside world and it can't wreak havoc on your system, true, but nothing stops it from producing a malicious uncompressed file from a known good compressed file.


The format-specific decompressor is part of the compressed file. Nothing here crosses a security boundary. Either the compressed file is trustworthy and therefore decompresses into a trustworthy file, or the compressed file is not trustworthy and therefore decompresses into a non-trustworthy file.

If the compressed file is malicious, it doesn't matter whether it's malicious because it originated from a malicious uncompressed file, or is malicious because it originated from a benign uncompressed file and the transformation into a compressed file introduces the malicious parts due to the bundled custom decompressor.


If the decompressor is included in the compressed file and it's malicious, the file can hardly be called known good.


But also, I guess the logic of the decompressor could output different files on different occasions, for example if it detects a victim, making it difficult to verify.


If it can "detect a victim", then the sandbox is faulty. The decompressor shouldn't see any system details. Only the input and output streams.
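Something like the following is all the decompressor should ever be handed (a hypothetical interface sketch, not any real WASM runtime API):

    // Hypothetical interface sketch, not any real WASM runtime API: the
    // decompressor is a pure bytes-to-bytes transformation with no access to
    // clocks, environment variables, the network, or the filesystem.
    trait SandboxedDecompressor {
        fn decompress(&self, input: &[u8]) -> Result<Vec<u8>, String>;
    }

    // Trivial pass-through implementation, only to show the shape of the API.
    struct Identity;

    impl SandboxedDecompressor for Identity {
        fn decompress(&self, input: &[u8]) -> Result<Vec<u8>, String> {
            Ok(input.to_vec())
        }
    }

    fn main() {
        assert_eq!(Identity.decompress(b"hello").unwrap(), b"hello");
    }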


For any P2P protocols to work, including BitTorrent, you need to be able to access the other nodes. In today's world, that's becoming harder because everyone and their granny are gradually being hidden behind a strict CGNAT.


IPv6 ??


That's the implicated solution to this problem, correct.

But you have a bunch of people on HN opposed to it for no good reason other than "IPv4 works for me™".


Usually it all boils down to "aesthetics". I am pretty sure most would agree that Gall-Peters is atrocious looking!


Sometimes it's not just "your code". Strings are often interchanged and sent to many other parties.

And some of the codepoints, such as the surrogate codepoints (which MUST come in pairs in properly encoded UTF-16), may not break your code but break poorly-written spaghetti-ridden UTF-16-based hellholes that do not expect unpaired surrogates.

Something like:

1. You send a UTF-8 string containing normal characters and an unpaired surrogate, "Hello \uDEADworld", to FooApp.

2. FooApp converts the UTF-8 string to UTF-16 and saves it in a file. All without validation, so no crashes will actually occur; worst case scenario, the unpaired surrogate is rendered by the frontend as "�".

3. The next time it reads the file, it is expecting normal UTF-16, and it crashes because of the unpaired surrogate.

(A more fatal failure mode of (3) is an out-of-bounds memory read if the unpaired surrogate happens at the end of the string.)
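A minimal Rust sketch of that strict-versus-lossy mismatch (illustrative only, not any particular app's code):

    // Minimal sketch of the failure mode: UTF-16 code units for "Hello "
    // followed by an unpaired surrogate (0xDEAD).
    fn main() {
        let units: Vec<u16> = "Hello ".encode_utf16().chain([0xDEAD_u16]).collect();

        // A strict conversion rejects the unpaired surrogate outright...
        assert!(String::from_utf16(&units).is_err());

        // ...while a lossy conversion silently replaces it with U+FFFD,
        // which is what a "�" in the frontend usually means.
        assert_eq!(String::from_utf16_lossy(&units), "Hello \u{FFFD}");
    }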


I had a GitHub Action with a phrase like 'filter: \directory\u02filename.txt' or something close to this, and the filename got interpreted as a UTF-8 character rather than a string literal, causing the application to throw an error about invalid UTF-8 in the path. Had to go about setting it up to quote the strings differently, but you get to see a lot of these issues in the wild.


>A referendum was held in April 1975, where over 97% of voters supported abolishing the monarchy and joining India.

I know you are trying to express that the annexation is supported by the Sikkimese... But a referendum is by far the worst tool you can use to support your point. Even more so when South Asia is notorious for election rigging.

By the same logic, Crimea should "rightfully" be under Russia's rule.


> I know you are trying to express that the annexation is supported by the Sikkimese

Do you have any sources that say they don't support being part of India ?

> Even more so, when South Asia

Indira Gandhi imposed the Emergency in 1975 and lost in 1977. If there was widespread election rigging, she wouldn't have lost that election.

> By the same logic, Crimea should "rightfully" be under Russia's rule.

Can you point to a separatist movement from the people of Sikkim, 1975 to today ? Without that, your comment just doesn't make any sense. You come across as CCP shill.


>Do you have any sources that say they don't support being part of India ?

I never said anything to that effect.

I simply said that a referendum, especially one regarding the sovereignty of a nation, is a wildly inaccurate tool for assessing popularity.

Hence,

>You come across as CCP shill.

Is a very, very disappointing direction that you have decided to take the discussion toward.


As someone who has been to Sikkim and spoken with people there, I can say they are proud and happy to be Indians. In fact, they receive several concessions from the Indian state for being part of the Union - for example, they are exempt from income tax. While a referendum may not always be the best measure of sentiment, the absence of insurgency in Sikkim (unlike in some other regions in India) suggests that people are broadly content with being part of India.


According to your logic, you could go to eastern Ukraine and ask the same question in ten years. Or maybe even less.


The parallel to Ukraine is Tibet, not Sikkim. Nice try


If pooping on the beach is a Canadian tradition, then you're right.


If you have been there and know that the Sikkimese are comfortable with being part of India, that's good!

I am only resistant to the idea that "referendums = popularity".

