
> What Rust does is flip this. The default is the safe path. So instead of risking forgetting smart pointers and thread safe containers, the compiler keeps you honest.

For what it's worth, the same is true of Swift. But since much of the original Rust team was also involved with Swift language development, I guess it's not too much of a surprise. The "unsafe" API requires some deliberate effort to use; no accidents are possible there. It's all very verbose, through a very narrow window of opportunity, if you do anything unsafe.


I have such a love-hate relationship with Swift. It genuinely has some great ergonomics, and I see it as a sister language to Rust.

I just wish it were more cross-platform (I know it technically works on Linux and Windows… but it's not a great experience) and that it didn't have so much churn (though they've stopped futzing with the core components as much since Swift 4+).

I also wish it was faster. I did an Advent of Code against some friends. I picked Rust, they picked Swift. The Rust code was running circles around their Swift ones even when we tried to keep implementations the same.

Anyway, that's a rant, but I think to your point, I feel like Swift could have been as big as Rust, or bigger given the easier use, with many of the same guarantees. I just wish the early years were more measured.


> The Rust code was running circles around their Swift ones even when we tried to keep implementations the same.

I've done Advent of Code for a few years -- even JavaScript implementations, if using a good (optimal) algorithm, are highly performant, so I'm suspicious of the claim. In most AoC problems, if your code's performance is observably different between languages, it's the fault of the algorithm, not the language. But perhaps you are referring to well-profiled differences, even if there are no observable differences.

That said, in projects other than AoC I've compared Swift to C++, and it's hard to deny that a lower-level language is faster than Swift. But Swift itself is certainly fast compared to most dynamically typed or interpreted languages like Python, JS, Ruby, etc., which are vastly slower.


Swift is fast, but it's easy to accidentally write slow code, and there is a not-insignificant amount of overhead from reference counting and constant dynamic table lookups.

When I say Rust ran circles around it, I mean the difference was not noticeable unless you timed it, and was on the order of 200ms vs 400ms, or 3 seconds vs 5 seconds, so nothing to write home about necessarily.


This is not the first time I've wanted to understand a bit better the performance differences today between these approaches: Rust without a garbage collector, Swift with ARC reference counting, and garbage-collected languages such as JavaScript.

I know JavaScript has an unfair advantage here, since the competition between V8 and the other JavaScript engines has been huge over the years, and garbage collection in JS is not often a problem. I see more people struggling with the JVM GC and its spikes in resource usage.

I've also heard that the Erlang VM (whether the code is written in Elixir or Erlang itself) implements GC at a different level: not globally, but per process, in a more granular way.

Is there a good resource that compares the current state of performance across those languages or approaches?


Apple has pushed OS updates to help the Asahi team, though they did not publicize this, of course.


Source for this claim?



You got downvoted unfairly, asking for sources is totally normal.


This has been long established. marcan_42 has repeatedly posted about this here, on Twitter, and on Reddit, to name a few locations.


It might be "long established" but plenty of people didn't know this, including me.


[flagged]


Sometimes it's good to ask. One may get additional context that way. And if that happens, it's a good thing.


Hi, this is HN. It's perfectly reasonable to ask for sources. There isn't any need to be so hostile.


Depends on the context. The OP, IMO, was being passive-aggressive, almost incredulous.

As I said, every time anything comes up about Asahi, it is guaranteed that at least two threads will occur: this one, and a comment about the VTuber. At this point - 2 years after the project began - it adds little value to the conversation, to the extent that the project lead, who was previously very active here, doesn't contribute anymore.


Commenting on the fairness of votes is against the HN commenting guidelines.


I used an iPhone 4s from 2011 until end of 2018. Probably most remarkably, I never put a case or any kind of screen protector on it. The thing was still practically scratch free when I stopped using it. It’s still sitting in a drawer next to me.


> the article doesn't state Heroku by name

Heroku is mentioned in the first sentence.


> Whatsapp is the de-facto communication tool in India and LatAm from experience

And much of Europe.


As do many other countries. You really cannot live without it in much of the world.


Rich Hickey talks about this, calling it "hammock-driven development."

https://www.youtube.com/watch?v=f84n5oFoZBc

Aaron Sorkin has also touched on this:

> Most of the time, me writing looks—to the untrained eye—like someone watching ESPN. The truth is if you did a pie chart of the writing process, most of the time is spent thinking. When you’re loaded up and ready to go—when you’ve got that intention and obstacle for the first scene that’s all you need. For me at least, getting started is 90% of the battle. The difference between page zero and page two is all the difference in the world.


They are typically referred to as Porter-Duff blend modes, so I assume these algorithms were invented mainly by those guys (Thomas Porter and Tom Duff).


This set me on an interesting trail. I haven’t found anything yet that links Porter and Duff to the modern blend modes as we know them, but I did find this 1984 SIGGRAPH paper they authored that appears to have laid much of the groundwork! https://graphics.pixar.com/library/Compositing/paper.pdf
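
That 1984 paper is indeed where the algebra of compositing operators (including "over") comes from. As a minimal sketch in C (my own illustration, not code from the paper), the classic "over" operator on premultiplied-alpha pixels looks something like this:

  #include <stdio.h>

  /* One premultiplied-alpha pixel: colour channels already scaled by alpha. */
  typedef struct { float r, g, b, a; } Pixel;

  /* Porter-Duff "over": out = src + dst * (1 - src.a).
     With premultiplied alpha the same formula applies to every channel. */
  static Pixel over(Pixel src, Pixel dst) {
      float k = 1.0f - src.a;
      Pixel out = { src.r + dst.r * k, src.g + dst.g * k,
                    src.b + dst.b * k, src.a + dst.a * k };
      return out;
  }

  int main(void) {
      Pixel half_red = { 0.5f, 0.0f, 0.0f, 0.5f };  /* 50% opaque red, premultiplied */
      Pixel blue     = { 0.0f, 0.0f, 1.0f, 1.0f };  /* opaque blue */
      Pixel out = over(half_red, blue);
      printf("r=%.2f g=%.2f b=%.2f a=%.2f\n", out.r, out.g, out.b, out.a);
      /* prints r=0.50 g=0.00 b=0.50 a=1.00 */
      return 0;
  }

All the other Porter-Duff operators (in, out, atop, xor, ...) are just different coefficients in front of src and dst.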


I don't know the history, but there is supposedly a fixed set of Porter-Duff blend modes, i.e. specific algorithms. For example, this C++ library:

> Blend2D supports all Porter & Duff compositing operators

https://blend2d.com/

Also, the link shared by one of the sibling comments can be searched for "Porter" to see how they are referenced:

https://drafts.fxtf.org/compositing/


Same. I've written probably more C++ than any other language, and I have a lot of respect for the language overall, but I recently had to use only C for something, and I found it rather refreshing. My use case didn't hit a lot of the pain points of C, and there are fewer ways of doing things, fewer options, compared to C++, so my code ended up feeling more straightforward and readable as a result.


C23 says hello.

C is turning into everything C++ has, minus OOP and the additional type safety.


Examples? Because the list of accepted proposals seems conservative. https://en.wikipedia.org/wiki/C2x https://thephd.dev/c23-is-coming-here-is-what-is-on-the-menu


You have to look forward to what is on the plate post-C23, like lambdas, improved constexpr, more _Generic support, and so forth.

https://www.open-std.org/jtc1/sc22/wg14/www/wg14_document_lo...
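
For example, C23's constexpr (objects only so far; the proposals would extend it further) already reads a lot like its C++ counterpart. A minimal sketch, assuming a C23 compiler:

  /* C23: a constexpr object is a genuine compile-time constant,
     usable where older C needed a macro or an enum trick. */
  constexpr int kBufSize = 64;

  static char buffer[kBufSize];   /* array size must be a constant expression */

  int remaining(int used) {
      return kBufSize - used;
  }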


Lambdas and some other complicated-looking extensions have been on the plate for quite some time, and it seems like none of them made it into C23. Lambdas in particular have been promoted for the most part by a single person AFAICT. "C is turning into everything C++ has" is a drastic misrepresentation.


I think I take the middle position between you and GP: C is slowly working its way towards unnecessary complexity. Most of the 'X language is a C replacement' posts and threads here on HN seem to disregard C's development (particularly from 2017 onward) through committee standards releases. Everyone keeps arguing about Rust, Zig, Odin, etc., and claiming one or the other is clearly more C-like and the others are too complicated/expansive/growth-prone. But no one ever stops and points out that the simple C in their heads is not C in 2022, but C in 1990.

C is not yet at the point where added features, idioms, or concepts have totally removed the ability to return to or regularly restrict yourself to some ‘simple’ subset of the language. However, the push to add more and more to the standard does not inspire hope that this situation will remain.

So while I don’t think it’s there yet, it certainly looks like complexity via continual expansion is the path forward. I may not like it, but it does seem to be a reasonable supposition by GP.


I agree, there are some proposals in there that I doubt fit well with the culture. However, what has actually been accepted that makes you think the C in people's heads is not C in 2022? In my head, the biggest practical change from 1990 C is declare-anywhere (including declare-in-for-loop), which in my estimation has afforded a lot of added ergonomics at practically zero semantic cost. The next most relevant one is the standardization of memory models, which is a good one AFAICT (but I've used it only a little so far). One unfortunate addition from 1999, VLAs, has even been demoted in the following revision.
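
For reference, a minimal sketch of the declare-anywhere ergonomics:

  /* C99 and later: declare variables where they are used,
     including in the for-loop header. C90 required all
     declarations at the top of the block. */
  int sum_squares(const int *v, int n) {
      int total = 0;
      for (int i = 0; i < n; i++) {
          int sq = v[i] * v[i];
          total += sq;
      }
      return total;
  }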

In 2022 I can jump into basically any C codebase and be immediately productive. Which can't be said of some other languages.


I don't think there is any one addition to C that has moved it too far up the complexity hierarchy. It is the cumulative effect of all the proposals submitted, some of which are approved, and the movement to continue adding more. Like I said, C still enables developers to restrict themselves to a simple, long-existing subset of features and idioms, making coding in C a productive exercise for experienced developers. But the continued pushes from all directions, with varying likelihood of being accepted, to change the language wear on me, to the point that I continually want to just make a C99 clone with my preferred improvements and use that instead.

Also, I suspect I phrased the 'C in head vs C in 2022' line backwards from what I thought I was saying. In my mind I was saying that the arguments for the simplicity of C had less to do with the language as it exists now and more to do with the closeness of the language to some ISA / actual machine's operating characteristics in the past. It was that closeness that enabled (or necessitated) C's simplicity.

It wasn't about the changes the committee has made to the language since the 1990s. The tie-in point was (in my mind) implied to be that, now that C has become even less of a mapping to actual hardware, the simplicity sought in a C replacement should look different; but commenters seem to be looking at it at a very surface level.


Consider that some of the C23 changes have in fact cleared up ambiguities and removed support for ancient hardware. So I find that the renewed pushes behind the standard are not all promises of eventual disintegration. There are a few accomplished and considerate people in WG14. If you visit the page I linked above you'll get a sense of that.

I never particularly cared about ISAs - when I think about "simplicity" and "closeness to hardware", I mean control over data layout and control flow. Data layout and the architecture of the global data flow are often the key to 90-100% of the performance gains you can expect; and more or less stable binary interfaces mean interoperability and modularity, minimizing ecosystem lock-in. Cases where you need a particular instruction are rare, and there you are recommended to code it in assembly or with compiler extensions.
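
A minimal sketch of the kind of layout control I mean (my own illustration): the same data as an array of structs versus a struct of arrays, where a pass over a single field has very different cache behaviour:

  #include <stddef.h>

  #define N 100000

  /* Array of structs: each particle's fields sit together in memory. */
  typedef struct { float x, y, z, mass; } Particle;
  static Particle aos[N];

  /* Struct of arrays: each field is contiguous across all particles. */
  static struct { float x[N], y[N], z[N], mass[N]; } soa;

  /* A pass that only reads mass drags whole 16-byte structs through
     the cache in the AoS layout, but streams a dense float array in SoA. */
  float total_mass_aos(void) {
      float sum = 0.0f;
      for (size_t i = 0; i < N; i++) sum += aos[i].mass;
      return sum;
  }

  float total_mass_soa(void) {
      float sum = 0.0f;
      for (size_t i = 0; i < N; i++) sum += soa.mass[i];
      return sum;
  }

Same data, same algorithm; the layout choice alone decides the memory traffic.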


I did not assert that C23 was it, rather that this is how it looks going forward.

What did not land in C23, will land in C26, C29, ....


It is conservative, except for nullptr, which duplicates NULL. This violates C's own charter of "provide only one way to do an operation."


Because NULL is dogshit. It's a #define for 0. That's not one way to do an operation, that's one way to do an operation horribly badly. That int on your stack? Sure, it can equal NULL. Hope that wasn't the result of (2 - 2).
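
Concretely, on an implementation where NULL expands to plain 0 (sketched here with a stand-in macro so it doesn't depend on any particular <stddef.h>):

  #include <stdio.h>

  #define MY_NULL 0   /* stand-in for a NULL that expands to plain 0 */

  int main(void) {
      int n = 2 - 2;
      if (n == MY_NULL)             /* compiles without complaint */
          printf("an int compares equal to \"NULL\"\n");

      int *p = MY_NULL;             /* also fine: 0 is a null pointer constant */
      (void)p;
      return 0;
  }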


In C the NULL macro can be defined as either (void*)0 or 0. It's only mandated as 0 in C++.

The nullptr concept was introduced into C to fix a type ambiguity when NULL is used with generic selection or varargs functions. The ambiguity could have been solved by mandating that NULL be defined as (void*)0. My issue with nullptr is it's an overkill solution that unnecessarily duplicates the concept of NULL in the language.
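
The ambiguity is easy to see with generic selection. A minimal sketch, assuming a C23 compiler: on implementations where NULL is plain 0 the first selection picks the int branch, where it is (void*)0 it picks the pointer branch:

  #include <stdio.h>
  #include <stddef.h>

  #define DESCRIBE(x) _Generic((x), \
      int:       "int",             \
      void *:    "void *",          \
      nullptr_t: "nullptr_t",       \
      default:   "something else")

  int main(void) {
      /* Result depends on how this implementation defines NULL. */
      printf("NULL    -> %s\n", DESCRIBE(NULL));

      /* C23 nullptr always has type nullptr_t: no ambiguity. */
      printf("nullptr -> %s\n", DESCRIBE(nullptr));
      return 0;
  }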


I agree, it should have been (void*)0. I doubt that nullptr_t will see much use (just as _Generic is a fringe addition), but we'll find out.


Well, since 0 is guaranteed to compare equal to the null pointer, my current code compares my pointers to it directly:

  if (ptr != 0) { foo(*ptr); }
The type mismatch is ugly, but that saves me an include (this particular code minimises its dependencies to maximise portability).


And here I thought you were looking for help understanding how normal vectors work to define the reflectiveness of a 3D surface, and I thought, that’s a rather specific item for guidance!

