
Physics is a hard science. Software development is applied engineering. I’m sure there are applied engineering fields adjacent to physics where papers are fairly readable by practitioners.

Most developers would struggle quite a bit to read typical theoretical computer science papers.


Computer science is a formal science with empirical elements, as much as I'd like to think of it as a branch of mathematics. I'm not sure what to make of "software engineering" or "software development", academically, but it doesn't really seem to be applied engineering; software engineering students don't study general engineering and then apply it to software, and finally layer some software-specific focus on top. And most developers are still nominally trained in "computer science" rather than "software engineering" or "software development" anyway!

Rather than engineering, the academic discipline of software engineering grows out of computer science, which was born as an area of interest in mathematics. It shows! Because most developers who prepare for their jobs by their choice of major in school typically study computer science, let's consider a typical curriculum: a tiny bit about how hardware works, a small amount of "low-level" software stuff in a class where students work in assembly language, some management science-ish stuff (typically part of the software engineering classes, focused on the development lifecycle, development methodologies, etc.), and a little bit about "design patterns", which is engineering-y but often more qualitative than quantitative in nature. You can often get cross-listed credit for some electrical and computer engineering electives, but they're very much optional. (And many schools don't even have a software engineering program per se, only a computer science program.)

To the extent that software engineering even is a theoretical discipline that can be "applied" on the job, it doesn't share much, ancestrally or methodologically, with engineering. The most they really have in common is that they are broadly speaking puzzle-solving disciplines that often rely heavily on fairly sophisticated formal reasoning.

> Most developers would struggle quite a bit to read typical theoretical computer science papers.

This is probably true, though, perhaps especially because even those who study computer science as undergraduates don't aim to be computer scientists. Their emphasis is reflected in their electives, and they don't continue to study computer science once they join the workforce.

Is this unusual? Can most nurses not only competently but effortlessly read and understand the research output of working medical scientists? Can a one-time biology major typically read and understand contemporary research on micro-organisms without "struggling quite a bit"?


In many countries Software Engineering is a proper engineering degree, sharing many of the same lectures as other engineering degrees, complete with an engineering college and certification of professional titles, not just something one calls oneself because it sounds cool.

Likewise, in many of those countries nursing is a university degree sharing many lectures with the medicine degree during the initial years.


> In many countries Software Engineering is a proper engineering degree, sharing many of the same lectures as other engineering degrees, complete with an engineering college and certification of professional titles, not just something one calls oneself because it sounds cool.

I didn't know this! Thank you for correcting me.

> Likewise, in many of those countries nursing is a university degree sharing many lectures with the medicine degree during the initial years.

That's how it is where I live as well. My point was just that an undergraduate education doesn't really prepare a person to easily keep up with the research of specialists (although it might orient one enough to get through it with some effort and possibly some lingering questions).


Haskell and OCaml are excellent for compilers, because - as you suggest - you end up building, walking, and transforming tree data structures where sum types are really useful. Lisp is an odd suggestion there, as it doesn’t really have any built-in support for this sort of thing.
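
To make the sum-type point concrete, here is a minimal sketch (the AST and the constant-folding pass are made up for illustration, not taken from any particular compiler):

```haskell
-- A tiny expression AST as a sum type, plus one tree-transforming
-- pass (constant folding). The compiler warns if a case is missed,
-- which is exactly why sum types shine for this kind of work.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

fold :: Expr -> Expr
fold (Add a b) = case (fold a, fold b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
fold (Mul a b) = case (fold a, fold b) of
  (Lit x, Lit y) -> Lit (x * y)
  (a', b')       -> Mul a' b'
fold e = e
```

For example, `fold (Add (Lit 1) (Mul (Lit 2) (Lit 3)))` reduces the whole tree to `Lit 7` in one recursive walk.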

At any rate, that’s not really the case when building an emulator or bytecode interpreter. And Haskell ends up being mostly a liability here, because most work is just going to be imperatively modifying your virtual machine’s state.


> And Haskell ends up being mostly a liability here, because most work is just going to be imperatively modifying your virtual machine’s state.

That sounds odd to me. Haskell is great for managing state, since it makes it possible to do so in a much more controlled manner than non-pure languages.


Yeah, I don't understand what the "liability" here is. I never claimed it was going to be optimal, and I already pointed out C/C++ as the only reasonable choice if you actually want to run games on the thing and get as much performance as possible. But manipulating the machine state in Haskell is otherwise perfect. Code will look like equations, everything becomes trivially testable and REPLable, and you'd even get a free time machine from the immutability of the data, which makes debugging easy.

If you're effectively always in a stateful monad, Haskell's purity offers nothing. Code doesn't look like equations, things aren't trivially testable and REPLable, you don't get a free time machine, and there's syntactic overhead from things like lifting or writes to deeply nested structures and arrays, since the language doesn't have built-in syntactic support for them.

Even if you use a stateful monad (not necessarily the State monad), you can take snapshots of the state of the machine and literally produce a log. You haven't lost immutability or the time machine, and you can 'deriving Show' the hell out of everything and get human-readable output for free. Fuck, you could even lift functions in such a way that they produce a trace of assertions that each function of (state -> state) must satisfy. A state-debugger-log monad.

Not that you'd need a monad for something like this anyway.
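
As a sketch of the "free time machine" point, with a made-up `Machine` type standing in for real emulator state: because states are immutable values, keeping every snapshot is just building a list.

```haskell
-- Hypothetical machine state; the fields are illustrative only.
data Machine = Machine { pc :: Int, acc :: Int } deriving (Show, Eq)

-- One pure step of the machine: states in, states out.
step :: Machine -> Machine
step m = m { pc = pc m + 1, acc = acc m + pc m }

-- All snapshots after n steps, oldest first. Rewinding the
-- "time machine" is just indexing into this list.
history :: Int -> Machine -> [Machine]
history n m0 = take (n + 1) (iterate step m0)
```

Because every intermediate `Machine` is retained rather than overwritten, the trace doubles as a human-readable debug log via `Show`.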


On the other hand, it does have support for things like side-effectful traversals, folds, side effects conditional on value existing, etc. In most other languages you have to write lower-level code to accomplish the same thing.
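
A small sketch of the combinators being referred to, using standard-library functions (`traverse_` from Data.Foldable, `foldM` from Control.Monad):

```haskell
import Control.Monad (foldM)
import Data.Foldable (traverse_)

demo :: IO ()
demo = do
  traverse_ print [1, 2, 3 :: Int]                 -- side-effectful traversal
  total <- foldM (\acc x -> pure (acc + x)) 0 [1, 2, 3 :: Int]
  print total                                      -- effectful fold
  traverse_ putStrLn (Just "present")              -- runs only if a value exists
  traverse_ putStrLn (Nothing :: Maybe String)     -- no value, no effect
```

The last two lines show the "side effect conditional on a value existing" case: `traverse_` over a `Maybe` replaces the explicit null-check-then-call pattern of most other languages.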

It is a pain to update a deep state in Haskell.

C++: game.level.unit[10].position += 5;

Haskell: way too much code here (unless you use lenses of course but then you are effectively turning Haskell into an imperative language).
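
For illustration, a sketch of what "way too much code" means here, with made-up record types mirroring the C++ line above:

```haskell
-- Hypothetical nested state, for illustration only.
data Unit  = Unit  { position :: Int } deriving Show
data Level = Level { units :: [Unit] } deriving Show
data Game  = Game  { level :: Level }  deriving Show

-- Without lenses: every layer must be rebuilt by hand.
moveUnit :: Int -> Int -> Game -> Game
moveUnit i dx g =
  g { level = (level g)
        { units = [ if k == i then u { position = position u + dx } else u
                  | (k, u) <- zip [0 ..] (units (level g)) ] } }

-- With the lens package this collapses to roughly:
--   game & level . units . ix 10 . position +~ 5
```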


What's wrong with that? Haskell is, after all, the "world's finest imperative language".

https://www.microsoft.com/en-us/research/wp-content/uploads/...


Lisp has one of the most powerful macro systems.

Also when people say Lisp in 2025, usually we can assume Common Lisp, which is far beyond the Lisp 1.5 reference manual in capabilities.

In fact, back when I was in university, Caml Light was still recent, Miranda was still part of programming language lectures, and the languages forbidden on compiler development assignments were Lisp and Prolog, as they would make the assignment super easy.


I've heard Haskell described as the best imperative language.


Haskell isn't a liability for that lol

I’d also point out that even in the compiler space, there are basically no production compilers written in Haskell or OCaml.

I believe those two languages themselves self-host. So not saying it’s impossible. And I have no clue about the technical merits.

But if you look around programming forums, there’s this idea that “OCaml is one of the leading languages for compiler writers”, which seems to be a completely made-up claim.


I don't know that many production compilers are written in them, but how much of that is compilers tending towards self-hosting once they get far enough along these days? My understanding is that early Rust compilers were written in OCaml, but they transitioned to Rust to self-host.

What do you define as a production compiler? Two related languages have compilers built in Haskell: PureScript and Elm.

Also, Haskell has parsers for all major languages. You can find them on Hackage with the `language-` prefix: language-python, language-rust, language-javascript, etc.

https://hackage.haskell.org/packages/browse?terms=language


Obviously C is the ultimate compiler of compilers.

But I would call Rust, Haxe and Hack production compilers. (As mentioned by sibling, Rust bootstraps itself since its early days. But that doesn't diminish that OCaml was the choice before bootstrapping.)


Most C compilers are written in C++ nowadays.

Yes, C and C++ have an odd symbiosis. I should have said C/C++.

Most C and C++ developers take umbrage at combining them. Since C++11, and especially C++17, the languages have diverged significantly. C is still largely compatible (outside of things like assigning malloc's result without a cast) since its rules are still largely valid in C++, but both have gained fairly substantial incompatibilities with each other. Writing a pure C++ application today will look nothing like a modern C app.

RAII, iterators, templates, object encapsulation, smart pointers, data ownership, etc are entrenched in C++; while C is still raw pointers, no generics (no _Generic doesn’t count), procedural, void* casting, manual malloc/free, etc.

I code in both, and enjoy each (generally for different use cases), but certainly they are significantly differing experiences.


Unfortunately we still have folks writing C++ in the style of pre-C++98 with no desire to change.

It is like adopting TypeScript, but the only thing they do is rename the file extension for better VS Code analysis.

Another one is C++ "libraries" that are plain C with extern "C" blocks.


Sure, and we also still have people coding in K&R-style C. Some people are hard to change in their ways, but that doesn't mean the community/ecosystem hasn't moved on.

> Another one is C++ "libraries" that are plain C with extern "C" blocks.

Sure, and you also see "C Libraries" that are the exact same. I don't usually judge the communities on their exceptions or extremists.


What are you on? Rust was written in OCaml, and Haxe is, after 25 years, still going strong with an OCaml-based compiler, and is very much production grade.

We must be looking at different compilers.

Verilog?

...just kidding (maybe).

Assuming we're talking about a pure interpreter, pretty much anything that makes it straightforward to work with bytes and/or arrays is going to work fine. I probably wouldn't recommend Haskell, just because most operations are going to involve imperatively mutating the state of the machine, so pure FP won't win you much.

The basic process of interpretation is just: "read an opcode, then dispatch on it". You'll probably have some memory address space to maintain. And that's kind of it? Most languages can do that fine. So your preference should be based on just about everything else: how comfortable are you using it, how much do you like its abilities to interface with your host platform, how much do you like type checking, and so on.
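
The "read an opcode, then dispatch on it" loop can be sketched in a few lines. This is a made-up two-instruction machine (opcode 1 adds the following byte to an accumulator; anything else halts), not any real bytecode:

```haskell
import Data.Word (Word8)

-- Fetch an opcode, dispatch on it, recurse on the rest of the
-- bytecode stream with the updated accumulator.
run :: [Word8] -> Int -> Int
run []        acc = acc
run (op:rest) acc = case op of
  1 | (n:rest') <- rest -> run rest' (acc + fromIntegral n)
  _                     -> acc   -- unknown opcode or halt: stop
```

A real interpreter would carry a richer machine state (registers, memory, program counter) instead of a bare `Int`, but the fetch/dispatch shape is the same.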


Can you expand on what makes SML better, in your eyes, than OCaml?

IMO: it's certainly "simpler" and "cleaner" (although it's been a while, IIRC the treatment of things like equality and arithmetic is hacky in its own way), which I think causes some people to prefer SML on aesthetic grounds, but TBH I feel like many of OCaml's features missing in SML are quite useful. You mentioned applicative functors, but there are also things like labelled arguments, polymorphic variants, GADTs, and even the much-maligned object system, all of which have their place. Is there anything SML really brings to the table besides the omission of features like these?


> the treatment of things like equality and arithmetic is hacky in its own way

mlton allows you to use a keyword to get the same facility for function overloading that is used for addition and equality. it's disabled by default for hygienic reasons, function overloading shouldn't be abused.

https://baturin.org/code/mlton-overload/

> labelled arguments

generally speaking if my functions are large enough for this to matter, i'd rather be passing around refs to structures so refactoring is easier.

> polymorphic variants

haven't really missed them.

> GADTs

afaik being able to store functors inside of modules would fix this (and I think sml/nj supports this), but SML's type system is more than capable of expressing virtual machines in a comfortable way with normal ADTs. if i wanted to get that cute with the type system, i'd probably go the whole country mile and reach for idris.

> even the much-maligned object system that have their place

never used it.

> Is there anything SML really brings to the table besides the omission of features like this?

mlton is whole-program optimizing (and very good at it)[1], has a much better FFI[2][3], is much less opinionated as a language, and the parallelism is about 30 years ahead[4]. the most important feature to me is that sml is more comfortable to use than ocaml. being nicer syntactically matters, and that increases in proportion with the amount of code you have to read and write. you don't go hiking in flip-flops. as a knock-on effect, that simplicity in sml ends up with a language that allows for a lot more mechanical sympathy.

all of these things combine for me, as an engineer, to what's fundamentally a more pragmatic language. the french have peculiar taste in programming languages, marseille prolog is also kind of weird. ocaml feels quirky in the same way as a french car, and i don't necessarily want that from a tool.

[1] - http://www.mlton.org/Performance

[2] - http://www.mlton.org/ForeignFunctionInterface

[3] - http://www.mlton.org/MLNLFFIGen

[4] - https://sss.cs.purdue.edu/projects/multiMLton/mML/Documentat...


I love, love, love StandardML.

I respect the sheer power of what mlton does. The language itself is clean, easy to understand, reads better than anything else out there, and is also well-formalised. I read (enjoyed!) the tiger book before I knew anything about SML.

Sadly, this purism (not as in Haskell but as a vision) is what probably killed it. MLTon or not, the language needed to evolve, expand, rework the stdlib, etc.

But authors were just not interested in the boring part of language maintenance.


What are your thoughts on basis[1] and successorml[2]?

[1] - http://www.mlton.org/MLBasis

[2] - https://smlfamily.github.io/successor-ml/


I don't think these change anything for the language. Too little, too late.

Assuming you've got experience with JavaScript, read the "Motivation" section on the monocle-ts website:

https://gcanti.github.io/monocle-ts/


The argument here is that being against punishing someone for their speech is anti-free speech? Because the punishment constitutes speech?

Well, um, yes. Having an opinion is free speech. Calling someone else's opinion stupid is, in and of itself, an opinion. So that's also free speech.

The point being free speech is a two-way street. Speech without consequences is actually un-free in that sense. Because you're free to say whatever, but I'm not free to say whatever in response.

Now, whether corporate actions constitute speech is kind of another question. But the consensus in the US is that yes, they do. Corporations are allowed to have opinions and make donations, and they're allowed to fire you for having opinions or making donations.

The important thing to note is that free speech, as we understand it, is a protection for private entities from public entities. Meaning it protects you, a citizen, from public censorship. And it protects companies, private entities, from public censorship. So it, in a way, enables private companies to censor. Because the public can't censor their censorship.


You’re conflating the US’s constitutional protections against government attacks on free speech with the broader concept of (the virtue of) free speech. No one is saying that what Mozilla did was illegal.

Just curious: would you defend a company for firing someone for speaking out in support of gay marriage?


> Just curious: would you defend a company for firing someone for speaking out in support of gay marriage?

Well, companies already do this all the time; it's more the status quo. I'm not going to pretend the majority are somehow, in some roundabout way, oppressed. Is this person fired for supporting gay marriage, or for being gay? Because obviously that's illegal... you can't fire someone for being part of a protected class. Being a Republican or whatever is not a protected class; being gay is. One matters, one doesn't.


When you bind with (the Haskell definition for) the List monad - `someList >>= \someElement -> ...` it's like you're saying "this is a forking point - run the rest of the computation for each of the possible values of someElement as taken from someList". And because Haskell is lazy, it's (pardon the informality here) not necessarily just going to pick the first option and then glom onto it if it, say, were to cause an infinite loop if the computation were eagerly evaluated; it'll give you all the possibilities, and as long as you're careful not to force ones that shouldn't be forced, you won't run into any problems. Nondeterminism!

A nice demonstration of this is writing a very simple regex matcher with the List monad. A naive implementation in Haskell with the List monad Just Works, because it's effectively a direct translation of Nondeterministic Finite Automata into code.
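
A sketch of what such a matcher can look like (this is an illustrative toy, with a hand-rolled `Regex` type rather than any real regex library): each `Alt` "forks" the match via the List monad, and the result collects every suffix the pattern could leave behind.

```haskell
-- A toy regex AST. Star of a non-consuming pattern would loop,
-- so this assumes character-level atoms, as a naive NFA does.
data Regex
  = Chr Char
  | Seq Regex Regex
  | Alt Regex Regex
  | Star Regex

-- match r s = all suffixes of s left after r consumes a prefix.
-- The List monad's (>>=) threads every alternative forward.
match :: Regex -> String -> [String]
match (Chr c)   (x:xs) | x == c = [xs]
match (Chr _)   _               = []
match (Seq a b) s               = match a s >>= match b
match (Alt a b) s               = match a s ++ match b s
match (Star r)  s               = s : (match r s >>= match (Star r))

accepts :: Regex -> String -> Bool
accepts r s = "" `elem` match r s
```

For instance, `accepts (Seq (Chr 'a') (Star (Chr 'b'))) "abb"` is `True`: `match` returns `["bb", "b", ""]`, one suffix per way the `Star` could stop.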


> It's probably now faster by about 2us through that blob of code, but maybe this only matters on bare metal and not the emulator, which is probably not time-perfect anyway.

(Some) SNES emulators really are basically time-perfect, at this point [0]. But 2us isn't going to make an appreciable difference in anything but exceptional cases.

[0] https://arstechnica.com/gaming/2021/06/how-snes-emulators-go...


In general, even SNES games are still doing frame-locking, right? i.e. if you save 2us you're just lengthening the amount of time the code is going to wait for a blanking signal by 2us.

Yeah, exactly. It'd have to be really exceptional cases. For example, exactly one game (Air Strike Patrol) has timed writes to certain video registers to create a shadow effect, but 2us is so minor I don't think it'd appreciably affect even that. Or, like, the SNES has an asynchronous multiplier/divider that returns invalid results while the computation is ongoing, so if you optimized some code you might end up reading back garbage.

IIRC ZSNES actually had basically no timing; all instructions ran for effectively one cycle. ZSNES wasn't an accurate emulator, but it mostly worked for most games most of the time.


There are actually some issues with clock drift, and speculation about whether original units had accurate crystals or varied significantly in timing. The only way to figure that out is to go back and ask the designers what the original spec was, and who knows if they remember. So they're not really time-perfect, because the clock speeds can vary by as much as half a percent.

It's mostly the audio clock that is susceptible to drift. Everything except the audio subsystem is derived from a single master clock, so even if the master clock varies in frequency slightly, all the non-audio components will remain in sync with each other.

That means the 2 clock cycles could theoretically make an observable difference if they cause the CPU to miss a frame deadline and cause the game to take a lag frame. But this is rather unlikely.


The CPU has shown some variation, but yes, it's the APU that has a ceramic clock source that isn't even close to the same among units. Apparently those ceramic resonators have a pretty high variation, even when new.

When byuu/near tried to find a middle-ground for the APU clock, the average turned out to be about 1025296 (32040.5 * 32). Some people have tested units recently and gotten an even higher average. They speculate that aging is causing the frequency to increase, but I don't really know if this is the case or if there really was that much of a discrepancy originally.

It does cause some significant compatibility issues, too, like attract-mode desyncs and random freezes.


Well, the SNES - if that counts, it's a 65816 - uses DRAM. This is especially noteworthy because the DRAM refresh is actually visible on-screen on some units:

https://www.retrorgb.com/snesverticalline.html


(…and guess whose company they’ll be contracting those launches to?)
