Hacker News | kryptiskt's comments

The simplest way to have no Korean war is if Kim Il-Sung decides that an invasion is too risky and concentrates on internal matters.

Traditionally it has been done because the low three bits of an object pointer are always zero due to alignment, so you can just put a tag there and mask it off (or load through it with lea and an offset, which is especially useful if you have a data structure where you'd use an offset anyway, like pairs or vectors). On 64-bit architectures the top two bytes aren't used (one byte with five-level paging), but they must be masked before use, since canonical pointers require them to be all zeros or all ones. On 32-bit architectures the high bits were in use and unsuitable for tags. All in all, I think the low bits are still the most useful for tags, even if 32-bit is no longer an important consideration.

One good use of "It turns out..." is to report negative results. Something like "You can overclock a Mac Mini to 8GHz using liquid nitrogen. It turns out this is not a stable configuration. <picture of burning Mac Mini hooked up to a physics experiment>"

> So it looked like the Mac but was infinitely worse.

"Infinitely worse"? Some people really need to cool off the hyperbole.

Having each window be a self-contained unit is a far better metaphor than making each window transform a global element when it's selected, and it scales better to bigger screens. An edge case like that may well be unfortunate, but it could be the price you pay for the overall better solution.


That was the point of Tog's conclusion: edges of the screen have infinite target size in one cardinal direction, and corners have infinite target size in two. Any finite click target has, by comparison, an infinitely smaller area, which I suppose you could call infinitely worse if clickable area is your primary metric.

This wasn't just the menu bar either. The first Windows 95-style interfaces didn't extend the Start menu's click box to the lower-left corner of the screen. Not only did you have to get the mouse down there, you had to back off a few pixels in each direction before you could open the menu. Same with the applications in the taskbar.

The concept was similar to NEXTSTEP's dock (which Microsoft even licensed for Windows 95), but it missed the infinite target area that putting it on the screen edge would have allowed.


The infinitely worse part was when you maximized a window so its menu bar sat at the top of the screen, but Windows still drew the window border above it, and the border was unclickable.

So now you broke the infinite click target even though it looked like it should have one.


Go is a total non-starter, it's not interactive at all. The competitors are things like Matlab, Mathematica, R or Python (with the right math libs). If you're weird you could use something like Haskell, APL or Lisp in this role, but you'd pay a hefty price in available libs.

In what situations would a non-interactive language be a non-starter? I have never felt that I missed having a REPL when coding C++ or Rust. The only reason it is even useful in Python is that the type info is laughably bad, so you need to poke things interactively to figure out what shape of data you should even expect.

(I'll take strong static typing every day, it is so much better.)


REPLs/notebooks are really nice in situations where you don't know what you want ahead of time and are throwing away 90% of the code you write, such as trying to find better numerical algorithms to accomplish some goal, exploring poorly documented APIs (most of them), making a lot of plots for investigating, or working a bunch with matrices and Dataframes (where current static typing systems don't really offer much.)

Yeah, this is an entirely different domain than what I work in (hard real-time embedded and hard real-time Linux).

Poorly documented APIs exist everywhere, but they are not something you can rely on anyway: if behaviour isn't documented, it can change without that being a breaking change. It would be irresponsible to (intentionally) depend on undocumented behaviour; rather, you should seek to get it documented. Otherwise there is a big risk that your code will break some years down the line when someone tries to upgrade a dependency. Most software I deal with is long-lived: there is code I wrote 15 years ago that is still in production, where the code base is still evolving, and I see no reason why that won't be true in another 15 years as well.

At the very least you should write tests to cover any dependencies on undocumented behaviour. (You do have comprehensive tests, right?)


Yeah it's definitely not for all domains.

I usually write the tests afterwards, except for very well-defined engineering problems, and the REPL exploration helps inform what tests to write.


People working with math or stats are often in an explorative mode, testing different things, applying different transforms to the data, plotting variables or doing one-off calculations. You need some form of interactive session for that to be feasible, whether it is a REPL or a notebook. There actually is a C++ REPL from CERN for exactly this use case, because they have a ton of HEP C++ code that they want to use interactively.

I don't see what the role of AI is in this. You don't need an AI to aggregate data from a bunch of sources. You'd be better off having the AI write a scraper for you than burning GPU time on an agent doing the same thing every time.


If you're paying a monthly fee for your agent, might as well use it to save you another few mins


Sony made a water-proof phone long before fixed batteries became fashionable.


Samsung too


Back in the day, a GHC release had a bug that deleted source files with errors in them: https://gitlab.haskell.org/ghc/ghc/-/issues/163. Which is pretty harsh as diagnostics go.


That guy has a history of harassment in Debian, and is not a credible source.


The director of the FSF is not a credible source? All the harassed and exploited third-world women are not credible? Certainly more credible than this guy.


That was back when there was "real" UNIX around, as well as a number of clones, including Microsoft's own Xenix (maybe they had offloaded that to SCO by then). So UN*X was one way to indicate that it meant UNIX-like OSes.


Turns out SCO bought Xenix in 1987, but at that point Microsoft was only a couple of years removed from being the biggest Unix vendor around.


I guess that's probably Apple now.


Or IBM?

z/OS is officially a Unix


There are an awful lot of iOS and macOS devices out there


macOS is a certified UNIX[0], but iOS isn’t.

[0] https://www.opengroup.org/openbrand/register/


I still think Macs outnumber IBM mainframe and POWER machines by a couple orders of magnitude.


By unit count, no question.

By revenue? Genuinely uncertain.

