
It's actually a requirement by app store connect to use a modern sdk for uploading binaries, and modern sdk versions will often raise the minimum supported ios version, so this is not always the developer's fault. See for example https://developer.apple.com/news/upcoming-requirements/?id=0...

I'm guessing they don't want to maintain, build, and test x86_64 versions of all the macOS libraries like AppKit and UIKit (including large changes like Liquid Glass) when they are no longer shipping x86_64 macOS versions. Which is not entirely unreasonable, as I'm sure it takes a lot of effort to keep the whole UI library stack working properly on multiple archs.

Perhaps that's what they're hinting about with the note about a "subset of Rosetta". So maybe there is hope that the core x86_64 binary translator will stick around for things like VM and emulation of generic (linux? wine?) binaries, but they don't want to maintain a whole x86_64 macOS userspace going forward.

Space savings from not shipping fat binaries for everything will probably also not be insignificant. Or it makes room for a new fat binary for a future "arm64v2" :)


Apple always phases out these kinds of technologies after some time to keep the ecosystem tidy and give a last push to developers to abandon legacy code.

In this iteration, it might also allow some simplification of the silicon, since Mx chips have some black magic to mimic x86 (mostly in memory ordering, IIRC) to allow Rosetta to work that fast. IOW, Rosetta 2 is not software-only magic this time.

I remember using the first Rosetta to play Starcraft on my Intel Mac. It also got deprecated after a year or two.

So leaving things behind despite some pains is Apple's way to push people forward (e.g.: Optical media, ports, Rosetta 1, Adobe Flash, etc.).


If they hadn't deprecated 32-bit we would still be able to play Halo on the Mac.


This is the perfect comment because 1) it’s true, and 2) it can be read as supportive, a complaint, or just a neutral observation.


The problem is, keeping older architectures alive creates an exponential workload, grinding everything to a halt.

So, even though I feel what you are saying, we can't have every nice thing we want, at the same time.


What has been so impressive about the last 5 years of MacOS releases?


I'm still not sure what's so impressive about the last 25 years of Windows and macOS that means we need an absolute supercomputer by 2000 standards just to open a Word document the same way we did back in Windows 2000.


Didn’t Word use to be installed from 2 floppy disks? Now Calculator.app leaks 40 GB of memory. Software in this sorry state cannot be run on a supercomputer; it needs one of those theoretical ocean-boilers.


Word 4.0 for DOS from 1987, sure.


This is a false memory. The reason "splash screens" existed with little text banners updating you about the status of the program's initializers was because it took fucking forever to launch Word on a 90's PC.


The steep decline in software stability and usability has been quite impressive, I wasn’t expecting them to screw it up so fast. Alan Dye in particular is a true inspiration for those who subscribe to the Peter Principle.

https://en.wikipedia.org/wiki/Peter_principle


I'm not very well versed in macOS internals, but I was a tech lead of a Debian derivative. I also write HPC software and manage the relevant infrastructure from metal to user, so I believe I know some details about processor architectures, general hardware, Linux, and *NIX systems in general.

The user-visible layer of an operating system is generally one of the simpler layers to code and maintain, since it's built upon abstractions. However, the libraries powering these layers, esp. the math-heavy and hardware-interacting ones, are much more complex due to the innate complexity of the hardware in general.

Keeping multiple copies of a library in two different architectures (even if they only differ in bit-length), where this simple bit change needs different implementation strategies to work correctly, is a pain by itself (for more information, ask Linux kernel devs, since they're also phasing out 32-bit x86).

Moreover, x86 and x86_64 are completely different modes on the processor. The x86-only mode is called "protected mode", x86_64 runs in "long mode", and running x86 under x86_64 is a sub-mode of long mode, which is already complex enough at the silicon level.

Same complexities apply to ARM and other processor architectures. Silicon doesn't care about the ISA much.

We have already seen how the effort of increasing performance on superscalar, out-of-order processors opened a new, untapped family of side-channel/speculative attacks. So processors are complex, software is complex, and multiple architectures on the same hardware is exponentially complex. If you want to see how the sausage is made, you can also research how Windows handles the backwards-compatibility problem (hint: by keeping complete Windows copies under a single Windows installation, in ELI5 terms).

So, the impressive thing was keeping these multi-arch installations running for quite some time. We need to be able to let things go and open up some software and hardware budget for new innovations and improvements.

Addendum: Funnily, games are one of the harder targets for multi-arch systems, since they are both math-heavy and somewhat closer to the hardware than most applications, and are very sensitive to architecture changes. Scientific/computational software is another such family, which interestingly includes databases and office software. Excel also had a nasty floating-point bug back in the day, and the 32- and 64-bit installations of Microsoft Office have had some feature differences since the beginning.


How much worse they make things.


ARM/Apple-Silicon support?


Apple's contempt for compatibility makes for poor game platforms. It's also a drain on developers who have a continual maintenance burden just to keep things running with each year's new edition of iOS.


Is there not an emulator at this point?


It's really hard to get normal people to deal with emulators so that you can build a community. And the original Halo allocated memory in a weird way that often screws things up.


Halo is a terrible example because it’s a game where the Mac version was never a very good way to play in the first place. I would guess that 99% of Halo players would be surprised to know it had a Mac version.

We should have a path to run legacy software when it’s practical but Halo is just not a good example to make that case.

I’d also personally be more interested in firing up the master chief collection or seeing if the upcoming campaign remake will be any good.


just as a little side note: while it really isn't regarded as a typical Mac game nowadays, Halo was originally planned as a Mac exclusive, before Microsoft bought Bungie. If memory serves me right, Jobs even presented it as an exclusive at a Macworld Expo.


Not sure it's only about tidiness. Rosetta 1 was licensed from a third party and Apple didn't want to keep paying the license fees.

I don't know if this is the situation with Rosetta 2.


I read a comment somewhere, possibly here, by an ex-Apple engineer who claimed that they optimized the thing mathematically for the performance it exhibits.

So, considering its silicon parts, Rosetta 2 is more of an Apple endeavor and technology.

On the other hand, 5-7 years is a very typical timespan for Apple. So I don't think licensing fees were that important in ending support for it.


The original Rosetta was based on technology from Transitive which, as I recall, IBM bought. Don't know where Rosetta 2 fits in and any licensing associated with the original Rosetta was a long time ago.


> It also got deprecated after a year or two.

It was five years, from 2006 to 2011. Rosetta 2 will have been there for seven years (currently at five).

https://en.wikipedia.org/wiki/Rosetta_(software)


To clarify, the complete sentence in my mind was "...after a year or two I got my Intel Mac". I got mine in Q3 2008, just before the Unibody ones were introduced.

So I effectively got 2 years out of Rosetta 1, but I didn't mean to say Apple supported it for only two years.

Sorry for the confusion.

Looks like I can't edit my comment anymore to clarify.


> ...they don't want to maintain and build and test x86_64 versions...

This feels wrong. Apple sold Intel-based Macs until early June 2023. The last one was the 2019 Mac Pro model.

Ending support for Rosetta in macOS around 2028 also means ending support for any x86_64 versions of software. This means that those unfortunate users who bought an Intel Mac Pro in 2023 only got five years of active usability.


Just because the latest OS can't be installed on older hardware does not mean the hardware is no longer usable. I know people who to this day still run the last 2012 cheese-grater Mac Pros with Snow Leopard as daily work machines. They still use Final Cut 7 on them to capture content from tapes. At this point, they are very fancy dedicated video recorders, but they still run and are money-making devices.


You're right; I still have a 2010 MBP w/8GB of RAM and an SSD upgrade I made to it years ago. My mother still uses her similar-vintage MBP with the same upgrades. These work just fine for most non-work tasks.

That doesn't mean that I expect these things to be updated or supported 15y after I bought them. I am absolutely certain I made back the $850 I originally paid (edu discount) + the ~$250 in upgrades over the years, and I'm entirely ok with just letting it limp along until it physically dies. I think most people have similar expectations.


I still have my 2011 MBP with very similar upgrades, but unfortunately, it has the known-bad Nvidia GPU that has been repaired multiple times. The last time it was taken in for repair, Apple said they were no longer supporting the repair. It's still usable as long as nothing tries to access the GPU, but since the modern web tries to use the GPU, it would crash the laptop constantly.


Lucky you, so to speak. Back in the day I had the same one, but it would pass their diagnostics, so they wouldn't repair it, though I could literally make it crash in front of the Genius Bar techs reliably and repeatedly (essentially the same way, by trying to do anything that hit the GPU a certain way - websites, Photoshop). "Sorry, our diagnostic tool says it's not the GPU". At one point I even demanded they do a completely fresh install of the OS. On first login, I fire up Safari, go to a certain site, crash. Restart, go to a different site, crash. "Sorry."


I lucked out in that mine never developed any issues with the GPU itself. Though it was stolen in 2014, so who knows longer term. My daughter is still running my (IIRC 2014) model. I've been relatively happy with my 16GB M1 Air, aside from my own vision issues.


The last security update for Snow Leopard was in 2013. Friends don't let friends connect software that vulnerable to the internet.

The hardware can be ok, the walled garden is not.


Production networks like these are typically not on the internet. That's a bit of information I take for granted that people unfamiliar with these setups would not.


What does this have to do with typical consumers who purchased a 2023 Intel Mac only getting 5 years of security patches? Typical users connect to the internet.


“Those systems will continue to receive security updates for 3 years.” - looks like 8 years in total.


You got it wrong.

Rosetta is the technology that allows Apple Silicon hardware to execute Intel software. When they introduced Apple Silicon with the M1 processor, not many binaries existed for Apple Silicon, so Rosetta 2 was a bridge for that problem.

They used the same technology (Rosetta 1) when they switched from PowerPC to Intel.

Pretty much every binary for macOS is distributed as a "Universal Binary", which contains binaries for both x86 and Apple Silicon, so x86 isn't being abandoned, only the ability to run applications on Apple Silicon that haven't been redistributed / recompiled in 6-7 years.


No, I didn't get it wrong. The moment Apple stops supporting running x86_64 binaries on ARM (M-series) CPUs, everyone including Apple will stop making Universal Binaries, because (among other reasons, like lack of motivation) there will be no easy way to test the x86_64 part of the binary. The Intel macOS era will be over, just 5 years after Apple sold the last Intel-based Mac Pro.


Is that really a problem though ?

Unless you’re doing something special, you can be fairly certain that universal binaries will behave well on both platforms, that’s what Apple guarantees. They expose one API, which can be executed on multiple hardware architectures.

If you’re doing something special, like an image editor, or a game, you might need to test performance, but you couldn’t really do that with Rosetta either.

Universal binaries work well. And as long as they exist, apps will most likely run just fine on both Intel hardware and Apple silicon.


I don't think the ability to cross-compile things will go away when Rosetta is phased out, though.


But how can you test it if your ARM-based Mac cannot run it? Most software vendors will simply stop making x86_64 builds.


Keep older hardware at hand?


Sure! The point is that it wasn't necessary because of Rosetta. For example, I no longer have an Intel-based Mac, but I still want to build and test for x86_64.


There’s someone out there who wants to build for PowerPC. At some point you have to say it’s a tiny piece of the market and making a few people spend $300 for old hardware is better than maintaining back compat forever.


The difference is there is still a lot of x86 software written for Windows, which you will need x86 emulation to run through Whisky/CrossOver on a Mac.


And for x86-64 Windows builds, you should be testing using an x86-64 Windows machine, not Rosetta 2


I am writing from a user perspective, rather than testing your builds.


I understand where you are coming from and commend you for trying to support your users (I'd do the same!), but I don't think Apple marketed Rosetta 2 as a permanent solution after the transition.

Another aspect is that a Mac stops getting software updates after ~7 years, and then its API level starts to drift from the latest macOS releases.

So past the 10-year mark you can't get the latest versions of applications anyway, since the features developers use aren't available in the older macOS versions, and you can't run the software regardless.


More issues generally arise from supporting/qualifying older OS versions than from supporting specific architectures, in my experience, so developers keep older hardware or VMs around for that purpose. In some other circumstances Rosetta may not be sufficient for testing older Intel hardware anyway (one example is GPU work).


It's reasonable to say this is wrong. But really, this seems like a tiny subset of users. Who bought a Mac Pro in 2023 after Apple Silicon had been out for 3 years already? Almost nobody, because it wasn't a real performance improvement by that time. For those extremely niche folks for which it was somehow still beneficial, they definitely won't want to still be using such a machine in 2028. They will have moved on to something like an M5 Ultra Mac Studio or whatever form the Mac Pro takes next.


> including large changes like liquid glass

They could just revert all that large change with no loss to the users.


The largest impact would be that the reversion would only affect native macOS apps, while Catalyst apps, remote iPhone apps, and locally installed iPad apps would still have the Liquid Glass UX.


Seriously? Why would they revert it just on desktops? Phones should remain unreadable?


> So maybe there is hope that the core x86_64 binary translator will stick around for things like VM and emulation of generic (linux? wine?) binaries

It's mostly for their game-porting toolkit. They have an active interest in Windows-centric game developers porting their games to Mac, and that generally doesn't happen without the compatibility layer.


System library calls from x86 don’t get converted into arm64 by Rosetta? I coulda sworn Microsoft’s emulator did that


> Or make room for a new fat binary for a future "arm64v2" :)

Or, one can dream: RVA23


It’s basically just a recompile though.


I'm sure there's lots of x86_64-specific code in the macOS userland that is much more than just a recompile - things like the Safari/JavaScriptCore JIT, various Core Animation graphics stack and video encoder/decoder stack libraries, as well as various Objective-C low-level pointer-tagging and message-passing ABI shenanigans, and so on. This is probably why 32-bit Intel Mac app support was dropped pretty hard pretty fast, as the entire runtime and userland probably required a lot of upkeep. As just one example, 32-bit Intel Objective-C had "fragile instance variables", which was a can of worms.


This is <1% of the total code that Apple writes


Yeah, the most important, least readable, and oldest code. It's exactly the stuff that's expensive to maintain.


It's not that expensive.


Until it isn't


Can you enable TSO for ARM executables?


Yes but I don't see how that is relevant


Best take


It’s not like they were doing it to make me happy; they are doing it to sell Macs and lock people into the Apple ecosystem. Maybe there is a negligible % of people using it; possibly - the M1 is 6 yrs old IIRC.


Closer to 5 years old


Surprised to see no patch available for watchOS, which can also receive images via iMessage. Not important enough to patch, or not vulnerable, or just not exploited in the wild yet?


I was hoping this would work over ssh in a macOS Terminal.app, but last I tried it was inserting all kinds of weird characters into the edited text files.

Windows ships an official OpenSSH server these days, but so far there haven't been any good official text editors that work over OpenSSH, as far as I know.

I've had to resort to "copy con output.txt" the few times I needed to put things into a text file over windows-opensshd...


Maybe it's using UCS-2 encoding instead of UTF-8?
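For what it's worth, UTF-16LE (what Windows tools often label "Unicode", and effectively a superset of the old UCS-2) interleaves NUL bytes into ASCII text, which shows up as exactly that kind of garbage in a UTF-8 terminal. A quick illustration in Python:

```python
# ASCII text encoded as UTF-16LE carries a NUL byte after every
# character, often preceded by a \xff\xfe byte-order mark at the
# start of a file written by Windows tools.
text = "hello"
utf16 = text.encode("utf-16-le")
print(utf16)  # b'h\x00e\x00l\x00l\x00o\x00'

# A UTF-8 editor reading those raw bytes interprets the NULs (and
# the BOM) as the "weird characters" described above.
```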


If linux kernel security really is so bad that google had to add a proof-of-work to introduce a 4 second race for 0day submissions, I'm surprised they're ok with still using the Linux kernel as the base for Android.


Android has a vastly improved and better version of linux kernel: https://old.reddit.com/r/GrapheneOS/comments/bddq5u/os_secur...


All of that's talking about the userspace though?


Not all. For example kCFI is kernel space.

Also, attack surface reduction is a very valid strategy, so it may seem like it's about the userspace (a sandbox for every app, etc.), but it can make a big difference in how much of the kernel attack surface is exposed.


Yes, but the concept of CFI is only mentioned in passing in that entire thread, and the kCFI implementation used is a vanilla kernel feature, not something Android-specific.

There's a lot to be said that "Distro kernel config choices may not be as secure as possible", but that's not really an "Android"/"Vanilla Linux Kernel" difference.


Well, I don't know of kCFI being enabled on any distro besides Android, because it requires building the kernel with Clang.

The previous in-kernel CFI implementation (before the somewhat joint effort that became kCFI) was upstreamed by Google, too: https://www.phoronix.com/news/Clang-CFI-Linux-Patches and https://www.phoronix.com/news/Linux-Kernel-Clang-LTO-Patches. Pixel devices also had this long before. Given that the entire Linux kernel feature was developed out of Android, I find it a little unfair to call it "using a vanilla kernel feature".


I'd argue that the entire point of using a shared open source kernel is that other users can benefit from additions.

Arguing "Who first added a feature" seems to be a losing spiral of needless tribalism. How many features does the Android kernel use that weren't developed by Google? Does that mean they wouldn't have developed those features? Or just that there's no point making a parallel implementation if there's already one existing.


The point here is not who first added the feature to Linux kernel. The point is Android cared about security, built a CFI implementation, started shipping it back in 2018, while Linux had other priorities and didn't have it until 2021. And even then almost nobody adopted it.


What is the alternative? I suspect all modern kernels are more or less just as vulnerable? They did start https://fuchsia.dev/ so maybe they are hedging against this problem? But making a fully-featured OS is a huge undertaking, especially if you need compatibility with existing apps and a wide variety of hardware.


What's the alternative that is efficient, feature complete(*) and more secure?

(*) For example, Android uses SELinux to confine apps, virtual machines (pKVM) to run DRM code, and so on. All these increase the overall security of the system and decrease the cost of kernel bugs, so there's a tradeoff that's not easy to evaluate.


Google isn't doing the race thing. They just pay out to whoever submits a valid submission. The people doing the racing are the submitters who want to get ahead of their competition and are stockpiling their exploits. If they were altruists, they would just submit their exploits for no remuneration. Hence the race isn't something Google is doing.

The proof of work isn't there to add a "four second race". It's there to prevent DDoS-like spam.


> Safari only exist on Apple devices

Webkit, at least, builds on a lot more platforms than you think. Take a look at https://build.webkit.org/#/builders

I'm seeing at least three other MAJOR platforms:

  • GTK-Linux-64-bit-Release-Build
  • PlayStation-Release-Build
  • Windows-64-bit-Release-Build


And just a tip, if you don't have any Apple devices but need to test a bug/inconsistency being reported by Safari users, you can usually use GNOME Web (Epiphany) and the same behavior will usually manifest, since it is a true Webkit browser. It also includes the Web Inspector with the exact same interface as Safari. And it's not super outdated or anything like that, it tracks Webkit quite well nowadays.

It's a bit ironic that Webkit started as KHTML, a component of KDE, but eventually made its way to GNOME when a Gecko-based Epiphany became hard to maintain.


WebKit 100% exists on Windows and Linux, Microsoft builds it under the playwright project.

I use it occasionally, only for debugging purposes though.


So I guess you couldn't get certificates for any random (MX) domain, only for those where you can obtain an inbox / user account. Still really bad, especially for things like gmail.com, but also larger enterprises. Intense.


It is unlikely that SSL.com would issue a certificate for any major mail host; it would be malpractice for them not to have some kind of exclusion list.

Issuing a Google certificate is a good way to get your whole CA killed.


Sure, gmail.com might be excluded, but it's still a massive hole for a few reasons.

This would affect ANY email provider who offers public email addresses. While I agree gmail.com is probably excluded (and maybe this doesn't bypass CAA -- maybe it does) there's a whole additional surface of anyone who has an email at any big enterprise getting a certificate for their domain.

Even if I work at google.com, therefore have a google.com email, I should absolutely not be able to get a certificate for google.com just by getting an email at that company.

I doubt it's even /that hard/ to buy an email account at a big company like that in the underground world; it seems like they are valuable generally, and any company with 200k employees is going to have some leaks. This massively increases the attack surface of a simple leaked email account (which might otherwise have very little or no access).

Crazy crazy oversight that has huge implications and is so easy to carry out that I would not be surprised if this was actually exploited by bad actors.


Plenty of companies have mailing lists which are listname@companydomain.com.

Getting on those lists is often easy. Same with support ticketing systems, etc.


Someone else on the list might figure out something was fishy when the verification email came through though.


> Issuing a Google certificate is a good way to get your whole CA killed.

Surely what happened here is a good way to get your CA killed? The linked bug seems pretty bad.


Less clear on that. Bugs happen. I'm not an expert on browser root policies.


From what I understand one of the factors is how often things like this happen, and how well they handle it when it does.


Historically, singular domain validation bugs have not killed CAs.


Or any domain for which you can read an email sent to an inbox. I remember an attack from a few years ago where incoming emails created support tickets, and the attacker could guess the next ticket ID to read the email. A lot of platforms that aren't email providers still allow emails in (e.g. GitHub, GitLab). This looks like a rather widely applicable attack.

edit: I was thinking about this: https://news.ycombinator.com/item?id=41818459


Or potentially one where you could subscribe to a mailing list. Which includes a lot of very important open source software projects.


Even then, use of a DNS CAA record should mitigate this, right?
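For reference, a CAA record (RFC 8659) restricting issuance to a single CA looks something like this - the CA string here is only illustrative; a domain owner lists whichever CAs they actually use:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```

A compliant CA must refuse to issue for the domain unless its own identifier appears in such a record, which is why a validation bug that also skipped CAA checking would be far worse.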


Maybe?

I wouldn't assume that the bug doesn't bypass CAA checking.

Very important question to answer.


Yeah - unless you're an actual SSL.com customer, in which case your CAA records would allow it. That's a much smaller blast radius at least.


I couldn’t reproduce the attack with a pair of my own domains, so I think it might be even narrower in scope than the initial post suggests. But I suppose we will just have to wait to see what the CA says.


> Out of an abundance of caution, we have disabled domain validation method 3.2.2.4.14 that was used in the bug report for all SSL/TLS certificates while we investigate.

I think they have already addressed the bug.


I tested before they acknowledged or disabled the method (I was able to use a 3.2.2.4.14 validation the “normal” way)


I saw this on twitter a few hours ago on my phone, and misread the price as $25, so I was considering maybe putting in an order or even two, but when I revisited the site on my laptop and discovered it was $250, my curiosity hit a wall. Looks like a super neat product but unfortunately a bit overpriced for a gimmick.


I recently ran into an issue with this: building an iOS .ipa from the command line with xcodebuild apparently ends up shelling out to rsync to copy some files between local directories. Because I had Homebrew rsync earlier in $PATH, xcodebuild ran that one instead, but it passed an openrsync-only command-line argument ("--extended-attributes") that Homebrew rsync doesn't understand, so the build would exit with a failure.


Probably because the 6502 CPU had instructions for optimized access to the zero page (the first 256 bytes of RAM); this would also apply to the C64 and NES, etc. If you want to use "Indirect Indexed" memory accesses, then I think the pointer also has to be held in the zero page.
See https://www.nesdev.org/obelisk-6502-guide/addressing.html
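A toy model of that addressing mode, sketched in Python (a simple flat 64 KB memory, ignoring page-cross timing and wrap-around corner cases):

```python
# Model of the 6502 "indirect indexed" mode: LDA ($zp),Y.
# The two-byte pointer must live in the zero page ($00-$FF); the CPU
# reads it little-endian, then adds Y to form the effective address.
memory = bytearray(65536)

# Place a pointer to $1234 at zero-page address $10.
memory[0x10] = 0x34  # low byte
memory[0x11] = 0x12  # high byte
memory[0x1234 + 5] = 0xAB  # the value we want to load

def lda_indirect_indexed(zp_addr: int, y: int) -> int:
    """Emulate LDA (zp),Y: fetch a 16-bit little-endian pointer from
    the zero page, add Y, and load the byte at the resulting address."""
    lo = memory[zp_addr & 0xFF]
    hi = memory[(zp_addr + 1) & 0xFF]
    return memory[((hi << 8) | lo) + y]

print(hex(lda_indirect_indexed(0x10, 5)))  # 0xab
```

Because the pointer fetch is only a one-byte address, zero-page accesses also cost one fewer byte and cycle than absolute addressing, which is why 6502 code treats the zero page almost like an extra register file.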


The AVR architecture puts its registers starting from address 0 of its data memory. As a twist, the program memory also starts at 0 (due to the Harvard architecture).

