The default limit is in place because select() uses a fixed-size bitmap for file descriptors. If a program that uses select() ends up with a file descriptor numbered beyond that bitmap, setting it in the fd_set is an out-of-bounds write. It is probably better to make file descriptor allocation fail than to have memory corruption.
All programs that don't use select() should raise the limit to the hard limit on startup, then drop it back down before exec()ing another program. It is a silly dance, but that is the cost of legacy. I'm surprised the affected programs can't add this dance in.
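A minimal sketch of that dance, assuming POSIX getrlimit()/setrlimit(); the OPEN_MAX clamp is there because macOS rejects RLIMIT_NOFILE soft values above OPEN_MAX, and the child program name is a placeholder:

```c
#include <limits.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    rlim_t old_soft = rl.rlim_cur;

    /* Raise the soft limit to the hard limit (clamped for macOS). */
    rl.rlim_cur = rl.rlim_max;
#ifdef OPEN_MAX
    if (rl.rlim_cur > OPEN_MAX)
        rl.rlim_cur = OPEN_MAX;
#endif
    setrlimit(RLIMIT_NOFILE, &rl);

    /* ... open as many descriptors as needed, never calling select() ... */

    /* Drop back down before handing the limit to a child. */
    rl.rlim_cur = old_soft;
    setrlimit(RLIMIT_NOFILE, &rl);
    execlp("some-child", "some-child", (char *)NULL);  /* hypothetical child */
    return 1;  /* only reached if exec fails */
}
```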
I am fairly confident that this is not the reason for the file descriptor limit, especially since select was superseded over 3 decades ago by poll (1987, picked up by Linux in 1997), which in turn is superseded locally by kqueue. Use of select in modern times is a bug.
ulimits are quotas that disallow excessive resource consumption by an application, not bug shields.
select() has a fixed 1024-bit file descriptor bitmap, and anyone using it gets what they deserve. MacOS also provides poll() as a slightly less braindead, but still POSIX, option. The proper solution is to use kqueue()/kevent(), allowing userspace processes to efficiently get notified about changes to tens of thousands of file descriptors (tracking a million file descriptors using this API on FreeBSD works fine).
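For anyone who hasn't seen the API, a minimal sketch of the kqueue()/kevent() pattern, watching a single fd for readability (error handling elided):

```c
#include <sys/types.h>
#include <sys/event.h>   /* kqueue, kevent, EV_SET */
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();
    struct kevent change, event;

    /* Register interest in stdin becoming readable. */
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);

    /* Block until an event fires; there is no FD_SETSIZE ceiling here. */
    int n = kevent(kq, NULL, 0, &event, 1, NULL);
    if (n > 0)
        printf("fd %d readable, %ld bytes pending\n",
               (int)event.ident, (long)event.data);
    close(kq);
    return 0;
}
```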
I don't know that someone unfortunate enough not to know about the limitations of select() deserves what they get. At least on my Linux box the man page does have a warning at the top of the description section, but it is easy to accidentally skim over.
It would be interesting to do something like avoid defining that symbol by default, require `-DENABLE_OBSOLETE_SELECT_API` to make it available. It would cause trouble for compiling old software but it is easy to remedy and at least makes new users extra aware that they shouldn't start using this function.
Unfortunately, adding in an option like `-DENABLE_OBSOLETE_SELECT_API` is only trivial if you have a good understanding of the build system in use and sometimes other nuts and bolts of a given project.
Tracking down a build failure in some old software project, one that may or may not have approachable people left on the team, through elaborate build scripts that may or may not surface the error in a way that may or may not be comprehensible to an outsider, can be really hard.
I like using poll(). I don't generally write software which waits for tens of thousands of FDs, I wait for a couple. Sacrificing platform support for more or less nothing in return doesn't make a lot of sense.
Kqueue/kevent/epoll are good options if you have very particular needs, such as a huge number of file descriptors, but I'd argue poll() should be the go-to.
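For the common couple-of-fds case, poll() really is just an array of structs. A minimal sketch:

```c
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct pollfd fds[2] = {
        { .fd = STDIN_FILENO,  .events = POLLIN  },   /* wait for input */
        { .fd = STDOUT_FILENO, .events = POLLOUT },   /* wait for writability */
    };

    int ready = poll(fds, 2, 5000);   /* wait up to 5 seconds */
    if (ready < 0) { perror("poll"); return 1; }

    for (int i = 0; i < 2; i++)
        if (fds[i].revents)
            printf("fd %d: revents=0x%x\n", fds[i].fd, (unsigned)fds[i].revents);
    return 0;
}
```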
I guess if you're using an event loop library you don't have to think about it. For people who make event loop libraries, I agree that they should probably support kqueue/kevent+epoll. I was talking about normal application code that's not using such an event loop library.
Are you sure it's that, and not people setting ulimit to something very low and causing critical programs to start failing in ways that open up security issues?
It could be both. But having a failure be a security issue is a fail-open design and should be avoided in most cases. Having out-of-bounds memory writes can be exploited in a variety of ways and can provide exploits that affect even fail-closed designs.
The reasonable solution for that would be to fail calls to select() if more than FD_SETSIZE fds are being held, instead of nerfing all applications into doing the setrlimit dance - some of which may not be actively maintained, or even if they are, would take time for fixes to be available and distributed.
The problem is that the memory corruption occurs when preparing the arguments to `select()` so by the time `select` is called it is already too late. Having select abort the program could make it harder to exploit as the corruption likely occurs on the caller's stack but doesn't completely solve the problem.
I guess the real solution would be updating `FD_SET()` and `FD_CLR()` to abort if `fd >= FD_SETSIZE` (valid bits run from 0 through FD_SETSIZE-1). IDK if writing to the fd set outside of these two functions is officially supported.
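A sketch of what that could look like; `FD_SET_CHECKED` is a made-up name, not a real libc macro:

```c
#include <stdlib.h>
#include <sys/select.h>

/* Abort loudly instead of scribbling past the end of the bitmap. */
#define FD_SET_CHECKED(fd, set)                      \
    do {                                             \
        if ((fd) < 0 || (fd) >= FD_SETSIZE)          \
            abort();                                 \
        FD_SET((fd), (set));                         \
    } while (0)
```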
This seems like an unfortunate but, given the circumstances, reasonable approach. You'd have to make sure, though, that none of your dependencies, including Apple's frameworks, have code that calls select.
At least it's only the soft limit. The hard limit isn't reduced. The simplest workaround is to wrap the start of the process with a shell script that raises the soft limit before it execs into the application you want to run (something like `ulimit -Sn 65536; exec myapp`, where the number and app name are placeholders).
That’s annoying. If you just had to disable SIP, change a value, and re-enable it, it wouldn’t be a major issue.
I feel Apple is trying to force apps to handle this “the correct way” - remember UAC prompts all over the place when they were first introduced in Windows?
Ugh, it has been a few years since I did any software development directly within MacOS, but when I did I found it really annoying to have SIP fully enabled. At the time, though, disabling it didn’t take away core features of the computer.
The most frustrating thing with SIP was that I’d always butt up against it at the most inopportune moments: deep in a rabbithole of diagnosing some unusual issue, I’d finally have everything running and set up just right to reproduce it, realize I needed to trace a specific process or something, only to have the system tell me I wasn’t allowed to.
Perfect timing to have to restart the computer and wait while it slowly boots into recovery, then remember where I left off and recreate my environment.
The continuing iOS-ification of MacOS really drove me away from using it as my main computer, despite having been a lifelong Mac user. I still have a Macbook Air, but for any real work it’s just a thin client to my Linux desktop.
I’ve slowly moved towards pushing any serious modifications of my system to a container or vm. The cost of messing something up, in terms of lost productivity alone, just isn’t worth it.
Globals are a nightmare. Despite this being something really common for developers to do, and something that will cause pain for a lot of early adopters of [whatever software requires it], I see this as a great move!
I hit it because I use ssh to tunnel connections to my Mac, and sshd seems to use a file socket to handle the sessions. I honestly don't think it is a sane limit in 2023. Even the most complained-about 'low performance' platform, Node.js, handles thousands of connections just fine. What is the point of a limit of 256?
Having a soft-limit of FD_SETSIZE (typically 1024) prevents memory corruption in applications using `select`. The hard-limit is generally huge (>100k) so applications which don't need that protection can simply raise their soft-limit.
An application that opens many files and doesn't use `select` should raise its own soft-limit to match the hard-limit (or a reasonable number below the hard limit, if it wants to self-limit for some reason).
The default soft-limit should match FD_SETSIZE and should not be raised globally by the user. I don't know why the default soft limit is 256 and not 1024. Perhaps FD_SETSIZE was lower than 1024 historically?
As an aside, Quinn “The Eskimo!” is a legend. They have helped me on a few occasions with code signing electron apps. In the American south there is a saying “doing the lords work.” Thank goodness we have people like Quinn. Their understanding of complex system and abilities to troubleshoot are invaluable.
It sucks because the forums are one of the worst places he could be putting this information: it's a terrible medium, hard to search, and full of information that ought to be elsewhere. I do not appreciate that Apple can rely on this horrible stopgap instead of writing technotes like they're supposed to.
Also, he's got to be approaching retirement age at this point, and there is no backbench corps of other DTS engineers who do what he does. Maybe we'll have his help for another decade, then what?
Pre-NeXT takeover, Apple had a lot of folks like Quinn to support developers, and an entire documentation department churning out content. Quinn was always one of if not the best.
While much was gained in the NeXT merger, the biggest thing that was lost was developer support and documentation. They were gutted in March 1997 and were never restored. Avie Tevanian had no respect for documentation and his attitude infected the rest of the organization. One dude did so much irreparable damage.
I found the documentation of NeXT and Apple during the early days of Mac OS X to be pretty excellent to be honest. It was even one of the points that drew me to the platform. But then I only started with Mac after the takeover, so I don’t know how things were before.
I worked with Quinn at Apple during my brief stay in the mid-90s. He's amazingly knowledgeable about almost everything in the Apple world, which is why he is so valuable at answering questions.
I am, at this exact moment, getting bodied by a ulimit problem on mac. Apparently with pacman, directories attached with bind mounts have a fixed ulimit of 64 internally and running npm install inside a mounted project explodes completely because of it. Funny that this turns up right now, even if it’s not a fix for my particular problem.
I think a security option that does the following would make sense:
1. exec should reduce the hard-limit to FD_SETSIZE unless it's passed a flag
2. When a process calls `select` while having a hard-limit > FD_SETSIZE, it gets terminated. (Strictly speaking this doesn't prevent the memory corruption, since it happens in select's support macros, not select itself. But I'd expect it to be good enough in practice)
A modern application which doesn't use `select` should raise its own hard-limit during startup, opting into the select-termination in exchange for the ability to open many files.
I've been saying, ever since I got a MacBook Pro, that the hardware is absolutely above anything available from other vendors, but macOS is crap for development. Interesting to know it actively and intentionally becomes even worse :/
I still don't know a better platform. Linux may be fine if you like tinkering and can compromise on UX, and Windows is the same but worse (but at least with Linux built in now).
I personally much, much prefer the UX on Linux. I may be suffering from some kind of Stockholm syndrome as a long time Linux user, but I have to use macOS for work and it absolutely doesn't work for me.
I find the UX quirky and awkward to use. I hate how the window management works. I hate how annoying it is to get two windows to show side by side. I hate how there end up so many icons in the menu bar which are entirely unhelpful. I hate how space inefficient the dock is, but when you auto hide it, you can no longer see the notification counts at a glance.
I would honest to God rate macOS UX a 2/5 for design and a 5/5 for QA.
> I hate how the window management works. I hate how annoying it is to get two windows to show side by side.
Plenty of window management apps and tools available. Rectangle, for example.
> I hate how there end up so many icons in the menu bar which are entirely unhelpful.
The only ones I cannot seem to be able to remove trivially are the clock and the Control Center. Solved in mere seconds by typing "menu bar" into the System Settings search. You could literally even just hold cmd and drag almost any icon out of the bar. Not to mention any of the more involved menu bar customization apps and tools.
Why does it seem like such "macOS bad Linux good" comments always compare the default macOS experience to a personalized Linux environment?
Why does an out-of-the-box Mac have to fulfill the same requirements we spent hours configuring for on a Linux machine?
Where does the capability or mindset to install a tool disappear when we move from Linux to macOS?
> Why does an out-of-the-box Mac have to fulfill the same requirements we spent hours configuring for on a Linux machine?
I think part of the problem is that Apple has traditionally sold the Mac as intuitive and easy to use out of the box. People buy a Mac because they've been told that they don't have to spend hours configuring it. Hell, Apple built a whole ad campaign around how the Mac is so much easier and more intuitive to use than Windows computers. However, the truth is that if you're coming from Linux or Windows, a lot of MacOS is incredibly unintuitive and weird out of the box.
Yes, I do in general agree with that. I suppose my confusion really only applies in this particular context of, e.g., HN; I would expect software engineers to look at these issues differently than your average home or office user. It seems to me that this "easy to use out of the box" marketing is aimed at a different sector of users, and I would expect software engineers to be able to look through it. Perhaps my expectations and assumptions are wrong.
I am just curious where all the willingness to tinker and solve problems appears to disappear when we move to macOS.
Windows spying on you? Oh, let's install this tool X and change a whole bunch of registry settings to prevent that.
App windows behaving annoyingly on Linux? Oh, let's just switch to a completely different desktop environment/window manager or what have you.
Too many icons in the menu bar on macOS? Yeah, I'm returning this machine. :)
> Why does it seem like such "macOS bad Linux good" comments always compare the default macOS experience to a personalized Linux environment?
>
> Why does an out-of-the-box Mac have to fulfill the same requirements we spent hours configuring for on a Linux machine?
Pretty much all of their complaints are addressed out of the box by most of the major desktop environments. Install whatever popular distro of your choice and you can trivially put two windows side-by-side like you can on Windows.
For the window management, a big shoutout to `magnet` (for snapping windows) and `AltTab` (for sane alt-tabbing) for making window management much less frustrating (although not completely fixed).
The "windows should be grouped in terms of the applications" does not make sense to me AT ALL.
Most work is done on a project basis, which crosses application boundaries quite readily. E.g., one project might involve having an open Android Studio project, a PDF with documentation, a browser window and a note pad. "Cmd-tabbing", though, brings up the next window of the same application that I am currently on. Viewing a PDF with Preview, this will bring up some random image most of the time? (Luckily this is fixed by AltTab)
No solutions for the dock from me though. Personally I hate how the dock can randomly move between two monitors if you just happen to drag a file through the area where the dock COULD go on your second screen.
Additionally, things I hate and have not been able to fix: how if you close a window of a specific application in MacOS, it moves another window of that same application forward (similar to the Cmd+Tab behaviour) and if you click on an application in the Dock, it brings forward ALL windows of that application (whereas I would expect only the most recently used one).
If you have the time to “play” with it Linux is amazing. But you have to defend yourself against upgrades. It was years ago that a routine upgrade from some Linux distribution to the next version completely changed my GUI in ways that I just didn’t want to deal with - I bought a Mac.
If I were ever to go back I’d run Gentoo or some other rolling release distro; at least then I get changes a little bit at a time.
There are addons and tricks to do window management in Mac OS - but the big one I do is throw that dock on the right hand side of the screen and leave it. Wide screens have space there, might as well use it.
Mac OS still has a bit of that “upgrade changes everything” going on but it seems more gradual and less painful - and in my experience pretty polished.
It is funny because this is actually one of the main things I like about Linux. My current setup is basically the same as the one I had 20 years ago. Things function and look the same. The only difference is that in the background lots of improvements have been accumulating, not to mention my much better PC.
With Linux it feels like I can keep doing things the way I want to do them. True, there are some distributions and window managers that throw the baby out with the bathwater on every update, looking at Ubuntu and Gnome, but stick to something more conservative and you are set for life. Also, every time this happens, somebody will always fork the old code, as with Mate and Trinity.
With a Mac or Windows you are sort of stuck with the wills and whims of some company. If Windows could still look and function like 2000, but just be better, I might still use it to this day.
> With a Mac or Windows you are sort of stuck with wills and whims of some company.
It's true, but I'd also say macOS as it is today looks pretty damn similar to screenshots of the first OSX version shown off 20+ years ago. Other than updated icons (which are still mostly based off the original icons) and an overall flatter aesthetic compared to the flashy, novel 3D of those days, it mostly functions the same. They only updated the Preferences app to System Settings in the last version with a design overhaul, and it was a pretty big deal because they don't do that much.
I've not had breaking changes in Linux for a long, long time. I generally use either a completely unmodified, default config Gnome, or Sway with my own config.
I've used a couple of different desktop environments or window managers over the past few decades, but these days I either stick with the default (which is so often Gnome) or use Sway, where I can just copy in my config file and everything works exactly as I'm used to.
> If you have the time to “play” with it Linux is amazing. But you have to defend yourself against upgrades.
That has not been my experience. If you are using something like Arch (a rolling release distro), maybe, but Ubuntu and Debian stable have been pretty much trouble-free.
I also find it a little funny that a complaint about Linux is in the thread of a post showing how Apple will change things that break their users without apology.
This sorta feels like a parody comment, especially as I currently sit at my work macbook with an entirely redesigned and laid out settings UI (which now actually reminds me of the control panel view from Windows XP, with its new list of categories with lots of confusing, duplicate, and overlapping choices).
I’ve found that even if I have time to allot to playing with Linux, I can never get things tuned the way I want. If you want a bog standard Win9X paradigm desktop (most DEs), iPadOS with a mild desktop bent (GNOME), or yet another minimal tiling WM, those are easy to do, but reproducing other environments (mainly macOS) while keeping all the bits that matter intact isn’t currently possible. It’s quite frustrating for me.
If it weren’t for the fact that such a thing would progress glacially if worked on only in my spare time I’d write my own DE and maintain enough forks that my GitHub profile would look like a fork factory.
> Making a simple tweak like setting caps lock to an extra ctrl is a whole side quest on Linux
I've remapped CapsLock to Esc on both macOS and Linux/GNOME. It's just a checkbox/combobox in a settings in both cases. What is 'a whole side quest' you are talking about?
Part of the fun with that is there's about fifty different places you could do it, and the only one that will really affect everything is in the kernel (IIRC), anything else will affect the program, window manager, terminal, etc but not necessarily anything else. And some programs bypass everything and read scancodes anyway.
Whereas in Mac OS X it's a simple configurable option (a bit hidden, System Settings, Keyboard -> Keyboard Shortcuts -> Modifier Keys).
And if you do it in the kernel, then any other users on the machine are stuck with your choice. The MacOS setting is per-user, but affects very nearly everything, except the recovery environment and I guess the login screen, for good reasons.
It makes me wonder, how exactly would one go about implementing Linux remapping that works on everything on a per-user basis? Is there a way to hook into the USB HID implementation and change what it reports to X, Wayland, etc and flip it on/off based on user preferences?
You'd likely need to have the kernel be able to switch keymaps at any time (and this is likely to be privileged and need to be done by root).
I know there was a way to load different keyboard layouts, but the problem with those mechanisms is they often cannot remap modifier keys, of which Caps Lock is a special case.
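One userspace approach (and roughly what tools like caps2esc and keyd do, as I understand it) is to grab the evdev device and re-emit rewritten events through uinput; a per-user service could start and stop this at login. A rough sketch, assuming the device path and sufficient permissions on /dev/input and /dev/uinput:

```c
#include <fcntl.h>
#include <linux/input.h>
#include <linux/uinput.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    /* The event node varies per machine; event0 is an assumption. */
    int kbd = open("/dev/input/event0", O_RDONLY);
    int uin = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
    if (kbd < 0 || uin < 0) return 1;

    ioctl(kbd, EVIOCGRAB, 1);              /* take events away from everyone else */

    ioctl(uin, UI_SET_EVBIT, EV_KEY);      /* our virtual keyboard emits key events */
    for (int k = 0; k < KEY_MAX; k++)
        ioctl(uin, UI_SET_KEYBIT, k);

    struct uinput_setup us;
    memset(&us, 0, sizeof us);
    us.id.bustype = BUS_VIRTUAL;
    strcpy(us.name, "per-user-remapper");
    ioctl(uin, UI_DEV_SETUP, &us);
    ioctl(uin, UI_DEV_CREATE);

    struct input_event ev;
    while (read(kbd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
        if (ev.type == EV_KEY && ev.code == KEY_CAPSLOCK)
            ev.code = KEY_LEFTCTRL;        /* the actual remap */
        write(uin, &ev, sizeof ev);        /* forward everything, SYN included */
    }
    return 0;
}
```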
People will (and have) shown up to deny this, but what you describe is common enough that it can't be untrue.
At this very moment, I'm still going through the "play" phase. If you pick a distro and then never change anything about it, then maybe it will work out for you out of the box. But as soon as you try to "make it yours", suddenly you'll find yourself with a second day job with no pay, you'll wonder where all your free time went, and your muscle tone will be gone. Inevitably, you'll have some weird issue like a particular program acting slow and causing the mouse to lag, or maybe your Wayland compositor has the wrong cursor position, and nobody on the internet knows why, except for some guy with a script as long as the declaration of independence that you can run in your terminal to fix all your problems (um, no). Eventually you give up and run the built-in compositor within another compositor, and somehow that fixes the original issue. But now your keyboard/mouse configuration no longer works! Holy shit, it's almost Christmas? Screw this, let's install Debian stable. Wait, what? The installer can't find the installation media? You ARE the installation media! Fuuuuuuuuu...
I love Linux as a tool, but the desktop experience will never get its act together. Whether it works for you or not seems to be based on luck. I don't even want a complicated desktop experience; I just want something basic like Openbox and a web browser, but these days you can barely even do that without a bunch of tinkering (unless you want to stick entirely to X11).
It seems like the problem that makes it common is that Linux self-selects for people who want to tinker (that's why they're installing a non-default OS). The difference is that it also accommodates people who reach a point where they just want the computer to work and get out of their way. So you end up hearing both voices.
> If you pick a distro and then never change anything about it, then maybe it will work out for you out of the box.
Yes, exactly. Use a stable distro, don't touch things, and they won't change. It will continue to run exactly the same for years and years. This is in contrast to OSX and Windows. Windows is completely unrecognizable to me since stopped using it 7-8 years ago (the last version I used being Windows 7). My Linux desktop works exactly the same (with Firefox as an outlier that randomly removes/hides functionality). Apple is notorious for not caring about backwards compatibility, and just expecting everyone to accommodate whatever changes they want to make.
> unless you want to stick entirely to X11
Right, don't change things, and it will keep working. Wayland sounds like its benefits are all nerd stuff that I don't care about (a "better" architecture or whatever). Meanwhile, X works just fine. At some point, if people stop complaining about Wayland, and if it has some benefit to me, maybe I'll try it. Currently it seems that neither of those criteria are met. When I do try it, if it doesn't work, I'll just roll it back.
> Yes, exactly. Use a stable distro, don't touch things, and they won't change. It will continue to run exactly the same for years and years. This is in contrast to OSX and Windows. Windows is completely unrecognizable to me since I stopped using it 7-8 years ago (the last version I used being Windows 7).
Unless you're willing to use Flatpak though, you'll be stuck with old versions of applications because user applications in the Linux world are typically tightly coupled to the system packages.
Right but that's the point. I'm fine with my stuff being out of date. I'm not trying to tinker. I don't need to use the latest versions of things because the current ones work fine. If there were something new I really wanted, then I can use one of those containerized versions until it lands in my distro's stable channel, which is every 6 months or so for feature releases.
> Wayland sounds like its benefits are all nerd stuff that I don't care about (a "better" architecture or whatever).
Kind of.
If you have a Retina display, for instance, Wayland is superior out of the box. Unless an application is written all stupid, it has a good chance of rendering at the correct scale while looking crisp. It's possible to do this with X11 or Xwayland, but I found it requires more tweaking of individual app settings and GTK environment variables to get it to look right. But even then, try using both X11 and Wayland apps together and getting both kinds to look right on your HiDPI display without one or the other looking fuzzy or incorrectly scaled. I found it virtually impossible, thus it's better to just go with either one or the other.

Although I think it would have been better to actually fix X11, Wayland does actually do most things better. HiDPI is one of them, and the other is vsync. I haven't seen Wayland cause horizontal tearing, but X11 always gave me this issue. A popular Stack Exchange question titled "Why is video tearing such a problem on Linux?" was written by me, after I had tried my best to get my Linux installations to not experience horizontal tearing and inevitably failed no matter what display or graphics card I was using. I'm glad that someone in the Linux sphere decided to take vsync seriously and make it a non-issue with Wayland.
The average person doesn't need to know about Wayland, especially if they are going with a Wayland-based distro and not changing anything about it.
The only reason one would need to know about it, besides if they are writing a compositor, is if they are trying to customize their Linux distribution. In that case, they may be in for a world of hurt, because it may not be so easy as to install all the Linux apps they know and love and have everything look and play nice.
That's, I think, what gets to people like me. Linux is great for the server and containers, but as far as the desktop goes, it's still trying to figure out what it wants to be all these decades later. The idea that you can make it anything you want is true more in principle than in practice. Luckily, I think I have found my happy place with my customized version of Debian that I'm rolling for my own personal use, but that was after countless hours of trying things until they worked. If I wasn't willing to put in that effort and risk having nothing to show for it, I'd have dismissed Linux as a joke and just used macOS everywhere instead.
I suppose I occasionally notice a tear, but I can't say it's ever affected usability. I'm the type to turn off all animations though, so maybe that has something to do with it. I also use a cheapo $300 4k 32" monitor, and things seem to be fine at 100% scaling with a 10 pt font. If I had a $5000 monitor, I could see wanting to make it work. I also sit a little over 2 ft back from my monitor though, so the angular pixel resolution is still over 60 pixels per degree, so I don't see much point in going higher resolution than 4k.
I have read that HDR support will likely come to Wayland first (if it ever comes to X). That may be a compelling reason to try it out. Hopefully by then some of the kinks are worked out.
Back when I cared more, I was working with a lot of fast motion stuff in software like Maya, Nuke, etc., so the tearing was really bad and distracting.
> I have read that HDR support will likely come to Wayland first (if it ever comes to X). That may be a compelling reason to try it out. Hopefully by then some of the kinks are worked out.
As a side note, it's kinda crazy to me that we're talking about HDR "coming" in 2023; my graphics design teacher was telling us about HDR in 2009.
Another part of the problem (for me) is I would get something like an LTS setup the way I want (a CentOS or an Ubuntu LTS, but I honestly can't recall exactly what I was running on my desktop at the time) and it would be rock solid for years - until it finally was old enough (or out of support) that I wanted to install something new and had to upgrade - at which point I'd get five years of changes right to the face, and spend a week or more tinkering everything back to how I wanted it.
> But as soon as you try to "make it yours", suddenly you'll find yourself with a second day job with no pay, you'll wonder where all your free time went, and your muscle tone will be gone.
It's true for every OS. I went Windows -> Linux -> MacOS and I had to get used to it every time.
I feel very similarly. Macs are polished and well built, but the amount of anti-features or weird UX decisions that I can't do anything about are maddening. I eventually switched to linux for my personal dev machine and after some initial bumpiness came to love it.
Unfortunately _entering_ Mission Control doesn't switch apps. On Windows it's a 3-finger swipe left/right. It seems like 'one of the bigger strengths of the Mac' is bigger on Windows.
That’ll switch spaces which makes more logical sense as they’re stably ordered left to right and apps (unless pinned to a space as a full or split screen app) are not.
On Windows it's a 4-finger swipe left/right, but it's not possible to switch between apps on macOS via gestures without third-party apps. That's why I didn't get that 'compromise on UX' statement. MacOS gesture support is poorer compared to Windows (at least for my workflow), but on the hardware side Mac's trackpad is superior.
Gesture-based window management has been around on Macs since early versions of OSX and is a feature that Microsoft has attempted to copy, poorly.
> An interesting addition is the ability to use a three finger swipe up gesture to activate the new Task View feature of Windows 10. Not only does Task View look like OS X’s Mission Control (Exposé) feature, the three finger swipe up is the same gesture. Microsoft is also borrowing the three finger swipe left and right to activate switching between apps
Well, no. It's the part where you claimed that "it's not possible to switch between apps on macOS via gestures" that tells me you aren't familiar with how the Mac works.
But Apple should generally fix this. Having a default per-process limit of 256 open files is almost as outdated and idiotic as having a filesystem that doesn't support case sensitivity or paths longer than 256 characters (NTFS).
And Windows now supports longer paths, but I think it's still disabled by default (it requires the LongPathsEnabled registry value plus a per-application manifest opt-in, and I'm sure there's a bunch of legacy software that is still using old APIs and can't support long paths anyway).
The limit came with Win32. It’s not a “legacy” API, but it is as old as NT. There are some applications which do not use the MAX_PATH constant, such as Office (probably one of the few rare examples; Office apps have some horrid hacks like this and how they paint their menu bar).
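For what it's worth, the long-standing escape hatch is the `\\?\` prefix, which bypasses MAX_PATH parsing for the wide-character Win32 APIs. A sketch (the path itself is made up):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Without the \\?\ prefix (or Windows 10's LongPathsEnabled plus a
       manifest opt-in), paths beyond MAX_PATH (260) are rejected. */
    HANDLE h = CreateFileW(
        L"\\\\?\\C:\\some\\very\\deeply\\nested\\path\\file.txt",
        GENERIC_READ, FILE_SHARE_READ, NULL,
        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}
```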
Maybe that's not going to be popular here but my opinion is that the only reason MacOS is used at all for mobile development is because it's forced. Otherwise they would lose that as well.
I've never had to use anything worse than Xcode yet. I'm almost sure they have some kind of secret in-house tooling, other than Xcode, to develop their own software.
That's a pretty strong statement. I use both and it's been very smooth sailing for me, switching to Windows is something that comes with so many headaches that I couldn't make the case for it, besides I wouldn't know what to improve compared to how it is today. Using multiple GPUs under linux while also using them to drive a display is perfectly normal.
Was there anything specific that you would like to see improved on a Linux/NVidia combo that I may simply never have run into?
Which is relevant only if you actually do CUDA development? I've worked using CUDA in the past and usually remoted-d in to a big beefy HPC system anyway, since no laptop would really give you the power needed to exercise the code properly.
Linux and Nvidia work well these days, it has improved greatly from the dark days of "F you Nvidia" by Linus. I have been using KDE Neon with Nvidia/Cuda for several years now with no issues
I can speak as someone who has done it. I have not been able to achieve Windows-level performance. Plus the time it takes to do things like force the GPU to turn on/be used by the application/OS is nontrivial, and I'm a full-time programmer.
Not to mention when I'm using GPU based applications, my second monitor sometimes freezes. Mostly Steam, but rarely videos.
(Not to say Windows didn't have its own problems like forced update/reboot, ads/ragebait news in the start menu, full remote control of your file system.... gross)
Yeah, I misattributed a lot of minor annoying things with my Linux machine to “general desktop Linux jank” for the past couple of years and would have said Linux+Nvidia was totally fine. Then I replaced my Nvidia GPU with an AMD and realized it was all GPU related.
It depends what you're after. They don't have CUDA, so if you're interested in running something already developed, you're behind. But if you want to develop something new, MPS which is able to use system memory may be a great option too.
AMD on Linux is also making at least some waves recently. Still catching up, but it seems like I've seen 100% increase in ROCm posts compared to last year...
Basically no experienced Mac user would ever open a .PKG package with the Installer, but rather open it with a safe inspection app (Suspicious Package, Pacifist) first, either for security inspection of the pre- & post-install scripts, or to do a manual install outright to prevent the security nightmare that actually installing a package is.
While inspecting and/or manually installing packages is much safer than installing them with the Installer, it is now effectively outlawed by Apple. Continuing to do things the safe way now takes a lot more time...
Just turn off SIP. SIP is for regular users who don’t know what a ulimit is, the whole point of SIP is to lock down the OS as much as possible.
If you’re a developer, you live in the Terminal, you obviously need full control over your OS.
edit: I appreciate the irony of being downvoted for suggesting having control over your OS on Hacker News, so keep it coming please. Mo’ downvotes mo’ irony.
I'm a software developer at a company where ~50% of the staff are developers, and our IT fleet management enforces SIP. This simply isn't an option for us because of security requirements from our customers.
SIP is the default though. Turning it off is an option for some, for now. But Apple has clearly indicated through their actions, for a long time now, that they're moving towards a more restrictive ecosystem.
I would suggest you likely aren't being downvoted because you suggest having control; instead, you are being downvoted because you seem to believe that turning off SIP is a normal thing that people should be doing regularly.
I think you are finding strong disagreement with that.
People probably don't believe they should have to do that to perform regular development work.
You aren’t being downvoted for telling people to have full control over the OS. You can do that with SIP enabled, or boot to recovery, disable, modify, enable, and have full control over your OS. How often are you needing to modify low level OS config that you’d rather make your entire machine vulnerable to root exploits than dance around SIP a couple times a year if that? That’s why you’re being downvoted, for advocating folks make their machine way less secure to save 3 minutes worth of reboot time a year, if that. Bump the hard limit once and you never need to touch it again.
3 minutes worth of reboot time a year for this, 2 minutes worth of reboot time for that, 1 for something else and 2 extra for no apparent reason. My previous company switched everyone to Mac and the second biggest reason I quit that job was that Mac was a horrible OS to work on. Constant reboots, crashes, no configuration for basic things like scrolling or window placement. Apple builds great hardware but the OS is only good to make presentations and edit video, not for software development.
A large number of extremely talented engineers might beg to differ. Everything you listed as an issue has a solution. Like any operating system, you have to spend the time to learn the intricacies of how it works and to customize it to your liking. For me, must haves are Alfred to replace spotlight, my dotfiles which change a ton of defaults in various apps like finder, the dock, etc, setup key repeat, iterm2 colors and profile, etc. divvy and magnet for window management. Caffeine to prevent sleep. Stats open source menu monitors to replace istatmenus
I’m sure there are newer equivalents to what I’ve listed. I’ve been using those programs for years.
I did find solutions for my problems on Mac, but the solutions were hard to find, poorly documented, subscription based or a combination.
Meanwhile on Linux it is generally fairly easy to find what you need in the documentation or in the forums. It can be a bit more involved when using some very niche tools but it's not worse than the average Mac app I had to deal with.
I am not a very talented engineer. I'm a normal engineer who enjoys his craft, tries to do quality work and tries to be efficient. My opinion is based on my experience using Mac and Linux alternatively for the last 5 years doing development professionally.
I have seen very talented devs using Mac, but also others that were just as talented and complained when they were forced to switch from Linux to Mac. Hell, the smartest most talented developer I have ever met (by a mile) developed drivers on Windows and he told me that for the type of development he did Windows was all right.
I have to doubt that there is any correlation between how talented a developer is and the quality of an OS, because most developers I know use what the company allows them, and it's somewhat rare to be allowed to choose.
I will agree that recently, especially in the last 2 major versions, the OS has gotten worse from a stability perspective. I have errors in my logs at a steady pace even on new machines and fresh, untouched OS installs from the factory. They just never go away. The cloud services are always on and phoning home, even when you have everything that uses an Apple ID signed out. It’s becoming more intrusive and less configurable, but nothing beats the shortcuts or the Mac keyboard layout, and the UI intuitiveness. I can’t go back to ctrl-s, and every time I’m on my Linux machine I struggle to do the ole carpal tunnel-s to save haha
Regarding your carpal tunnel comment. I started having carpal problems very young (in university). Then I looked into and went all in with an ergonomic keyboard, ergonomic mouse and ergonomic chair. It went away in a couple weeks and I haven't had a problem in 10 years, and I use the computer more than it could possibly be healthy. I've had younger coworkers complain and I always recommend getting a good setup because it pays off in health easily.
Turning off SIP allows for any user process to immediately gain root privileges. This is surprising to most people, so I generally would not recommend it without fully understanding the risks behind it.
~You can have a root user with SIP enabled. SIP protects core OS files from being modified while it’s enabled. This prevents processes, even root processes, from swapping out core libs with modified ones, installing root kits, back doors, etc.~
I misspoke
> System Integrity Protection (SIP) in macOS protects the entire system by preventing the execution of unauthorized code. The system automatically authorizes apps that the user downloads from the App Store. The system also authorizes apps that a developer notarizes and distributes directly to users. The system prevents the launching of all other apps by default.
Uh, this being a repeated problem in the past? Apple puts some sort of god mode behind an entitlement, only checks the entitlement rather than the actual permissions, disabling SIP allows for anyone to steal the entitlement and assume root. Apple does not consider this to be a legitimate security issue because they do not think systems with SIP disabled deserve security.
So, whose computer is it REALLY when you can't make settings/decisions about the computer you bought? When some other entity, manufacturer included, is making decisions you can't change or reverse, it sounds like they retain real ownership.
It's high time for the FTC to start busting fraudulent rentals misrepresented as a "sale".
(And anybody viewing the Apple ecosystem KNEW this anti-owner lockdown phone shit would eventually hit their laptop and desktop computers as well. But for some reason, the Apple fanatics are OK with this stuff. )
Intellectual property rights (copyright, trademark, patent) should NOT dilute or devalue or remove inherent ownership from property rights.
And "license" presented after the purchase fact should make that the equivalent of toilet paper. It's a "license" after they have your money and over a barrel. There's no agreeing to that. At at least the Europeans with the GDPR realize that farcical situation and illegalized it appropriately.
"Agree or turn your $X000 device you paid for into a paperweight" is no agreement.