I run 3 displays across 2 video adapters. Getting them all to work the way I want (one of them is generally powered down, but not disconnected) was a struggle, but it now works flawlessly and I tend to forget about the work involved.
The key things I needed to find out:
1. Figuring out how to run a script when a display was connected or disconnected. This needed to be in a udev .rules file somewhere:
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/usr/local/bin/monitor-hotplug.sh"
2. Enumerating all possible video display names:
DEVICES=$(find /sys/class/drm/*/status)
3. Writing the rest of the script.
The last step is mostly left as an exercise for the reader. There are some good examples online; I'll put my current version here https://pastebin.com/V5yGgCbh for a couple of weeks.
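In case the pastebin rots, here's a minimal sketch of the shape of mine (connector and output names are examples; the real script has more conditional logic). The .rules line above goes in something like /etc/udev/rules.d/95-monitor-hotplug.rules, followed by a `udevadm control --reload`:

#!/bin/sh
# /usr/local/bin/monitor-hotplug.sh -- minimal sketch, not the full version.
# udev runs this as root with no session context, so point xrandr at the display:
export DISPLAY=:0
export XAUTHORITY=/home/me/.Xauthority   # example path

# step 2: enumerate every connector the kernel knows about
for status in /sys/class/drm/card*/status; do
  echo "$status: $(cat "$status")"
done

# step 3: react. Careful: sysfs names (card0-HDMI-A-1) don't always match
# the names xrandr reports (often just HDMI-1), so map them by hand.
if grep -q '^connected$' /sys/class/drm/card0-HDMI-A-1/status; then
  xrandr --output HDMI-1 --auto --right-of eDP-1
else
  xrandr --output HDMI-1 --off
fi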
You should check out srandrd; it lets you run a simple bash script when monitor changes happen. It sets some environment variables so you can figure out what happened.
It's been a while, but I had real issues using udev rules; they created timing problems or something like that.
It's pretty telling how the top two solutions here for this common problem involve writing custom scripts. I love scripting at least as much as the next guy, but this seems like something that should be easily configurable out of the box without a text editor.
If your setup is relatively static, then there are GUI tools that will work just fine. But if you have displays coming and going in different orders, it's quite a big ask for a GUI tool to express the sort of conditional logic that is required to get things right. That's precisely my situation.
It does for me (Fedora, GNOME, Wayland). Other stacks are more work. I was put off using Sway as my desktop because configuring screen resolutions and layouts was manual.
It gets better. One of my ASUS monitors has a defect (it kicked in after several years of use) where it no longer provides EDID information. I believe that on Windows and macOS this would effectively be the end of life for the monitor, which is a shame since in every other way it is still 100% functional. On Linux, I was able to tell the kernel to use a file for the EDID information, and created the file using the other (identical) monitor.
Very much a corner case, but also another perfectly good monitor kept out of a landfill.
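For anyone wanting to do the same, the recipe is roughly this (connector names and the file path are examples, and the kernel needs CONFIG_DRM_LOAD_EDID_FIRMWARE enabled):

# dump the EDID from the identical, working monitor
cp /sys/class/drm/card0-HDMI-A-1/edid /lib/firmware/edid/asus.bin
# then tell the kernel to use that file for the broken connector,
# via the kernel command line (e.g. in the GRUB config):
drm.edid_firmware=HDMI-A-2:edid/asus.bin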
Windows may actually let you override the EDID info. It's been a while since I used Windows, but I think it lets you do it in Device Manager... from the era when displays came with drivers on a CD-ROM.
There are also EDID cloner/emulator hardware solutions that you plug in inline on the display cable.
My linux machine remembers the displays and where I place them relative to each other (I'm not using a laptop).
But ... I sometimes use 1, 2 or 3 of the displays, and the configuration needs to be different depending on the number and specific displays in use.
My Linux machine can do this automatically, because this is scripted. Maybe you can do this on macOS also, but I doubt it can be done with nothing but the GUI tools.
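The gist of the conditional part, sketched (output names and layouts are examples, not my actual script):

# decide on a layout based on which outputs are currently connected
connected=$(xrandr --query | awk '/ connected/ {print $1}' | sort | paste -sd+ -)
case "$connected" in
  DP-1+DP-2+HDMI-1)
    xrandr --output DP-1 --primary --auto \
           --output DP-2 --right-of DP-1 --auto \
           --output HDMI-1 --right-of DP-2 --auto ;;
  DP-1+DP-2)
    xrandr --output DP-1 --primary --auto \
           --output DP-2 --right-of DP-1 --auto ;;
  *)
    xrandr --auto ;;   # fall back to whatever X thinks is sensible
esac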
IME, this isn't totally accurate in the world of USB-C HDMI dongles. I've had the same 2 monitors plugged into the same 2 ports since the machine went to sleep, and it will occasionally switch their positions upon waking.
We Linux users keep endlessly personalizing our systems, forking off into the sunset with ever more weird and over-optimized specialisations, and then we wonder why stuff is buggy and why new users have a hard time.
Look at Windows and OSX - and the numerous problems they have supporting just one system. And here we are with our 362894 million DEs, and we want different init systems and file systems and plugins and themes and god knows what, but we get so mad when this obvious mess does not work perfectly.
Maybe stop for a second and marvel at the fact that this thing can boot at all, given that no 2 people in the community can agree on how that should even be done.
(To be read in a comedic tone, channeling my Lunduke here)
100% agree. I tried running Pop OS as my main system a few years ago and got sucked into ricing, WMs, extensions and themes. Spent all my time customizing and fixing the bugs that arose, hated it and went back to Windows.
Now I'm back on Pop, but this time I know enough to not customize too hard and it works great. A DE/WM is meant to function as one cohesive piece even if it's built out of many components, so if you take a functional DE and start poking around, then of course it will break. Even the amount of customization I can do easily without breaking anything is far above what Mac/Windows offer.
> 100% agree. I tried running Pop OS as my main system a few years ago and got sucked into ricing, WMs, extensions and themes. Spent all my time customizing and fixing the bugs that arose, hated it and went back to Windows.
This is always my issue with Linux. Windows is boring and ugly - to the extent that I probably can't fix it - so I don't bother and I just get stuff done instead. But with Linux, I can imagine this perfectly configured workstation and I know it's possible because I've seen the things people do. So I spend hours tinkering until I break everything or get bored. Then I come back months later and the Nvidia kernel module fails to build after a kernel update or something and I have to start over from scratch. Not a recipe for productivity.
This isn't a problem with Linux, per se, more of a personal failing. I think I've almost got it out of my system.
It's actually the same thing on Mac or Windows. Younger me spent a lot of time poring over forum posts for tweaking macOS (those undocumented `defaults write` commands) to customize things. One day an OS update broke my highly custom /etc/sudoers and I lost sudo access. That's the moment when I realized I'd gone too far.
I do the same nowadays. I spend extra time learning the defaults (muscle memory etc.) in my OS and programs, which is about the same effort as customizing them (if not less).
The difference is that when setting up a new environment it costs nothing. If I had tons of tweaks and custom scripts that would be a setup cost every time.
I still run Manjaro on one of my laptops, but just recently returned to Windows 11 for my other laptop because running 2 different resolutions (external display) on Linux is just too much of a pain. Windows just works. But with WSL2 I find myself not missing Linux :/
Same here... when I started with Linux (around 17-18 years ago), I had to customize everything - from aesthetics and ergonomics to the system-level stuff.
Now I just change the wallpaper and mouse cursor theme, check enabled services, and remove the packages I consider unnecessary. Including the fresh installation time, it is less than 1 hour of effort.
Things work differently on Linux. The software is supposed to respect Unix philosophy and standards, so it shouldn't matter what combination of packages you run. They're not supposed to be tightly integrated; the GNOME team seems to have forgotten all of this. As for managing displays, it has become much better with Wayland and it's improving fast.
I think the problem is that making things look to the user as an integrated, polished experience is exceptionally hard when you have the loosest of loose coupling all over the place, and differing views as to how to get things done. Sure, you can put APIs and documented IPC interfaces between things, but that doesn't mean everything will match up in a usable way. So, eventually, you just decide: screw it, we'll own the entire stack, or at least gain enough influence over the people who make parts we don't control, such that everything works seamlessly for us.
In part, this is why Apple can give their customers such a polished, "just works" experience. Buy all-Apple gear, and stay in their software ecosystem, and (modulo bugs) everything will work together seamlessly.
I don't like this outcome by any means. And I still think that some desktop components certainly benefit from loose coupling and well-defined interfaces, and it's possible to avoid the "polish" downsides in some cases. But doing everything that way, while still being able to put everything together in a polished way, might actually be impossible.
Maybe the idea of a perfect, integrated experience doesn't matter as much as we think. The web's chaotic UX won. Games roll their own UI. Now Electron is eating the engineered UI.
> Maybe the idea of a perfect, integrated experience doesn't matter as much as we think.
This is not an uncommon view. However I can’t help but be convinced it’s the main reason the year of Linux on the Desktop hasn’t happened yet.
The user really does want their computer to seem like just one “thing,” at least in the way that it comes out of the box. When we visit websites it’s understood that these pages don’t interact with each other and weren’t designed to work together.
The OS, on the other hand, is supposed to be a system per the name.
I'm not sure what you're asking. The browser is not an OS, so the websites are not supposed to form a system. People using Microsoft's default browser is yet more evidence that people generally stick with the OS defaults, because they are supposed to work.
I wanted to write OS, and couldn’t edit it later on :/
My point was that Windows is the most popular OS despite it having a ridiculous number of "native" looks. So Linux not having one is not the reason for its less-than-ideal desktop uptake.
Looks are not the most important part. Functionality is.
Besides, as bad as Windows’ UI fragmentation is, Linux’s is worse. When opening a Linux application, from Pidgin to Firefox to the control panel to Telegram, there is absolutely no guarantee of what you’ll be getting. Having 7 design languages is bad, but what’s worse is having 0.
Absolutely. This is why I'm absolutely hostile towards current-day Gnome. Even if they do make some incremental improvements, the reluctance to do so in a "share-it-back" way is the exact opposite of why I got into Linux in the first place. I quite literally wish it failure.
(But KDE? I don't know, I don't use it much, but it seems like at least their stuff is mostly-compatible-if-you-work-it-a-bit, unlike Gnome's full-on hostility to 'cross-compatibility.')
I'm in the same boat. Trying to negotiate with GNOME developers only got me laughed out of the room, now I couldn't care less about what they think. They promised to kill sacred cows, and all we got was this lame desktop that gutted 90% of the things I liked about GNOME 3.38. Luckily, it doesn't look like we need to wish it failure much these days. I don't know many people who use GNOME since 40 (much less any developers on it), and compared to the KDE development roadmap their progress is utterly lethargic. If anything, it's a project that is sustained by a few paid contributors from Red Hat and a miasma of third-parties who have some vision of GTK and GNOME converging into an Apple-like platform.
There actually are a good number of paid contributors from small companies, and a good number of volunteers too. That's what I have seen.
Don't feel discouraged if your first contributions didn't make it in, GNOME is a big project so I'm sure you could find some other areas to contribute to if you really wanted. Keep in mind that core areas such as the shell and GTK are probably bad places for first time contributors as they tend to be very complex, it's best to start with a smaller app/library and go from there. Of course if you don't want to contribute then you don't have to either, but I think all of this applies to most large open source projects that I have seen.
I'm not mad that my patches were rejected; the GNOME team is notorious for neglecting basic functionality like thumbnails in the filepicker for almost two decades, even with literally hundreds of pull requests with suggested fixes. I'm mad because the current maintainers have no interest in extending the discussion around what GNOME should be. For all their talk of inclusion and diversity, their attitude runs in the complete other direction. They're running into the same issues that systemd did, where their software's scope is expanding way too far with far too little substance. I've been told to "not bother" making apps that I don't plan to distribute via Flatpak. I've been told that disliking Adwaita is tantamount to fascism, and when I try to reason with people and explain myself I get told to read the Code of Conduct.
You're painting this out to be a personal issue, which it isn't. The culture among GNOME developers is one of the most toxic I've ever seen, and it's continuing to poison a desktop environment I desperately want to love. Every time I suggest something I get shut down though, so why bother? Why would I willingly hurt myself in the process of trying to make a usable desktop? The only thing I can do now is share my experience as a warning to other developers who want to make Linux-native experiences: GNOME does not want your help, don't waste your time trying.
Sorry to hear that but I think on some level it is personal. I have never experienced any of what you're talking about, everyone I've talked to has been pretty respectful and open to collaboration. It's a large project so experiences may differ, we may not have ever interacted with the same people, or you may have caught them on a bad day, etc.
On those individual things, you can certainly make apps outside flatpak although on a technical level I think that flatpak (or a similar packaging mechanism) is going to be the best option for a great number of apps, and I would expect that to become the focus for many app developers just because it's a lot easier and saves time. I think that comment about Adwaita is pretty inflammatory and may be seen as being against the code of conduct, not sure, but it certainly isn't my view and I doubt it is the view of the majority of contributors.
"neglecting basic functionality like thumbnails in the filepicker for almost two decades, even with literally hundreds of pull requests with suggested fixes."
I mentioned this elsewhere but I'm very disappointed to see this issue get continuously brought up; I don't think there is much we can say that is productive at this point. I've never seen a pull request for this that was actually finished to completion. Is there someone in particular you're waiting for to approve this? If so, can you think of something that could help them out? Or do you think they don't want help at all? Because from my perspective, that is not the case.
Edit: Also, hundreds of pull requests to implement thumbnails? Is that an exaggeration? I'd like to see a list of all of those if possible.
Keep in mind, it's not unusual for a large patch to go through many revisions before finally making it in. Take a look at the Linux kernel for another example, you don't have to look far to see many patches that take a long time to go through review or just never make it in because of various reasons. I don't think you are being fair by painting this as a GNOME behavior, it is simply reality on large projects with a lot of complexity and moving parts. It sounds like you are also saying systemd suffers from the same issues (it probably does) but unfortunately it seems that is another area where it's a complex problem space, so that's the trade-off that you make.
I'm making an honest offer to reach out and help and correct past wrongs; I am sorry if it's patronizing, and if you want me to do something else then just ask. I'll be here if you change your mind. Please understand that if you refuse to take anything but complete agreement for an answer then it is going to be very difficult for anyone to help you. I can relate to your experiences but I'll never be able to experience them fully myself. So in that way, the ball is in your court.
Don't worry about my project, I'm only commenting here to help you and to explain that those other opinions are not shared by everyone. In fact commenting on social media is usually a waste for my projects as it only seems to attract more negative/toxic comments from people who (incorrectly) assume that I share all my opinions with an upstream project or something like that.
Please avoid such hostility, open source is a team sport and we are all on the same team, even projects that do things in ways that may be not immediately useful to you or I. That kind of hostility is what leads to open source people burning out.
I am not sure why you think they don't have a "share-it-back" mentality or why you would wish any project failure. They use mostly the same set of open source licenses as KDE.
No. You don't get to simply declare we're on the same team if you're obviously trying to score points for the other side. In this case "the other side" is exclusivity and Gnome keeps doing it.
As a lawyer, I hate to inform you that the entire point of licenses is to put real teeth on ideas that people are likely to renege on. LIKE HERE.
I am not trying to score points for any side, all of these projects can and do share code when it's practical and useful to do so. There is no "exclusivity", I'm sure you know that most of these licenses in use are compatible with each other. The end result is that we are already unwittingly on the same team, it took me a while to realize it but that's the whole reason things like Github and such have taken off. I'd love to discuss your experiences with that as a lawyer if you have any interesting stories, but I would have to ask you to please avoid the hostilities, that doesn't help when trying to fix issues. We both have the same access to the same code.
When Gnome stops being hostile to people who want to modify it in the original spirit -- not necessarily just letter, but SPIRIT -- of free/open source, I'll stop being hostile.
I'm not trying to "fix issues." I'm trying to point out that, sure, Gnome may be technically following the rules, but the entire project is being a collective jerk about it.
Now, we can go deeper on why -- I have no reason to believe individual developers are jerks. I understand that there's money and influence and (cargo-culting) pressure from large companies involved.
But presently, Gnome is a collective putting out a product that is harmful to the environment -- not in an ecological way, but in one that reinforces bad ideas about how to make software, aka "only do the bare minimum to comply with the license, otherwise try to dominate via whatever means possible."
I don't know what you mean by spirit. To me the spirit has always been "here's some code, do what you want with it", i.e. the exact opposite of what you're saying. I think you may have some missed expectations; I can't understand how you'd perceive that as being dominating, or what you mean by someone being a jerk. Both you and I have the same access.
Edit: In my experience, pointing out how someone is a jerk doesn't really help in open source either. That usually just causes them to become defensive and only increases the hostility. Since the code is open it's much better to just fix it for yourself and not worry about what someone else thinks. That is, if you think the situation is truly unrecoverable. If not, then it's better to set aside your differences and work it out.
I'd say "learn your history?" This all came about because of Richard Stallman et al, who absolutely were not "here's some code, do what you want with it."
They were "We have a good thing with this Unix deal and how we do it, we share freely, backwards AND FORWARD. How might we continue this in a wider fashion; knowing that some might be inclined to take and not give back?"
And thus, the GPL was born. MIT-style licenses are fine in some cases, but you're working off the back of Linux, and that's GPL territory.
I know who Richard Stallman is, I've met him several times and I used to volunteer for the FSF. The GPL says that the user is allowed to share modified versions with other users. It does not say that the original author is required to accept some modifications into their version. If you don't believe me then the GPL FAQ has a bit more information about this: https://www.gnu.org/licenses/old-licenses/gpl-2.0-faq.en.htm...
"Some have proposed alternatives to the GPL that require modified versions to go through the original author. As long as the original author keeps up with the need for maintenance, this may work well in practice, but if the author stops (more or less) to do something else or does not attend to all the users' needs, this scheme falls down. Aside from the practical problems, this scheme does not allow users to help each other."
It's pretty explicitly spelled out here that it is absolutely what they meant, it wouldn't work at all if it wasn't "here's some code, do what you want with it."
If someone is trying to get you to take a case along those lines, I would have to say don't take it; it's probably not going to be a winner.
It isn't just GNOME, almost all Linux distributions rely on tightly-coupling a set of libraries and binaries. That's why you can't take a binary from one distro and reasonably expect it to work on another and it's why there are dozens of repos and thousands of package maintainers and shipping on Linux is a pain in the ass.
Meh. I don't think it's technically difficult to just ship binaries that work on any distro, as it's incredibly common - Firefox, Blender, Arduino, Cura all manage it just fine. Mostly they just come as a tarball you unpack, occasionally you get an AppImage so you don't even need to do that. Technically it all works fine. But the Linux world still holds a cultural aversion to running mystery binaries, echoing from a time when the sense of battle against proprietary software was felt more keenly than it is today. Expecting people to run binaries they didn't compile was considered rude and antithetical to core values; distros being the only authorized exception. Even today, attempts to dislodge the distro as the official distributor of binaries are met with controversy and suspicion.
I've been working with Linux for more than 20 years and I don't recall any cultural aversion to binaries, at least the ones shipped by your distribution.
Compiling everything has always been a very niche hobby limited to mostly Gentoo and similar distribution users.
What we've maybe always been diffident about is app bundles: big images of apps with all their dependencies, most of which are duplicates of already installed libraries.
But I guess we're moving on from that, with Snap, Flatpak and the like gaining traction.
We agree; distros are/were the anointed compilers and distributors of binaries. No distro would ever include an upstream-compiled binary in their official repositories, even if it worked perfectly well and had no dependencies.
You: "it has become much better with wayland and is improving fast".
Two comments above you: "Wayland has been a steaming pile of crap, from the fact that it still can't work in basic graphics modes 10+ years after being announced, to the fact that it has loads of quirks that vary from one environment to another."
The Unix philosophy is in practice horseshit for anything that isn't semi-structured text processing, and for discoverability. Gimme software with all the functions laid out in menus that I can explore!
"They're not supposed to be tightly integrated, the gnome team seems to have forgotten all of this."
That is a misconception that I've noticed, maybe it's because KDE uses different APIs for a lot of things, or maybe it's because of the way some distributions package it. But none of the Linux desktops are really that tightly coupled when you actually dig into it. For example managing the desktop settings is handled by its own daemon appropriately called "gnome-settings-daemon" that loads all its settings modules via a plugin system. KDE is structured similarly in that its code is split across a great number of small support libraries. If there is no secondary consumer of those APIs it's probably because nobody has bothered yet to make and test additional combinations of packages that actually work.
Yea, I have absolutely no issues when running a mainstream distro with mostly default behavior (PopOS/Ubuntu). When I veered off into StumpWM land, I had a really strange issue that I had to debug.
My laptop is pretty much always in "Nvidia graphics" mode to power the external monitor. When I tried to install Stump I didn't have it hooked up. My laptop screen was completely blank, but everything was functional. To make a long story short, it was completely unrelated to Stump. My Nvidia output was prioritized and the internal screen wasn't activated.
I ended up learning a lot about udev, how to write udev rules (that ended up useful when trying to do something with a keyboard with a unicode symbol in its name that a program didn't like), and how to use xrandr. All in all, it was super satisfying to figure out, and I would say the frustration was worth it for me.
I know exactly zero other people in my life that would say the same thing, but I also know zero other people that would try to install StumpWM.
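(For the record, the eventual fix boiled down to a one-liner of roughly this shape; the output name is an example, check `xrandr -q` for yours:

# turn the neglected internal panel back on and make it primary
xrandr --output eDP-1 --auto --primary

plus a udev rule to run it at the right moment.)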
None of this works on my Fedora 35 system. Welcome to nouveau and a laptop. From displays that don't resume from power save mode, to odd bugs with monitors appearing to be "on and enabled" in the appropriate display settings but not actually displaying anything.
IMHO Wayland has been a steaming pile of crap, from the fact that it still can't work in basic graphics modes 10+ years after being announced, to the fact that it has loads of quirks that vary from one environment to another.
That said, Intel graphics tend to work for me, but pretty much nothing else works consistently in Linux.
Sincerely, a sad user running X11 on a recent laptop with hybrid graphics...
I don't think Linux or Red Hat is to blame for your poor experience with Nvidia graphics. The one-finger salute Linus gave Nvidia years ago was well deserved. A lot of work has gone into reverse-engineering attempts to improve nouveau, but there is still a long way to go despite their efforts.
Exactly. Either use the proprietary driver for the GPU you bought, deal with the flaws in the FOSS reverse engineered driver, or get an AMD GPU since they actually release their own legit FOSS driver.
Hey, I get it, making shit work is hard. It's always more fun to do greenfield development on a new project. But basing the entire premise of that project on the faulty assumption that a certain set of GPU functions is stable and well supported across most of the devices in the field? That's just shitty engineering. If nothing else, the Wayland guys could have created an llvmpipe-style implementation 8+ years ago, so that Wayland isn't a case of: does my hardware+drivers actually fit the narrow set of requirements to run this environment stably? Maybe simpledrm will fix that, who knows.
But what is apparent is that a significant portion of users, be they Nvidia users (and no, the proprietary driver Fedora wraps isn't better in this case, it just has different problems), users with the latest bleeding-edge Intel/AMD hardware, or users with slightly older hardware that wasn't part of the amdgpu test matrix, are being left to fend for themselves with X11. Ignoring, of course, that GNOME is really the only thing that supported Wayland until KWin started adding support. All the lightweight or non-mainstream WMs are just running on X shims.
Well, thanks redhat! But, if the state of graphics on linux is so bad, why are they spending their scarce resources writing an entirely new graphics stack rather than maintaining the one that works?
This is IMHO the Firefox problem all over again: instead of fixing user-visible problems while their market share plummeted, they were creating an entirely new programming language to rewrite the browser in (something their competitors apparently didn't need to do). Result: eventually they can't afford to do either, because they ignored the fact that exactly 0% of their users cared about whether the browser was written in Rust, or some limited subset of C++ that could be maintained.
Also, if one looks at the graphics drivers in Linux, there are dozens and dozens of them that aren't amdgpu, intel, or nvidia. How many of those actually work with Wayland? I'm fairly certain it isn't many.
PS: Also, if someone tries to participate but is ignored, who is at fault? Much of the open-source community works by trying to find near-100% consensus. That's why it takes forever sometimes to get a solution. Yet when Nvidia does stick their toes into it, their contributions are ignored because "we know better", despite their market share? Doesn't sound like consensus; sounds like they got run over.
> But, if the state of graphics on linux is so bad, why are they spending their scarce resources writing an entirely new graphics stack rather than maintaining the one that works?
X11, being 37 years old, has fundamental limitations in its design that make it unsuitable for modern computing. There have been better ideas in the decades that followed, ideas that are incompatible with the X11 way of doing things.
This is a discussion that happened years ago; all that's needed now is to perfect the implementation so that everyone can move on to it.
It's not even the Wayland protocol in question, but a goddamn kernel API that Nvidia used to refuse to implement. It just happens that Wayland tries to use the... Linux graphics APIs.
The reason X works better with Nvidia proprietary drivers is that you were using a hybrid proprietary X server in the first place.
Haha, let's not throw the baby out with the bathwater though. The Unix philosophy relies on good APIs between subsystems and enables the user to combine them as they see fit. The problem is that the interfaces are getting a bit rusty (no pun intended).
macOS is totalitarian where UNIX is libertarian, if I may have some fun with analogies. No top-down direction, but individualism is encouraged. There is something to be said for that! It's a test bench for super cool ideas - e.g. the suckless tools.
I have no idea about some of those other desktops, but all the basic moving parts are mostly necessary:
- logind because you probably still want lid/sleep events to work when at a VT (see the config sketch below)
- gdm because you probably still want lid/sleep events to work while nobody is logged in
- desktop settings because different users may want different things to happen on those events
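For the logind piece, the VT-level behavior is just config; a sketch of the relevant bits (values are examples):

# /etc/systemd/logind.conf -- example lid handling at the VT/logind level
HandleLidSwitch=suspend
HandleLidSwitchExternalPower=ignore
# apply with: systemctl restart systemd-logind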
I'm a little confused as to why the gdm settings were needed; at least for me, those only take effect when the greeter is actually open. And in GNOME the separate screensaver daemon has been removed, in part because it simplifies the whole equation.
It's been a long time since I've worked with this stuff in any depth, but I wonder if there's the concept of "handoff". While a VT is front-and-center, logind should be in control. While GDM is in the foreground, GDM should be in control. While the desktop environment... etc.
But it seems to me that each of these things isn't entirely aware of the others, and they constantly fight for control. I know logind has an "inhibit" API that lets another program take over some functions, but who knows if everyone uses it properly. And does GDM have the same thing? Or is it expected to just know when it should step aside?
Yes, part of it is the inhibitor API although IIRC there are some other parts. GDM and GNOME share the same implementation of this API so they should at least do the right thing, not sure about other desktops and login managers.
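You can poke at the inhibitor side from a shell, which makes the handoff less mysterious:

systemd-inhibit --list        # see who currently holds inhibitor locks
# take lid handling away from logind for the duration of a command:
systemd-inhibit --what=handle-lid-switch --why="testing handoff" sleep 300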
I've got a Thinkpad T470 running Ubuntu 18. I use two external monitors - one over USB3.0, the other over HDMI. When I wake the computer from sleep mode, 50% of the time it wakes up with the HDMI screen disconnected. If I now go enable that screen in display settings, my 3-screen layout resets. I have to resort to unplugging both, waiting for it to figure out how to display things on just one screen again, then plug each screen in separately, HDMI first, taking a moment to let it get its sh*t together each time. It drives me crazy. I've learned to interrupt cooking dinner and pop into my office room to jiggle the mouse just so I don't have to go through this procedure again because I hate it.
Yea, the whole psychotic dance that computers do when you plug or unplug another monitor is just a terrible user experience. This happens not just in Linux, but in macOS and Windows, too. Have two monitors and plug another one in, and it blanks both of them, screen comes back on one of them, blanks again, then comes back on another one, then a second one comes on, blanks both of them, then, if you're lucky, all three will light up in some random order. Also, if you're lucky all three monitors will display correct, most recently configured resolutions. It's so bad, and seems to affect all kinds of monitors, all kinds of graphics adapters, and all OSes. And it's been this way since the very first time I had a second monitor back in 1999. Plugging in a monitor shouldn't confuse a computer so much but here we still are.
When I plug a monitor in, I expect the other ones I'm looking at to simply stay on. This doesn't seem like a huge development effort, but what do I know?
It's really bad. If I turn off my main monitor, it reverts to a default setting and I have to reconfigure the rotation of the right monitor, enable the middle monitor, and swap the positions of the left and right monitors. Every day... but I hit a key combo to fix it.
Ended up making an awk script that can be run by hand to create the xrandr command line based on the current configuration (to be applied when it fucks up). Not at my computer now but if anyone is interested in it, I can link to it.
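Until then, a rough sketch of the idea (not my actual script, and it ignores rotation):

# print an xrandr command that would re-apply the currently active layout
xrandr --query | awk '
  / connected/ {
    geom = ($3 == "primary") ? $4 : $3       # e.g. 1920x1080+1920+0
    if (geom ~ /^[0-9]+x[0-9]+\+[0-9]+\+[0-9]+$/) {
      split(geom, g, "[x+]")
      cmd = cmd " --output " $1 " --mode " g[1] "x" g[2] " --pos " g[3] "x" g[4]
    }
  }
  END { print "xrandr" cmd }'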
The one time I had a MacBook, some years ago, external screens would only work if the laptop was on power, and after sleep I had to reattach the screen each time. This issue was present the whole time (some months) I had this $2000 beast (for the whole series, not just my unit), AFAIK the 2015 MacBook Pro. Forums were filled with the problem, no solutions in sight. Honestly a horrible experience, being able to do nothing but wait on such crucial issues.
On Linux I could just edit my xconf and everything was fine.
I guess you never ran into https://spin.atomicobject.com/2018/08/24/macbook-pro-externa.... The chroma is broken on a large number of external displays when using OS X. Just search the apple forums for how common this problem is, and Apple hasn't fixed it in years now. It's one of the reasons I gave up on OS X because I need external monitors to just work without blurry fonts.
I'll just say this is a good reason to use a full DE like Gnome or KDE. At least with Gnome/X11 I never had any issues with external monitors. And it seems to always remember the display config of any monitors I had used in the past.
autorandr has been great for automatically swapping between configs. I use arandr to set up the config on a new machine and then autorandr to manage the common hardware setups.
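In case it saves someone a search, the whole workflow is basically two commands (the profile name is whatever you like):

autorandr --save docked    # snapshot the current xrandr layout as a profile
autorandr --change         # detect what's plugged in and apply the matching profile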
I have had none of these problems with a base Ubuntu install.
Installing Xmonad, though, introduced all of them. My conclusion is that there are simply a hundred corner cases, and it takes time and effort to beat a specific combination of window manager, graphics subsystem, and system status monitoring into submission. Change a major piece and you break the accreted fixes.
Installing Xmonad was worth it, but a lot of configuration was needed, and the end result still isn't as reliable as what I left behind.
Having to type that before I put my laptop in a bag is worth not having to worry that the laptop will spin up in the bag and cook itself trying to update the OS.
Which is a problem, but my 2015 MacBook did that way more often (which is to say, a handful of times) than my ThinkPad, which did that exactly once, while I was setting it up.
It's been a struggle getting two monitors to work on my GeForce 2080ti. I've been trying to install Linux natively on my machine and unless I turn off one monitor, I get crazy artifacts and I can't even see the UI at all for the setup. I've tried different distros even... All of the latest versions of these distros have this problem, so I'm assuming it's some kind of kernel bug.
That does not at all sound normal. A stable system (stable under linpack and furmark at the same time) shouldn't do that. It's not impossible for that to be a BIOS/EFI issue. Personally, my 2060 Super with Debian stable has been a surprisingly pleasant experience.
At least you haven't gotten to the part where you can effectively (not literally) send hotplug events and such over DisplayPort if you have two computers connected to one monitor. Kscreen gets a bit obnoxious in that situation. It wouldn't be too hard to send data between computers with that...
What I don't get is that all the problems I've had with external displays on Linux, I also have with the same display on OS X. On Linux I have a script to explicitly configure what I want; on macOS I usually just unplug and replug and hope it remembers the right layout and detects all displays this time. I could do the unplug-and-replug strategy on Linux too, but the script is easier.
I think all of the initially stated requirements work for me with default settings on LM20.2 + Xfce (on an X230), however not over a USB-C hub (just a DisplayPort cable and power cable directly attached to the laptop). I wonder if the problem isn't really the combination of power+display over Thunderbolt.
I have and it drives me mad. Yet another "feature" that I need to figure out how to disable or it keeps biting me. I wish computers would stop trying to second guess my intentions and just do as I say and not more.
> I have a USB-C dock that provides both power and a Thunderbolt display output over the single cable to the laptop.
That's where I stopped reading. Mac and Windows also have problems with such setups. Not to mention having displays with different resolutions and refresh rates.
> Mac and Windows also have problems with such setups.
Lol no.
I have issues on my Windows laptop when I try to use 2x 4K plus one 2K display from the single Thunderbolt connector, due to the Dell TB19 dock allocating too much bandwidth to the DP and HDMI ports (while 2 of my screens are USB-C), regardless of the resolution they use or the number of screens connected.
But with just the 2x 4K USB-C screens, it provides both power and display output over a single cable, in both Windows 10 and 11.
And if I connect the 3rd screen to the dock after connecting that single cable, everything works.
FYI, since it is a dock bug, the same thing happens in Linux.
> Not to mention having displays with different resolutions and refresh rates
Lol no!!!
My displays are different: the 3rd one, connected by DP, is an ultrawide CHG49 - so not the same frequency, aspect ratio, or DPI.
Yet it still all works fine. Linux users are weird when they start assuming just because I use Windows, I must have an experience as bad as theirs.
Even my terminal (mintty) is far better than the best Linux has to offer - and I'm not even talking about AHK, which lets me control my keys in a language far more advanced than what the few configuration options of xkb or xmodmap offer.
> The amount of flickering windows gets away with is astonishing to me, and I say that as someone coming from linux.
I have noticed a little flickering on my Lumia 950 that runs Windows 10, but I'm ready to give a pass to 5-year-old hardware running an experimental build of ARM64 Windows.
Maybe we have different hardware? I'm mostly using Lenovo laptops, with one Microsoft and one Dell tablet.
> Also, what about audio that likes to update in the background and can simply fail until I reboot?
You mean driver updates? It depends on your Windows Update settings.
If you mean the audio getting rerouted, I like that when I plug in a new device it gets the audio sink priority without fiddling.
And I like that I can control that behavior with AutoHotKey scripts to do precisely what I want if I need to.
> The thing is, OSs are goddamn complex and thus they have plenty of bugs all around.
That is an absolutely fair comment! I'm just surprised that Linux geeks, especially in a specific age group (over 25), seem to believe Windows is evil and that anyone using it must not be a techie / must have a horrible experience and would find immediate enlightenment and satisfaction by installing Ubuntu.
All kinds of nuance seem to be lost when you explain that it works fine for you, and give practical examples of advanced things that would be extremely complicated and sometimes totally impossible to do in Linux.
I use my external display as a USB-C dock. I run it at its native 3840 x 1600 resolution at 120Hz or 144Hz, connected to my MacBook Air 13" running retina native resolution at 60Hz. Never had an issue with this setup on macOS.
That was my experience with a 2019 MacBook Pro; trying to use 2 2K monitors on it was a nightmare. Sometimes it would work, and sometimes I would need to spend a while trying different plug-in sequences until it worked again. It was such a problem that I never wanted to take it with me and use it as a laptop, and I just left it there connected all the time.
This wasn't an issue with older models that had some means other than Thunderbolt to connect a monitor, but these newer models only had Thunderbolt display output, and it didn't work properly.
This was with and without a dock; I had the same issue when I used a dock and when I connected the two monitors into the laptop's Thunderbolt ports directly.
It's correct in my experience. My mac refuses to recognize 2 independent monitors in the following setup: laptop -> USB-C -> dock -> 2x 1440p@144hz Displayport monitors. Supposedly this is because Apple doesn't implement the entire Displayport spec, I don't really know the details. My windows laptop handles this particular setup just fine, but forgets one of my displays at work every so often. It's a mess.
> Supposedly this is because Apple doesn't implement the entire Displayport spec, I don't really know the details.
I can see that. I had 2 27" 4K HDR 144Hz monitors that worked fine in Catalina. In Big Sur and Monterey, they do not: they'll do 95Hz SDR and 60Hz HDR.
If on my screens I change DisplayPort mode from 1.4 to 1.2, they'll do better (120Hz SDR). So something in Big Sur broke DP 1.4. And Apple doesn't care, because there have been hundreds of bug reports on it ever since the Big Sur betas and it's still broken as of Monterey 12.0.1.
I guess Apple's philosophy is "we don't care if you don't have a Pro Display XDR" (and I say that as someone who owns one).
I have to use SwitchResX to define a custom resolution (95Hz), otherwise my gaming monitor won't go above a sad 60Hz.
Meanwhile, Linux runs perfectly on the same setup all the way up to 160Hz. Yet another place where Linux has fundamentally better hardware support than Mac.
I currently have that setup for my old work-provided Macbook Pro from 2017. One cable going from the laptop into the thunderbolt dock (no extra cables, as the same cable also charges the laptop). The dock outputs to 2 external displays, one 4k@60hz, another 1080p@60hz. Same 2 monitors are also plugged into my Windows desktop directly.
Been using that setup for almost 2 years, zero issues whatsoever, so I have no idea what you are talking about. I would be willing to concede that I just got lucky with the stability of my setup, but my friends who use similar multi-monitor setups with thunderbolt docks have no such issues either.
I see a lot of people have taken issue with my statement. Someone with a few accounts has downvoted every single post of mine from yesterday. Fanbois are really something.
It's great that some of you, most people even, don't have these problems with multiple screens on your preferred OS. However, that doesn't mean that no one does.
Think about it. If everyone had these problems, they would have been fixed. Things are not black and white. Not every setup is the same.
Not true. For example, the Amazon reviews of any DisplayPort KVM will be mostly one star. This is usually because dynamic switching of DisplayPort works so badly on all operating systems.
Yet USB-C to DisplayPort works fine for my monitor on Windows, but doesn't work in any distro I've tried. And using HDMI works, but I get flickering if I want to run the monitors at any refresh rate other than 60Hz.
It works as well on Linux as it does on Windows for me, using Intel graphics (both have minor issues with DisplayPort MST, i.e. one USB-C port and two monitors). Are you using Nvidia hardware by any chance?
I recently watched most of your demo videos and later some of Immersed's, and found it quite odd that Immersed, unlike Simula, has a virtual monitor concept. Adding a monitor even looks a bit janky, as it makes all monitors briefly disappear [1]. Immersed's animated 3D environments do look pretty cool, but seeing you use Emacs in a Linux-based VR system definitely got me more excited. Keep it up!