Windows OS, Services and Apps: Network Connection Target Hosts (2021) (helgeklein.com)
289 points by pmoriarty on May 27, 2022 | hide | past | favorite | 285 comments



I read a lot of comments here about wanting to block these connections, but wouldn't it be easier/better to switch to an OS that doesn't spy on you? The answer is possibly no, because people might use X or Y that requires them to use Windows, so a better question is: what holds you back from switching to Linux? (macOS also talks to a lot of servers and collects telemetry like Windows does, maybe on a different scale, so it isn't an alternative if you are worried about your data being collected.)

I personally prefer Linux, but what's holding me back are the streaming services (Netflix, Amazon, etc.): I can only watch SD content if I'm on Linux. So my second option is macOS: I can watch in at least a decent resolution, and their notebooks are the best ones out there for my use case, which is also work (the Air is amazing on price, performance and battery life).


It can come down to one program that doesn't have a good alternative on another OS; my version of that is the DAWs / music production tools I know and love. Wine solves a lot of this (but not all) for Windows programs; the macOS equivalent, Darling[0], seems to still have some way to go until GUI programs are reliably supported.

[0] https://github.com/darlinghq/darling


If you’re open to switching to Reaper, it’s my understanding that their Linux version is coming along well. I’ve read good things about it plus LinVST. Of course, I’m also unwilling to test this myself and I’ll probably run MacOS until the day I stop making music.


I use Renoise and Ardour as my DAWs on Linux.

Bitwig (which has an Ableton-like interface, and was made by Ableton developers) is also available for Linux.


Reaper on Linux is really quite impressive IMO


Bitwig is the best Linux DAW in my experience, although it isn't FOSS. Running third party VSTs is also much more of a pain if even possible. I'm also stuck with Logic... for now at least.


I guess it depends a lot on personal preferences and expectations, but I'm very content with my Bitwig setup on Linux. There are obviously a lot fewer native third-party VST plugins available on Linux, so if you already own lots of them and want to make a switch that could be a problem, but if you're just starting out, there's more than enough available:

https://www.kvraudio.com/plugins/linux/vst-plugins


yeh, plugins ended up being a big thing for me, but in the end I decided I'd rather be free to use whatever OS I want than keep Valhalla Shimmer.

It can be a bit of a pain migrating from Windows though if you have loads of previous projects, all expecting to find a specific VST in a specific place. That's why I'm dual-booting Windows for now.


IIRC Bitwig was founded by ex-Ableton developers. I haven't tried it yet, but that's good pedigree.


You can also dual-boot a Linux distro and Windows.


I haven't noticed, or didn't care, that the content was in SD on Linux. I was pretty happy it worked. I use a Roku for streaming HD to the TV for the few things I watch there.

There is a Widevine plugin that Chrome and Firefox use for video DRM on some services. I think that's what enables playback.

Linux seems to be the only OS that is respectful of the user.

But desktop Linux has come really far. It's my daily driver on my home machine and I'm switching to it at work (our cluster is Linux anyway). I'm hardly a sysadmin type and it's been working great for me the past few years. Even Steam works on Linux now.


Yeah, I see this repeated a lot but I haven't really noticed a difference. Maybe I'm just not picky on the topic. But my understanding was it was never traditional SD, but 720p.

Regardless, I use Firefox, and Amazon, Disney+, Netflix and HBO Max all work reliably for me. I believe there was a period where a couple of them didn't work for a month or two on Linux when they came out, but I wasn't subscribed to them then and they've always worked for me.


Quality on streaming services isn't good anyway, and depending on the exact hardware and devices you have connected you won't get "HD" on Windows, either.


Linux still has horrible font rendering, which makes web browsing uncomfortable. It doesn't come close to Windows or Mac. Fonts often appear soapy, kerning is off etc.


Is it really as bad as you make it out to be? I've got everything at home: old non-retina MacBook Airs running OS X, a M1/retina MacBook Air running OS X, Windows (well, wife's computer) and then Linux laptops and desktops.

Fonts on my main Linux "workstation" may not be as perfect as on the M1 Mac I use from the couch but it's because my 38" monitor ain't retina and it's a far cry from making "browsing uncomfortable".

Yes, many of the CSS font stacks (even for the big sites, which really should know better) are totally broken when it comes to Linux: they don't list enough fonts that various distros will have by default, they list fonts that many distros will not have by default, and unsurprisingly they end up falling back to shitty fonts. And as soon as one site started using that one CSS stack, they all started the copy/pasta. But I wouldn't go as far as saying that browsing on Linux is "uncomfortable": many people do it on a daily basis and are perfectly happy.

Also not all websites have broken CSS font stacks for Linux. Some saw the light.


> Is it really as bad as you make it out to be?

Yes.

> my 38" monitor ain't retina … perfectly happy

Put another way, either you are not visually discerning or your bar is lower.

// Since “FHD” laptops, and monitors <4K, are still a thing, you are not alone.


I think you misunderstood me. I've actually worked in print. I've written and typeset more books than I can count using professional DTP software (and LaTeX too). I know a thing or two about fonts.

I do not expect a non-retina display to display fonts as beautifully as a retina display, just to make things clear.

What I'm saying is that fonts under Linux aren't bad to the point of being uncomfortable.

And just to make it clear: I've got Windows and Linux, on non-retina displays, showing the same sites. Linux ain't worse.

There's a big difference between saying: "Retina displays are better than non retina ones" and saying: "Linux font sucks".

Yup: not everyone is on 4K / retina display. We know that. It doesn't make font rendering on Linux broken.


Since you bring it up, my second sysadmin gig was head of systems for a printing company, late 80s - early 90s, where I brought them into the digital typesetting era. So I also know more than a thing or two about fonts.

But I suspect you bringing it up, and me responding to it, doesn’t make either position more authoritative. And given “I’ve written more books than I can count” it could be your discernment of numbers or bar for objectively counting things is lower. ;-)

About retina vs. non retina, I’d argue font rendering algos matter more the less resolution they’re working with, so are even more important on non-retina displays. I’d further argue MacOS regressed on this by killing off sub-pixel rendering in Mojave, effectively dropping support for non-retina.

On low quality displays, Windows and Linux could/should look better than MacOS. When they don’t, they’re “doing it wrong”.


I have to agree here. Font rendering in Linux is across the board noticeably worse than macOS. I still can't quite understand why.


Agree with what? That's not the original point. The original point was that Linux font rendering "doesn't come close to Windows or Mac".

You're all shifting the goalpost.

Nobody is saying that fonts on a retina display on OS X do not look better than fonts on a non-retina display on Linux.

What we're disputing is that fonts rendering on Linux is so bad that it's "uncomfortable".

Now we may have a discussion as to whether everybody should be using retina display or not but that'd be another topic.


I have a 2012 MBP and two other machines running Linux. I'm not talking about retina displays at all (I don't have one). The rendering difference is noticeable. "Uncomfortable" is exactly the word I would use.


I have Ubuntu running in Parallels on an iMac, retina display, and the Ubuntu fonts seem noticeably less "nice" to look at than the Mac. Now, I didn't take the Pepsi Challenge here, i.e. a blind test, so it might just be internal bias -- but to me, the Mac fonts are noticeably better.


My experience with font rendering on Linux is that it's much better than on Windows, on comparable displays. I have fought with my Windows station at work to get acceptable font rendering for anything but the optimized-for-ClearType system fonts, but font rendering on Linux is great, and if you have different preferences, there are adequate levers to adjust it (via, e.g., GNOME Tweak Tool).

(In particular, I get color banding and inconsistent weights in Windows, especially with thin fonts.)


Linux can have crappy font rendering.

It can also have gorgeous font rendering. I would mess around with some live distros to see what you prefer.

This is how my comment looks to me (on linux): https://i.imgur.com/wvruFyX.png


It looks very bad tbh. This is how my comment looks on Mac for comparison. Uploading to imgur to test if there is compression in play here. https://i.imgur.com/hSirmx6.png


You're either using 200% scaling or browser zoom, because I had to scale your image by 50% on a 4K display for it to look 100% sized. This is apples to oranges.

And if you're looking at those other screenshots on a 4K display then you need to also zoom to 50% and get your eyes close to the display; otherwise you're just seeing the results of an image scaling algorithm, which will indeed look terrible.

Here is Fedora/KDE/Flatpak/Firefox with default settings at 4K resolution and 100% browser zoom for comparison:

https://i.imgur.com/s3ynhuj.png

Also with my preferred browser zoom: https://i.imgur.com/9PXDbxY.png

For the above images you'll need to zoom 50% on a 4K display, because PNG image files do not carry pixel density metadata. If you have 1080p display then you have to keep 100% zoom and sit far away instead.


no, I'm not using any zoom or extra scaling. It's the native scaling of the MBP retina display. The screenshot is ~2 times bigger than what I actually see.

edit: your second screenshot looks very sharp and nice on my display.


A retina display is 200% or more scaling, right? You're comparing hi-res text rendering to low-res. The screenshot is 2x bigger because the browser doesn't know about pixel density and scales it 2x by default. It does the same to those low-res screenshots, so they look normal sized on 4K, but they're blurry because they're being image scaled.

See: https://johankj.github.io/devicePixelRatio/

If you see more than 1 at 100% zoom you have resolution scaling. Try browser zooming to 50% or 200%.
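The scaling arithmetic behind that check can be sketched from raw pixel counts; a minimal sketch (the helper name is mine — in a real page you would simply read window.devicePixelRatio):

```javascript
// In a browser, window.devicePixelRatio reports the scaling factor directly.
// This hypothetical helper shows the same arithmetic from raw pixel widths:
function effectiveDpr(physicalWidth, cssWidth) {
  return physicalWidth / cssWidth;
}

console.log(effectiveDpr(3840, 1920)); // 4K panel at 200% scaling -> DPR 2
console.log(effectiveDpr(3840, 3840)); // 4K panel at 100% -> DPR 1
```

A screenshot taken at DPR 2 has twice the pixels per CSS pixel, which is why it has to be viewed at 50% zoom on a 1x display to compare fairly.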


Yes, a retina display is 2x. But saying a display is 4K doesn't tell you anything about its pixel density or scaling.

I looked at the original commenter's screenshot at 50% browser scaling but it's still blurry. I guess there is an element of image compression there.

On the other hand, from my own experience (I own Linux, macOS and Windows PCs), on the same display macOS text rendering is much sharper and more pleasant to look at.


> Yes retina display is 2x. But saying a display is 4k doesn't tell anything about its pixel density or scaling.

I mean, with fractional scaling where 1 < devicePixelRatio < 2 you're doing it wrong and everything will look bad anyway, so I just assumed integer scaling. And 4K @ 100% is unusable anyway.

But more to the point you can't easily display those screenshots with different fractional scaling without pixel grid misalignment.

> On the other hand, from my own experience (I own Linux, macOS and Windows PCs), on the same display macOS text rendering is much sharper and more pleasant to look at.

On hi-res displays (devicePixelRatio >= 2) you're probably just comparing default fonts. And on low-res displays (devicePixelRatio = 1) macOS is much, much worse than Windows' ClearType; it's not even a contest.

Linux was always able to be as good as Windows on low-res displays, it was just held back by patents, so the FreeType defaults used to suck and you had to configure it yourself for the full effect.

Idk if it's any better nowadays. 4K displays are so cheap and ubiquitous that I really don't get why anyone would buy anything else, except I guess gamers. But with all those AI scaling technologies (FSR 2.0 etc.) hopefully 1440p is on the way out.


> And 4K @100% is unusable anyway.

That depends on the physical display size and viewing distance.


No, not really. 4K @ 100% is 4x the working space of 1080p. There's no way to put that much information in your field of view all at once. If you scale it up (or get really close) to where anything is legible, it goes out of your field of view and you can only look at (roughly) a 1/3-sized region of the screen at a time. That's not usable as a single display. You're basically emulating multi-monitor 1080p at this point.

If you need more working space there's always 5K. That's 1440p@2x. You can even DIY one for pretty cheap. I know there's at least one 3:2 4K display on the market as well, though I wish there was a 24" version.


> That's not usable as a single display. You're basically emulating multi-monitor 1080p at this point.

That's a distinction that doesn't have to exist. If you don't insist on maximizing all windows, a single large monitor gives you more flexibility than multiple monitors with the same total screen area. E.g. for many applications 1080p is too wide to be used effectively by a single window but too narrow to have two windows side by side. Dividing a 4K screen by 3 is much better and also allows you to freely choose the window sizes. Some basic tiling support in your WM (either automatic or via shortcuts) is recommended, of course.

I use a 38" 3840x1600 monitor (= about 110 DPI) at 100% scaling and it works just fine for me at normal sitting distance. That's the same horizontal resolution as 4K and I don't think the missing 560 pixel rows would make much difference to how I would use it.
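The ~110 DPI figure above checks out; a quick sanity calculation (panel dimensions taken from the comment):

```python
import math

# Pixel density (PPI) of a 38" 3840x1600 ultrawide:
# diagonal length in pixels divided by diagonal size in inches.
diagonal_px = math.hypot(3840, 1600)  # 4160.0
ppi = diagonal_px / 38
print(round(ppi))  # 109 -- roughly the 110 DPI quoted above
```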


I use a 23.7" 4K (2160p) @ 200% and I think it's barely enough pixel density (180ppi), so I can't relate to this at all.

Apple's "retina 4K" LG Ultrafine 4K Display is actually 4096x2304 at this size, which is much higher density at 218.58 PPI.


Not bad for me on Fedora 36 Wayland, 2x scaling and no hinting, like God intended.

https://imgur.com/a/ZFTwW6A

This is as good as macOS' rendering as I've ever seen on any other machine, and I'm really fastidious about my fonts. It's really easy to achieve:

1. Install Fedora

2. In GNOME Tweaks, disable font hinting. It looks good only on HiDPI screens, which is why it's not the default. If you're not on Fedora, change all fonts to Noto because they're much higher quality.

3. Optional: copy the Windows and macOS fonts to avoid the crappy "lookalike" replacements. The screenshot above uses Verdana straight from a Windows 10 installation, which is exactly what's requested by HN's stylesheet.
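For non-GNOME setups, step 2 can be done through fontconfig directly; a minimal sketch using standard fontconfig options (file location per the usual convention, `~/.config/fontconfig/fonts.conf`):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Disable hinting (and keep antialiasing), as step 2 does via GNOME Tweaks -->
  <match target="font">
    <edit name="hinting" mode="assign"><bool>false</bool></edit>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
  </match>
</fontconfig>
```

Run `fc-cache -f` afterwards and restart the application.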


https://i.imgur.com/ptUyCn0.png another linux screenshot fwiw


interestingly I much prefer mine. :\

I guess font rendering is very subjective.


Setting aside font selection and looking at rendering, yours is softer around the edges which may subjectively feel pleasant when looking at it as an image but is objectively worse for reading.


Look at the last word, "linux". In both the "n" and "u", the left stem is thin and crisp, but the right stem is thick and blurry. Maybe it doesn't bother you, but I couldn't put up with that if I were using it all day.

You're right about trying different distros. Manjaro+Gnome does much better IMO: https://i.imgur.com/kiLDWzJ.png


That seems pretty terrible to me, even discounting the fact that it's monospace (I use monospace for editing too). Especially notice how the letters have inconsistent thicknesses, sometimes thicker on the left vs the right or vice versa, sometimes even different thicknesses depending on where they appear in the sentence (e.g. look at the 'n')


Hmm not sure if it’s the quality of the image or the font, but that might prove his point, that looks quite bad.


Isn't subpixel rendering of fonts impossible to illustrate unless you sit down in front of the same monitor (or at least similar hardware)?


Yes, doesn't have to be the same monitor but it needs to have the same subpixel orientation and you need to have the same resolution or 2x resolution and scale 50% so it pixel aligns.

If you look at those screenshots on a 4K display they're incredibly blurry because all you're looking at is your browser doing image scaling optimized for photographs.


It's both. The image compression makes it not perfect, and it's also a monospaced font, which makes it impossible to say anything about the kerning.


How does it look for you?


I hope the blurriness is because of image compression. Because if it is not, the rendering is really bad here.


Hrm, looks fine to me; this is how windows looks through the same compression: https://i.imgur.com/JRt9aIo.png


Wow, everyone is different I guess. That looks awful to me. I would go mad if I had to use that.


The kerning in your screenshot is very chunky. In particular, look at the kerning on the lowercase r's. The r is given way too much horizontal space, particularly to its right, especially so in the word "crappy".


AH, that's because I'm using a monospace font for editing, normal pages use the normal font: https://i.imgur.com/otqaSTa.png


That looks horrible to me too, perhaps I'm used to windows but the word "comment" sticks out to me there as particularly bad.


It's not even the right font, unless you're using custom CSS. The Hacker News stylesheet calls for "Verdana, Geneva, sans-serif", and your PC renders a serif font instead. Something's wrong with your fontconfig setup.

What's your distro?


I'm using Archlinux and/or Gentoo.

Fontconfig is working fine, but Verdana is a Microsoft font and Geneva is an Apple font.

My font fallback for sans-serif is DejaVu Serif.


The font fallback for sans-serif should be DejaVu Sans, not Serif. I didn't have any issue last time I used Arch.
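For reference, pinning the generic family is a one-alias fontconfig fix; a sketch in standard fontconfig syntax (goes in `~/.config/fontconfig/fonts.conf`):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Prefer DejaVu Sans whenever a page asks for generic sans-serif -->
  <alias>
    <family>sans-serif</family>
    <prefer><family>DejaVu Sans</family></prefer>
  </alias>
</fontconfig>
```

`fc-match sans-serif` shows which font actually wins after the change.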


I think it’s a monospace font, which makes the keming look bad


Shouldn't it be ke r n i ng instead of keming if you are talking about a monospace font?


That screenshot is not very good at all, and I'm on Linux myself. You're just a few tweaks away from better rendering.


Willing to be wrong of course, I suspect that what I'm seeing and what others are seeing are quite different.

Perhaps there's a component of how the LCD is presenting it involved? To me my font is extremely uniform and sharp, with no bleeding or blurring.

What tweaks do you have? I used to run X11 and ran the gamut of x11 based font rendering tweaks (example of how my old machine looked: https://i.imgur.com/J9biJsW.png )

But with sway/wayland I have much less ability to change things.


This screenshot looks better than your other one, but honestly I think that's more due to how antialiasing is rendered there vs. in your Firefox screenshot.

Additionally, for ages Linux shipped with full hinting, which means distorting the font so straight lines always land on integer pixel boundaries, which looks atrocious. These days distros like Fedora use light hinting, which is a little better, and in my opinion no hinting looks best, which is what macOS does, but on a low-DPI screen it means having blurry fonts. So on low-DPI screens you have the choice between distorted but clean fonts, or undistorted blurry fonts.

I don't know about sway, in my case I'm using Fedora with 2x scaling, so I have the option of disabling hinting and avoiding distortion.


That honestly looks like something from a laptop in 1998. I haven't seen a screen that bad in a very long time.


That's an interesting perspective. Certainly not saying you're wrong. I found the fonts on Windows (note: Windows 10) unbearable despite deploying a plethora of "tweaks" and "fixes", and wasn't content until I made the switch to Linux & Mac (home and work, respectively). edit: Noto Sans or Futura BK Book for interface & document, Hack Regular for monospace.


This may have been true in the past but I spend my day swivel-chairing between a current-gen MBP and a Debian derivative running Cinnamon. Both displays are the same and I don't find that the Mac is better. Maybe I'm less critical than others.

What's interesting is the MBP handles external displays horribly with the current version of OS X. Lid closure is a crap shoot on what the device decides to do.


Firefox lets you set local fonts in place of webfonts. The bad font rendering on desktop may not be completely ameliorated, but you can play around with some of the more well-designed font families (Noto, Adobe Source, IBM Plex) to give it a better look.


When I used GNOME 3 on a HiDPI notebook, fonts looked fine once I switched my interface font to a newer Helvetica and turned all of the system font smoothing features off.


I disagree. Which "Linux" are you even talking about? (it's not the Kernel that's relevant here of course)


> what holds you back to switch to Linux?

I occasionally need to do some development work in a Debian VM. I can't even remember the name of the window manager it came with, but I just can't figure it out. The equivalent of the dock / taskbar just isn't useful. I had to jump through hoops and hoops just to get Visual Studio Code to work, in part because it took a while to figure out how to get root access to my own VM.

Now, I obviously could use a different window manager; but that leads to the real problem: It's just too much work for me to figure out how to use Linux, and find a setup that I like. And, after I do that, it's quite a bit of work to "keep up" with the Linux community and the changes. And, then it's quite a bit of work to ensure that I can find good, compatible hardware.

Finally: I'm very happy with Mac, and Windows 11 is "good enough" for me. I'd consider purchasing a laptop with a Linux-based distro pre-installed; but even then, that's a huge financial risk if I end up not liking it, or I hit compatibility issues. At least the Microsoft Surface has a 60-day return policy.

Which leads to my final point: I use a computer as a tool. Linux on the desktop isn't a tool, it's a hobby. I could see myself using Linux if I wanted to develop an alternate shell / UI as a hobby, or using Linux on the desktop if it was common in my profession.


Out of the box, KDE is very close to Windows - a start menu, a dock, and a task tray. I don't use Gnome, but I don't expect it's that dissimilar either.

No idea why VSCode doesn't work "out of the box" for you. Again, I can only assume you've picked some odd window manager, possibly a tiling one or something like that? Gnome and KDE "just work".

> Linux on the desktop isn't a tool, it's a hobby.

That's fine. My opinion is the opposite of yours. My vanilla KDE install has been fine and I use my Linux desktop as my day-to-day tool for writing production, revenue-earning code. I've been running KDE full time for over 4 years now with no problems (or rather fewer problems than I had with Windows and Apple, but YMMV).


"It's just too much work for me to figure out how to use Linux"

This is the crux, really. I'm adept at server administration, and have used Linux there for decades, but on the desktop it is, from my POV, pretty much a dumpster fire of bad usability.

Had Apple not gone to OS X 20 years ago, I suspect desktop Linux would be materially further along, but with a well-designed and well-supported commercial *nix OS in the market that ships married to bespoke hardware, there's materially less motivation to make desktop Linux better for people who won't want to have to tinker to make things work.


(Joke) Unless you're North Korean: https://en.wikipedia.org/wiki/Red_Star_OS

Jokes aside:

> Had Apple not gone to OS X 20 years ago, I suspect desktop Linux would be materially further along, but with a well-designed and well-supported commercial *nix OS in the market that ships married to bespoke hardware, there's materially less motivation to make desktop Linux better for people who won't want to have to tinker to make things work.

Aren't Android and ChromeOS Linux-based?


Android is, but nobody's running Android on the desktop.

ChromeOS is, I guess, but how much of a true desktop OS is it vs. a front-end for Google? (Not trolling; I honestly don't know because I don't use any Google products, so it's never been on my radar.)


The very first Chromebook was just a laptop that ran the Chrome browser. An update made it more Windows-like, but by that time I gave it away to the Digibarn.


> I personally prefer Linux, but what’s holding me are the streaming services (netflix, amazon, etc), I can only watch SD content if I’m on Linux, so my second opinion is macOS, I can watch at least in a decent resolution and also their notebooks are the best ones out there (the air is amazing between price, performance and battery life) for my use case, that’s also work.

I would not choose an operating system based on this. Instead, if you can afford it, you could buy, for example, an Apple TV as a separate box; it's not too expensive and has great quality for video and audio. But I guess there might be a chance that you want to watch in higher quality in places other than your home as well. For that, I have personally just used Windows Sandbox.


It's kind of bonkers to me that Netflix doesn't support HD/4K on Linux. Considering the number of random porn site players that play HD just fine...

What's holding them up besides laziness?


Possibly their logic is that a Linux 4k player would more easily allow third party Netflix clients to be developed, because Linux software is more easily reverse engineered? Maybe their DRM is different for different kernels and different resolutions and they don't want to maintain an extra codebase? Really not sure, seems absurd to me as well.


DRM and content contracts. They can't just throw 4K into an unsecured environment, because they need to make a best effort to ensure that their 4K content isn't pirated.


I'm sure that is the reason, but I find it hard to believe that ripping content could possibly be that much harder just because you're on Windows.

And Netflix exclusives regularly make it to pirate sites in 4K afaik?


Windows/Apple have special APIs which allow video decrypting to be done on the GPU. And the GPU will not allow you to access the decrypted video surface.

This is why you can't screenshot Netflix on Windows/Apple - the region where the video is will be black.


It's not. If you can see it, you can copy it. It just makes life harder for a small number of legitimate users on Linux or rooted Android phones because their license/contract says they need to use DRM.


> And Netflix exclusives regularly make it to pirate sites in 4K afaik?

Yes, generally the day of release.


Not really.

No 4K versions of most shows from the last month.


Which is absurd because literally anything Netflix has can be downloaded in 4K from the usual places anyway.

Not disagreeing with you, just pointing out their faulty reasoning.


Market size? Also people consume Netflix via TVs.


It's entirely DRM. On Windows you only get HDR/4K if you use the native Silverlight/UWP (!) app with their DRM. Part of that is that all of your monitors have to pass validation, not just the one on which Netflix is playing.


Hah, I thought Silverlight was dead years ago, but I don't keep up with Windows tech at all.


Silverlight is long dead, the poster was mistaken: https://help.netflix.com/en/node/2090


Funny thing is, on an older Mac, If you try to run Netflix with the most current supported version of Safari, they redirect you to a page about Microsoft Silverlight[1]. If you don't read that page carefully, you might conclude you need Silverlight to use Netflix, which, of course, 404's when you go try to find it. Totally broken experience. I encountered this recently and had to download Microsoft Edge for Mac.

I'm not sure if I should be more irritated with Netflix, who can't be bothered to maintain backwards compatibility with not-that-old browsers, or Apple, who can't be bothered to get newer versions of Safari to work on older Mac hardware. Both companies are displaying a total disregard for paying customers who just don't happen to have the latest expensive hardware.

1: https://help.netflix.com/en/node/2090


Yes and no. That post is about the plugin, but Silverlight the standalone application framework is now just part of UWP (and this is about the Netflix standalone app, after all).


OK, but the standalone application is still not required for 4K. You can watch Netflix in 4K using Edge on Windows or Safari on Mac.


You're right, Edge is supported again (during the migration from old Edge to new Edge it was unsupported for a while). I totally missed that, although it doesn't change anything regarding Linux support.


I tried running Pop OS for the last 3 weeks on my desktop machine. It's got 90% of the polish I expect from a UI/UX, which I can live with. I sadly switched back to Windows yesterday because I couldn't solve the following problem: I use Remote Desktop to get into my work machine. I have a 4K monitor and I couldn't get the scaling to work the way I expected with the Remmina or FreeRDP clients. I had HiDPI scaling enabled (150%) in Pop OS, but whenever I connected to a remote session it would connect at 2560x1440, which made text ever so slightly fuzzy. The only way I could get the behavior I expected was to disable HiDPI in Pop OS, but then everything is too small.
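One thing that might be worth trying is forcing the resolution and scale factors on the FreeRDP command line instead of relying on the desktop's HiDPI setting; a sketch using FreeRDP 2.x options (the hostname and username are placeholders, and I can't promise this fixes the 2560x1440 fallback):

```shell
# Request the monitor's native resolution and pass the scale factors explicitly
# (/scale accepts 100, 140 or 180; /scale-desktop takes a percentage)
xfreerdp /v:work-machine.example.com /u:me \
  /size:3840x2160 /scale:140 /scale-desktop:150
```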


Disclaimer: I know nothing about Windows or RDP.

I was just on a developer group call the other day where another developer from another organization was having weird scaling issues RDPing into the vendor's sandbox. The solution was to adjust some buried registry setting on the remote machine, not fiddle with the local machine. Unfortunately I paid little attention to the details as I rarely RDP and when I do I can tolerate display issues.

I realize this is super vague, but it was so fresh in my brain I had to respond. Maybe someone who actually knows something can provide actual details.


I've tried fiddling with host and remote, was unable to get a satisfactory result.


I've found remote desktop a pain as well.

On the occasion where I _have_ to use it I don't set it to fullscreen, just have it as a floating app at whatever resolution I care for. Then I just deal with the lower real-estate for however much time I need to be using it.

Nearly all of my remote work is SSH or SSHfs-based now, which (if you're used to terminal stuff) is so much nicer than any equivalent I've found on Windows.


> I personally prefer Linux, but what’s holding me are the streaming services (netflix, amazon, etc), I can only watch SD content if I’m on Linux,

I hadn't noticed. Wife's 43" TV is connected to a linux box with firefox that is used for Amazon Prime, Netflix AND Disney+.

I haven't noticed a poor picture, and I assumed it ran at 1920x1080. I will certainly have to check this out.


Turns out, for me, watching on a 43" screen in SD is no different to watching on a 43" screen in FHD because I don't use my glasses when watching TV.

I can't really say that it has made a difference to my enjoyment of the shows I watched[1]. It is something to keep in mind if we ever replace that TV with a 60" one.

[1]I just now compared a FHD download of "My Name Is Earl" to the SD Disney+ version and did not see a stark enough difference to switch away from Linux.


Yeah a big screen is only exciting for the first few times you watch a show. Then it just becomes the screen. We used to watch TV on a 19 inch CRT from clear across the room, and we enjoyed the shit out of it. The TV is not what made the show good or bad.

It was cool to go to someone's house who had a big TV, but now everybody has a big, high-res TV. Most people aren't even utilizing the resolution their TVs are capable of.

Just like most people are fine with a Sonos speaker or soundbar (IMO, they sound like any other crappy integrated speaker), most people would probably not care above 1080p.


For at least Netflix, there's no way to stream above 720p, since Widevine only allows security level L3 (the software-only level) on Linux.


Back in 2020 I couldn't stream from Amazon at a higher res than 720p from linux (tried every browser), as soon as I dual booted to Windows the quality improved a lot.

Could it be that you are using Plex, Kodi or similar, which might allow you to stream at a good quality?


I'm guessing that (like myself) he probably just isn't that picky about it. The only IQ issues I see that frequently bother me are washed-out dithering in dark scenes or low quality due to connection hiccups. I don't think UHD would improve either of those.


> I'm guessing that (like myself) he probably just isn't that picky about it.

Correct, see my reply to myself upthread.


I just checked on Ubuntu Desktop in Firefox and yes, Full HD is not an option.


Or just pay $5 for a VPN instead of $20 for Netflix & co and pirate everything. Suddenly the OS stops mattering. What was the point of Netflix blocking HQ streams for Linux users? To prevent piracy? How ironic.


There is always a way. But convenience is also important, and time is valuable. So instead of just clicking on a title on Netflix, we should install & maintain a server and search for content on torrents?


There are fully integrated solutions for Kodi. Sure, there's some setup, but the alternative is paying a lot more for low-res content on Linux.


Yeah, that and many more convenience reasons are why people like me prefer not to use Linux on the desktop. Also, even with such a setup, not being able to view HDR content at all on Linux is a big turn-off for me personally.


I already use a Pi-hole on our local network; everything MSN, Windows, or Microsoft related is already blocked for all our devices, so it's really not a big task. It's also not a big task to maintain the list of domains to block; every few months I find a new one to add. I also don't mind some level of telemetry.

And I personally like Windows as an operating system. NT is a very cool and well documented technology, and since WSL became available it is also my favorite Linux desktop experience. For a while my main complaints were about the frustrating dev experience (mostly for web stuff, where tooling often assumes a Unix environment) and the lack of a package manager, but WSL + Windows Terminal + winget is just fantastic. And that's without mentioning more niche things, such as the awesome Sysinternals Suite (so many hidden gems there!), the event system, PowerShell (it looks ridiculous when you first read about it, but it's such a ridiculously powerful shell once you discover that you can interact with .NET classes), the Win32 APIs, etc.

(Edit: I say this as a relatively recent convert to Windows, I started with Mac OS classic, then OS X, then ArchLinux for years, so the majority of my experience has been with Unix environments)


You're assuming they're not just going to start using their own DNS or DoH for these requests.

Orwell got things slightly wrong in 1984; he never realized that we would be paying for the privilege of being spied on.


My personal assumption is that Microsoft isn't a malicious actor, so yes, I don't expect them to try to dodge domain blocking. I also don't see their telemetry as spying; everything I've seen so far tells me that they only use the gathered data in aggregate as a replacement for QA, and without PII (unless you decide to report crashes, send feedback, or take part in their insider program), and they seem to respect user settings regarding personalized ads.

But I don't like that you have so much advertising in the default Windows and Edge experience and understand the general frustration regarding telemetry, not everybody can control their network the way I do just to have a reasonable experience...


Windows 10/11 would be cool (I really liked 7) if you could buy a professional version (not the one they call that). I don't want ads and telemetry.


Windows 10 Enterprise LTSC?


Yes. I like that. Minor nitpick: WSL support was a bit behind.

Major nitpick: can you buy it if you are not a big corp?


> can you buy it if you are not a big corp?

I wouldn't know because I 'found it online' (if you know what I mean).


What's holding me back is accessibility. See [Linux Accessibility: an unmaintained mess](https://medium.com/@r.d.t.prater/linux-accessibility-an-unma...).


I am working more on Linux these days because I am too tight to upgrade from 12 GB of RAM, which seems like not enough to do dev work on Windows but an endless bounty for doing the same on Linux.

Also rm -rf being instant is a big win (I know it probably isn't under the hood but I don't care, as a user I can get on with my day).

Also another nice surprise: I can get photos off my iPhone without it screwing up. Not possible on Windows. I can get the original DCIM images in the weird iPhone format (which is perfectly viewable in Linux too), and I can access all the hidden files to the extent the iPhone makes them available.

I am really liking the experience. I am not into 'hacking' and just want it to get out the way and make it easy for me to install stuff and use it, so I go with Ubuntu.


> Also rm -rf being instant is a big win (I know it probably isn't under the hood but I don't care, as a user I can get on with my day).

Oh it actually is. It's just that Windows is dog slow at handling files (as opposed to their contents), for a multitude of reasons (I understand that there actually has been a little bit of work in recent years on some of these issues to improve performance for WSL and git).


I work with Linux for compiling Android images (with VSCode on Windows through SSH), and I also customize the kernel (drivers and DT, mostly). I also do some Qt porting to Linux, but I used Windows my entire life (apart from DOS).

And honestly, I won't boot my PC into Linux (at home or at work) because I'm afraid it will break. Let me explain: I have some VMs with Ubuntu, and from time to time I hit some problem that made the system unrecoverable. For instance, the filesystem breaking or not mounting; I also remember Nvidia driver updates that produced a black login screen [1], or strange error messages showing up before the Ubuntu login screen (which leave me wondering what I should do with them).

Not to mention the times my SSH connections to development servers don't work, and I have to physically go there and reset the PCs.

I don't want to be twiddling with my main PC, which puts food on my family table. The fear of coming to work (or from work at home) and turning on my PC just to find out that some partition isn't mounting, my printer is not working anymore or some other problem that would take me an hour to fix... is too much of a risk for me and my mental health.

On the functional side... there are some things that I can't understand... It's been years already and I cannot control the scroll speed of the mouse wheel in Ubuntu without hacking with the xinput system. What does that tell me?

And why asking my password every 5 minutes? I always said: if you want to remember a phrase, use it as your Ubuntu password. You'll be asked for it for every little thing and in a couple of hours you will remember the phrase forever.

[1] https://askubuntu.com/questions/1129516/black-screen-at-boot...

PS: I love-hate Ubuntu, and I am not saying that these things don't happen on Windows, but from personal experience, Windows is much more stable since like the past 10 years.


I have been using Ubuntu and its flavors for well over 12 years. I gotta tell you, nothing of what you describe has ever happened to me.

And I am a demanding, wild user. I put my systems through a lot! I do a lot of full-stack development, Arduino (and other IoT) coding, Blender work (3D), Inkscape, GIMP, a lot of Audacity for audio, a lot of Kdenlive for video editing...

and my laptops (although always mediocre in spec) never broke a sweat. And I have never, ever had to 'format' my PC because I was locked out of it.

So, maybe, it's you who doesn't understand how to use the system. Maybe!


> So, maybe, it's you who doesn't understand how to use the system. Maybe!

It is possible that you are 100% right. But I also believe your answer (or that kind of answer) is part of the general problem.

I have to say that I'm totally capable of fixing those issues since I do it all the time for Android (which I debug and customize). The problem is: I don't want to. Not on my main PC.

On the other hand, what am I possibly doing wrong to cause those problems? I install Qt, Firefox, Thunderbird, Git, build-essential. I never hibernate, I always do a clean shutdown and... that's it...

The "Ubuntu has experienced an internal error..." dialog box still appears from time to time


Yeah, but is it really the user's problem if a system requires lots of knowledge to do computer stuff without breaking? I have neither the time nor the desire to learn the intricacies of an operating system just to do my work.


In Windows you're much more likely to wake up one day the victim of some ransomware which has encrypted your files and demands bitcoin to get your files back again.

It's not unknown for Windows updates to break systems either, and when you do have serious problems on Windows there's a limited amount of things you can do because the system is so opaque.

At least on *nix you (or expert users helping you) can dive in to the source to figure out what's happening. With Windows you can be left at the mercy of waiting for Microsoft or whatever closed-source software you're depending on for a fix.


I switched to Linux specifically because Windows kept breaking. One update broke compatibility with the stock SSD in my laptop, but the symptom was random bluescreens with seemingly different errors each time. I installed Mint to see if I could get a better error about which bit of hardware was faulty, but it never had any issue in Linux. At least if a kernel update broke something similar, it'd be easy to boot from an older one. Windows sticks you with the version you have or a full reset.


Eh, Linux is more stable than Windows 10 in my experience, even with a bunch of extra bs and custom configuration that I needed.

Windows 7 was rock stable. Indeed, it was W10 that caused me this kind of headache a few times - just updated itself overnight and blue screened. No way to recover even with their recovery tools. Still have no clue what was wrong with it.

Oh and it still to this day starts hibernating randomly on my ZBooks, with the event log saying it is caused by a CPU overheat event.

First of all, it was not overheating.

Second, Windows or any other OS should never handle these events, it was always and should always be handled at a hardware/firmware level.

Also, hibernating when overheating? Lol, yeah, that's very helpful. Good luck doing that when the processor is actually overheating.

On that note, Microsoft Answers is the most useless website/service in existence and I don't know why they even bother keeping it up.


"it still to this day starts hibernating randomly on my ZBooks, with the event log saying it is caused by a CPU overheat event."

Hibernating or thermal throttling?

Thermal throttling is done on the hardware level and can happen while using any operating system, including Linux.


It's hibernating. Apparently, Windows has the ability to perform some actions based on ACPI, which presumably is disabled by default. For whatever reason, some update enables this feature (once every 6+ months, but not right after restoring from backup, strange as hell), but I guess the thermal zones are improperly calibrated? No idea.

https://docs.microsoft.com/en-us/windows-hardware/drivers/br...

It just hibernates. Passive or active cooling setting, disabled thermal controls in group policy or not, registry tweaks or not. Tried everything.

It just hibernates, middle of work or not. At least it doesn't shut down ¯\_(ツ)_/¯.

I'd say upon reaching ~60-70 degrees on the CPU, but it does it at idle (<50degC) too, so it seems more random than anything. Once, it stopped when I had a Kali Linux VM with a USB Wifi adapter passed through to it (only difference at the time) running. Worked for a few days, then it started doing it regardless.

People with some Dell and Lenovo laptops also have this problem, and it seems so rare that the most common answer is "lol, it's overheating dumdum, thank Microsoft or it would break".

Only surefire solution is to restore from backup.

If it did that under Linux, at least I could easily tell it to ignore everything and let the hardware handle it.


For work I use macOS because that is the only option I have, for several reasons; and I also like it. For gaming and media consumption, there is no HDR support on Linux and there won't be in the foreseeable future. So, Windows there.


Also: even if Linux gaming is getting better everyday thanks to lots of work by contributors and companies (Valve, Codeweavers, hey even Nvidia recently, etc), as a developer I get to do enough software spelunking at work, and I'm supremely happy to pay the small cost of periodically dealing with MS bullshit.

The small cost of dealing with $ms_bullshit_du_jour is a periodic { 1. sigh, 2. find & run some PowerShell snippet to uninstall or disable the bullshit, 3. be done with it till next time } sequence, but the large benefit is the guarantee of running games on the de-facto standard platform where they are end-to-end tested by the game devs/QA!

Games are complexity beasts by nature, and I have high fidelity & stability expectations; I don't want an extra source of trouble. (Yes, even though I would absolutely prefer to run games on Linux! All my other machines run Linux.) Until Steam{OS, Deck} gets enough usage to be routinely tested by 90% of the games shipping on Steam, both AAA and indie, Linux gaming is for me a case of "I would absolutely love to, but for now, thanks / no thanks".


Steam Deck sales seem to imply another 1-2M units, which would increase Linux market share to more than 2%. It's not ridiculous to suggest that developers may test for Linux in the near to medium term.


Sure, as soon as an operating system with a good Win32/DirectX/Windows HAL implementation appears, I will be the first one to switch. ReactOS perhaps?



I think it will be implemented by proton/wine first


I use Linux (primarily NixOS and Arch) for most things. I like MacOS for work, too, though.

The one thing I keep Windows for is gaming. More specifically, for game streaming. Steam Link works okay (and is cross-platform), but I haven't seen anything as performant that meets my needs as well as Nvidia GameStream (Windows-only for server-side streaming). I use Moonlight to connect to it. I haven't found any alternatives that work as well and are also available for Linux.

I wish there were a game streaming application (preferably open source) as performant as GameStream that worked as well on Linux.


"I read a lot of comments here wanting to block the connections, but wouldn’t be easier/better to switch to an OS that doesn’t spy you?"

I mean, in a perfect world, sure. But it's not always an option.


I recently considered installing Linux on my desktop workstation. Then I decided I wanted to buy a wireless card for the machine - and any device I saw available locally had dubious issue/debugging threads online, people unable to get it to work, BT functionality missing, etc.

I've used Linux desktop in the past and it looks like things haven't changed much. I don't like the idea of having to compile kernel modules just to get my machine to work (and then having to do it on each distro upgrade) - I have better things to do with my time.


Keep in mind that you'll only find results from people who have issues. Few people will go online just to say their hardware works with Linux.

To give a concrete recommendation for a wifi adapter, look at Intel-based hardware. Intel stuff generally works out of the box because they contribute drivers to the Linux kernel directly. The ASUS PCE-AXE58BT is based on Intel's AX200, so it should work out of the box on any modern Linux distro.

You basically never have to compile kernel modules anymore. Even if compilation is necessary, DKMS-based packages mean that the recompilation is done when you install your package updates, so you don't need to think about it.


The Asus card isn't available in my region. I can find TPLink Archer TXE75E V2 which seems similar (BT5.2, 6E) - I can't find any info on linux support or BT.


I haven't used it, but the page lists "The latest Intel Wi-Fi 6E chipset" so I would run under the assumption that it works fine.

It's the chipset that you need to ensure compatibility with. Unfortunately I can't find anything concrete on the actual chipset used; Intel has only 3 Wi-Fi 6E chipsets, and as far as I can tell (from here: https://venzux.com/intel-ax411-vs-ax211-vs-ax210-wi-fi-6e-mo...) they should all be compatible with Linux.


There's just one brand of Wi-Fi that's crap under Linux, and that's Broadcom. Are you telling me you can't find an Intel wireless module anywhere?

Also, if you use a modern distro instead of, say, Gentoo, you don't have to compile anything yourself; there are packages and DKMS available if you really have to go with Broadcom. Do you not install drivers on Windows?


Indeed, a while ago I bought one of those PCIe cards that comes with the same BT/wireless combo that Apple uses. It wasn't that hard to make it work, but if it works out of the box, even better. I also think Intel's wireless chips are among the best out there.


All of the models I was looking into were Intel-based, but I couldn't find any info on whether they are actually supported/tested; just threads of people complaining they can't get them to work on their distro or can't get BT to work.


Pretty much every device is supported under Linux once one avoids the very latest products. I'm not talking about using Grandma's 286; just don't buy the latest hardware or anything younger than, say, six months. Hardware manufacturers are encouraged (to put it mildly) to support only Windows out of the box, so Linux support requires either unpaid personal work by members of the community or investment from FOSS companies, which for obvious reasons takes more time.

One extremely important aspect of Linux support is that once it happens, it stays there virtually forever: there's no such thing as declaring a product obsolete so that users are forced to migrate to a newer version. This applies to software too; WINE allows Linux users to run perfectly good Windows software that stopped working on native Windows ages ago, forcing Windows users to shell out a lot of money.


Try https://linux-hardware.org/ for a good source of known good hardware.


I won't deny it: BT, depending on the vendor, is just hell to get working. And you are not going to get the same functionality (like A2DP + mic).

In the end it's about the compromises you are willing to make. For development, Linux is way better IMO, and even gaming is now really good with DXVK (Steam Proton).


I would at the very least make an effort to validate those claims before repeating them. Linux has improved greatly in the last few years, and I haven't had a WiFi issue for a long time.


You can find and buy Intel AC-series wifi cards meant for laptops mounted on PCIe adapter cards with external antennas. They're plug and play and fantastic.


Does BT 5 support work? A high-quality wireless headphone connection for calls is pretty important to me; that's the reason I'm getting the wireless card for the workstation (I use a laptop for calls right now).


I have been using Linux for several years, but I am going back to Windows myself.

I vastly prefer Linux, but I need LINE (a messenger app popular in some Asian countries). Their Linux app (actually a Chrome extension) stopped getting new features 5 years ago. That's kinda THE deal-breaker for me.

Another minor reason is Dropbox Smart Sync. I can use Selective Sync, but having smart sync is of course better.

(I would rather Windows than macOS)


> macOS also talks to a lot of servers and collect telemetry as Windows, maybe in a different scale, so isn’t an alternative if you are worried about your data being collected

Sorry to go off topic, but can anyone expand on this? I don't use MacOS but have been thinking about it, I always thought they were a significant improvement over windows (even if not quite as good as linux)


> I personally prefer Linux, but what’s holding me are the streaming services

Yes, but then these services shut down, drop the content you were watching with little/no warning, and are unreliable. Buy physical media while you can; at least then you have a licence to watch it when you please, until the media degrades. (Or better, in the territories which allow backing up across media formats.)


I've never had a problem watching streaming stuff on Linux.


It might have changed? I remember back in 2020, when I wanted to watch something, I was dual booting to Windows because Amazon was streaming at 720p and Netflix at 1080p, so the only option to watch in good (UHD) resolution was an OS with a browser that had a signed DRM system.


True, it's a problem that money solves. But frankly I'd rather keep the money as I'm content rewatching my favourite shows off disk.


> what holds you back to switch to Linux?

Currently Linux has reached a state where I could tolerate most of the things I hate about it and would be willing to do so to be rid of Microsoft's bullshit, except for the following: VR in Linux is a garbage fire, DCS is hit and miss, and my 1080ti will likely be an endless source of problems.


I "solved" this issue by connecting a macbook to my TV so that I can use all streaming platforms without any problem, and I use a separate machine for work. And I prefer to keep it this way.


I'm required to use Windows for work (the software I produce must run on desktop Windows machines), and I only have my personal PC to work with. I'd like to have money for a second machine, but I don't.

The thing with the Linux community is the assumption that once Linux covers 90% of the perceived use cases for a desktop/laptop/workstation computer, that should be enough to see a big switch. We can relax, right? But that 10% of use cases where Windows is the /only/ option is the other 90%.


Dual boot or have 2 devices. Also Wine?


> I can only watch SD content if I’m on Linux

Sounds like you have a good reason to get content from third-party sources that don't saddle you with annoying BS.


I use Ubuntu or Mint on my computers and stream on a Roku. My DAW runs on a different computer running macOS.


Music production, 3D modeling, gaming.

A lot of commercial software just doesn't have support for Linux.

For regular use and a lot of my personal programming I use Linux, but going 100% just isn't feasible, depending on what you need.


Another option to consider is FreeBSD, which can be especially attractive for those of us who want to avoid systemd.


I'd argue using an OS and init system with significantly less support compared to Linux is more likely to have a lot more reasons to hold you back.


There are non-systemd Linux distros like Alpine and Gentoo. With elogind and the like, you can run GNOME and such without systemd.


I've run Gentoo since 2004, and all the non-stop compiling has finally driven me away from that distro. It's just too much of a waste of time, a drain on my system's resources and wear on my disks.

It used to be bearable back in the day, but now software has gotten so bloated that compiling it all constantly is just ridiculous... especially on older, slower systems.


I agree. Gentoo ought to have better binpkg support. Alpine is a nice alternative, although it doesn't have USE flags.


We swapped linux for FreeBSD in one of our classrooms at $WORK. It just works.


why not just use another init system? I've had great experiences with runit, and they say s6 is good as well.


It's not so easy to ditch systemd on a lot of distros, because it's tied so closely in to everything else.

Also, I don't know about you but I don't want to waste my time writing init scripts for every application I run that needs them. I'd much rather just use a distro that has already written them for me.


There seems to be some degree of overcounting in the reported figures: the number of “hosts” is the number of unique hostnames in the table, but the number of “IPs” is a straightforward sum of the per-service IP address counts, and I refuse to believe that (e.g.) the IP address of login.live.com as resolved by OneDrive always differs from the one resolved by Skype.

  $ curl -fsSL 'https://helgeklein.com/blog/windows-os-services-apps-network-connection-target-hosts/' | pup 'td:nth-child(5)' text{} | sed 's/:.*$//' | sort -u | wc -l
  291
  $ curl -fsSL 'https://helgeklein.com/blog/windows-os-services-apps-network-connection-target-hosts/' | pup 'td:nth-child(4)' text{} | awk '{ s += $1 } END { print s }'
  2764
There’s also a fair amount of infrastructural stuff such as DigiCert’s OCSP service, and every shard(?) in the Windows Update, OneDrive, etc. CDNs is counted as a separate hostname.

Not that I’m happy about any of these connections, but the report looks much less interesting than the totals alone suggest.


> DigiCert’s OCSP service

A weakness of the OCSP protocol is that it gets sent the certificate hash as an input. This means that to a significant degree, an OCSP provider can track what software you are using, what sites you visit, etc... For the code signing certificates, they could also determine which year (or two) it came from.

DigiCert could sell that to marketing companies, and spy agencies / state-sponsored hacking groups could use it to determine if you are running vulnerable versions of software they have hacks for.

There would be ways to fix the protocol to be less vulnerable to this, but I'm sure you'll find that any such suggestion would be rejected by the major players like DigiCert in a strangely forceful manner.
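To make the tracking surface concrete, here is a rough stdlib-only sketch of the CertID structure that travels in every OCSP request (per RFC 6960); the issuer hashes and serial number below are made-up placeholders, but in a real request they identify the exact certificate, and therefore the site or program, being checked:

```python
import base64
import hashlib

def der(tag, content):
    """Minimal DER TLV encoder (handles short and long length forms)."""
    n = len(content)
    if n < 0x80:
        length = bytes([n])
    else:
        lb = n.to_bytes((n.bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(lb)]) + lb
    return bytes([tag]) + length + content

# Placeholder inputs: a real client hashes the issuer's DN and public key
# taken from the certificate chain. Together with the serial number, these
# pin down one specific certificate -- that's what the OCSP responder sees.
issuer_name_hash = hashlib.sha1(b"example issuer DN").digest()
issuer_key_hash = hashlib.sha1(b"example issuer public key").digest()
serial = (0x0123456789).to_bytes(5, "big")

# AlgorithmIdentifier for SHA-1 (OID 1.3.14.3.2.26) with NULL parameters.
sha1_alg = der(0x30, der(0x06, bytes.fromhex("2b0e03021a")) + der(0x05, b""))

# CertID ::= SEQUENCE { hashAlgorithm, issuerNameHash, issuerKeyHash, serialNumber }
cert_id = der(0x30, sha1_alg
              + der(0x04, issuer_name_hash)   # OCTET STRING
              + der(0x04, issuer_key_hash)    # OCTET STRING
              + der(0x02, serial))            # INTEGER

# Minimal OCSPRequest: SEQUENCE(TBSRequest(requestList(Request(CertID))))
ocsp_request = der(0x30, der(0x30, der(0x30, der(0x30, cert_id))))

# RFC 6960 Appendix A: the GET form appends base64(DER) to the responder URL,
# so the CertID is visible in the URL path of a plain-HTTP request.
print(base64.b64encode(ocsp_request).decode())
```

Since the whole exchange runs over plain HTTP, both the responder and any on-path observer can log these lookups, which is what makes the per-cert query model hard to defend.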


There is actually a dormant but powerful system that's a win-win for privacy and for CAs' infrastructure bills: OCSP stapling (and the associated must-staple directive). Instead of the client requesting OCSP validation directly, the web server does it on behalf of its visitors. This saves the CA resources, since the web server can limit its requests to one per minute (or even one per half-hour, if that's what the website operator desires), and it is a definite win for privacy (users no longer send requests to the CA). The only probable downside is that revocations aren't instant, but in theory you could arrange for your CA to issue OCSP responses with 5-minute validity, allowing faster revocation.

Now if this exists, why aren't people switching to it? Google. Chrome doesn't even actively validate revocation information anymore, and Android didn't even bother with much of the required validation since practically its inception (so much so that Let's Encrypt exploited that fact to keep issuing certificates valid on older Android devices: https://letsencrypt.org/2020/12/21/extending-android-compati...). Since Google insists that revocation is broken (even though, as demonstrated, it can be fixed if we want to), no one (at least at a corporate level) is seriously pushing OCSP must-staple.

P.S. You should at least read GRC's post about CRLSets (the only mechanism available in Chrome), posted in 2014 a few days after the Heartbleed discovery (https://www.grc.com/revocation/crlsets.htm). It's still (unfortunately) relevant today, and it says a lot about Google's security practices.
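For reference, enabling stapling on the server side is a small configuration change. A minimal nginx sketch (the certificate paths and resolver address are placeholders, not taken from the article):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/example/fullchain.pem;  # must include the intermediate cert
    ssl_certificate_key /etc/ssl/example/privkey.pem;

    # nginx fetches and caches the OCSP response itself, then staples it
    # into the TLS handshake so clients never contact the CA directly.
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 9.9.9.9;  # needed so nginx can resolve the CA's OCSP responder
}
```

Pairing this with a certificate that carries the must-staple extension (requested at issuance) makes the staple mandatory for clients that honor it, rather than best-effort.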


OCSP is done over plain HTTP (for obvious reasons), so the OCSP provider doesn’t have exclusive access to this data. There is not much value there for DigiCert and others when every ISP can potentially sell the same data.

OCSP stapling helps maintain privacy, so eg. ESNI isn’t completely pointless when stapling is used.


What sort of logic is that? "This data isn't private because we broadcast it to the entire internet on your behalf" doesn't strike me as a valid argument against considering privacy.

If it is I'm going to use this way more in all the compliance meetings I attend. Oh, you're worried about the secrecy of all this proprietary private information we're holding? Don't worry, I'll just wrap it in a torrent, broadcast it to the DHT, and _now_ it's no longer private, so the secrecy doesn't matter.


>OCSP is done over plain HTTP (for obvious reasons)

It's not obvious to me. Why can't it be done over HTTPS? From what I can tell nothing stops you from doing that.


If OCSP was done over HTTPS, you'd end up with an infinite loop - you'd have to do an OCSP query to check that the OCSP server's certificate is not revoked, and so on.


Sure, HTTPS wouldn't help if an attacker had the cert for the OCSP server, but I feel that is rare compared to other certs being revoked, and I believe there are privacy / anti-censorship benefits you can get from HTTPS.

To me it seems simpler to just skip an OCSP check than to have to use HTTP.


I think it's because what is asked for is a list of revoked certs, and the cert for the connection being used could already be on that blacklist. The list must be available without the involvement of what is being checked.


That's not what OCSP is. OCSP just lets you query the status of a cert.


OCSP is indeed broken in many ways.

On the bright side, with the CA/Browser Forum limiting the maximum lifetime of a certificate to about a year, it would be pretty easy to just use a single revocation list. A CRL 2.0, so to say. Just like the browser downloads the Google Safe Browsing list.

EDIT: Just to be clear; With CRL 2.0 I don't mean blockchain...


> I refuse to believe that (e.g.) the IP address of login.live.com as resolved by OneDrive is always different from same as resolved by Skype.

I could believe that, actually. There are reasons you might want to always host different services on non-overlapping IPs -- DDoS mitigation, traffic prioritization, service isolation, limiting impact of unscrupulous ISP / government content blocks...


I don't think the exact problem is double-counting IP addresses or multiple CDN shards that are basically the same, but there are several concerning things:

1) Why would relatively specific services need to contact so many different endpoints? The Skype service is using about 40 different endpoints by the look of it.

2) Why can't the providers of these services use much more consistent naming? I am less concerned about pipe.skype.com and avatar.skype.com being called from Skype because it makes sense. On the other hand, calling b-ring.msedge.net is weird. What is that site? Could be MS, could be anything.

3) It is not clear whether these services were expected to be enabled or not. For example, I don't like the idea that Windows is wasting internet bandwidth on Cortana or Xbox Live since I use neither of them.


> I refuse to believe that (e.g.) the IP address of login.live.com as resolved by OneDrive is always different from same as resolved by Skype.

Do you have a reason to doubt either the sincerity or technical ability of the Author? Did you find a fault in their methods?

I totally believe it, both from a load balancing perspective and from a stack perspective.

There was a version of Mac OS where different applications used different resolvers, so, for instance, changing /etc/hosts would redirect SOME services/browsers to the IP in /etc/hosts while other services/browsers would still use DNS. I've had similar issues manually setting the DNS server address, where some services ignored the setting if they could resolve against Apple's favoured DNS. This might still be true, but I'm no longer doing security work and I'm not using these tricks anymore.


I recently helped a friend de-cruft their slow as molasses laptop, and as ever was struck how Windows is always freaking DOING shit. It's sitting there seemingly idle, but the start bar takes 5 seconds to open. You open task manager and it's grinding away at network, grinding away at CPU, grinding away at disc, all via opaquely named processes. The bloody thing is relentless.


Based on my experience, I blame the 3rd party stuff. All kinds of software and devices just love to add background processes to the system.

I guess this is the price you pay for system where everything is possible.


I'm sure it doesn't help, but even with all that stuff disabled it's bad. svchost.exe's mysterious machinations, windows updates, superfetch (or whatever they call it now), one drive syncing, malware checking, app updates, it's like 'everything everywhere all at once' for Windows.

You can make a case for all those things, of course. Just why does it have to launch into them all immediately upon startup? On a beefier machine it's probably not noticeable but it's sad to see on older PCs when Windows 10 at launch was quite reasonable when it came to lower specs.


Couldn't have said it better, Windows 10 is the first Windows that does "Everything everywhere all at once" without any regard to what other resources are in use. It is basically unusable on any 5400rpm hard drive.


Recent versions of Win10 still perform quite well on old hardware for me - could be that you just need a clean install?


Yes, clean install usually helps, but it wasn't my laptop so I didn't want to jump to extreme solutions straight away (otherwise I might have put Ubuntu on it!)

I think the main problem was that they hadn't used it in a while and had accumulated a backlog of mandatory Windows Updates. While these chugged away they got frustrated with an almost unresponsive computer and turned it off, interrupting the install, only for the same thing to happen all over again. I can see how doing this stuff in the background makes sense, as you don't necessarily want to bother the user with it, but when it impacts performance so badly maybe it's better to throw up the message and the progress bar. At least then they know what's up!


Same here. I installed the manual firewall SimpleWall, and a bunch of Windows processes I haven't used in months, plus telemetry, are constantly trying to connect to the internet (things like Xbox Game Bar and Widgets).


I wish to see this also for macOS, iOS and Android.


Aside from fsflover's link, there's a fantastic (paid) firewall called Little Snitch.

You can use it to watch every process send network traffic, you can even collect samples of the traffic and plot it on a map: https://www.obdev.at/products/littlesnitch/index.html

(not affiliated, just a happy customer; it's one of the few things I like the mac ecosystem for.. there's attempts to port it to linux with https://github.com/evilsocket/opensnitch; but it's not as polished of course)


I've been using this for years and love it, too. But didn't Apple recently make some change to macOS that allows many "system" processes to bypass this?



LittleSnitch was helpful on Mojave in this regard. However, since Catalina, there are literally hundreds of connections per day from "nsurlsessiond" and "gateway.icloud.com" (and a lot more), with no way to figure out what it is they are uploading/downloading/checking.

That happens whether or not you are logged into icloud/facetime/messages at the same time, or not.

Anyone aware of a way to figure out what nsurlsessiond is downloading/uploading in the background?


On Android NetGuard can do the same monitoring, it's a great open source app but I do sometimes wonder what the chance is of some Android system calls bypassing the local vpn firewall it sets up.

https://netguard.me/


The Linux equivalent is called OpenSnitch, works great. It also does not cost 50 euros


Friend, your link ends with a semi-colon.

https://github.com/evilsocket/opensnitch


That's awful, I'm sorry.

Thank you for fixing it, I can't edit the post now.


MacOs: https://sneak.berlin/20210202/macos-11.2-network-privacy/

  The desktop appeared, and the system was connected to a dedicated wi-fi network for the purpose, DeviceUnderTest. 60 seconds or so of “no activity” simply staring at the desktop elapsed, and the system was rebooted, automatically reconnecting to the test network.
  
  Wi-Fi was then disabled.

  ...

  In this few minutes, the system generated 38 megabytes of network traffic.



Last time I used macOS, 2-3 years ago, it was in constant communication with Akamai servers. Blocking some of those hosts to prevent it from doing so would render the system unresponsive when trying to launch an application.

Note: this anecdote may no longer be current. I’m not sure what changes Apple might have made since I tested this.



and other *nix distros like both Ubuntu Desktop and Server.



Linux has lsof, which can show connections. Here's a config generator for conky that demonstrates the usage:

https://github.com/viviparous/plonky


Modern OSes (and not just Windows) are getting overly bloated and complicated. Talking to all those hosts and IPs is just a symptom of the fact that nobody seems to care about such things anymore.

My main work-laptop, my server and work workstation are running Ubuntu (because of support, yadda yadda), but I've recently tried Arch linux on my personal laptop.

I'm not yet an Arch fanatic, and find the installation procedure particularly baroque (having to manually hunt down which firmware packages to install to get basic hardware like Wifi working, had to explicitly install a network-manager, or else no networking at all)...

But what I did find immensely refreshing, was working on a minimal system which only had the components I had asked for and nothing else. I knew why everything was there, and how it was supposed to work.

I'm really coming to appreciate that more and more, especially after a botched server-upgrade and having to wait for apt to reinstall 1000+ packages I have no idea if I need or not.

So yeah. Windows, Mac, Linux, or whatever. Keep it minimal, please :)


Isn't the issue that these are a paid product and the supplier has to keep adding "value" (or pretending they are) to keep selling new software?

Some things get better over time like better graphics cards and 64 bit but MS wouldn't keep making money if they said, "yeah just keep Windows 98, it basically does everything you need". Instead they add fluff, search that no-one wants, messaging that not everyone uses etc.

Maybe they should do Windows Lite which would only include the desktop and windows update and nothing else. Sell it for $10 inc updates for the first year. You then pay a subscription if you care about more updates and nothing if you don't.


Sending telemetry data at all without real consent (ie not some ultimatum) is scary. Windows is not a daily driver OS.


IRL it's a daily driver (most used desktop OS). I suspect due to affordability (w.r.t. mac), ecosystem (plethora of apps), and good UX (wrt linux).


> Sending telemetry data at all without real consent

It's in the EULA and privacy policies.


Software isn't immutable. The code that makes these requests has to physically exist somewhere. Which means you can find it and overwrite it with NOPs.
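As a toy illustration of the idea (all offsets and bytes below are invented, not taken from any real binary; locating the actual call site is the hard part):

```python
# Overwrite a byte range in (a copy of) a binary image with x86 NOPs
# (0x90), neutralising a hypothetical CALL instruction.
code = bytearray(
    b"\x55\x48\x89\xe5"      # function prologue
    b"\xe8\x12\x34\x56\x78"  # hypothetical CALL to the unwanted routine
    b"\x5d\xc3"              # epilogue
)
CALL_OFFSET, CALL_LEN = 4, 5
code[CALL_OFFSET:CALL_OFFSET + CALL_LEN] = b"\x90" * CALL_LEN
print(code.hex())
```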


For now. Modern chips will make it harder and harder, until one day we will have almost no control over the machines we are running. For our own good, of course. Plus nobody will care apart from us. And not all of us. Not the ones working on these techs, at least.


As long as signature enforcement in BIOS can be disabled, everything is possible. And it will always be possible to disable it because otherwise someone would sue the hell out of Microsoft, OEM, or both, for not being able to run Linux or other competing OS on a PC they bought.


Given we have one win for a 1000 fails with big companies, and that they clearly don't pay much consequences for their bad actions but benefit a lot from it, I doubt it.

Especially if this becomes state-mandated for national security reasons. Indeed, have you seen anyone sued over the PRISM program?

In fact, it's already happening: have you seen a lot of things modified in iOS lately? It's super hard, so few people even attempt now.


> have you seen a lot of things modified in iOS lately?

There's an unpatchable bootrom exploit in all devices up to and including the iPhone X. But yes, I agree with your sentiment.

Though one thing to keep in mind: phones and tablets are content-consumption devices and they have historically been locked down for about as long as they existed. Computers are productivity devices. You may want to multi-boot 10 different operating systems for very practical reasons and no one should prevent you from doing that. Manufacturers understand this. Even the Apple M1 is capable of booting arbitrary, unsigned OSes, despite being a direct descendant of the same line of locked-down SoCs used in iPhones and iPads.


Until it runs where you can't touch it. Like the Intel Management Engine.

https://en.wikipedia.org/wiki/Intel_Management_Engine#Assert...



Steps to mitigate:

Change Windows Firewall settings to block outbound connections by default [0]

Install Unbound [1] so you can actually see the DNS requests (and block them if you want it) your system performs.

    server:
        verbosity: 1
        extended-statistics: yes
        logfile: "C:\Shares\Public\Progs\Unbound\unbound.log"
        log-queries: yes
        log-replies: yes
        log-servfail: yes
        val-log-level: 2
[0] https://imgur.com/a/ayq5yiF

[1] https://www.nlnetlabs.nl/projects/unbound/about/
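For reference, the default-deny outbound policy from step [0] can also be set from an elevated prompt instead of the GUI, via the built-in `netsh advfirewall`:

```shell
:: Block all outbound traffic by default (inbound is blocked by default
:: already); per-program allow rules then have to be created explicitly.
netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
```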


Why didn't they release it in a better format, like CSV? We could then easily turn it into a C:\Windows\System32\drivers\etc\hosts file.
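Had the list shipped as CSV (say, with a `host` column; the sample rows below are invented for illustration), generating hosts-file entries would be a few lines of Python:

```python
import csv
import io

# Hypothetical CSV of observed hostnames, one row per process/host pair.
sample = """process,host
svchost.exe,settings-win.data.microsoft.com
svchost.exe,vortex.data.microsoft.com
skype.exe,pipe.skype.com
"""

# Deduplicate hosts and sink each one to 0.0.0.0, hosts-file style.
entries = sorted({f"0.0.0.0 {row['host']}" for row in csv.DictReader(io.StringIO(sample))})
hosts_text = "\n".join(entries)
print(hosts_text)
```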


Yeah, but Windows Defender will wipe it clean every time it updates.

Plus if you have a massive (>3 MB) hosts file it causes other issues during bootup, which requires disabling another service (don't remember which one off the top of my head, as I am using Pi-Hole these days)


In 2022, nobody should have a 3 MB hosts file.

This is not the way...


Unfortunately running your own DNS isn't an option in every context. Not to mention, 3MB is nothing, it shouldn't struggle with that.


Is blocking on a DNS level even sufficient? I'd imagine there are hard coded fall back IPs involved at least sometimes.


A lot of IoT devices (including smart TVs) completely ignore DNS


Easy! Just set router level firewall rules to redirect DNS.
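On a Linux-based router that amounts to a pair of NAT rules; a sketch assuming the local resolver (e.g. a Pi-Hole) lives at 192.168.1.2:

```shell
# Rewrite any outbound DNS query, whatever server the client asked for,
# to the local resolver. Run on the router, as root.
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.1.2:53
iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2:53
```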


DoH and DoT are slowly changing that, it's getting troublesome to redirect.


It certainly helps, but it's not perfect, no.


I am blocking just shy of 2 million domains with Pi-Hole, and in fairness, the file I was using is 3.11 MB; if I open it in Notepad++ it contains 106369 lines (about 40 of the lines are comments / empty)

I am also not sure at what size the issue with the service appears.


Because then a load of stuff would stop working!?


And they assure you it's all safe, secure, and private.


They are protecting your privacy. It's written right there in the start page of Edge.


Is there any way to block connection to these sites with DNS or something?


Install free Comodo Internet Security.

Disable DNS cache service so that all your apps resolve DNS themselves (can only be done via the registry, change the service's startup type from 2 (auto) to 4 (disabled)).

In CIS, create a new group for all files under c:/windows.

Create a rule denying all in/out requests to that group.

Create a rule allowing only DHCP and NTP requests (255.255.255.255:67 and <whatever-timeserver-you-trust>:123) for svchost.exe (place that rule above the one for c:/windows to ensure precedence).

Use third-party utilities instead of the likes of ping.exe, e.g. hrping.

Refer to CIS documentation in case of any troubles.

Enjoy your privacy-hardened windows.
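The registry change from the second step, as a one-liner instead of a regedit session (this implements the parent's recipe, not a generally recommended setting; it needs an elevated prompt and a reboot to take effect):

```shell
:: Set the DNS Client (Dnscache) service's startup type to 4 (disabled)
:: so each application performs its own DNS resolution.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache" /v Start /t REG_DWORD /d 4 /f
```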


Take it with two tablespoons of salt if someone who avoids the onboard ping tool claims there are no ill effects of blocking all network connections of OS components except DHCP and NTP. This recommendation will cripple Windows features and your enjoyment may vary (breaking updates, breaking CRLs, breaking Active Directory integration, breaking Office integration, breaking push notifications, breaking companion app integration, ... and so on)


[dead]


you misunderstood the reference: "this" was meant to mean the recommendation in the parent post. I edited my post to be clearer.

concerning the intentional breaking of the Windows updater and then using a third-party update tool: such workarounds don't make the recommendation less bad. The question is: why break the onboard updater in the first place? And all those other features? Sure, you have an alternative to Office 365 and to OneDrive, and life without the companion app is possible and happy, and maybe you even have a third-party CRL updater as well and a workaround for the side effects of blocking OCSP.

But honestly if you want to replace everything because microsoft is bad and can't be trusted, then start by replacing the kernel.


> The question is: why break the onboard updater in the first place? And all those other features?

Because none of those features are irreplaceable and/or crucial to run a perfectly fine workstation.

> if you want to replace everything because microsoft is bad and can't be trusted, then start by replacing the kernel.

Unless Windows kernel phones home over some covert physical channel which is undetectable by tools like Wireshark running on a separate gateway (and secretly supported by all routers in existence), there's absolutely zero need to replace the kernel when all said phoning home can be stopped with a properly configured firewall.



This looks neat, I'll check it out.

I'm currently using Binisoft (now Malware Bytes) Windows Firewall Control to block unwanted traffic. I'm quite happy with it.

The amount of traffic that it regularly blocks is insane. Windows and installed apps constantly want to chat with their cloud friends.


You don't want to block all of that. CRL checking (else your apps may become slow to start) and Windows Update (else you may become vulnerable) should be left enabled.

Someone else has some suggestions?


Having CRL/OCSP getting blocked on corporate networks is one of those things I have to commonly troubleshoot. You can get some weird timeouts and failures in applications that can be fun.

For example an internal app starts up fine and works, but then can't connect to github. Instead of showing cert errors, I've had ones show errors that make it appear that you may have a DNS problem or that the connection to github itself was broken.


If I was still using windows I would totally block everything and run a wsus server in a VM that has a vlan and firewall config.

The issue is you can only control your firewall at home. Whenever you are out you are pretty sure that MS and Apple bypass any rule you'd put on the local firewall.

I prefer running sane operating systems.


Pretty sure your sane OS makes calls to CRL/OCSP lists to validate cert revocation. Many of the calls in that list do just that.


https://www.bsi.bund.de/DE/Service-Navi/Publikationen/Studie... (in German, but some documents are in English).

Not at all.


I doubt you'd have a working operating system if you were to block all of those hosts


Windows works (nearly) fine if you block all connections except those from the "Core Networking" group.

Of course you won't have access to some services in that case, like Windows Update, etc. One particular weird thing I have noticed is that the process that checks CRL on behalf on others is LSASS. So basically you have some extra tuning to do to get a system to your liking if using just the Windows Firewall, after you block everything but "Core Networking" (and programs you want to authorise), but it looks reasonable. I would suggest saving the initial state before you go that route, though.

If you trace connections you will find funny things, like cl.exe phoning home.


You would be fine, Microsoft allows and supports fully air gapped deployments and no internet connectivity deployments.

Yank your internet cord out and start windows.


Shameless plug: https://github.com/Barre/privaxy

It can be easily configured to block any host or path.


It's not a general solution for an office. I tried this and people couldn't install Office because some of the same servers for telemetry are used for activation.


AdGuardHome or PiHole should be able to block this.


You can probably configure the firewall to block.


Seems like there was some scope creep in the Windows OS. I was always taught that an OS "manages computer hardware, software resources, and provides common services for computer programs" (from Wikipedia). Games, for example, don't fit anywhere in that definition.


We need a crowd-sourced solution for documenting and keeping up with Microsoft's bullshit. After every "cumulative update" I fire up WireShark and find new connections popping up from unexpected places. It's too much of a hassle and I'm just about ready to give up.

That said, Linux is not an alternative for me; I use Office daily, OneDrive, and Xbox Game Pass for gaming.

With a dedicated community we could distribute the work of reverse-engineering the purpose of each of these connections and classify URL endpoints by that (Ads / impressions / telemetry can just get blocked immediately; the rest can be documented based on what functionality is disabled / broken by blocking).


That escalated quickly... Being in a developed part of the world, that's not getting in the way of the product functioning for me, but I feel sorry for those who don't have reliable connectivity and/or good bandwidth


Anyone feeling tempted to play the Devil’s advocate here? :)


As someone who works within an operations team, the telemetry that is seen within MEM/365 is extremely useful for detecting issues and providing overall health of the environment.

While MS does not help itself with some of the more invasive tactics, some of the telemetry is super valuable in detecting issues with drivers, updates and many other things.

Even the episode on MS08-067 https://darknetdiaries.com/episode/57/ has some interesting bits on early telemetry.


The numbers on their own are a meaningless metric, most of these are just CDN IPs for windows updates, and CRL IPs.


IP address and domain name count mean nothing. Why would you care if windows pinged 1 Microsoft domain vs 20,000. If they are all controlled by the same entity and send the same data, these numbers mean nothing.


Ok, so if we all send data to Azure it's safe, or to OVH that's just fine?

It's not terrifying, but it does reveal the scale. This isn't a licence check against something like genuine.microsoft.com; this is a completely different scale altogether


I will. I am actually glad that the OS interacts with many different hosts. This allows me to set up fine-grained rules to prevent specific communication channels. The real information I am missing here is which ones should be left reachable to ensure proper maintenance of the OS.

Having a small number of hosts and smaller number of queries would likely result in a worse situation: queries would be impossible to filter because filtering 1 host would likely break too many things.

Also, in terms of telemetry, I consider Microsoft to be the disabled child in the classroom. I am sincerely convinced they have absolutely not the skills nor the vision to turn the data they collect into a process that could be personally harmful to me. Google and Apple, on the other hand, specialize in employing engineers and product managers that have absolutely no concern for privacy. Their only constraint is to make sure whatever they do is lawful, which is, in my opinion, the worst way for a company to behave. Until today, I haven't seen Microsoft do anything worrying except showing traces of telemetry collection in my router logs.


"I am sincerely convinced they have absolutely not the skills nor the vision to turn the data they collect into a process that could be personally harmful to me."

Never underestimate your enemy.


Typing https://r.bing.com/ from the list into a browser gives an interesting page.


That's just one of the default messages from Azure Front Door / Azure CDN.


I’m surprised by the number of requests to port 80, especially for requests to PKI certificate related domains. I assume there’s an element of chicken-and-egg somewhere that means they can’t use TLS in certain cases. Can anyone offer insight into why this is OK and not subject to MITM attacks on the certificate validation/supply chain?


X.509 cert retrieval is predominantly done over HTTP, because otherwise you'd need X.509 to implement X.509.

It may sound “insecure”, but if you can’t trust X509 without HTTPS, you effectively can’t trust either X509 or HTTPS.

This is why we have layers in software. We trust X509 because of fundamental math. Therefore we trust HTTPS built on X509. Therefore we trust systems built on HTTPS.

Everything is perfectly fine.
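The chicken-and-egg can be sketched in a few lines; `fetch_crl` below is a made-up stand-in, not a real API:

```python
# If revocation (CRL) fetches required HTTPS, every TLS handshake would
# trigger another revocation check, which would trigger another
# handshake, and so on without end.

def fetch_crl(url, over_https, depth=0):
    if depth > 3:
        raise RecursionError("CRL-over-HTTPS never terminates")
    if over_https:
        # The HTTPS server's own certificate must be revocation-checked,
        # which means fetching yet another CRL over HTTPS first...
        fetch_crl("http://crl.example/ca.crl", over_https=True, depth=depth + 1)
    # A plain-HTTP CRL is fine: the CRL is itself signed by the CA, so
    # its integrity doesn't depend on the transport.
    return b"signed-crl-bytes"

assert fetch_crl("http://crl.example/ca.crl", over_https=False) == b"signed-crl-bytes"
```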


At least with Windows, you can install something like W10Privacy and easily quash most of the phone-home stuff. No such thing for Android and iOS.


Is there a pihole list or something?


These lists contain most MS spyware domains, I think: https://github.com/crazy-max/WindowsSpyBlocker/tree/master/d...

Be careful, though, it's easy to accidentally break Windows Update.


Are there any connecting to China?


Why people still use Microsoft is a great mystery to me.


That's obvious. It's much less expensive to buy a Windows PC than a Mac. Linux on the desktop is still quite terrible in a number of ways (particularly if you are ever unfortunate enough to deal with drivers and kernel modules for your slightly uncommon hardware). Chromebooks still don't rate that highly and haven't really entered the public consciousness as a result. Not to mention that many people are using Windows at work, in education or in plenty of other places where they ultimately don't get to choose.


I am surprised people on Hacker News are still regurgitating talking points from 20 years ago. Linux on the desktop is quite terrible? When was the last time you used Linux?

I have been using Linux systems for the past 10-15 years and they have been my daily driver for almost everything, and I live on my computer. Never have I faced any issue that couldn't easily be solved (if there were issues at all).


I am surprised people on hackernews still don't seem to understand that, on the whole, happy Linux users are a self-selecting group based on their own skills, understanding and tolerances.

I'm not saying that people don't have good experiences with Linux desktops. I'm saying that those experiences are still nowhere near universal, especially when you start to involve people who aren't really power users. The most common desktop environments are still wildly inconsistent at best and frequently an accessibility nightmare at worst. Trying to work out whether a given piece of hardware will be well supported (or supported at all!) is difficult. Resolving package manager conflicts, handling non-free firmwares, building and managing kernel modules, these things are _not trivial_. That's before we've even addressed the software that people are often forced to give up. I use a Linux desktop frequently and there are papercuts _everywhere_.

I'm really not sure where this prevailing mentality that you can just install Linux and ride off into the sunset comes from.


Yes.. but are we really saying that people can install Windows and ride off into the sunset?

IME there are always people who "get stuck" with their computer or end up in "nuke from orbit" situations.


> When was the last time you used Linux

Yesterday and it was terrible.

> Never have I faced any issue that can't easily be solved. (If at all there were issues)

Good luck viewing HDR content.


What was so terrible about it?


Off the top of my head: mouse & trackpad, sound stack, bluetooth stack, HD/4K content streaming, no HDR at all, using multiple displays with different DPIs... I tried every recommended driver & setting for mouse & trackpad. They are all terrible compared to macOS. The whole story of using multiple displays with xrandr, Xinerama etc. is a joke. Agreed, you get a better experience with Wayland in that regard, but then you lose the ability to screenshare for the most part (without doing hacky stuff instead of doing work).


> Never have I faced any issue that can't easily be solved.

Which doesn't mean others didn't have issues, huh? Also, some people might not have the specific knowledge to solve the issues.


FreeBSD is also an option. The MS desktop is almost unusable: bloated, slow and clunky. (I have an MS boot.) I imagine people are just used to it; it really is terrible


So true. If anything, something broken on Windows puts you in a worse position.


> Linux on the desktop is still quite terrible [...] for your slightly uncommon hardware

You have revealed your point of view. However, for the great majority of Linux users, Linux on the desktop has been a reality for well over a decade.


You are saying why people are stuck in MS, not choosing it.


You didn't say anything about "choosing" Microsoft. You said "use".


It was implied


1. It comes preinstalled on most PCs. It is just there.

2. Work in general (Office 365), and for software development "microsoft shops" although that is now more cultural with .NET5 running on Linux etc.

3. Games? (I am not a gamer)

It takes a bit of effort to switch to Linux. Most people are probably not aware of Linux at all, or even if they are, don't see the point, because Windows just works and does what they need.


I'd say convenience is the topmost reason. For most, it's not a matter of choice, Windows is preinstalled, they have been provided experience with it in schools, and Microsoft is often the standard choice in offices, with doc/docx, xls/xlsx being the de-facto standard document formats. For many, Windows is synonymous with a PC. Windows, in this context, is a low-level utility, beyond the boundary of care, like the manufacturer of car tyres.



Enter the cloud operating system. The next release will be contacting twice the number of IPs and that'd be fine.



