I'm astonished by the pushback from the Docker devs on this. rflay, ThomasNegeli, and Nuru did heroic work by gently and thoroughly explaining why mandatory, automatic updates are unacceptable for a dev tool. I would not have been so patient.
The fact that they had to hold the maintainers' hands through this is disappointing. "it's a feature request not a bug" shows Stephen did not even understand the problem he introduced. Either that, or he didn't read all the comments before him.
> "it's a feature request not a bug" shows Stephen did not even understand the problem he introduced.
He clarifies a little bit further down the thread:
> It's not a bug because it's working exactly as designed. We're very aware that you regard it as a misdesign.
If a project or community has adopted specific definitions for words such as "bug" and "feature request", then IMO you should not get mad at people for using them as defined.
You may feel strongly that the auto-updating feature is a terrible idea in application, and you may even disagree with this project's decision to define a "bug" as "something that isn't working as designed", and that's fine. But I don't think it suggests that Stephen does not understand the issue, or did not read the thread.
"bug" vs "feature" is a silly debate about semantics until someone uses it as a justification to push back releasing a fix. Bugs get fixed immediately, but feature requests can wait.
And also, who says that mis-designed features aren't bugs? In this case, they designed a bug, and shipped it.
Introducing an entirely new user-facing dialog option into the upgrade install flow doesn't sound like a 'bug fix' to me in any system. That's completely new UX, localization, security concerns, documentation...
Auto-upgrading might totally be a misfeature for your use case, but it's how this program works. If you don't want it to do that, then uninstall it.
"Just don't use it. Just quit your job, stop paying your rent". This advice is precisely as useful as people that scream "move to Canada" at anyone who suggests we make any political progress.
It benefits everyone if we improve software. It benefits nobody if everyone quietly disappears and quits their job instead of reporting problems when something in the software they use needs to be fixed -- and I think it's pretty obvious that course of action is unreasonable and unworkable.
I don't understand. We are talking about Docker Desktop for Mac, right? The piece of free-of-charge software that makes it a bit more convenient to use docker (which let's be clear is a Linux tool) from a Mac?
If you don't like the way that convenience is provided to you, then yes, you don't need to use it, and that is not tantamount to telling you to give up and go move to Canada.
The developers have chosen to implement autoupdating.
That meant that in December they pushed a version which had a bug in it, and users received the bugged version. Then they pushed a version with a fix, and users received the version with the fix.
- Docker for Mac periodically ships bugs that make the program (and Docker itself) unusable.
- Devs cannot downgrade to keep their workday productive rather than wasted on fighting with the tool.
You seem to be implying that people are entitled. Technically that may be true, but Docker for Mac is viewed as critical infrastructure, and people should have avenues to keep it workable even if the latest version is bugged. They ask for choice, and are against forced automatic updates to the latest version.
> That doesn't seem worth throwing a tantrum over.
Such emotional reactions cast a doubt on whether you are arguing in good faith.
This broke development environments. I really hope no one is using Docker Desktop running on Mac in production.
Not to trivialize the issue; lost productivity due to a bug in a software update sucks. But people are not getting irate customer phone calls or losing revenue because of product downtime due to this.
> But people are not getting irate customer phone calls or losing revenue because of product downtime due to this.
It could be an indirect cause of irate calls because there’s a bug in production but the dev environment is broken by Docker’s auto-update so the devs can’t work on a fix for the production bug.
Dude! This exactly. Think about having to explain that to NASA. Ahh, sorry, I can't fix this major SEC issue because our developers can't do a release atm. It went over as well as you'd think. I had to pull out our actual contract to be like "Well... we actually have like 8 days to fix this criticality of SEC, so..." Then I prayed to Al Gore and Linus Torvalds that whatever this dependency was would be resolved before I got my ass handed to me.
> This broke development environments. I really hope no one is using Docker Desktop running on Mac in production.
If it breaks even just my desktop working environment, and that impairs my ability to remediate a production problem, it's causing production downtime. Dev environments aren't decorative.
But still, if there was no bug in 3.0.2, people wouldn't have tried to downgrade to 3.0.1 again. So the bug was not in 3.0.0, nor in 3.0.1, but only in 3.0.2.
So a user experiences a separate bug in 3.0.2, tries to downgrade to 3.0.1, and is unable to as they are re-upgraded right back to 3.0.2. Do you not consider that a new additional bug, that one is unable to downgrade to 3.0.1, a stable build, at that point? I’d argue that’s a new bug that wasn’t previously discovered.
To me the question is, if it's still a bug once the original bug in 3.0.2 got fixed. I'd argue no, you might argue yes.
I think auto-upgrades of Docker are totally fine if it makes the lives of Docker devs easier and they can concentrate on more meaningful things. After all, the majority of the user base is not paying a single cent for the software. If a paying customer complains about auto-upgrades and wants to buy into their own support branch, that's a different story.
Thanks for elaborating, yes, I think we probably just disagree on that part. What if someone upgrades and they just don’t like the new functionality of 3.0.2 and they want to go back to 3.0.1 just because? This idea that bug-free releases would prevent downgrades is simply unrealistic IMO - as another comment pointed out, why even have versions then?
I understand your position, but the example here feels a bit off. If we were talking about an auto-upgrade from 2.x.y to 3.0.0 I'd probably agree with you. For a minor version upgrade maybe, too. But why would I want fewer bugfixes for the exact same feature set? Yes, there was a bug in a fix. But as soon as that's fixed, everyone should be fine.
I've no idea if Docker's auto-upgrades also include minor and major versions, though.
That’s a good point, I think that makes sense. Maybe not a good idea for larger releases like you said, but yeah for bug patches, why wouldn’t I want those. That’s fair.
Versions still have a point for the developers, and for communicating features and bugfixes. And they still help in supporting installations. But you are correct, they become less important. They (patch versions at least) don't necessarily have to be communicated to users.
I think that's a silly definition of a bug. If it breaks the product, it's a bug. It doesn't matter if it was created at dev time or design time.
Hell, a huge chunk of dev work is rethinking design decisions that are misguided/under specified/break things because you only see these issues once you start having to implement details.
Something similar to "feature, not a bug" happened here: https://news.ycombinator.com/item?id=26468421 The dev said it's not a bug that visually impaired users can't see things in his app, and went fully aggressive on the people reporting it; so hard that he blocked new people from reporting issues too, and hid comments/discussion about improving the situation.
Someone else said "this is how you get forked", and they're right for FOSS projects, but Docker Desktop isn't open source, so it's trickier.
Seriously. Before I made it to the part where they agreed to stop auto-updating, I was on the third revision/toning-down of the comment I was writing in my head. I get that having multiple versions in the wild makes support more difficult, but what's more important in a dev tool: your support workload, or your users' control over their configuration management?
As someone who maintains an open source development tool that I work on in my spare time and don't get paid for, reducing my support workload is absolutely more important to me. You'd be surprised at the amount of time I waste asking people to try to reproduce an issue on the latest version, and when they do, their issue goes away. Unfortunately an auto-update feature for my tool isn't feasible, but if it were, I'd add it in a heartbeat. If that would turn some people off on it, I'm fine with that.
On a side note, I see plenty of developers click the "Remind Me Later" button over and over and over on the update dialog for their dev tools (and even OS). As a pathological example, I noticed one of my coworkers running a 3-year-old version of IntelliJ IDEA recently when he shared his screen on a Zoom call. And people who don't install even patch updates for months are common as well.
That's fine for something a person is working on in their spare time, but when it's a multi-million dollar company whose mission statement is "we simplify the lives of developers who are making world-changing apps," I think it's fair to have different expectations.
> As someone who maintains an open source development tool that I work on in my spare time and don't get paid for, reducing my support workload is absolutely more important to me.
Docker Desktop is produced by a for-profit company, and while it is free of charge itself, it is a key customer-facing part of an ecosystem Docker monetizes by selling subscriptions to services which it leverages. It's not some solo developer’s spare time project, and if Docker wants to be a viable business, it won't be treated like one.
> On a side note, I see plenty of developers click the "Remind Me Later" button over and over and over on the update dialog for their dev tools (and even OS).
It's not like the developers of some OSes didn't work hard to give users this attitude. Nowadays you'd better wait at least a few days before you give in and agree to update your system; it's just a sane approach to risk control.
Unfortunately that's not straightforward with Docker Desktop, as it's closed-source.
Sure, underlying components are open source, but I've seen no other effort to glue them together in a cohesive, single-purpose package like Docker Desktop.
How is that a dark pattern? Are you going to say that Chrome and Firefox auto-updates are a dark pattern too?
Keeping people on the latest version of your software has very clear benefits to both you and your users. Unfortunately, as we see here, it can also have downsides. Ultimately it's up to the developer to decide what trade off works best for them. Input from users is important, but it's not the users' decision.
I wonder if that discussion already occurred in Docker and now they are simply messengers executing something already decided.
I've had the position of saying that something desired will not be done with the only thing I could honestly say without airing private disagreements being "reasons", as the person I was talking to knew I also didn't think it made sense.
Deflecting with vague statements has also been a tactic I have used, hoping that they would not persist.
No, because that puts company management in the crosshairs. You don't let some developer do that without first getting the spin machine time to come up with a reason why it's good for the users. Otherwise people question the vendor's ability to provide the product they want.
This whole page is full of folks who are currently questioning the vendor's ability (or am I misreading something?). That does seem like a problem that should put focus on the management.
No, as I don't want to deal with the person who made the decision later when someone goes to complain to them instead and they blame me for not obfuscating and pointing the finger at them.
On the flip side, supporting old versions is often very difficult and time-consuming. For an open-source project I maintain, sometimes I feel like I spend more time asking people to upgrade to the latest version (which often fixes the issue someone has) than I do helping people with legitimate issues. (I exaggerate, but it's a not-insignificant drain on my time.)
If these people are paying for Docker Desktop, then, sure, maybe they're entitled to upgrades and support on their terms. But if they're not, I'm completely sympathetic toward developers trying to reduce their (unpaid) support burden.
Yes, this might turn people off and hinder adoption, but that's their choice to make.
Yep. There's a growing desire, even among open source projects, to shove updates down your throat without any recourse for you as a user. I see such software as a trojan horse. Developers argue that you need much-needed updates with security fixes, etc., which is of course true. But there's no justification for denying users the freedom to control when to update.
EDIT: But seeing this in Docker doesn't surprise me at all. It's the kind of project that knows better than you. Compare this with general-purpose tools such as SSH, Git, or gcc. There's nothing special about projects like Docker and Snap, and they will eventually get replaced by projects that expose more knobs to fit the (private) use cases of other users.
>But the whole goal of auto-upgrading is to avoid the spread versions, it's a mess to investigate when you have reports from 1 year old version, that's the main reason why we choose to do that.
Why not just outright reject issues on outdated version of the software? Decide that you won't support versions older than x and roll with that. This way the users can consider that when they weigh the risks of updating.
>For those with this pain right now: I think installing Docker Desktop 2.5.0.1 is the only solution.
> it's a mess to investigate when you have reports from 1 year old version
Here's a process I've done, it may sound like a lot of work but it goes quickly if your tools are set up properly:
Require versions and logs with bug reports; check out the tag for that version; build; reproduce the issue. If it doesn't reproduce on the latest build, it might've been fixed (you can ask other devs, reference changelogs and recently closed issues, git blame that part of the codebase to lead back to PRs, etc.); then investigate.
If the bug is already solved in a newer version, you can close the ticket, or provide a hotfix build with the fix if it's low effort (big architectural changes are a no-go; now you have a reason why or why not to provide a fix).
If the bug hasn't been solved, you can fix it for the customer and forward port that to the latest dev branch. Two birds with one stone.
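A rough sketch of that triage decision in Python (the function and action names are illustrative, not any project's actual process):

```python
def triage(reported_version, latest_version,
           reproduces_on_latest, fix_is_low_effort=False):
    """Suggest an action for a bug report filed against some version."""
    if reported_version == latest_version:
        return "investigate"              # current build: dig in directly
    if not reproduces_on_latest:
        # Likely fixed already; a hotfix build only if it's cheap to make
        # (big architectural changes are a no-go).
        return "backport-hotfix" if fix_is_low_effort else "close-as-fixed"
    # Still broken on the latest build: fix it once, then forward-port
    # to the dev branch. Two birds with one stone.
    return "fix-and-forward-port"
```

The point is that the whole workflow collapses to a handful of cheap branches once you require the version up front.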
> Why not just outright reject issues on outdated version of the software?
Because that still requires time to deal with, especially if you have free-form support avenues, like a Slack channel. If someone comes in and pastes a stack trace, I still have to take time to ask them what version they are using, and tell them to try on the latest version. And no amount of bot autoresponses will ensure you don't have to get personally involved.
Yeah, if it's something like GitHub Issues, you can set up an issue template that requires the version number, and a bot that checks all submissions and auto-closes if the version number is missing or not the latest. But at least for one project I work on, it's rare that someone goes to the issue tracker first before asking for help in Slack.
> Why not just outright reject issues on outdated version of the software? Decide that you won't support versions older than x and roll with that. This way the users can consider that when they weigh the risks of updating.
I don't understand why this is not something more companies do.
Most operational systems (networking equipment, server equipment, etc.) have release cycles in which supporting the current version minus two is the usual schedule.
Or just do it the way OpenBSD does, and only support the last two releases of the software.
I’m a fan of this approach (or variations with a LTS schedule). But it’s worth noting that it’s also widely criticized depending on context. For instance, Apple has a similar support schedule for its various OSes, and that draws a ton of nerd ire because it’s frequently interpreted as “planned obsolescence”.
Because if you look at their version adoption chart there’s a long tail between release and propagation. They’re still either supporting older versions for some time or making the arbitrary decision to ignore those users who haven’t restarted to get the new version.
> Why not just outright reject issues on outdated version of the software? Decide that you won't support versions older than x and roll with that. This way the users can consider that when they weigh the risks of updating.
In my experience the overwhelming majority of bug reports do not include the version.
You could even have a bot automatically asking for the version if it's not mentioned, and closing the ticket automatically if the version is no longer supported.
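For illustration, the core rule of such a bot could be this simple (the regex, support floor, and action names are all made up):

```python
import re

MIN_SUPPORTED = (3, 0, 0)  # made-up support floor, e.g. "latest major only"

def parse_version(text):
    """Pull the first x.y.z version number out of free-form report text."""
    m = re.search(r"\b(\d+)\.(\d+)\.(\d+)\b", text)
    return tuple(map(int, m.groups())) if m else None

def bot_action(issue_body):
    version = parse_version(issue_body)
    if version is None:
        return "ask-for-version"       # can't triage without a version
    if version < MIN_SUPPORTED:
        return "close-unsupported"     # point at the upgrade path instead
    return "leave-open"
```

Pair that with an issue template that demands the version and the human workload drops to the genuinely current reports.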
> Sorry about it, we rushed a bit the last update and introduce a severe bug.

> But the whole goal of auto-upgrading is to avoid the spread versions
Maybe you wouldn't have so many versions if you slowed down and actually took the time to make sure each version you do eventually release was more stable...?
With a lot of "modern" software, I feel like it's not just "move fast and break things" anymore, it's "keep moving and breaking things". In particular, certain types of software like browsers and OS invoke feelings of dread upon each update: "what did they break/what user hostile crap or other unwanted changes did they try to sneak in this time?"
Docker is, like a lot of other software, "foundational" in that its error-free operation is relied upon by many. How can developers build anything reliable if the foundations are shaky?
Maybe that's partly why retrocomputing is so popular. With something like a C64 you never have to worry about the platform changing, so you can spend all of your energy on solving the problem you had in mind, instead of the ones created by the platform trying to shift under you.
Great points. "Move fast and break things" is for new companies that don't have people relying on them.
People who use this saying to keep moving forward should be aware that Facebook no longer uses it. I've heard "Move fast with reliable infrastructure" instead. In other words, Facebook invests massive amounts of money to allow them to test and deploy new features without breaking the whole site. If you don't want to make that investment, you don't get to move fast without breaking things.
I'm not at Facebook; the above information is from a podcast. If you are from Facebook, I'd love your input.
I don't know how developers can look at the mess Microsoft causes with forced Windows 10 updates and think it's not only fine but they need to add another layer so that when the OS auto-updates without asking it might kick off a chain reaction of user applications wrecking themselves recursively.
At least we don't have to worry about a Skynet I guess.
> Maybe you wouldn't have so many versions if you slowed down and actually took the time to make sure each version you do eventually release was more stable...?
Considering that containers and orchestration is still a rapidly changing field I disagree with this assessment.
Personally I don't see why auto-update is the job of the software... why not just use a system-wide package manager? I suppose on Mac or Windows, that means using the Store, which heavily restricts what software can do. How shortsighted...
I mean, let's not pretend that's a good choice for either devs or users. From the user side, you're at the mercy of how fast your distro decides to package new versions. For most things that's fine, but for your main product you might want something newer. Of course, devs can set up their own repos, but then they have to do additional packaging, host infra, etc.
From the dev side, getting packages accepted into various distros can be a pain in the ass. Just look at the recent blowup around the Python Cryptography package when they decided to add Rust and Gentoo's complaints.
Something like PPAs is the sweet spot since it uses the package manager but is under developer control. It's a shame that Mac and Windows don't have something like that.
If my project, AppFS [0], were more widely adopted, then packaging would be trivial for all distributions. And updates would be automatic or not, depending on user preference.
Outside the Store, most Mac software uses Sparkle for updates, which gives you a nice UI with patch notes and a choice to update or not (or even opt in to automatic updates if you're feeling brave): https://sparkle-project.org/
Interestingly, they appear to be using Sparkle [0].
Oddly, stephen-turner claims that Sparkle is part of the reason it's taking some time to fix the issue. Direct quote:
> As said above, we're working on this: we're planning to download the update in the background and then give the user the choice whether to update to it on next start. It's taken a little longer than we hoped because the Sparkle framework we use on Mac doesn't expect that workflow: once the update has been downloaded, it wants to apply it without confirmation at next shutdown. We are keen to retain the invisible download but give the user a choice whether to apply it later.
It's hard to understand why they're struggling with a problem that seems to be solved by every other app that uses Sparkle...
I’ve only had the pleasure/pain of using Sparkle once, but it’s definitely not as simple to use as “every other app” might suggest. At least coming from the perspective of “I’ve only ever released one Mac app to the public.” I mean it wasn’t as complicated as modern web build tools, but it definitely needed some finesse.
It looks like this is the framework used by Docker Desktop too [1]. Perhaps they enabled auto updates by default? (I wouldn't know, I don't have any experience with Mac projects.)
Yeah, the problem is that Docker Desktop apparently downloads updates automatically in the background, and Sparkle by default apparently wants to kick off the install as soon as the download finishes.
Which of course punts the concern from "Docker Desktop is installing potentially-broken updates without my consent" to "Docker Desktop is consuming network bandwidth - which might very well be metered - without my consent" - that is, hardly an improvement.
If the Docker folks could not be actively hostile to user experience for two seconds, that would be great :)
There is a class of developers who feel very, very strongly that updating the user's software should be their decision, and not the user's. For the lazy or reckless user's own good, or whatever.
Well for what it’s worth I vastly prefer that attitude as someone who has to manage different dependencies on different projects. Sometimes it’s not up to me when one project is using Foo@13.0 and another is using Foo@27.0. Package managers which isolate that to the project or to some environment are a lot easier to manage than the system-level package managers which require you to roll that isolation yourself.
Edit: and I can’t help pointing out the irony of this being a consideration on a thread about Docker.
Even supposing all we had is e.g. Debian distros, why does Docker Hub exist? Why not just host all the artifacts in some repository and allow `apt-get install docker-image-$image-name`?
Why do so many programming language ecosystems use their own package management (Python's PyPi and pip, Rust's crates, Node's NPM, ...)
I'm open to people's thoughts on the way in which this following analogy doesn't hold. But I think the general (and imo deeply unfortunate) answer is that the software can provide a better experience if it handles these things like self-updates, even if it shouldn't be its "job".
Those language-specific package managers do not auto-update the packages you have installed with them; I don't follow how you go from the premise that there are many different package managers out there to the conclusion that auto-updates are a "better experience".
> the conclusion that auto-updates are a "better experience"
I don't know that they are necessarily a better experience overall, but it is certainly a worse experience to run into a bug or limitation that has already been resolved in a newer version.
It's likely that you'll run into some bug for any software release. What you can control is whether the user makes a deliberate choice to upgrade; the risk of bringing in new bugs or breaking changes may outweigh the cost of dealing with the bug, so in this situation, upgrading for a fix has known benefits and is intentional. This thought process is helpful for pitching your case with stakeholders as well.
> Why do so many programming language ecosystems use their own package management
Because then you have to support maintaining your ecosystem in a dozen or more variations of OS ecosystems. Easier to bring your dep manager to the OS than the other way around.
Well even the package manager managing it can be disastrous. I have definitely lost critical days of work because `brew install` upgraded, and broke, things completely unrelated to what I was installing.
"Sorry about it, we rushed a bit the last update and introduce a severe bug.
I'll try to fix it today and if I cannot, rollback last grpcfuse patches."
It feels like a habit of the Docker team; they always try to defend their anti-user decisions with vague and uninspired corporate BS. They're barely trying.
Another classic: https://github.com/docker/docker.github.io/issues/6910
What the hell does "Docker's new direction to go back to its roots and focus on developer tooling" even mean? Docker is developer tooling, that's the whole product. That's not "going back to your roots" that's just doing your damn job.
I wrote that comment at the time; to add some context to that comment: removal of the "login to download" was shortly after the "Docker Enterprise" business went to Mirantis. While Docker started as a developer centric tool, focus shifted to Enterprise products (Docker Enterprise Engine, UCP, Docker Trusted Registry, Docker Desktop Enterprise). After the move of the enterprise products to Mirantis, Docker's focus went back to developer products.
Yeah, a lot of times they don't seem to understand that their core product is "you can run this command, and the virtual machine will work on any system that runs Docker, limited only by resource availability".
Heck within the first few weeks of dockerizing our system at a startup (c. 2015), they introduced a backward-incompatible change to docker-compose that we only found when onboarding new employees -- the exact thing it's supposed to be for!
> we've made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward
"How should we improve user experience? I know! Let's ruin user experience by forcing users to signup for an account they don't actually need! Brilliant!"
These may simply be issues that are not at the discretion of these developers to either change or discuss with third parties.
As tempting as it is to weigh in publicly on such matters, the best way for a developer to get results is often to say little in public and try to resolve the issue behind the scenes (as was apparently done here).
And the paucity of honesty in public communications is a testament to how few CEOs (and others in a leadership role who could buck-stops-here) are that sincere, willing to take that kind of heat, or to invest in understanding the problem in the first place. It’s also often a testament to how pathologically organizationally shielded they often are; I’d bet money even 5k thumbsdowns on an issue doesn’t bubble up to the top at most orgs.
It's common to be defensive with these sorts of things. I wish people would instead be inquisitive And ask about potential downsides vs. upsides, but it's much easier to respond that way when you feel like your job or status isn't at risk.
Well, the "Also I'm going to move this into the roadmap, as it's a feature request not a bug" is just as bad. If the process broke my desktop then it is a bug.
If your software ships viruses, malware, and ransomware to a customer as part of an update or install, then it is most certainly a bug that needs to be fixed now.
You've apparently missed my point entirely. Well done.
I am talking about the malware itself...
Pretend you write malware. Your malware does what you want it to do: it destroys data. Is the data destruction a bug? Nope. Not a defect, either. It's a feature; the intended effect.
Docker intended for Docker Desktop to auto update. So, it's not a bug, but a feature. Misguided feature? Sure. User-hostile feature? Yep! It's a feature, anyway.
Talking about malware has no point in this unless we are talking about something shipped with docker. It is most definitely a bug (or defect) when the software cannot run.
I can solve this silly debate by using the jargon of our times. It doesn’t matter whether it’s a bug, design flaw, dark pattern, bad UX, feature request. It’s a...
In the end, seems they are changing course after listening to user feedback. Clearly auto-updating works in some contexts, but not in others, and now we can all learn from it :)
> Clearly auto-updating works in some contexts, but not in others, and now we can all learn from it :)
I think the point was that we all knew auto-updating doesn't make sense in the context of developer tools. I'd even argue GitHub Desktop shouldn't be auto-updating. As a matter of fact, VSCode doesn't, and my compilers certainly don't.
Agree. "I was just following a successful model" is not the right way to approach engineering, without understanding the trade-offs and it seems that step was missed when implementing it in the first place.
Indeed, "I was just following a successful model" is exactly why software seems to have progressively gotten worse over the years. "Google does it and therefore I should, too!" No, you should maybe - just maybe - consider, you know, talking to your users and asking them what they want for once, and actually listening when they tell you.
And this applies outside of software, too. Apple dropped the headphone jack, so guess what every other phone manufacturer wants to do? Samsung shoved ads into its smart TVs, so guess what every other TV manufacturer wants to do? So much for product differentiation.
Isn’t one of the main benefits of containers that you can set static versions, validate them, and then be able to deploy them with confidence that later changes won’t be used? It’s a shortcut to a known working configuration. I would expect container maintainers to appreciate the value of not updating as soon as its available.
Auto-updating does not work in any context, because you can seriously break the end user's stuff with no recourse. It needs to ask, always, as you don't know what else is going on.
Well, many people, when faced with a question from the computer, will try to ignore it for as long as possible; that's not ideal either. Opt-out auto-update feels like the right way for consumer software, opt-in auto-update for professional software like developer tools.
For professional (and especially tech-oriented) software I think there’s more leeway to ask the user. Prompt on install/first run would probably strike a better balance than opt-in.
Auto-updates by default are great, but having no way to opt out of auto-updates (or at least, an update to a given known-bad version)? That's pretty bad.
Why would you want auto-updates, though? They just break your stuff, Grandma's and devs' alike. I feel users rarely benefit. But sure, having user-hostile functionality pushed without a way to say no is convenient for whoever is responsible at those places.
At least when you get a prompt, you know why stuff fails afterward. Debugging silent auto-update issues is a nightmare.
I don't think what you're saying is necessarily true. Browsers for example are very resilient at this point, and seem to handle auto-updates just fine. Worst case scenario you can use a different browser for a couple of hours. The upside is that auto-updating works in most cases and users hardly notice, while you're shipping security fixes straight to them.
With developer tools it's different. Your tool stops working and suddenly you cannot work. You often can't just change the tool either, without having to rewrite stuff. For this, it's necessary that versions stay the same until we're ready to upgrade. Same goes for libraries you're using and so on.
Browser updates break addons and workflows on a regular basis. Google for how to do something like add a shortcut to a website to the Android homescreen from Firefox, and chances are all the tutorials you find will be out of date.
Browsers have the benefit of massive interest in their beta releases. This substantially reduces the number of breaking changes which make it into auto-updated versions.
Perhaps Docker could learn from this and only auto-update to versions which have been used by a large audience for N weeks with no regression reports.
On average, updates make stuff better. Getting them automatically and quickly is, on average, a good thing.
Only when that becomes untrue for a sustained period of time should you turn off updates, but at that point, you're just sticking to a single pinned version until you're able to switch to a competitor.
Vendors who just keep pushing user-hostile shit should lose customers, not (just) updaters.
Yeah, well, I am fine with auto-updates as long as it is possible to disable them (without hacks or esoteric config-file settings in GUI apps). The convenience of having the computer click "install latest version" in a prompt for you is minimal.
I think it should be the user's choice. This trend in slow rollouts to see what breaks and using users for beta testing is annoying.
The deliriously unhip Wordpress installation running my website has auto-updates for minor versions turned on. Major updates still require my intervention.
It pretty much always just... works. New exploits just get patched out.
Of course, with the exception of the new "Gutenberg" editor, WP has always been ultra conservative in terms of major usage change.
> "But the whole goal of auto-upgrading is to avoid the spread versions, it's a mess to investigate when you have reports from 1 year old version, that's the main reason why we choose to do that."
I see this all the time, especially in open source products. A vendor decides to screw over its users because the vendor wants less work and doesn't feel like finding a better solution. And they usually get away with it, because big open source projects are incumbents that are very hard not to use.
If you make a product, please prioritize solving the user's problems and pain over your own. Not only is it the compassionate and ethical thing to do, but it also helps engender goodwill for your product. People often choose a product solely because of their feelings for it (branding 101).
You went from "open source products" to "open source projects", and I'm confused about your point. Is Docker an example of both an open source project and product?
> If you make a product, please prioritize solving the user's problems and pain over your own.
I'm not sure I understand the context under which you want this prioritization. Just asking for clarification.
> Blocking outgoing network connections to desktop.docker.com can keep you on on the functional 3.0.1 by not allowing the auto-upgrade to 'find' the broken 3.0.2 version, this can be used as a temporary patch.
You'd need something like Little Snitch to achieve it, but at least you won't lose a Monday.
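For those without Little Snitch, a hosts-file null-route gets the same effect. A minimal sketch, assuming macOS and admin rights; it's demonstrated against a scratch copy so you can inspect the result before touching the real `/etc/hosts`:

```shell
# Null-route the update host so the client can't "find" the broken 3.0.2.
# Demonstrated on a scratch copy; set HOSTS=/etc/hosts (run with sudo) to apply.
HOSTS="$(mktemp)"
grep -q 'desktop.docker.com' "$HOSTS" ||
  echo '127.0.0.1 desktop.docker.com' >> "$HOSTS"
tail -n 1 "$HOSTS"
# When applying for real, also flush the DNS cache: sudo dscacheutil -flushcache
```

Remember to remove the line again once a fixed release ships, or you'll never see another update.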
I use Little Snitch, usually on "Alert Mode", and I almost breathe a sigh of relief whenever I block an automatic outgoing connection to an upgrade server. It can be distracting and annoying to block connections at a granularity to keep the application working correctly, and sometimes I get a bit fed up and just allow all connections, but usually temporarily. Since I even want to be consulted for outgoing network connections, you bet that I want to be consulted before a software upgrade!
SaaS is literally you being subject to unpredictable A/B tests, UX changes, API changes, etc. This is what people signed up for. Automatically managed updates. I don't know too many places that don't rely on some sort of cloud service/SaaS today. Can Github even hit two nines availability this year, I wonder. But we all sit idle while they fix their shit for the 8th time this month.
Enterprises should be paying customers if they want that kind of support: it's expensive and requires substantial staffing investment to avoid making matters worse by, for example, delaying installation of security updates.
There's no rudeness in my comment. I'd argue the tone is much softer than the original post.
I agree that auto upgrades can cause enterprise nightmares, but my point is, you aren't using an enterprise product. You're using a free and unpaid product. Why do we have the right to attack and make demands of someone who isn't getting compensated?
What surprises me most is how much the devs/IT guys consider that their users' computers belong to them, and that they can do whatever they want because they wrote the software!
How is the "auto-updater" different from malware calling a remote website or mining bitcoin? In both cases, the dev decided what must happen on the user's computer without their consent!
Yeah, I like the way Ubuntu handles it. Basically, if you come boo-hooing in Issues and you are using an ancient non-LTS version, you'll be asked to install an LTS (or latest) version first. That way you only have to maintain a tiny subset of your releases, and your users are incentivized to use LTS or make sure they are up to date.
Docker for Mac is a dumpster fire. We gave up on it a year ago. There are loads of hot-reloading dev-stack-in-the-cloud solutions nowadays; anyone who can avoid local Docker should.
More performant, better battery life and keeps your crotch from igniting.
> Seems like you need to default to auto update but have an opt out.
We have that. Our software auto-updates, but when you start our program you get a launcher where you can select one of the last five minor versions for each available major version.
Typically the next major version is made immediately available and active in the test environment, while a "boss user" gates them in prod.
When setting a new major version as active, the previous ones are still available for a long time, along with any potential new minor versions for those.
This has made our life so much easier, since any critical issues can almost always be worked around by the user simply launching a previous version, either minor or possibly major. So we can be much more aggressive with pushing out changes, which our customers also appreciate.
It's not perfect but works very well for us and our customers seem happy.
It does require us being careful when making database schema changes, or similar potentially breaking changes. But a lot can be handled quite transparently in the database (using views typically) or through code, and our database upgrade tool can also migrate data as a last resort.
This sort of thing is the exact reason why I have been staying away from Docker for my projects.
Having one company be totally in control of runtime environments just isn’t robust. There are just too many scenarios where some issue they face internally could cause a massive disruption.
And when I looked I didn’t find any Docker alternatives, so I just stuck with plain old Bash scripts, rsync, and running the app on Ubuntu.
I use them for everything where I can. Years of Docker in production has burnt me out; I've come to loathe its design and anti-patterns. There's also just plain ridiculous stuff, like making people sign up for an account to download it on Windows/Mac.
It might do, I haven’t had the time to investigate. It was 3-4 years ago when I had to make the decision. I remember there was some open source tools mentioned on some websites, but I certainly didn’t get the impression it was very easy to get it working.
I guess that was the appeal of Docker, they had done all the difficult part and created a product. But then it’s the ‘all eggs in one basket problem’, and other stuff you mentioned like the signing up for accounts etc.
I had so many other things to learn and build at the time, that the extra complication of containers didn’t make sense.
If I was in a situation where I was building out a fresh site I would for sure give LXD another look.
Wow. Autoupgrading is such a rookie mistake that I'm shocked that anyone at Docker would approve this tactic. Auto-upgrading and breaking environments is a sure-fire way of sowing deep seeds of mistrust and would force lots of companies to move off Docker.
That's a decision that needs to be completely reversed, and the executives that approved that need to be dismissed immediately.
This, especially because based on my experience running Docker Desktop yesterday, this issue should actually be closed. There's now a "Quit Docker Desktop" and a "Restart and Upgrade" (and a status icon) in the menu bar (edit: I'm on a Mac).
Docker Desktop for Windows still auto-upgrades without asking you. Not sure about Mac. The fix is supposedly coming in the next release (sometime in April).
Unless you're planning on building a microservices-based house of cards, you may not need Docker or similar. I run Django apps just fine with a combination of Python (via Pyenv), Postgres and Redis all running locally on a Mac.
If you still wanted to use containers, podman is a great drop-in replacement for docker.
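Since podman's CLI intentionally mirrors docker's, trying it often takes nothing more than a shell shim. A sketch, assuming podman is installed; note a plain `alias` won't expand inside scripts, hence the function:

```shell
# Route `docker ...` invocations to podman. A function is used rather than
# an alias because aliases don't expand in non-interactive shell scripts.
docker() { podman "$@"; }

if command -v podman >/dev/null 2>&1; then
  docker --version    # actually runs `podman --version`
else
  echo "podman not installed; shim defined but unused"
fi
```

Existing scripts that shell out to `docker` then keep working unchanged, which is the whole point of the drop-in claim.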
Heroku's always nice and simple for getting stuff out there.
Vagrant + a provisioner of your choice is still a good alternative and works solidly for local development if you want to keep developing away from the local filesystem.
There are quite a few options that don't involve docker at all, it just takes a little research to remind yourself that there's a world outside of the one presented by docker :/
Is there a file format like "docker-compose.yml", where I can describe a handful of Jails that can talk to each other in their own virtual network, and that sets up a handful of services exactly as I describe?
Ideally, a Windows user, a Linux user and a Mac OS user check out this file, run a command ("voodoo up") in the directory of that file, and have a VM with the FreeBSD jails setup running exactly as described in the file.
I agree a compositional/coordination CLI tool is needed. However, here are a few options:
- FreeBSD jails already cover much of what a Docker config does; here is an article showing a few basic things going on, though notably the Network Access section has a FIXME by it: https://bsdwatch.net/articles/jails-as-virtual-servers
Docker solves a lot of problems. I don't want to use it, but I want an easy, cohesive solution to all those problems!
Like, each type of application I have to run in my fleet needs the right environment, with proper settings, packages, etc. And when a developer at the company needs to work on that service locally, they need a prod-like env spun up quickly and simply that they can easily work in.
There's many ways to do this, like "one prod box and ssh" but they all have tradeoffs...
My team uses https://www.chef.io/products/chef-habitat/ to solve what you explained there. Worth taking the time to go through if you want repeatable builds, runtimes, and environments (be they bare metal, VM, or even docker).
Google Chrome has been using auto-upgrades for years and I haven't heard any complaints about that. I'd argue it's also a really important tool in a dev's toolkit.
I suppose it's because, while not every version is great, it hasn't (that I am aware of) totally broken Chrome for any significant chunk of their users. It may be that is due to a bigger budget for QA at Google, or perhaps the intrinsic challenge of the two tools, but for whatever reason it hasn't happened (that I know of). If it did, I think there would be plenty of squawking.
Chrome does several things right. (Disclaimer: I used to work on ChromeOS/Chrome, but this is just my opinion (but you can have it if you want).)
- A beta stream that's one release ahead, which tends to be selected by web developers and other whiny^H^H^H^H^H demanding users.
- Gradual release of new versions, starting with something like 1% of users, then 10%, then 100%.
- Significant changes are behind run-time flags (chrome://flags), and ship disabled by default. After one or often several releases, where it can be optionally enabled, it becomes enabled by default, with the ability to back out if something goes wrong. Finally, only once it's been successfully enabled for several releases, the flag and the old code can be removed.
- Nobody minds if a feature doesn't make it into a release; there'll be another one in a month or two. There's no pressure to cram in something half-baked to meet a deadline.
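The gradual-release point boils down to deterministic bucketing: hash a stable client id into a 0-99 bucket and compare it against the current rollout percentage. An illustrative sketch only, not Chrome's actual mechanism; `client_id` and the percentage are made up:

```shell
# Deterministic staged rollout: each client hashes to a stable 0-99 bucket,
# so raising rollout_pct from 1 -> 10 -> 100 only ever adds clients, and a
# given client never flip-flops between "update" and "hold back".
rollout_pct=10
client_id="machine-1234"
bucket=$(( 0x$(printf '%s' "$client_id" | md5sum | cut -c1-4) % 100 ))
if [ "$bucket" -lt "$rollout_pct" ]; then
  echo "client $client_id (bucket $bucket): update now"
else
  echo "client $client_id (bucket $bucket): hold back"
fi
```

The stable hash is what makes "1%, then 10%, then 100%" meaningful: each stage is a superset of the previous one, so a regression caught at 1% never reaches the other 99%.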
At some level, this is a tension between continuous delivery and the burden of supporting a spread of versions. I think they are correct in the assertion that you get version explosion, but possibly there is a middle ground: ask "before you file a bug, have you tried updating first?" (the versioning equivalent of turning it off and on again), combined with a policy of ignoring/closing bugs on old versions as un-reproducible.
Probably a no-win situation overall for the dev team, however.
There is never a perfect way of resolving differences of opinion on a project. I see both viewpoints as valid (not an invested party here, just an observer). But in the end, I would give the maintainer of the project veto power over feedback. It seems like this is an attribute of their design they don't want to budge on.
On my system docker comes from the package manager, which doesn't even run unless I explicitly trigger it. Is not using the GUI bundle not an option for people who need more control?
Last time I tried it, particularly using it for a Docker Compose replacement, it certainly wasn't the drop-in replacement that had feature parity with Docker that it was promoted as.
Don't use docker-compose (or podman-compose), write kubernetes pod files instead! That's what we've been doing (but yes there are still some features that aren't completely compatible with that)
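A minimal sketch of that workflow, assuming podman is installed (the file name and image are arbitrary choices for the example): write a plain Kubernetes Pod manifest and hand it to `podman play kube`:

```shell
# Write a minimal Kubernetes Pod manifest...
cat > redis-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: redis
    image: docker.io/library/redis:6
    ports:
    - containerPort: 6379
      hostPort: 6379
EOF

# ...and run it locally; `podman play kube` understands a useful subset
# of the Pod spec (as noted above, not every field is supported).
if command -v podman >/dev/null 2>&1; then
  podman play kube redis-pod.yaml
else
  echo "podman not installed; manifest written to redis-pod.yaml"
fi
```

The nice side effect is that the same manifest can later be applied to a real cluster, so the local dev format and the deployment format stop diverging.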
I’ve also been using it and found it to be really good. Has helped me understand much more about containers and I’m a fan of its approach. Buildah is also a powerful tool.
No. We're still on the last stable version (2.5.0.1)
Every version after that either doesn't start [1] or makes random containers miraculously stop working.
The 2.x versions had your CPU burning, and that seems to persist in 3.x [2].
I stopped trying after they pulled in the auto-upgrade feature. Our team has no time to go on a bug hunt at random times.
I honestly don't comprehend how this software gained such a wide adoption on Macs.
Docker on Linux works flawlessly meanwhile.
No, and it's very likely that Docker may never improve much on macOS. There was already no shortage of trouble on x86 macOS, and I doubt the switch to ARM (and, presumably, Big Sur) will make things any easier.