Hacker News

> but in the back of my mind I'm thinking, man, anyone could just code up some real nasty backdoor and the project would be screwed

That's true of course, but it's not a problem specific to software. In fact, I'm not even sure it's a "problem" in a meaningful sense at all.

When you're taking a walk on a forest road, any car that comes your way could just run you over. Chances are the driver would never get caught. There is nothing you can do to protect yourself against it. Police aren't around to help you. This horror scenario, much worse than a software backdoor, is actually the minimum viable danger that you need to accept in order to be able to do anything at all. And yes, sometimes it does really happen.

But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has. The fantasy that if code review was just a little tighter, if more linters, CI mechanisms, and pattern matching were employed, if code signing was more widespread, if we verified people's identities etc., if all these things were implemented, then such scenarios could be prevented, that fantasy is the real problem. It's symptomatic of the insane Silicon Valley vision that the world can and should be managed and controlled at every level of detail. Which is a "cure" that would be much worse than any disease it could possibly prevent.




> When you're taking a walk on a forest road, any car that comes your way could just run you over. Chances are the driver would never get caught. There is nothing you can do to protect yourself against it.

Sure you can. You can be more vigilant and careful when walking near traffic. So maybe don't have headphones on, and engage all your senses on the immediate threats around you. This won't guarantee that a car won't run you over, but it reduces the chances considerably to where you can possibly avoid it.

The same can be said about the xz situation. All the linters, CI checks and code reviews couldn't guarantee that this wouldn't happen, but they sure would lower the chances that it does. Having a defeatist attitude that nothing could be done to prevent it, and that therefore all these development practices are useless, is not helpful for when this happens again.

The major problem in the xz case was that it had two maintainers: one who was mostly absent, and another who gradually gained control over the project and introduced the malicious code. No automated checks could have helped when there were no code reviews and no oversight over what got merged at all. But had there been some oversight and thorough review from at least one other developer, the chances of this happening would have been lower.

It's important to talk about probabilities here instead of absolute prevention, since it's possible that even in the strictest of environments, with many active contributors, malicious code could still theoretically be merged in. But without any of it, this approaches 100% (minus the probability of someone acting maliciously to begin with, having their account taken over, etc.).
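To put rough numbers on that framing, here's a toy model (the miss rate is invented purely for illustration, and real reviewers are not independent, so treat this as a sketch, not a measurement): the chance that a malicious patch slips past every reviewer shrinks multiplicatively with each independent review, and with zero reviewers it is exactly 100%.

```python
# Toy model: chance a malicious patch slips past N independent reviewers.
# The 0.7 per-reviewer miss rate is an assumption for illustration; real
# reviews are correlated, so this is a best case for adding reviewers.

def slip_through(p_miss: float, n_reviewers: int) -> float:
    """Probability that every one of n_reviewers misses the problem."""
    return p_miss ** n_reviewers

print(slip_through(0.7, 0))  # no review at all: 1.0, the patch always lands
print(slip_through(0.7, 1))  # one reviewer: 0.7
print(slip_through(0.7, 3))  # three reviewers: ~0.34
```

Even with a pessimistic per-reviewer miss rate, a couple of independent looks cut the odds substantially, which is the whole point of arguing in probabilities rather than guarantees.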


It's not defeatist to admit and accept that some things are ultimately out of our control. And more importantly, that any attempt to increase control over them comes with downsides.

An open source project that imposes all kinds of restrictions and complex bureaucratic checks before anything can get merged, is a project I wouldn't want to participate in. I imagine many others might feel the same. So perhaps the loss from such measures would be greater than the gain. Without people willing to contribute their time, open source cannot function.


> It's not defeatist to admit and accept that some things are ultimately out of our control.

But that's the thing: deciding how software is built and which features are shipped to users _is_ under our control. The case with xz was exceptionally bad because of the state of the project, but in a well maintained project having these checks and oversight does help with delivering better quality software. I'm not saying that this type of sophisticated attack could've been prevented even if the project was well maintained, but this doesn't mean that there's nothing we can do about it.

> And more importantly, that any attempt to increase control over them comes with downsides.

That's a subjective opinion. I personally find linters and code reviews essential to software development, and if you think of them as being restrictions or useless bureaucratic processes that prevent you from contributing to a project then you're entitled to your opinion, but I disagree. The downsides you mention are simply minimum contribution requirements, and not having any at all would ultimately become a burden on everybody, lead to a chaotic SDLC, and to more issues being shipped to users. I don't have any empirical evidence to back this up, so this is also "just" my opinion based on working on projects with well-defined guidelines.

I'm sure you would agree with the Optimistic Merging methodology[1]. I'd be curious to know whether this has any tangible benefits as claimed by its proponents. At first glance, a project like https://github.com/zeromq/libzmq doesn't appear to have a more vibrant community than a project of comparable size and popularity like https://github.com/NixOS/nix, while the latter uses the criticized "Pessimistic Merging" methodology. Perhaps I'm looking at the wrong signals, but I'm not able to see a clear advantage of OM, while I can see clear disadvantages of it.

libzmq does have contribution guidelines[2], but a code review process is unspecified (even though it mentions having "systematic reviews"), and there are no testing requirements besides patches being required to "pass project self-tests". Who conducts reviews and when, or who works on tests is entirely unclear, though the project seems to have 75% coverage, so someone must be doing this. I'm not sure whether all of this makes contributors happier, but I sure wouldn't like to work on a project where this is unclear.

> Without people willing to contribute their time, open source cannot function.

Agreed, but I would argue that no project, open source or otherwise, can function without contribution guidelines that maintain certain quality standards.

[1]: https://news.ycombinator.com/item?id=39880972

[2]: https://rfc.zeromq.org/spec/42/


> But that's the thing: deciding how software is built and which features are shipped to users _is_ under our control. The case with xz was exceptionally bad because of the state of the project, but in a well maintained project having these checks and oversight does help with delivering better quality software. I'm not saying that this type of sophisticated attack could've been prevented even if the project was well maintained, but this doesn't mean that there's nothing we can do about it.

In this particular case, having a static project or a single maintainer rarely releasing updates would actually be an improvement! The people/sockpuppets calling for more/faster changes to xz and more maintainers to handle that is exactly how we ended up with a malicious maintainer in charge in the first place. And assuming no CVEs or external breaking changes occur, why does that particular library need to change?


Honestly, this is why I think we should pay people for open source projects. It's a tragedy of the commons: all of us benefit enormously from this free software, built for free. Pay doesn't fix the problems directly, but it does decrease the risk. Pay means people can work on these projects full time instead of on the side. Pay means it's harder to bribe someone. Pay makes contributors feel better, and more like their work is meaningful. Importantly, pay signals to these people that we care about them.

I think big tech should pay. We know they'll pass the costs on to us anyway. I'd also be happy to pay taxes toward this, but that's probably harder. I'm not sure what the best solution is, and this is clearly only part of a much larger problem, but I think it's very important that we actually talk about how much value OSS has. If we're going to say that money represents the value of work, we can't just ignore how much value is generated by OSS and only talk about what's popular and well known. There is critical infrastructure in every system you can think of (traditional engineering, politics, anything) that goes unnoticed. We shouldn't just pay for things that are popular; we should pay for things that are important. Maybe the conversation will be different when AI takes all the jobs (lol)


I get why, in principle, we should pay people for open source projects, but I guess it doesn't make much of a difference when it comes to vulnerabilities.

First off, there are a lot of ways to bring someone to "the dark side". Maybe it's blackmail. Maybe it's ideology ("the greater good"). Maybe it's just pumping their ego. Or maybe it's money, but not that much, and extra money can be helpful. There is a long history of people spying against their country or hacking for a variety of reasons, even if they had a job and a steady paycheck. You can't just pay people and expect them to be 100% honest for the rest of their life.

Second, most (known) vulnerabilities are not backdoors. As any software developer knows, it's easy to make mistakes, and that goes for vulnerabilities too. Even as a paid software developer, you can definitely mess up a function (or method) and accidentally introduce an off-by-one vulnerability, forget to properly validate inputs, or reuse a supposedly one-time cryptographic value.


I think it does make a difference when it comes to vulnerabilities, and especially infiltrators. You're doing these things as a hobby, outside of your real work. If the project becomes too big for you, it's hard to find help (exactly the case here). How do you pass on the torch when you want to retire?

I think money can help alleviate pressure on both your points. No one is saying that money makes people honest. But if it's a full-time job, you're less likely to just glance at a patch and say "lgtm". You make fewer mistakes when you're less stressed or tired. It's harder to be corrupted, because people would rather have a stable job and career than a one-time payout. Pay also makes it easier to trace.

Again, it's not a 100% solution. Nothing will be! But it's hard to argue that this wouldn't alleviate significant pressure.

https://www.mail-archive.com/xz-devel@tukaani.org/msg00567.h...


Yeah, so we could have paid this dude to put the exploits in our programs. Good idea.


If you're going to criticize me, at least read what I wrote first


Using what bank? He used a fake name and a VPN.


The difference is that a software backdoor can affect billions of people. That driver on the road can't affect many without being caught.

In this case, had they been a bit more careful about the performance impact, they could have affected millions of machines without being caught. There aren't many places outside of software where a lone wolf can do so much damage.


A few more issues like this in crucial software and we might actually see the big companies stepping up to fund that kind of care and attention.


>But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has.

Wholeheartedly agree. Fundamentally, we all assume that people are operating with good will and establish trust with that as the foundation (granted to varying degrees depending on the culture, some are more trusting or skeptical than others).

It's also why building trust takes ages and destroying it only takes seconds, and why violations of trust at all are almost always scathing to our very soul.

We certainly can account for bad actors, and depending on what's at stake (eg: hijacking airliners) we do forego assuming good will. But taking that too far is a very uncomfortable world to live in, because it's counter to something very fundamental for humans and life.


This is a good take. But even in a forest, sometimes when tragedy strikes people do postmortems, question regulations and push for change.

Sometimes it does seem like the internet incentivizes harm, or at least makes everyone accessible to a higher ratio of people who seek to do harm than is normal offline.


> But at the end of the day, the vast majority of people just don't seek to actively harm others. Everything humans do relies on that assumption, and always has.

https://en.wikipedia.org/wiki/Normalcy_bias ?

> It's symptomatic of the insane Silicon Valley vision that the world can and should be managed and controlled at every level of detail. Which is a "cure" that would be much worse than any disease it could possibly prevent.

What "cure" would you recommend?


You need to accept that everything has a tradeoff and some amount of drama just seems to be built into the system.

Take sex work, for example. Legalizing it leads to an overall increase in sex trafficking. But it also does this: https://www.washingtonpost.com/news/wonk/wp/2014/07/17/when-...

My personal opinion is that if something is going to find a way to conduct itself in secret anyway (at high risk and cost) if it is banned, it is always better to just suck it up and permit it and regulate it in the open instead. Trafficked people are far easier to discover in an open market than a black one. Effects of anything (both positive and negative) are far easier to assess when the thing being assessed is legal.

Should we ban cash because it incentivizes mugging and pickpocketing and theft? (I've been the victim of pickpocketing. The most valuable thing they took was an irreplaceable military ID I carried (I was long since inactive)... Not the $25 in cash in my wallet at the time.) I mean, there would literally be far fewer muggings if no one carried cash. Is it thus the cash's "fault"?


But there have to be specific trade-offs, in each case.

I am reminded of the words of "a wise man."

https://news.ycombinator.com/item?id=39874049

Captain's Log: This entire branch of comments responding to OP is not helping advance humanity in any significant way. I would appreciate my statement of protest being noted by the alien archeologists who find these bits in the wreckage of my species.


I think drunk driving's role as an oil that keeps society lubricated should not be underestimated.

Yes, drunk driving kills people and that's unacceptable. On the other hand, people going out to eat and drink with family, friends, and co-workers after work helps keep society functioning, and the police respect this reality because they don't arrest clearly-drunk patrons coming out of restaurants to drive back home.


This is such a deeply American take that I can't help but laugh out loud. It's like going to a developing nation and saying that, while emissions from two stroke scooters kills people there's no alternative to get your life things done.


It certainly isn't just America, though we're probably the most infamous example.

I was in France for business once in the countryside (southern France), and the host took everyone (me, their employees, etc.) out to lunch. Far as I could tell it was just an everyday thing. Anyway, we drove about an hour to a nearby village and practically partied for a few hours. Wine flowed like a river. Then we drove back and we all got back to our work. So not only were we drunk driving, we were drunk working. Even Americans usually don't drink that hard; the French earned my respect that day, they know how to have a good time.

Also many times in Japan, I would invite a business client/supplier or a friend over for dinner at a sushi bar. It's not unusual for some to drive rather than take the train, and then of course go back home driving after having had lots of beer and sake.

Whether any of us like it or not, drunk driving is an oil that lubricates society.


> Even Americans usually don't drink that hard; the French earned my respect that day.

Is drinking hard something so deserving of respect? Is working while impaired?

To me this reads as "I like to fuck off and be irresponsible and man did these French guys show me how it's done!"


Except they weren't irresponsible. We all drove back just fine, and we all went back to work just as competently as before like nothing happened.

It takes skill and maturity to have a good time but not so much that it would impair subsequent duties. The French demonstrated to me they have that down to a much finer degree than most of us have in America, so they have my respect.

This isn't to say Americans are immature, mind you. For every drunk driving incident you hear about on the news, hundreds of thousands if not millions of Americans drive home drunk without ever harming anyone. What I will admit is that Americans would still refrain from drinking that much at lunch with a work day still ahead of us; that's something we could take lessons from the French on.

Life is short, so those who can have more happy hours without compromising their duties are the real winners.


As someone who knows people who died in a crash with another drunk driver, it is hard for me to accept your view. Certainly, at a bare minimum, the penalties for drunk driving that results in fatality should be much harsher than they are now -- at that point there is hard empirical evidence that you cannot be trusted to have the "skill and maturity" necessary for driving -- but we can't even bring ourselves to do that, not even for repeat offenders.

Eventually I am optimistic that autonomous driving will solve the problem entirely, at least for those who are responsible drivers. In an era of widely available self-driving cars, if you choose to drive drunk, then that is an active choice, and no amount of "social lubrication" can excuse such degenerate behavior.


I am very sorry about your friend.

I think the real problem is that people are really poor at assessing risk. And I think we can make some headway there, educationally, and it might actually affect how people reason around drunk driving (or their friends, assuming they still have their faculties).

Let's take the example of driving home drunk without hurting anyone or having an accident. Suppose (optimistically) there's a 1% chance of an accident and a 0.1% chance of causing a fatality (including to yourself) per trip. Seems like an easy risk to take, right? But look at what happens if you drive home drunk 40 times:

A 99% chance of causing no accident each time, compounded over 40 trips: 0.99^40 is roughly 0.67, so only about a 67% chance that none of those 40 trips results in an accident. 80 trips? A 45% chance of no accident. Now you're flipping a coin on whether you cause an accident (potentially a fatal one; we'll get to that) at some point over those 80 attempts. (And I feel like those odds are optimistic.)

If I have a 99.9% chance of not killing someone when drunk-driving one time, after 80 times I have a 92% chance of not killing someone (that is, an 8% chance of killing someone). Again, this seems optimistic.

Try tweaking the numbers to a 2% chance of an accident and a 1.2% chance of causing a fatality.

Anyway, my point is that people are really terrible at evaluating the whole "re-rolling the dice multiple times" angle, since a single hit is a HUGE, potentially life-changing loss.

(People are just as bad at evaluating success risk, as well, for similar reasons- a single large success is a potentially life-changing event)
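The compounding above is easy to verify in a few lines, using the same illustrative per-trip numbers from this comment (1% accident, 0.1% fatality; both assumed for the sake of argument, not measured):

```python
# Compound risk over repeated trips. The per-trip probabilities are the
# illustrative numbers from the comment above, not real statistics.
p_accident = 0.01   # assumed 1% chance of an accident per drunk trip
p_fatality = 0.001  # assumed 0.1% chance of causing a fatality per trip

def clean_record(p_event: float, trips: int) -> float:
    """Probability that the event never happens over `trips` repetitions."""
    return (1 - p_event) ** trips

print(round(clean_record(p_accident, 40), 2))  # ~0.67: accident-free after 40 trips
print(round(clean_record(p_accident, 80), 2))  # ~0.45: basically a coin flip
print(round(clean_record(p_fatality, 80), 2))  # ~0.92: i.e. an 8% chance of a fatality
```

The small per-trip risk is what people intuit; the exponent is what they miss.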


I'm certainly not trying to understate the very real and very serious suffering that irresponsible drunk drivers can and do cause. If any of this came off like that then that was never my intention.

When it comes to understanding drunk driving and especially why it is de facto tolerated by society despite its significant problems, it's necessary to consider the motivators and both positive and negative results. Simply saying "they are all irresponsible and should stop" and such with a handwave isn't productive. After all, society wouldn't tolerate a significant problem if there wasn't a significant benefit to doing so.


One of the well known effects of alcohol is impaired judgment. You're expecting people with some level of impaired judgment to make correct judgment calls. Skill and maturity can help, but are not a solution to that fundamental problem.

Would you be okay with a surgeon operating on you in the afternoon drinking at lunch and working on you later while impaired? Is it okay for every person and job to be impaired, regardless of the responsibility of their situation? If not, why is operating a few thousand pound vehicle in public that can easily kill multiple people when used incorrectly okay?


If it's American to make counterarguments based on reason instead of ridicule, then hell, I'd much prefer to be an American than whatever the hell your judgmental buttocks is doing.

And no, there is currently no substitute for a legal removal of your repression so that you can, say, get on with some shagging. I would love to see a study trying to determine what percentage of humans have only come into existence because of a bit of "social lubrication"

For some people, alcohol is indeed "medicinal".


You can laugh out loud all you want, but there are mandatory parking minimums for bars across the USA.

Yes, bars have parking lots, and a lot of spaces.

The intent is to *drive* there, drink and maybe eat, and leave in various states of drunkenness. Why else would the spacious parking lots be required?


What is more depressing is how we can acknowledge that reality and continue to do absolutely nothing to mitigate it but punish it, in many cases.

The more people practically need to drive, the more people will drive drunk and kill people, yet in so many cases we just stop there and say "welp, guess that's just nature" instead of building viable alternatives. The other theoretical possibility, though, is that if people didn't need to drive, they might end up drinking more.


drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so it's impossible to say if it's bad or not


> https://en.wikipedia.org/wiki/Normalcy_bias

Indeed, that "bias" is a vital mechanism that enables societies to function. Good luck getting people to live together if they look at passerbys thinking "there is a 0.34% chance that guy is a serial killer".

> What "cure" would you recommend?

Accepting that not every problem can, or needs to be, solved. Today's science/tech culture suffers from an almost cartoonish god complex seeking to manage humanity into a glorious data-driven future. That isn't going to happen, and we're better off for it. People will still die in the future, and they will still commit crimes. Tomorrow, I might be the victim, as I already have been in the past. But that doesn't mean I want the insane hyper-control that some of our so-called luminaries are pushing us towards to become reality.



