
Often overlooked is the type of mistake, which it seems you are touching on with "poorer responses to emergencies". If humans make 10 mistakes over a time period compared to 1 for the automated system, it might look as if the humans are worse, but if the cost of the human mistakes averages to 5 for some theoretical cost scale (so 50 overall), but the automated mistake costs 100 because there isn't a good dynamic response to it, the automated system doesn't actually cost less in the end. That's why we so often see automated systems backed by people, as we don't have automation that can respond as dynamically as a person.
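A minimal sketch of that arithmetic (the mistake counts and the theoretical cost scale are the made-up numbers from above, not measurements):

    # Expected-cost comparison using the hypothetical numbers above.
    human_mistakes, human_cost_each = 10, 5   # many small, recoverable mistakes
    auto_mistakes, auto_cost_each = 1, 100    # one rare but poorly handled failure

    human_total = human_mistakes * human_cost_each   # 50
    auto_total = auto_mistakes * auto_cost_each      # 100

    print(f"human total cost: {human_total}")        # 50
    print(f"automated total cost: {auto_total}")     # 100
    # Fewer mistakes != lower cost: 1 * 100 > 10 * 5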

It usually takes far fewer people to oversee automation than to do the original work though.




It only takes one human mistake that costs 1000 to destroy all that. Many systems in oil refineries are automated because humans didn't want to shut the plant down when there was trouble. Even when the problem was predictable, humans often refused to admit that they couldn't keep operating as is. (We know this pump is failing; the replacement won't arrive for 2 months, so we are going to continue as long as we can - once it fails we know we have 1 hour to hit the shutdown switch or the plant will explode.)


> It only takes one human mistake that costs 1000 to destroy all that.

And it only takes one machine mistake that's 10000 to reverse it again. What it comes down to is that until every mistake that can happen is known and accounted for, a human somewhere in the process will allow for better reasoning about any possible mistakes. Human errors are more likely, but they are fairly well understood, and you can safeguard yourself by putting more people higher up the stack checking things. This is a well-understood problem; we've had centuries (if not millennia, depending on how you want to qualify it) to work on reducing errors in people's work.
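A back-of-the-envelope way to see why layered checks help (assuming, generously, that reviewers catch mistakes independently; the miss rate is made up):

    # Chance that a mistake slips past k independent reviewers,
    # each of whom misses a given fraction of mistakes.
    miss_rate = 0.2   # hypothetical: each checker misses 20% of mistakes

    for reviewers in range(1, 5):
        slip_through = miss_rate ** reviewers
        print(f"{reviewers} reviewer(s): {slip_through:.1%} of mistakes survive")
    # 1: 20.0%, 2: 4.0%, 3: 0.8%, 4: 0.2% -- assuming independence, which real
    # reviewers sharing the same blind spots won't fully satisfy.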

> Many systems in oil refineries are automated because humans didn't want to shut the plant down when there was trouble.

That's not a problem with people, that's a problem with the structure the people are in. When keeping the operation running is prioritized over fixing problems as they are found, you get a perverse incentive structure. A good example of this is shown in the NUMMI episode[1] of This American Life. In the 1980s at the NUMMI plant, GM never wanted the assembly line stopped, so cars would go through not completely finished, and workers would then volunteer to finish them on the lot manually (for extra money), but these cars often had problems and were very low quality. This is contrasted with what the GM execs saw at a Toyota plant (and tried to institute), which was that workers were encouraged to stop or slow the line whenever they found problems, and managers would then go investigate and either fix the problems at that station or allocate more resources there if that wasn't sufficient.

The problem you outline is not inherent to the people working; the problem is the structure they are working within. The main difference in reality is that when an automated component isn't working right, people know it (or something associated with it, such as a feeder) needs to be fixed, so they fix it. Or more appropriately, they've been trained that this is the correct course of action. The same could be true of people working, it just often wasn't.

1: https://www.thisamericanlife.org/radio-archives/episode/403/...


> Human errors are more likely, but they are fairly well understood, and you can safeguard yourself by putting more people higher up the stack checking things.

This doesn't sound true to me. The worst man-made catastrophes seem to frequently involve a human manually disabling some sort of critical safety mechanism (eg, Chernobyl). You could look at the checklist as a primitive form of automation that helps precisely because it removes human judgement.

In theory, automation could encompass the knowledge of the finest experts in the field. Humans get credit for being able to think flexibly and accommodate information that maybe wasn't included in the original programming. But if you're experiencing a six-sigma event that the system wasn't programmed for, what is the chance that Joe Random Operator (completed a two-month training certificate program) is going to be savvy enough to respond better?


> The worst man-made catastrophes seem to frequently involve a human manually disabling some sort of critical safety mechanism (eg, Chernobyl).

But again, we're back to perverse incentives. Keeping the plant running was more important than diagnosing and fixing any problems. If the people involved were first assessed by how well they found and fixed problems, and then by how much uptime the plant had, I'm confident it would have played out quite differently.

There is a point to what you're saying though, and how it interacts with my statement. Humans work based on current incentives, while machines work towards the goals their designers were incentivized towards at the time of their creation. Keeping the former in good order over time is a harder task than making sure the latter is correct at design time. That is, it's easy in the design process to emphasize the cost of errors and try to account for them, but human managers, being more dynamic, sometimes accidentally allow perverse incentives to change how a human workforce behaves. Human workers are more versatile, but that versatility comes with downsides.

> In theory, automation could encompass the knowledge of the finest experts in the field

Which is why I qualified my statement with "until every mistake that can happen is known and accounted for" and "allow for". The point is, how many machines are operating in an environment where that's true, and where we're actually utilizing the information from all the experts correctly? Just taking into consideration that the physical environments where machines run are often not fully under control, I think that number is actually rather small.


In the case I'm thinking of, the company wanted maximum profit - which meant shutting down production when things failed, but continuing until then. However, there was a cowboy attitude: the operators knew that "Bubba" was almost able to continue operation in a similar situation in the 1950s, and they thought they were better. The reality is that "Bubba" failed, but the catastrophic explosion came after 1 hour 20 minutes or some such thing; beating that record gives you bragging rights in the lunch room.


> In the case I'm thinking of, the company wanted maximum profit - which meant shutting down production when things failed

Sure, and I don't dispute that things like this happen. I'm just saying that the real problem here is the "the company wanted maximum profit - which meant shutting down production when things failed, but continuing until then" portion. Automation often doesn't allow this, not by design, but because it just can't: it's not versatile enough to work around its fairly narrow programming. Humans can, and when they do and a major failure then follows, that's not a problem of the human workers, or even entirely a problem of middle management, but of the overall policies and incentives that allow or promote such behavior.

That is, automation is better at reducing errors, but I also think a lot (but not all!) of what we attribute to automation reducing errors is really just its inflexibility not allowing managers to go for short-term goals at the expense of long-term goals.

It's like a car that refuses to drive more than 5 miles without needing to shut down for a couple minutes once it's been 3,200 (for some leeway) miles since the last oil change. It would force people to get oil changes more regularly, and while it would cost a little more for those who put it off far longer than they should (me), it saves money in the long run through the problems it prevents (and the problems that get detected early by people looking under the hood). That doesn't mean people can't make sure to change the oil on time without that extra constraint; it's just that we're generally pretty bad at quantifying risk, especially over longer time scales. That problem is mitigated somewhat by multiple people and a policy, so it's not a personal decision, which is why I think companies can do much better with human workers than they generally currently do, even if automation in the end yields fewer errors (but at the cost of flexibility in the system).
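Roughly the kind of interlock I mean, as a sketch (the 3,200-mile threshold and 5-mile grace trip are the made-up numbers from the analogy, not anything a real car does):

    # Hypothetical oil-change interlock: once the car is overdue for an oil
    # change, it only allows short hops until maintenance is done.
    MILES_BETWEEN_CHANGES = 3_200   # made-up threshold, includes some leeway
    GRACE_TRIP_MILES = 5            # longest trip allowed once overdue

    def trip_allowed(miles_since_change: float, trip_miles: float) -> bool:
        """Return True if the car will complete this trip without forcing a stop."""
        if miles_since_change <= MILES_BETWEEN_CHANGES:
            return True                        # maintenance up to date: no restriction
        return trip_miles <= GRACE_TRIP_MILES  # overdue: only short hops allowed

    # The policy is inflexible on purpose: it removes the "we'll deal with it
    # later" judgment call, the same way narrow automation removes the option
    # to keep running a failing pump for two more months.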


> That's why we so often see automated systems backed by people, as we don't have automation that can respond as dynamically as a person.

I imagine there's specialized training involved to make sure that the person monitoring the machine understands the nature of how the machine operates in order to service it and know when it's making mistakes.

I wonder if the people monitoring the machines are the same ones that are being automated out of jobs or if it's an entirely different skill set?



