> In the case I'm thinking of the company wanted maximum profit - which meant shutdown production when things failed
Sure, and I don't dispute that things like this happen. I'm just saying that the real problem here is the "the company wanted maximum profit - which meant shutdown production when things failed, but continue until then" part. Automation often doesn't allow this, not by design but simply because it can't: it's not versatile enough to work around its fairly narrow programming. Humans can, and when they do and a major failure follows, that's not a failing of the human workers, or even entirely of middle management, but of the overall policies and incentives that allow or promote such behavior.
That is, automation is better at reducing errors, but I also think a lot (but not all!) of what we attribute to automation reducing errors is really just its inflexibility not letting managers chase short-term goals at the expense of long-term ones. It's like a car that, once it's been 3,200 miles (a little leeway built in) since the last oil change, refuses to drive more than 5 miles without shutting down for a couple of minutes. It would force people to get oil changes more regularly, and while it would cost a little more for those who put it off far longer than they should (me), it saves money in the long run through the problems it prevents (and the ones that get detected early by people looking under the hood). That doesn't mean people can't make sure to change it on time without that extra constraint, it's just that we're generally pretty bad at quantifying risk, especially over longer time scales. That problem is mitigated somewhat by having multiple people and a policy involved, so it's not a personal decision, which is why I think companies can do much better with humans as workers than they generally do now, even if automation in the end yields fewer errors (but at the cost of flexibility in the system).
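To make the analogy concrete, here's a minimal sketch of that kind of inflexible interlock. The thresholds and names are purely illustrative assumptions, not from any real vehicle system; the point is just that the rule can't be negotiated with the way a manager can lean on a worker:

```python
# Illustrative sketch of an "inflexible interlock" (assumed numbers, not a real system).
OIL_CHANGE_INTERVAL_MILES = 3_000   # nominal service interval (assumption)
GRACE_MILES = 200                   # small leeway past the interval
LIMP_MODE_TRIP_MILES = 5            # max trip length once overdue

def allowed_trip_miles(miles_since_oil_change: float) -> float:
    """How far the car will let you drive before forcing a stop."""
    if miles_since_oil_change <= OIL_CHANGE_INTERVAL_MILES + GRACE_MILES:
        return float("inf")          # maintenance is current: no restriction
    return LIMP_MODE_TRIP_MILES      # overdue: limp mode, no override

if __name__ == "__main__":
    print(allowed_trip_miles(2_900))  # inf -> drive normally
    print(allowed_trip_miles(3_400))  # 5.0 -> forced to deal with it now
```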