It's interesting, but it opens up a rather unsettling counter-countermeasure:
If I'm Mallet, I don't want people using passwords I can't quickly force out of them. Therefore, I pretend I don't know about this, or don't believe it, and keep using the rubber hose on people who are implementing this protocol. Therefore, those people get worked over longer and end up in a worse spot than people who are able to give up a password quickly. Net result is that people are terrified of using this stuff because they know that if they don't give up a working password quickly they'll be tortured longer.
Moral: This is probably a really good way to remember secure (that is, high-entropy) passwords. Don't pretend it's good for a use-case at which it is very obviously a horrible fit.
The best technique for this that I have seen is a system with plausible deniability. You have a pool of seemingly random data in which different passwords open up different parts, with the free space padded by random bytes. The idea is that you reveal a password that unlocks dummy content. If they continue to insist that there must be another password, you can reveal a second, or third, password that unlocks different data.
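To make that concrete, here is a toy sketch of such a container, assuming a simple construction of my own: each password decrypts one fixed-size slot, unused slots are random padding, and a per-password marker tells the opener which slot (if any) is theirs. The XOR-with-hash "cipher" and all names here are illustrative only; a real system (e.g. hidden volumes) needs proper authenticated encryption.

```python
import hashlib, os

BLOB_SIZE = 1024   # total container size
SLOT_SIZE = 128    # each password owns one slot
MARK_SIZE = 8      # per-password marker used to recognize the right slot

def _keystream(password: str, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT a real cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def make_container(secrets: dict) -> bytes:
    # Unused space is random, so a full container and an empty one look alike.
    blob = bytearray(os.urandom(BLOB_SIZE))
    for i, (pw, data) in enumerate(secrets.items()):
        marker = hashlib.sha256(b"mark:" + pw.encode()).digest()[:MARK_SIZE]
        plain = marker + data[:SLOT_SIZE - MARK_SIZE].ljust(SLOT_SIZE - MARK_SIZE, b"\0")
        ks = _keystream(pw, SLOT_SIZE)
        off = i * SLOT_SIZE
        blob[off:off + SLOT_SIZE] = bytes(a ^ b for a, b in zip(plain, ks))
    return bytes(blob)

def open_container(blob: bytes, pw: str):
    # The opener scans every slot; only the matching marker reveals data,
    # so a wrong password is indistinguishable from an empty container.
    marker = hashlib.sha256(b"mark:" + pw.encode()).digest()[:MARK_SIZE]
    ks = _keystream(pw, SLOT_SIZE)
    for off in range(0, BLOB_SIZE, SLOT_SIZE):
        plain = bytes(a ^ b for a, b in zip(blob[off:off + SLOT_SIZE], ks))
        if plain[:MARK_SIZE] == marker:
            return plain[MARK_SIZE:].rstrip(b"\0")
    return None

blob = make_container({"decoy-pass": b"grocery lists", "real-pass": b"meeting at dawn"})
print(open_container(blob, "decoy-pass"))  # reveals only the dummy content
print(open_container(blob, "real-pass"))   # a second password, different data
```

The point of the random padding is that nothing in the blob proves how many passwords exist, which is exactly what lets you stop after surrendering the decoy.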
Of course, any actor that is willing to torture an answer out of you is probably willing to torture n answers out of you and/or watch you die in the process. This is one of those problems that there may not be a technological solution to and we should probably focus on spreading the concept of human rights to our children instead.
> Of course, any actor that is willing to torture an answer out of you is probably willing to torture n answers out of you and/or watch you die in the process.
The eternal question of how to fight monsters without becoming one in the process.
Plausible deniability really is the best tool I can think of in this case, especially if the attacker is not aware of that possibility (it should be concealed as much as possible).
I don't see why, if you can torture someone into saying words, you can't also torture someone into playing a game.
People can be coerced into doing anything they would do normally (unlocking a lock; supplying their fingerprint; phoning their conspirator and saying to go ahead with the plan.) True coercion-resistance, I would think, implies that the mechanism has the ability to differentiate a stimulus supplied under coercion, even under the assumption that Mallet could have fed Alice some anti-anxiety drugs to regularize detectable symptoms of stress in her behaviour. (Coincidentally, "truth serums"—i.e. low-dose dissociative anaesthetics, ketamine or PCP being just as workable as sodium pentothal—also have anxiolytic effects. I can't imagine a state actor wouldn't take advantage of this if the goal is coercion rather than torture-for-the-sake-of-torture.)
As far as I know, there is one single useful mechanism for coercion-resistance: precommitment with counterparties. If Alice and Bob are supposed to have no further contact once the plan is in motion, and Bob knows this, then it is useless to torture Alice into contacting Bob—Bob will take the contact itself as a signal that Alice has been compromised.
You can automate this; if Alice (or her device) must attend regular KEx parties to get a new key (i.e. participate in a protocol with a security ratchet), then capturing her or her device prevents her from doing this, and so expires her key material (kicks her keys out of the keybag, basically.)
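A minimal sketch of that "keep showing up or your keys expire" idea, assuming a simple hash ratchet (the class name, seed, and timing are all illustrative): each timely check-in advances the key chain, and a missed deadline wipes the material irrecoverably.

```python
import hashlib, time

class RatchetKeybag:
    """Toy key ratchet: key material survives only while check-ins continue."""

    def __init__(self, seed: bytes, interval: float):
        self._key = hashlib.sha256(b"ratchet:" + seed).digest()
        self._interval = interval
        self._deadline = time.monotonic() + interval

    def check_in(self) -> None:
        # Attending the "KEx party" on time advances the ratchet to a new key.
        if self._key is None:
            return
        if time.monotonic() > self._deadline:
            self._key = None  # too late: material already expired
            return
        self._key = hashlib.sha256(b"ratchet:" + self._key).digest()
        self._deadline = time.monotonic() + self._interval

    def current_key(self):
        # Capture (i.e. failing to check in) kicks the keys out of the bag.
        if time.monotonic() > self._deadline:
            self._key = None
        return self._key

bag = RatchetKeybag(b"shared-seed", interval=0.05)
bag.check_in()                     # Alice attends on time: new key derived
print(bag.current_key() is None)   # False: material still live
time.sleep(0.1)                    # Alice (or her device) is captured
print(bag.current_key() is None)   # True: key material has expired
```

Because each key is a one-way hash of the previous one, coercing Alice after the deadline yields nothing: the old chain is gone and she cannot re-derive a live key without the check-ins she missed.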
What about using a scheme where the only copy of the key is easily destroyed when needed? Something like private key stored on a USB flash drive with hidden "auto-destruct" button. If the owner sees the situation coming, he just says to the criminal: "The only copy of the private key was stored on this disk. I've just destroyed it. There's no point in torturing me." Seems to prevent both violation of privacy and torture. The criminal must believe there is no other copy, though.
That sort of thing is usually taken as an admission of guilt as to the contents of the encrypted material, as far as being charged with an actual crime (rather than just held indefinitely without charge) is concerned. Instead of holding you illegally, they're now holding you legally.
On the other hand, if you can remove the active mens rea component, this is a good solution. I imagine law enforcement is actively trying to prevent any software with a dead man's switch key-destruction component from ever becoming popular/ubiquitous, because it would enable people to claim they truly had no idea the software had a feature where their key would be destroyed in such a scenario; they were just using it because everyone else was. Imagine if BitLocker/FileVault had a mechanism like that, which could be forced on users by corporate policy!
Quantum cryptography can be secure against rubber hose attacks. It can be designed so that if you measure the wrong thing (i.e. try the wrong password), the stored information is gone forever. And due to the no-cloning theorem it isn't possible to circumvent this with backups.