This is interesting, but runs contrary to my understanding of how Ethereum works. I'm clearly missing something; any chance you (or anyone else) could elaborate more?
My understanding was that the decentralization of Ethereum would mean that everyone watching the contract would need a copy of the decryption key. If that's the case, what prevents someone from publishing keys early? Or is it that the key isn't stored in Ethereum, and Ethereum is only being used as the consent to publish?
If the key is being stored somewhere else and just waiting for the contract to validate, how do we prevent a censor from just attacking that system?
If the key is being stored somewhere else and just waiting for the contract to validate, why not also store the contract on the same machine and do checkins directly into that? Would that be significantly less secure/reliable?
Killcord treats Ethereum as the project's backend API. The smart contract is pretty simple in construction by design. Writes are restricted to one of two accounts (the owner account and the publisher account), and the publisher account is further restricted to only allow writes to the publishedKey variable in the contract. Reads are open to the public.
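The access rules described above can be sketched in Python. (The real killcord contract is Solidity; the class and method names below are my own illustration, not killcord's actual interface.)

```python
class KillcordContract:
    """Toy model of the contract's access control: two privileged
    writers, one public read. Illustrative only."""

    def __init__(self, owner, publisher):
        self.owner = owner
        self.publisher = publisher
        self.published_key = ""   # the one variable the publisher may write
        self.checkin_block = 0

    def checkin(self, sender, block):
        # Writes here are restricted to the owner account.
        if sender != self.owner:
            raise PermissionError("only the owner can check in")
        self.checkin_block = block

    def publish_key(self, sender, key):
        # The publisher account may write only publishedKey.
        if sender != self.publisher:
            raise PermissionError("only the publisher can publish the key")
        self.published_key = key

    def read_key(self):
        # Reads are open to the public.
        return self.published_key
```

The point of the restriction is that even a compromised publisher account can do nothing except reveal the key, which is what it exists to do anyway.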
As stated in other responses, the decryption key is stored on trusted systems that run the owner or publisher killcord projects.
As for attacking the system, this is something to think about. So why did I choose Ethereum for this?
Why Ethereum - The contract code (backend API) and variable state are written to the blockchain, so availability is dictated by the network itself, which is made up of around 20K nodes (give or take). Of course, as others have mentioned, the other aspect of this is internet access for the publisher and project owner.
For the publisher, this can be accommodated by running the publisher in a geographically distributed set of trusted systems. What do I mean by trusted systems? These are systems that meet your risk profile. The code can run on AWS Lambda in multiple regions, or on a raspberry pi, or in a datacenter in iceland, the more, the merrier.
For the owner... If you are cut off from checking in, the system assumes something bad is afoot. This is why it's important that anything put in killcord is something you really want to publicly disclose. Killcord should really only be a system that runs on your behalf in the case that you go MIA, when the data would otherwise never be released.
Killcord is described as resilient and resistant. The resilience is undefined, and the resistance is defined as censorship resistance. I'll ignore the censorship resistance, as it doesn't seem to have any qualities different from any other Ethereum contract.
I don't see what this project is resilient against. In fact, it seems unable to recover from issues such as the trusted third party publishing early.
How is key confidentiality preserved? The integrity of the keys? What if the keys are changed or deleted? How are DOSes protected against, so early disclosures don't get forced?
There are quite a few issues with the project. Unfortunately, killcord doesn't seem ready for release into prime time as a key-management method. Killcord seems equivalent in intended operation to a non-blockchain HSM, but all the protections of an HSM, all the key management, all the security controls, are gone. This actually introduces security issues instead of solving them.
What is the actual problem that killcord is attempting to solve? There are likely more robust designs, such as secret sharing, that will solve the target problem.
Killcord is designed to let the public know that a killcord project exists, where to find the encrypted payload, and how to check the status of the killcord project.
Unpublished secrets are currently stored in the owner and publisher project folders, in clear text, in a config file. This isn't meant to replace an HSM or secret manager by any means, though I've got some ideas on how to incorporate systems like Vault, Chamber, or other secret stores in the future.
It is also, indeed, early alpha, and dealing with secret management for the owner and publisher is absolutely top of mind.
See also https://github.com/petertodd/timelock and similar projects. There might be a way to combine these two concepts plus ephemeral keys as used in perfect forward secrecy, so that the switching technology isn't a single decision to publish a key, but rather time-locking a share of a Shamir-split secret and constantly rolling it forward as the pings happen -- or letting it run out and reveal enough shares for anyone to decrypt.
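The Shamir-split idea above can be sketched in a few lines of Python. This is a toy split/combine over a prime field to show the mechanics (any k of n shares recover the secret), not a hardened implementation, and the timelock/rolling aspect is left out.

```python
import random

# Toy Shamir secret sharing over a prime field. The prime and the
# integer-secret encoding are assumptions for illustration.
PRIME = 2**127 - 1  # Mersenne prime, big enough for a 16-byte secret

def split(secret, n, k):
    """Split an integer secret into n shares; any k recover it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

In the rolling scheme suggested above, each ping would re-split and re-timelock the shares; missing enough pings would let the locked shares expire and cross the recovery threshold.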
I think it's really, really hard to guarantee that information has been destroyed, especially in a decentralized system, so you won't have the assurance that information was (1) available to encrypt, then (2) unavailable to anyone because it was destroyed, and then (3) somehow recovered, recalculated, or discovered to once again allow decryption. That feels isomorphic to the problem of time travel.
But maybe combining these technologies will provide a way to compartmentalize the risk of early disclosure sufficiently to satisfy some use cases.
1. Client generates necessary files (including keys and payloads).
2. Encrypted payload is placed on IPFS.
3. Keys are placed on a trusted publisher (potentially a single point of failure).
4. A smart contract running on the EVM continuously checks for pings from clients. If the client doesn't check in within some pre-defined policy window, the trusted publisher will notice and publish the keys to the smart contract, visible to everyone.
Is that... good? I mean, I understand that security isn't black and white, and really you're just trying to make it harder for someone to attack you, not impossible. But how much do you gain by decentralizing just the trigger?
Since the trigger logic fundamentally relies on you doing something, it seems like that logic could be local to your machine; your machine could query any number of public websites/platforms/IPs and it would still be pretty difficult for anyone to censor you.
It also seems like a party that wanted to force you to publish early would not be hampered in any significant way by Ethereum. In either scenario, all they have to do is incapacitate you or block the IPs that your machine is looking at.
I still feel like I'm missing something. Would anyone be willing to break down a (fictional or real) scenario where adding Ethereum to this equation blocks an attack?
There's a bunch of attack vectors, but most fall on the trusted publisher and the client itself. IPFS and Ethereum are, by assumption (difficulty-wise), "secure".
Assuming both client and publisher's internal systems are intact, then you have two attack vectors:
There's the false positive attack vector, where you can shut down the client's network access and force the secret to be prematurely leaked.
There's the false negative attack vector, where you can shut down the trusted publisher's network access, and indefinitely keep the secret "safe".
However, in general, the first attack is not as worrisome as the second for these kinds of applications. There are many ways to distribute the trusted publisher using some crypto threshold scheme such that, as long as no more than some threshold of the trusted publishers are shut down, the secret will be released in case of client shutdown.
I imagine the second attack may be mitigated by the fact that the publisher might be easier to hide than a directly accessed daemon. E.g., if your dead man's switch were just some daemon running on a machine somewhere that you have to ping periodically, attackers could find the IP address of the daemon by watching your network traffic.
In the OP, you and the daemon (aka the trusted publisher) communicate exclusively via the blockchain, so it will be a lot more difficult to find the daemon's location.
Not sure if this is in any way better than just accessing the daemon through Tor though.
These are valid points and anyone thinking about using killcord should be aware of these.
As for the second attack vector: the publisher is built to be idempotent, so it is important that a killcord owner configures n publishers in geographically diverse areas to mitigate the false-negative attack vector.
The one attack that I can see it blocking is that it allows for 100% untraceable monitoring [edit: of the dead man's switch by the system-that-should-send-the-message]. Since every bit of data pushed to Ethereum goes to every single full node, you can't find out who has the keys to the secret data and will release them.
Oh, cool, this would actually help protect against a lot of things!
If you can set up a dead man's switch and there's no way to figure out who it belongs to, that should make it significantly harder to find out which publisher to attack.
Contrast that against 'every day at 5, I publish a signed checkin to Facebook, Twitter, Reddit, Dropbox, my blog, and a hundred other sites simultaneously.'
In that scenario, blocking or faking the trigger isn't the attack vector. The attack vector is that it's really obvious who the trigger belongs to, so to find the publishing IP an attacker can just monitor who connects to those domains.
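For concreteness, that "signed check-in" alternative could be as simple as an HMAC over a timestamp, which anyone mirroring the post can verify. The key, message format, and function names below are made up for illustration:

```python
import hashlib
import hmac
import time

SECRET = b"owner-signing-key"  # hypothetical shared key; kept offline

def make_checkin(ts=None):
    """Produce a timestamped 'I am alive' message and its HMAC tag."""
    ts = int(ts if ts is not None else time.time())
    msg = f"alive:{ts}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return msg, sig

def verify_checkin(msg, sig):
    """Anyone holding the key can verify a mirrored check-in."""
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Verifiability isn't the problem with this scheme, though; as noted above, the problem is that whoever posts these messages is trivially linkable to the switch.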
I guess the trick is actually getting Ether anonymously, but that's not the hardest problem in the world to solve.