AWS bastions and assume-role (coinbase.com)
157 points by grahar64 on Oct 23, 2017 | 66 comments



Also take a look at aws-vault [1]. This not only assumes roles but also helps you store your original credentials in an encrypted form rather than a plain text ~/.aws/credentials file. You do have to configure all assumed roles in ~/.aws/config
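For anyone who hasn't set that up before, an assumed-role entry in ~/.aws/config looks roughly like this (the account IDs, role and profile names here are made up):

    [profile prod-admin]
    # role to assume in the target account
    role_arn = arn:aws:iam::123456789012:role/Admin
    # credentials used to make the assume-role call (stored encrypted by aws-vault)
    source_profile = default
    # optional: require MFA for the assume-role call
    mfa_serial = arn:aws:iam::999999999999:mfa/your-user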

They have an exec command, but you can also export your credentials to env variables with something like

    aws-vault exec "$AWS_PROFILE" -- env | egrep '^AWS' | awk '{print "export " $1}'
[1] https://github.com/99designs/aws-vault


This is the best tool I've found so far to manage profiles securely and switch roles between the ~35 accounts I'm working with right now.


I'm about to evaluate it for my needs. Do you run into any issues with the temporary tokens expiring, e.g., with developers working locally on a web app over a period of a couple hours?


Not really. It's quite straightforward and really just makes it easier to store your creds (on any platform) securely, and enables you to switch roles via profiles in your ~/.aws/config.

One thing I would point out is that by default it will time out the session in 4h and the role in 15m. This means that every 15m you will need to exit the bash shell that aws-vault exec created, or replace the env vars you generated. Set the two env vars for session and role TTL to your desired values in your bash profile to avoid setting them on the CLI on every invocation.

You just want to look at this file, as the env vars are not documented. https://github.com/99designs/aws-vault/blob/4acbb48b48b90555...
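If I'm reading that file right, the two variables are AWS_SESSION_TTL and AWS_ASSUME_ROLE_TTL (worth double-checking against the linked source, since they're undocumented), e.g. in your ~/.bash_profile:

    # session and assume-role lifetimes for aws-vault (variable names per the linked source)
    export AWS_SESSION_TTL=12h
    export AWS_ASSUME_ROLE_TTL=1h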


Appreciate the link. Until now, I've used aws-vault with the `exec` command to run CLI tools with a quick return, so the role TTL isn't much of an issue.

I'd like to employ the tool to support local development on a containerized web application. The assume-role TTL may prove to be the real issue, so I need to weigh the tradeoff of allowing a longer life on the STS keys. I suppose I could set up a profile without a role and override the AWS_PROFILE environment variable within docker-compose. And I know that aws-vault also supports the virtual metadata service.
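If the metadata service pans out, I'd expect the invocation to be something like this (I haven't verified the flag myself yet):

    # assumption: --server exposes a local instance-metadata endpoint for the profile
    aws-vault exec --server my-dev-profile -- docker-compose up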

Either way, the benefit of something like aws-vault extends beyond security. We've discovered numerous inconsistencies with the profile-based credential handlers in the Java API (https://github.com/aws/aws-sdk-java/issues/803) which beg for a simpler solution in terms of supporting developers.


I know of no service that is more complex and off-putting to newbies than AWS. I mean, wait, I need multiple accounts? Getting my team access to the one account we have took me 3 hours already!

No wait, I need a design pattern for how to manage accounts of a SaaS service?

I'm probably not the target audience here but I strongly get the impression that these patterns would not be necessary if AWS would get their shit together in terms of AWS Console UX design.


AWS is not really SaaS or PaaS; although they do offer some of those (e.g. Elastic Transcoder, Elastic Beanstalk), those aren't best of breed. Rather, AWS is IaaS, and hence it has to remain extremely broad, flexible and scalable. The first thing to understand is that AWS is defined by its APIs; the console is a sprinkle of convenience, but understand that it is intentionally a second-class citizen.

As far as multiple accounts are concerned, this is a tradeoff you make for isolation and security. Knowing that fat-fingering a staging change cannot possibly impact production, because the credential used in that case is literally for a different account, is very comforting when you're running devops in an environment managed by multiple people.

While a lot of the stuff Amazon gives you may seem complicated, or overkill, if it's something you need it tends to be quite a bit simpler than the off-the-shelf alternative (see: VPC versus private networking).


As was pointed out, AWS is IaaS.

I think the reason it might seem hard is because you probably don't have operational experience. You need to have that knowledge if you have your own data center.

Sure, this all could be abstracted, and it kind of is if you use things like Beanstalk, but AWS provides great flexibility which can make a difference in how your application performs in the end and how easy it is to maintain.


AWS is a critical service for most companies, and access needs to be carefully managed. There are just natural complexities in managing access to sensitive info and controls in any company, and it probably needs to be solved at that level rather than by a specific cloud vendor.

Arguably Google Cloud is pretty well done, especially if you're already using their G-Suite service. What would make AWS easier for you and why did it take 3 hours to create accounts for your team?


Use the Console to spot-check things. But beyond that, the majority of your work in AWS should be scripted using the various APIs. At that point, using federated access or a proper IAM strategy makes that fairly painless.
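The core scripted step is a single STS call; e.g. with the CLI (role ARN made up):

    # returns temporary AccessKeyId / SecretAccessKey / SessionToken to export
    aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/ReadOnly \
      --role-session-name spot-check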


AWS is not for newbies; they released a pared-down version: https://amazonlightsail.com/


AWS is totally for newbies, so long as you aren't afraid of performing sysadmin work.

You don't need to have some zany serverless infrastructure. The overhead in setting it up isn't worth the trouble.


Sure, you can use AWS as a newbie, but I wonder how much money you burn because of not clicking one checkbox.


Checkboxes can be tricky! Why, I unmounted several local volumes when adding a Windows server to a failover cluster just last month...


> but I wonder how much money you burn because of not clicking one checkbox.

Abstraction is no replacement for knowledge.


In Google Cloud, something like this is mostly unnecessary. The project model scopes resources to a particular project within an organisation, rather than all resources being global to the account. This gives a really good first cut at isolating different environments and projects.


This goes beyond just prod/staging. It allows you to easily manage fine-grained roles down to e.g. particular microservices and also control how users can access those resources as minimally as is necessary.

So you can recreate the project-model here, and also refine it beyond that.


Google Cloud does not yet have org-level quotas. Projects can therefore be limiting, if powerful, building blocks.


I like AWS's multiple-account support; it helps secure specific environments. But I don't like that going this route increases the cost.

Here are some things I don't like:

1. if you want to use AWS support, you need to purchase it per account, otherwise support will refuse any help that involves anything specific to the account (they will only respond with generic documents)

2. with separate accounts you need to recreate the same components (and therefore pay more); for example, if you want internet access on your VPC over IPv4, you need to set up a NAT instance per account. You can't, for example, use VPC peering and route through a NAT instance in another account

3. you are being charged for any data going between accounts even if the same AZ is used. Yes, I understand that one can't easily tell which AZ is which across accounts, since they are randomized per account, but still...


If you use the same billing account, I'm pretty sure 1 is not true. Though I only know for certain that it's not a problem at the higher support levels.


Well, I don't have access to the root account to see if these accounts are under the same billing account (but to the best of my knowledge they are), and the support we purchased is Business.

When I had a question related to one of the accounts, I was told that I would need to open a support ticket on that account, but I couldn't, because that account only had basic support.

We contacted our TAM and he just shrugged. If this is true that would be great.

Edit: I checked and it looks like our accounts are consolidated under a single master one, although we did not purchase support for the master account since nothing is running on it. If there is a way to not have to purchase separate support, I would like to know, since it could save some money.


Great article and nice tool. Switching roles and profiles across multiple organizations is indeed cumbersome with AWS.

We are also developing an open source CLI for AWS named awless (cf. https://github.com/wallix/awless). We currently support easy MFA, profile switch and role assuming in CLI with the '-p' flag and are working on extending these features to support multiple organizations. We had multiple issues filed on GitHub which are closely related to this.
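For example, switching profiles for a single command is something like:

    awless list instances -p prod-account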


2018 Hottest AWS Job Postings:

  - Identity-Access-Management Manager
  - Pricing Oracle
  - Sysadmin we fired when Moving To The Cloud(tm)


Poor choice of name, as the term "bastion" is already commonly used in AWS to describe a bastion host for a VPC.


That's exactly the reason for this naming choice.


Now that I'm comfortable with the IAM side of things, I will always use multiple accounts. I'm also beginning to think it's valuable to not even limit yourself to a single account per environment. Tools like Terraform become essential, though.
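For what it's worth, the Terraform AWS provider can assume a role in the target account directly, so one set of bastion credentials can drive every account; roughly (ARN made up):

    provider "aws" {
      region = "us-east-1"

      # assume a role in the target account using the bastion credentials
      assume_role {
        role_arn = "arn:aws:iam::123456789012:role/terraform"
      }
    }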

I've written about using multiple accounts in combination with CodePipeline to manage Lambda deployments here: https://medium.com/statics-and-dynamics/automated-lambda-dep...


Like everyone else, I also wrote a CLI login util in GoLang for multiple AWS accounts with this "bastion/main" account setup: https://github.com/lencap/awslogin . Simplicity is the main driver. I welcome constructive input.


This is nice, but the ability to be prompted with all the authorized roles for a user in an account is also handy, as is done by the aws-login node app.


We took this approach at 99designs with aws-vault and bastion accounts: https://99designs.com.au/tech-blog/blog/2015/10/26/aws-vault...


Very well written article with some good advice. We found the need for multiple AWS accounts very early on, and managing varying levels of access to all of them has been challenging.

I also recommend looking into using SAML with your own login provider, if you have one, to assume individual roles in AWS accounts.
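Under the hood that lands in the same place, just via sts:AssumeRoleWithSAML instead of sts:AssumeRole; the CLI equivalent is roughly this (ARNs made up, assertion.b64 being the base64-encoded SAML response from your IdP):

    aws sts assume-role-with-saml \
      --role-arn arn:aws:iam::123456789012:role/Developers \
      --principal-arn arn:aws:iam::123456789012:saml-provider/YourIdP \
      --saml-assertion file://assertion.b64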


Would you mind describing your needs? I read the article and I still don't fully understand the need for multiple accounts; it seems the article is more about the tool. I would understand the need when it comes to API rate limiting, but I have never run across this on a million+ session/day website.


Cool! We do the same but we call it root account instead of bastion account, since bastion is an overloaded term in the AWS universe.


We call ours the "hub" account, since we picture the relationships between our accounts as a hub-and-spoke model.

"Root account" is also a bit misleading, because the root user is the initial non-IAM login associated with the account (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user...).


I think naming things is difficult, and the best name for a thing is the most meaningful to you and your company.


This is what we do. Each product has its own prod and QA account.


Unless I'm missing something, isn't this whole process made a lot simpler just by using STS?

It also works incredibly well with Vault's STS backend.
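e.g., for a role named "deploy" configured in Vault's aws secrets backend (the exact path may vary by version):

    vault read aws/sts/deploy ttl=15m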


Isn't STS exactly what the article is describing? This is just an STS usage pattern.



As Coinbase is a Bitcoin wallet and they transact a lot of money, it surprises me that they reveal details of their implementation publicly.

Edit - Getting downvoted a lot.

Seems that some people think that the expression 'You shouldn't rely on security through obscurity' means that it's OK to publish your backend infrastructure.

Best practice is defence in depth.

That means you secure everything including your implementation details.

If a zero-day is found in any part of their stack, they're a Google search away from being found vulnerable to it.


This is, maybe counterintuitively, not true. If anything, Coinbase is more secure than before by publishing this.

For a great and well-reasoned argument see the Gov.UK guidelines, which state that all of their new code has to be open source. (yes, that's the UK government, not some startup) They even specifically mention security-enforcing code![0]

> Code that contributes to your service’s security does not need to be kept closed. Many security-enforcing functions, such as cryptographic algorithms, are provably better when openly examined and understood while the keys are kept private.

They also have another guide and a blog post specifically about security considerations with open source code.[1][2]

> Doesn’t it give attackers an advantage?

> Although there’s a common concern that coding in the open could give an advantage to an attacker, we believe that only a negligible advantage exists. [...] In fact there is no evidence to suggest that being open source makes software more susceptible to exploitation.

I would highly recommend reading through those if you really think Coinbase is now under a higher risk of attack due to this article. They aren't.

[0]: https://www.gov.uk/service-manual/technology/making-source-c... [1]: https://www.gov.uk/government/publications/open-source-guida... [2]: https://mojdigital.blog.gov.uk/2017/02/21/why-we-code-in-the...


There's no proof there that they are now at less risk.

I've looked at the code and I can already see some potential supply chain attacks, as they are not hash-protecting their incoming libraries.


It seems to be extrapolated from open crypto code, which is a unique example. Nobody goes around volunteering security reviews of CRUD apps like they do for interesting crypto.

If you're open sourcing stuff that other people use, it makes sense, because people fix security issues from using software, fixing bugs and witnessing failures, and needing the fixes for themselves. Not from going 'wow, that's 500 repositories, most of which I don't care about at all'. You're not going to go fix the issues you just saw, are you?

In this case, Coinbase open-sourced some useful code (with an install-this-one-liner), and details of a systematic security method that other people can also use themselves, critique and improve on. The difficulty of breaking their setup hasn't changed, simply by knowing that the keys/MFA combo you really want is a different one. You'd still need to steal it. Perfect example of helping security by transparency.


Sure, but that can easily be turned back around on you:

There's no proof there that they are now at more risk.

> I've looked at the code and I can see already some potential supply channel attacks as they are not hash protecting their incoming libraries.

And if you, or anyone else who sees that, sends them a notification about a potential vulnerability and they investigate it, they're probably better off than they were before.


This is actually a best practice and I believe Amazon also advises this. So it's not really something secret. For me as a customer it is good to know they follow these practices and communicate their understanding of it outwards.

For private projects I have been experimenting with how far I can go in open sourcing everything (including server configuration) and where you hit limits. Example project: https://gitlab.com/failmap/server

Diving into a project with an open source mindset really makes me think about security topics from a different angle and find better solutions, like reducing the secrets that must be known (and thus can leak), such as user/database passwords. Instead of security by obscurity, there is nothing to obscure.


The first step of an attack is reconnaissance.

How is making that step very easy for an attacker best practice?


So you save the attacker a day by explicitly telling them what software stack you use. They still have to exploit that stack.

It doesn't really change anything, just possibly saves the attacker some time if they can exploit the remaining layers.


If the only thing keeping your attacker out is that they can't do recon then you're kinda boned.

Security through obscurity and all that.


Part of that reconnaissance is running vulnerability testing suites. I think they will give a quicker answer to most of the questions than having to read through the configuration code and piece it together. Often I myself even prefer running nmap instead of looking at the code to check some configuration.


So in a scenario where you can breach their AWS accounts, it would be a noticeable difficulty for you to discover that they use this pattern to work? I'm not very familiar with AWS, but that seems odd.


How does releasing this information change that in any way to benefit the attacker?


Coinbase is not the only company doing this. This is actually best practice.


Leave your house's front door unlocked and keep a camera outside trained on the door. For the first week, tell nobody that it is unlocked. For the second week, tell everyone you meet that it is unlocked, and provide a map. On the third week your camera records someone going into the house.

Did telling people your house was unlocked make your house less secure?

Your house was exactly the same in the first week as the second week. Telling people it was insecure did not make it insecure - it was already insecure. You left it unlocked.

So it follows that if knowing some piece of information does not reduce the security of your system, then it is not sensitive information, and can be released publicly. On the other hand, if you don't know whether a piece of information reduces your security, you should probably find out. And finally, even if you never release information, that does not mean your house is locked up. But it may lead to a false sense of security.


> Telling people it was insecure did not make it insecure - it was already insecure.

It's a common tactic for burglars to look for when homeowners are going on holidays. The homeowner being on holiday makes the house less secure, but the information that that particular house is less secure is of benefit to the burglar.

Security isn't a binary state.


If you're a single zero day away from a crippling attack, you're not practising defence in depth, are you?


I was literally going to say the exact same thing. +1


Good security doesn't require obscurity.


what then is an 'information disclosure vulnerability'?


It’s not what you think. It’s more about disclosing business-related data or metadata (e.g. customer names, last time a customer logged in) than technical information (e.g. internal service names, minor debug information).


Good security means defence in depth. You secure everything including your backend implementation details.


Good security means not kidding ourselves that backend implementation details are providing us with any defense.

They're not, and if we base our security strategy on the assumption that they are then we have at least one weakness in that strategy.


If your entire security can be circumvented with a single zero-day, it is called coconut security. The whole point of having multiple layers is to prevent a single zero-day from being able to take down your entire infra.


People say that security through obscurity is not a good idea.


Keeping the details of your security setup secret or obscure is not security through obscurity.

Security through obscurity is using the secrecy of your security setup as a pillar of that security.

It's perfectly sane to keep the details secret even if those details themselves don't form part of your security.


I'd argue that the reasons for wanting to keep implementation details secret would be more political than security-related: e.g. wanting to exaggerate your DR capabilities to would-be customers, or using tech that works but has a bad image (like Perl), or perhaps you're just releasing a commercial solution that leverages a lot of open source components and thus could easily be replicated by anyone else. I'm not saying these are good reasons, but they make more sense than the "security" argument, because if your infrastructure isn't secure to begin with then it's pretty trivial to find out what is running even without the tech being published / open sourced; and if it is secure then it doesn't really matter if the details were published in the first place.


And yet, there are 'information disclosure' vulnerabilities...


There are different degrees of sensitivity though. Infrastructure details, e.g. what Coinbase published, are pretty vague in real terms. A hacker would still need to approach their target in the same way even knowing this information. But exploits like Heartbleed could leak information about what users are in /etc/passwd, which is a lot more sensitive, since that would significantly reduce the entropy needed for a brute-force attack against the host.



