
As a thought exercise it is interesting to think through what we would 'trade', though. For the S3 case there could be a hard cap on allocation, but once it is reached, does the service just switch off? Could it instead respond to, say, 100 requests per second, derived from how much money you want to spend in some time period?
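
Back-of-envelope for that budget-to-rate idea (a rough sketch; the per-request price below is a made-up placeholder, not real S3 pricing):

    # Derive a request-rate cap from a spend cap.
    # cost_per_request is a hypothetical placeholder, not an actual S3 rate.
    monthly_budget = 50.00          # USD you're willing to spend
    cost_per_request = 0.0000004    # USD per request (placeholder)
    seconds_per_month = 30 * 24 * 3600

    rate_cap = (monthly_budget / cost_per_request) / seconds_per_month
    print(f"serve at most {rate_cap:.0f} requests/second")  # ~48 with these numbers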

For Lambda-style services it's odd, because if I were hosting them in my own container (or even an AWS one) they would still accept requests, but their responses would start to slow (while still working). Trading throughput for cost/scalability?
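
A minimal sketch of that trade, assuming a token bucket that delays requests rather than rejecting them (the names here are my own, not any AWS API):

    import time
    import threading

    class DelayingLimiter:
        """Caps throughput by slowing callers down instead of rejecting them.
        Requests beyond the target rate wait for the next free slot, so every
        request still completes, just at higher latency under load."""
        def __init__(self, rate_per_sec):
            self.interval = 1.0 / rate_per_sec
            self.next_slot = time.monotonic()
            self.lock = threading.Lock()

        def acquire(self):
            with self.lock:
                now = time.monotonic()
                self.next_slot = max(self.next_slot, now)
                wait = self.next_slot - now
                self.next_slot += self.interval
            if wait > 0:
                time.sleep(wait)  # trade latency for staying under the cap

    limiter = DelayingLimiter(rate_per_sec=100)
    limiter.acquire()  # blocks just long enough to hold ~100 req/s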

The trouble with AWS services for that 'fear of success' type of budgeting (different from losing a key or malicious calls) is that they are either on or off, with no in-between cost/latency/resource-allocation ratio.

I'm surprised this hasn't come up more, actually (or I just missed it), considering the overlap between devs who would like less devops and devs with limited budgets.