
About 10 years ago, I worked at a company that really wanted to use Hadoop for some reason, so I was forced to use it for a project. The amount of data we were processing was minuscule (a few hundred megabytes per run); it could've been done with a simple script on a single EC2 instance for the entire duration of the project without any scalability issues. Instead, I had to provision Hadoop clusters (dev, staging, production), fit the script into the map-reduce paradigm, write another script to kick off the job and process the results, etc. At least we were using Hadoop.
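
For a sense of the scale mismatch: a map-reduce-shaped aggregation over a few hundred megabytes is a one-pass, in-memory job on a single machine. Something like the following (the word-count logic here is just a hypothetical stand-in for whatever the real job did) would have covered it in plain Python:

    import sys
    from collections import Counter

    def main(path: str) -> None:
        counts = Counter()
        with open(path) as f:
            for line in f:                # the "map" step: one record per line
                for key in line.split():
                    counts[key] += 1      # the "reduce" step: in-memory aggregation
        # report the top results; a few hundred MB of counts fits easily in RAM
        for key, n in counts.most_common(20):
            print(f"{n}\t{key}")

    if __name__ == "__main__":
        main(sys.argv[1])

No cluster provisioning, no job submission script, and it runs in seconds on any EC2 instance.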




