
My team saw a 40% CPU usage increase on all of our EC2 instances and even our RDS instances. We were shocked since the media was downplaying the performance impact.

I tried to start a poll but it seems as though my team was just the unlucky one: https://news.ycombinator.com/item?id=16109036




> 40% CPU usage increase

When I read this, my first thought was that it seemed odd: the maximum performance drop is usually quoted at around 30%. Then I remembered we are talking in ratios, so a 30% drop in throughput (x0.7) means roughly a 43% increase in CPU usage (1/0.7 ≈ 1.43).
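A quick way to see that reciprocal relationship (a minimal sketch in Python; the 30% figure is just the commonly quoted worst case, not a measurement from this thread):

    # If the same workload now takes more CPU time per unit of work,
    # utilization scales by the reciprocal of the remaining throughput,
    # assuming the request rate stays fixed.
    def utilization_increase(perf_drop):
        remaining = 1.0 - perf_drop       # e.g. 0.70 of original throughput
        return 1.0 / remaining - 1.0      # fractional increase in CPU usage

    print(utilization_increase(0.30))     # ~0.43, i.e. roughly a 43% increase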


Hi,

How are you measuring the performance impact? Are you looking at the CloudWatch data or running specific tests?

I'd like to establish baseline statistics (we're unpatched so far) and compare them after patching. Any suggestions?
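For what it's worth, one way to capture a before/after baseline is to pull CPUUtilization from CloudWatch for each instance; a minimal boto3 sketch, where the instance ID and the two-week window are placeholders:

    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Hourly average CPU utilization for one instance over the last two weeks.
    # Run once before patching and again after, then compare the averages.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=14),
        EndTime=datetime.datetime.utcnow(),
        Period=3600,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))

The same call works for RDS with Namespace="AWS/RDS" and a DBInstanceIdentifier dimension.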


My team's usage is pretty even, so the CloudWatch graph makes it pretty obvious: https://m.imgur.com/a/khGxU


That is the clearest graph I've seen showing the performance degradation. How do you get your usage so flat?


Are your EC2 instances PV (paravirtual)? They're known to take a bigger performance hit from these patches than HVM instances, which you can switch to.

Not sure about RDS.
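In case it's useful, you can check which virtualization type each instance uses from the EC2 API; a minimal boto3 sketch, assuming default credentials and region:

    import boto3

    ec2 = boto3.client("ec2")

    # Print each instance with its virtualization type ("paravirtual" or "hvm").
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"], instance["VirtualizationType"])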


Do you have premium support with AWS? Maybe try opening a ticket.



