
That doesn't match my experience at all. If anything, Google suffers from the opposite problem: engineering teams spend enormous resources on migrations and rewrites that make their systems cleaner/simpler/more general, but have little or no business value. There are tons of teams that could be making incremental user-facing improvements, but are instead spending their cycles on projects that are mostly about internal development velocity.



I think it's changed, dramatically. I was there from 09-14 and then just rejoined last week. In 09-10 there was a huge problem of "launch and run" - people would launch, they'd get promoted, then they'd be reassigned to other higher-priority projects to get them launched too, and their previous project would get shut down rather than maintained. Nowadays there seems to be a lot more emphasis on stability and long-term code-health, people are getting rewarded for internal cleanups rather than launches, and many people have multi-year tenures on the same team rather than lots of single-quarter launches.


Are they getting _promoted_, though? A successful promo packet for sustaining work would be a very big change.


I've seen a bunch of successful promo packets for work on code health, system reliability, etc. - including promos to L6.

The "trick" to successful L5 and L6 promos (and above) for this sort of work is to have credible estimates of the actual impact of your work. Way too many engineers spend quarters or years refactoring or rewriting systems, and then go for promo with a case that's basically: "System X was kludgy, crufty, and engineers complained about it. I designed and led the implementation of a clean-up effort, and now people say it's nicer."

That's generally not going to cut it. You might get lucky, or you might get bailed out by a peer reviewer who provides solid data about the impact of your work, but you can't count on this, and you should be doing the legwork yourself.

You need impact estimates. That generally means you need some measurements, although the measurements don't have to be perfect. What metrics measure the pain that the existing system is causing? Some examples might be:

- Average # of hours required to push a release

- Average SWE-days/SWE-quarters/etc. required to develop a representative feature change

- Average monthly user-reported bugs

Spend some time actually measuring this stuff. Write some queries, run an internal survey of the developers on affected teams, etc. Take it seriously. Ideally, do all this before you've actually started work on the cleanup you want to do. Write a proposal doc, get it reviewed by others, and make sure they find your estimates credible. If the numbers are smaller than you expected, reconsider whether the clean-up is worth the time you're considering investing in it.
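The "write some queries" step can be as simple as a short script over your release logs. A minimal sketch, where the log format, field layout, and timestamps are all made up for illustration (your deploy tooling will have its own schema):

```python
from datetime import datetime

# Hypothetical release log: (start, end) timestamps pulled from your
# deploy tooling. These values are illustrative only.
releases = [
    ("2023-01-03 09:00", "2023-01-03 15:30"),
    ("2023-01-17 10:00", "2023-01-17 18:15"),
    ("2023-02-01 08:30", "2023-02-01 13:00"),
]

def avg_release_hours(log):
    """Average number of hours needed to push a release."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
        for start, end in log
    ]
    return sum(durations) / len(durations)

print(round(avg_release_hours(releases), 2))  # -> 6.42
```

The same shape of script works for the other metrics (bugs per month, SWE-days per feature); the point is that the number comes from data you can show someone, not from a gut feeling.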

After you've completed your project, measure again. If your project is amenable to an experiment-style launch, that's ideal, but pre- and post- measurements are fine too. Share the stats - advertise your team's success! Package them up in a nice doc you can link to in your promo case.
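The pre/post comparison itself is trivial once you have the two measurements; the value is in turning them into a headline figure for the doc. A sketch with illustrative values, not real data:

```python
def improvement(before, after):
    """Percent reduction in a pain metric (e.g. hours per release)."""
    return 100 * (before - after) / before

# Assumed before/after measurements, purely for illustration.
print(round(improvement(6.4, 2.1), 1))  # -> 67.2 (% fewer hours per release)
```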

"How am I supposed to find the time to do this while I'm doing my normal job?" This is your normal job. The opportunity cost of your time is $XXXk/quarter. The opportunity cost of your team's time is $X million-$XX million/quarter. The single most important thing you can do is make sure that time is being invested in a high value way. In your personal life, you would look for a lot of data before you invested that kind of money - you should be doing the same at work.
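The back-of-envelope math behind those opportunity-cost figures is straightforward. With assumed numbers (the per-engineer cost here is invented, not an actual Google figure):

```python
# Back-of-envelope opportunity cost of a team's time, illustrative figures.
fully_loaded_cost_per_eng_per_quarter = 125_000  # assumed, in dollars
team_size = 10

team_cost_per_quarter = team_size * fully_loaded_cost_per_eng_per_quarter
print(f"${team_cost_per_quarter:,}/quarter")  # -> $1,250,000/quarter
```

At that scale, a few days spent validating that the team is working on the right problem pays for itself many times over.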

In general, as you move up you should expect to spend a higher percentage of your time figuring out what problems you're going to work on. As a result, the fraction of your time that you spend actually designing and implementing the solutions to those problems will decrease, but it's still a net win because you and your team will be focused on tackling the really important problems.

In summary, it's possible to be really successful and quickly move up the ladder by doing "sustaining work". The catch is that you have to be rigorous about choosing the specific "sustaining work" to spend your time on.


Another ex-Googler here. This seems like a viable strategy, but it's rather inefficient for the organization, don't you think? E.g.

> Write some queries, run an internal survey

> Write a proposal doc, get it reviewed by others

Even if one could convincingly claim a causal effect on those noisy metrics, that sounds like quite a bit of overhead. IMO the majority of cleanup or bug fixing work shouldn't require any sort of formal planning or justification. At other companies, we would just go ahead and do the work. Then if an IC's work shows a pattern of code quality improvements, the manager should take notice and make sure it's considered in any promotion decisions.

Moreover, the strategy you mention requires a concerted effort to improve the quality of a particular component in a short period of time. Ideally one should evaluate bugs and quality issues as they arise, and most of them should be fixed promptly. But if a Googler is optimizing for promotions, it seems better to let code rot for months or years, then fix many issues in a short sprint to create a nice dip in the metrics.


These "objective" estimates are usually nonsense, or deceptively only show one side of the story. Not everything can be easily quantified.


Sounds like a different symptom of the same illness.

The team rushes at the problem, has a honeymoon period where things look nice and progress is fast but... it all then goes to shit. If your reaction is to blame technologies or the previous generation of staff, dump the code and repeat the cycle, you end up in the same place.


I love the armchair analysis of every external person here. Code quality is extremely important at Google. That doesn't mean people don't write any features.


Each unit of additional reliability and supportability probably has a bigger impact for Google than any given user feature.



