Hacker News
Data behind high-functioning engineering organizations [pdf] (datocms-assets.com)
144 points by rnjailamba on Dec 13, 2022 | hide | past | favorite | 65 comments



"Most organizations have shifted to organize their engineering teams by product"

We have done the same. The great thing about it is locating decision makers and customer insight close to developers. The bad thing is that what is perceived to be a "product" can be very different from the natural division of technical work to be done.

I sometimes joke that our Head of Product ended up involuntarily being our Chief Architect. The master plan of the tech has to be divided along these fault lines between Product A and Product B.

In Sales these may be sold as separate products. But the reality is that 80% of functionality is shared between them, and that 80% never gets a good organization developed around it, since the A vs. B split is at the core of the company chart.


Would it work to denote that 80% as a separate product X that only has internal customers?

On the one hand, it's the obvious solution. On the other hand, that would move the developers of X further away from the customers of A and B.


Also the internal product is, in effect, a compulsory purchase by teams A and B, therein lacking the selection pressure that normally drives product development.

What other approaches to shared-something cross functional product teams are there?


Perhaps it's my disdain for metrics and "efficiency process" in general, but skimming through these slides made me want to run for the hills. A high-functioning team, in my experience, is one that has clear direction, autonomy, communicates well, and has enough trust to operate as efficiently as possible to meet the company's needs.


I have been on projects that go nowhere when started, but as soon as the politics are settled down and everyone is rowing in the same direction productivity is unlocked. Then, a reorg happens. But yes, please measure how many lines of code I write.


Bingo. This has been my experience as well.

It's like once you reach a certain level of management, mirrors don't exist and problems only exist elsewhere in the org.


Agree. This feels like '90s-era buffoonery. Slick slides with a nice Pantone selection and gradients. Supposedly based entirely on data, yet no data is ever provided - because confidentiality. Jabba says this is not how science works.


> Developer productivity can be compared to a sales funnel, with key metrics that can be tracked at each stage

I don't like the idea of ruthless efficiency driven by tracking metrics like a sales funnel. It gives me the mental image of everyone competing, inflating numbers, or gaming the system.

https://en.wikipedia.org/wiki/Goodhart%27s_law


Managers have become obsessed with being "agile" and "data driven" and yet never seem to notice how much useful engineering time and attention is being lost to all of those finely detailed breakdowns and regular meetings and tracking and monitoring processes. A common failure mode now seems to be that managers focus on metrics that don't really represent any useful reality to begin with but persist in trying to optimise those metrics even when their engineering team is telling them plainly that they're looking at garbage in and garbage out. The one metric no-one in management ever seems to consider is how much all of the management overheads themselves cost.


There is a lack of mutual understanding that shipping code and closing a deal have a lot of similarities. A Pull Request is basically the same as an Opportunity in your JIRA/SFDC. We can estimate when the PR will be pushed to prod or when the contract gets signed. Generally, the more experience you have, the better you are at setting expectations on when a PR/opp will close. The PR or opp can stall, but it's up to the person responsible for it to notify the rest of the team that there will be a delay or to figure out a way to unblock.

The big difference is the lack of accountability. AFAIK if a developer misses shipping during a sprint, they’ll just add it to the next sprint. If a salesperson does this, after 2 quarters of missing your quota, you’re fired. Doesn’t matter if you had monster quarters before that, if you’re not hitting your numbers, GTFO.

I think this is because sales has very clear metrics that everyone can agree upon: revenue, number of demos, cold outreach, etc. We've been tracking these types of metrics since cuneiform. Comp sci wasn't a thing until the 1960s. It's still too new, and everyone has an opinion on what metrics to focus on without angering developers by micromanaging them. This is where the friction lies; until that's fixed, you will not have a cohesive team. I hope that AI takes over and we don't need devs or salespeople anymore.

context: I’m a sales guy that sold software that tracks “engineering performance” to VPs of Eng and CTOs depending on the size of your company. It was shocking to see the level of variance in measuring performance metrics for eng teams.


I’ve seen this a lot in the industry - sales people selling stuff they don’t really understand; engineering perf for example. Sales is simple. It requires very little knowledge - as reflected by your “GTFO”. Had there been knowledge involved people would have been treated differently. I’m not saying it’s easy, it’s just simple.

As it is such a meat grinder, I can only hope it gets replaced with AI, as I've seen many people in sales not doing well.

Trying to imprint sales principles on engineering or any other knowledge profession is what leads to horrible organizations filled with unhealthy practices and people.


To be fair, these "sales principles" lead to horrible sales organisations filled with unhealthy sales practises and people too. There's a huge difference between good salespeople and bad ones, and it's not just about closing deals. It's about finding prospects that challenge the organisation in the right way. The practises you speak of beat any salesperson into becoming a bad one focused only on closing deals.


Of course! There was a time when LOC was the measure of productivity, just like the number of cold touches you make to convert to a discovery call. I've worked for public companies where you needed to make 300 cold calls a day, and they are listed on NASDAQ. What I'm trying to say is that sales metrics have had time to evolve since 3500 BC. Meanwhile comp sci is still in its infancy. The ironic part is that for an industry obsessed with data and metrics, engineers abhor any kind of metric to track performance, because it's perceived as more of an art. Meanwhile salespeople are under a microscope and are totally okay with it, because if you're good, you're generally the highest-paid person in the org.


There’s a body of work studying metrics and performance in software development/engineering.

The push for Lean (by way of Agile) has been more pronounced within IT/tech than any other non-manufacturing industry that I know. We’ve been collecting metrics for as long as I can remember, and reported them.

What is interesting here is that studies out there on knowledge work indicate that most of it is wasteful.

What you need for a team to perform is clarity, relevant competency, and freedom to do the work - all of which it is management's real job to put in place. This can be hard to grasp coming from sales: the value of flow, the detrimental effect deadlines and hard estimates have on team performance.

The individual team needs to identify what they can do to become better, and they should measure this. But in no way should we compare this against other teams' metrics.

For example: one team might frequently end up in wait state due to a stakeholder or customer, when touching specific parts of the code/system. This usually leads to the team picking up new work while waiting, increasing WIP and context switching.

Another team has to deal with a lot of legacy code within their domain that doesn't build or test well. This has to be prioritized if the software is important, or else this team will be bogged down for eternity.

Measuring these things is important, and most definitely possible - you just have to understand the profession.


The way you describe a technical team is how high-performing sales teams operate. Millions of dollars are at stake; we will work every angle till it's raw to make sure we close the deal. I know a customer's birthday, their children's sports, their hobbies, and what they give a fuck about in a non-invasive way, because at the end of the day a customer buys because they trust that you will deliver, not because you have a bunch of PhDs in comp sci. I closed a multi-million-dollar deal because my competitor didn't pick up his phone at 8pm when his car was towed after a day of talking to press. He left his wallet in his car and couldn't get it out of the impound lot. Luckily he had my number, and I bailed out his car with my corp CC. Meanwhile, our competitor thought the deal was sealed, but I fucking snatched it from their mouth and ran the most successful campaigns in company history. Sondors e-bikes if anyone is curious.


This is good sales work, but far removed from knowledge work.

They need to be treated differently as your driver probably differs by miles from what drives my team of system developers. For someone in sales, maybe with compensation in the form of commission you have clear metrics every month. I can have guys sit around and think for a couple of months with the benefit of us staying agile and malleable two years from now. This is where the disciplines differ by a lot and must be treated and measured differently.


Totally agree - I'm challenging the sales dept over here to actually work with process goals instead of results. For example, the number of meetings might be an interesting metric, but what makes for a GOOD meeting?

Can we identify a good meeting and "turn up the good"? Much harder, and requires actual leadership.


You can generally measure that with churn in the enterprise SaaS world. But that's a lagging indicator, so it's tough to see into the future. To address what a good meeting is: I worked for a startup where a salesperson closed Netflix as an account. We all gave each other high fives, but they never used the product due to security concerns and churned a year later. Meanwhile I understood that we were still in the early stages of product development and closed winery clients, and they are still customers after three years because what we had at the time was what they needed. Netflix wasn't a bad customer, but the salesperson set expectations too high and the eng team got crushed by the demands.


> If a salesperson does this, after 2 quarters of missing your quota, you’re fired. Doesn’t matter if you had monster quarters before that, if you’re not hitting your numbers, GTFO.

Did I get it right that you think this is the right approach for dealing with your salespeople, that it improves long-term productivity of the organisation?


It depends. But I'd challenge you to ask your own sales leader at your company what their policy is and their reasoning.


> A Pull Request is basically the same as an Opportunity in your JIRA/SFDC

What?

For an Opportunity, it's usually pretty clear and very, very easy to measure how much the company gains from it and when.

How do you measure the gains from a PR? If you can't, then literally everything else is completely meaningless. Don't forget that PRs can also be net negative to the company.


It's a myth that it's "very very easy" to measure how much the company gains from a sale. The base revenue is clear as day, yes. But then there's publicity, word-of-mouth, usually extra revenue streams, and the cost of the customer is often not at all clear until they've been onboarded.

One of the big problems in most sales organisations I've seen is that they think that the base revenue is all that matters.


True, but still. Unless we're talking about highly custom enterprise contracts or so, the unit economics are much easier. Maybe not very, very easy - I take that one back, haha. But estimating the marketing impact per contract signed is still more or less a one-time thing. Not so with PRs.


This is a good point. The sale is not equal to net present value. However, at least it is quantifiable in some way and somewhat easy to incentivize for. Much easier than attempting to assign net present value to a patch.


On this.

There are two major issues with "missing shipping"

1. More often than not it's the feature, not the dev, that holds things back: the unknowns, or side effects nobody thought of. It's like planning to walk the coast of California in 2 weeks - if you don't hit a river, a stretch of beach that's eroded away, or a cliff, you're good.

2. As you said, metrics take a while to figure out. Let's say we give comp sci a tenth of the time sales has had to figure out its metrics. So... we should be all set in, say, 500 years :)


This is in some ways deliberate from management. They’re often not really needed and are making busy work for themselves at the expense of the company’s goals


Right.

First question should be: what does your data say about how much it cost to have an agile / data driven process compared to not having one? :)


"When a measure becomes a target, it ceases to be a good measure"

Unless, of course, the measure was good to begin with - such as cash or bars of gold. The problem with most measurable measures in software is they are gameable proxy-non-grata.

The whole thing depends on being able to compute the net present value of a patch on a given code base, then pay the developer/team some percentage of it. What a beautiful reality that would be for developers and businesses alike. Too bad there is no conceivable way to do it. I'd guess computing this is 1e12 times harder than protein folding.


> Unless, of course, the measure was good to begin with - such as cash or bars of gold.

Unfortunately, this often leads to corporate shortsightedness (e.g. "This will destroy our customer goodwill and eliminate our competitiveness in the coming decade, but it will make our quarterly revenue targets and our jobs depend on that, so we're doing it.")


I did say "net present value" - cash or bars of gold which represent the NPV of a pull request/patch.


Which, once again introduces a perverse incentive to produce a high local value contribution that hurts the long term good, over a lower present value contribution that promotes a better long term result.

Any metric you set for comparing contributions, no matter how good you think it is, WILL produce a cobra effect.

This is because metrics are a model, which by definition can't capture the entirety of reality, and so any attempt to enforce based on that model is doomed to failure.

How much would you value a maintenance task? Or a process change? A better air conditioning system? Game night to let off steam and improve morale? How do you calculate what its long term worth is from the productivity change in order to assign a present value to that contributor? How can you even know what kind of an effect it will have long term? You'd need a pretty damn sophisticated model, and even then it would still only be a model, missing many points of the reality hurtling towards you at the speed of light.

And meanwhile, the little things that improve long term prospects simply aren't done, because the metric doesn't value them. Thus, metrics, even NPV, cash, bars of gold, are bad policy enforcement vehicles.


No, because the NPV includes all cash flows (see https://en.wikipedia.org/wiki/Net_present_value).

I completely agree with you that it is probably impossible to compute (per my original comment). But at least understanding what the target is helps imo.
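
For what it's worth, the formula itself is trivial; the impossible part is knowing what cash flows a patch actually generates. A minimal sketch (the function name and the example numbers are hypothetical):

```python
def npv(rate, cash_flows):
    """Net present value: every cash flow cash_flows[t] (positive or
    negative) received at period t, discounted back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A "patch" that costs 100 now but saves 60/year for two years, at a 10% rate:
print(round(npv(0.10, [-100, 60, 60]), 2))  # → 4.13
```

The point stands: the arithmetic is easy once the cash flows are known; attributing cash flows to an individual PR is the part nobody can do.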


Using metrics to gauge how well the enterprise is doing on the whole and to influence policy decisions and strategy is absolutely good and desirable. It's only when metrics become rules or quotas or measures to compare people or departments as a policy that things go bad.


Using good metrics is good, using dumb metrics is dumb. Most metrics are probably dumb if we dig deeply enough.


"Unless, of course, the measure was good to begin with - such as cash or bars of gold."

IMHO this is a bad measure; it's how you end up polluting and poisoning etc. for profit, no?


That's because these are also just proxy metrics for "net value provided to society", and heavily distorted at that.


I'd say highlighting environmental issues doesn't make it a bad metric

It highlights missing rules/regulations


Gosh, what a tangential poke - gold bad because cyanide, argument invalid because said gold. Fine, exclude gold from the argument and just use cash / e-transfer.


Working with the iconiq capacity allocation framework was just awful.

It was really frustrating because more time was spent on how work should be coded than on the work itself.

Quality didn't end up mattering, because there was no accountability for actually delivering good software; following iconiq's standards was considered good enough.

It was weird because management would routinely be surprised that our metrics were good but reality was not. As a grunt, I found it frustrating that there was no room for productivity improvements, because those were only to come from the top.


> I dont like the idea of ruthless efficiency by tracking metrics similar to a sales funnel.

It's probably quite helpful if you do it yourself. But yeah - definitely something that has historically been easy to game.


> More engineering organizations are starting to track developer productivity, with the top metrics reported on being number of bugs, % of committed software, working software, and PR-to-release time

Sounds like an absolute dystopia.


What aspect of the surveyed companies suggests that they are "high functioning"?


FTX is on the list. Gotta be on to something.


That you too can have a highly functional Ponzi scheme if you follow these practices?


Would love to see FTX's Jira admitted to evidence.


I chuckled at this. Then I became mildly terrified. Please never mention that anywhere a lawyer can hear.


The press made a big thing of them using "emojis in a chat app" (Presumably Slack) for approving expenses. I wonder if they actually used Jira?


This doesn’t seem like they figured out anything about “high functioning teams”, they just performed a survey.

Like, do these companies actually ship faster, have lower rates of issues, better MTTR, etc? This just says things like “there are more full stack engineers”.

At least the Puppet state of DevOps report gives you a “low, medium, high” categorization. This doesn’t seem to be describing much other than some trends they saw this year.


Learning is a lot more about observing and surveying than folks typically acknowledge. Indeed, one of the more dangerous things I notice, on a regular basis, is a heavy set of beliefs built up on reasoning without observation. They typically betray far more of the personal wants of the reasoner than they do of reality.

Of course, that way lies another strawman: one of only observations, with no reasoning on top.


Shameless plug: in case you want to get similar data for your company and you are using GitHub for issue management, please give my product https://app.gitimprove.com a try - it's in alpha right now. The website is https://gitimprove.com.


Good luck with your product! Hope you have some great success


Wow, 89% use Atlassian tools...


Aka 89% are clueless and just pay for the next shiny tool that has no real value


Atlassian tools are extremely boring but maintain a core set of features that many projects use.

Their vanilla status is bordering on Microsoft Office levels of drudgery. What on earth are you comparing it to?


I wouldn't exactly call Jira or Confluence shiny.


If you think atlassian tools have no value you haven't been in software development for long.

They might suck, but they suck substantially less than anything else.


They suck because they attempt to add features to something that doesn't need more features.

They could probably be a small, healthy org, but they insist on this cloud "solution". Now their customers are leaving them, and those who stay are paying so much more for all the extra fluff.

Well deserved loss.


Where do the customers leave to?

We are currently giving up our self-hosted Redmine instance, as nobody is willing to take ownership of maintenance.

We're switching to Jira Cloud.


GitHub Projects does the same job.



Browsed through the slides but didn’t encounter any lightbulb moments or any specific enlightenment.

What are the key takeaways or lessons here?


Fill slides with figures and paragraphs of text and people will pay you lots of money to do the work of their existing teams.


Does anyone have experience working within a matrix team? It seems like an interesting idea.


Yep, it was a way of getting more managers for the same number of engineers. Granted, it was only one org, so maybe they were just bad, but I had one line manager and one (or sometimes multiple) project managers. The project managers competed for engineer time, which was supposed to be managed by the line manager allocating you to different projects. In reality nobody wanted to say no, and it was very easy for project managers to lowball project estimates, thereby squeezing people onto 2 or sometimes even 3 projects at the same time.

"How you manage your time was up to you but the spreadsheet says these 3 projects will fill 40 hours a week, if it actually takes 100 hours maybe you're bad at time management" was a pretty common attitude from managers. I didn't stay long and neither did anyone else who had even the slightest talent.

The company was a major telco in a European country - not a small startup with growth pains but a proper enterprise org with several thousand employees.


Ah, I can totally see how that can happen. Thanks for the insight!



