The ironic thing is that this "cost-center culture" has even affected the mindset of individual software developers and how we think of our own work.
Most developers ONLY look at their work in terms of cost. ALL pros-and-cons arguments are really about ways to reduce cost. Consider the term "tech debt" -- nothing could be clearer. Yet discussions rarely take place among engineers at a company about how to maximize profit or increase revenue. In fact, I would argue there is a heavy cultural bias on most engineering teams AGAINST optimizing for revenue, as it is seen as something that motivates sales or product people to rush timelines and sabotage sound engineering practices.
The obsession with cost is baked into programmer culture in such a deep and foundational way that I'm skeptical it will ever change, even if the accounting of software projects changes. If you search for the word "cost" in the eBook of Practical Object-Oriented Design in Ruby by Sandi Metz, you will find it dozens of times in sentences about the urgency and critical importance of reducing costs, but the words "profit," "income," and "revenue" do not appear even once. I'm not trying to make a point about POODR in particular, because this bias simply reflects the culture as a whole.
You can certainly make the argument that it has to be that way, and engineering should be separated from revenue. I find it very interesting that, as a culture, we are only willing to speak about the monetary value of our work in negative terms.
Something I've realized, though it's, as you write, seemingly against my nature as an engineer, is that there are lots of situations where it's reasonable to 'assume' technical debt to reach a more important goal.
We absolutely should continue discussing technical debt, but also in the context of 'technical revenue' and 'technical profit'.
Interestingly, 'technical debt' works amazingly well as a concept; I suspect it's not even really a metaphor. Consider that probably the most important pragmatic consideration about whether you should assume a debt (of any form) is whether you can expect to make all of the 'scheduled payments' it requires. And even if you can, will you still have sufficient slack to pay off or pay down other currently unexpected debts or expenses that you might incur between now and whenever you finally pay off the original debt?
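To make that concrete, here's a toy back-of-the-envelope version of the check I mean (the numbers are completely made up):

    team_capacity_per_sprint = 100   # points of work the team can deliver each sprint
    feature_work = 70                # committed product work
    debt_interest = 20               # ongoing drag from the shortcut: bug triage, workarounds
    slack_needed = 15                # buffer you want for surprises

    remaining = team_capacity_per_sprint - feature_work - debt_interest
    print(f"Slack left after 'interest' payments: {remaining} points")
    print("Debt looks serviceable" if remaining >= slack_needed
          else "The payments crowd out the buffer -- reconsider")

If the 'interest' eats the buffer, the loan was too big, no matter how attractive the thing you bought with it.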
Tech debt works great when it's incurred on purpose, and when the people/management who take it on are the same ones who eventually deal with it. It takes a level of organizational maturity to do that, though.
What unfortunately happens a lot is orgs taking on tech debt without realizing it (devs are too junior, or management doesn't listen to devs, or various other reasons). Then 1-2 years later, the system is falling apart and the old engineers are gone. The new ones just call it tech debt as this kind of unavoidable force of nature that was dropped on them (because it kind of was).
Or worse, the devs of old ARE still there, but pretend that after 1-2 years it's inevitable for a system to be falling apart and to need a rewrite from scratch.
This level of industry immaturity prevents the tech debt concept from fully working.
In organizations where tech debt IS a fully thought-out and planned compromise to get value today at some cost to be paid off later, then yeah, it not only works: it's often the right thing to do.
In most orgs like this that I've seen, this leads to a fundamental misunderstanding of tech debt.
Messy code is painful, so when we don't have much experience we look for the answer in what we learned in classes and books: elegant abstractions. But we end up abstracting business logic in such a way that when the business needs to change, our code can't change with it, and that's when we have true tech debt of the "let's rewrite the world!" sort. It might be a beautifully designed abstraction, but if it's tied too closely to the features of yesterday, it's going to become increasingly brittle and painful.
Now when someone says "we can ship this sooner but it'll be hacky and introduce tech debt" instead of pushing to make the code re-usable, I'm much happier, and I would rather the code not be intended for re-use until we prove that we understand how we'll want to use it next time.
Not just that: source code should really be viewed as a toxic byproduct. The more of it you generate, the more you're going to pay to maintain and manage it. The best code you can write is the code that meets your current needs in as succinct a way as possible. Furthermore, anything that's internet-facing is even more critical to minimize; failing to deal with it properly (keeping it up to date and staying on top of known vulnerabilities) can and will sink a business when that bit rot eventually leads to an Experian-type situation.
I don't think there is a "technical profit" to be had, though.
The output of the effort put into technology is a new process or workflow, rather than raw "tech stuff". It's a multiplier to a different input, some other task or application, and so the creators of the technology are frequently disconnected from feedback about its actual use.
For tech to be useful it has to succeed at creating enough leverage that you would use it over what was there previously, and plenty of technical projects get stuck on that point: they show an immense engineering effort but are solving the wrong problem because they don't optimize the desired workflow, or they expect everything to change around the product's workflow.
In that light, in a universe where most of these designs are bad and misguided and deserve to be put out to pasture, the cost-center mentality makes total sense because it asks the design to prove itself as early as possible. And the outcome of that in turn is the familiar rushed, unmaintainable prototype-in-production scenario.
New technology efforts always have this element of being a judo match: if you can find just the right time and place to strike and take the problem off-balance, you get an instant win. The same strategy at a different moment doesn't work at all, no matter how hard you flex your coding muscles, and would leave you better off not having attempted any new tech.
The balance-sheet model doesn't explain when and where to strike, though. It just says whether or not you're trying to do so this quarter, and any benefit will most likely show up on a different business unit's balance sheet.
> I don't think there is a "technical profit" to be had, though.
Ostensibly, every net-new feature increases the pool of people for whom the software, as a whole, solves a problem. The performance and usability of that feature further add to that.[1] I'd argue that is technical profit, and it is realized into real profit when it converts to purchases/subscriptions/whatnot.
[1] For the sake of argument I'm assuming a scenario where the feature and its performance/usability is positively received.
I never thought about the idea of "technical profit," but I think it is insightful. There is, indeed, technical profit. When you have a quality design that fits the system's space really well, you can maintain and add features really quickly and safely. This is technical profit.
When Paul Graham talks about how writing Lisp enabled his startup to implement features more quickly than the competition, that's technical profit. Google's internal systems that allow them to maintain thousands of machines; create large, distributed filesystems; and who knows what else are technical profit.
There was an article recently about Boeing retiring the 747, and it noted how pilots would take a picture of the plane after flying it (passengers, too). In aerodynamics, form is function, and I suspect that the outward beauty reflected excellence of design. The design served them so well that they are only now retiring it, some 40 years after it was originally designed, despite all the advances since. If I'm right, this is technical profit.
Perhaps the idea of technical profit would be an easier sell to management, especially since management types see debt as useful, whereas programmers see it as a liability.
> The output of the effort put into technology is a new process or workflow, rather than raw "tech stuff". It's a multiplier to a different input, some other task or application, and so the creators of the technology are frequently disconnected from feedback about its actual use.
Technical profit is really a feature or capability that's possible because of some technical thing (e.g. code). But the 'revenue' from the capability has to exceed the cost of designing, developing, and maintaining the tech for a profit to be realized.
I like to think there's a hidden mirror analogy to "tech debt," which is the "time discount": the notion that the longer you wait to realize a monetary gain, the less it is worth today. Both concepts are drawn from finance, and if one is applicable to software, so is the other.
I don't dare speak of this in engineering circles, though. I'd be chased out of the office with pitchforks...
The "monetary gain" I'm referring to here would be the software itself. So it's the idea of software being intrinsically more valuable the sooner you get it out the door. ::ducks::
I always thought of technical debt as something that sloppy engineers knowingly or unknowingly assume on behalf of clueless managers. Kind of like a company accountant assuming debt without company officers knowing or caring. I guess I've never had the chance to see competent people make a knowledgeable decision to write lousy software.
Really? You've never had a manager tell you to do this or it's your job? There are tons of places that do not give a single shit about good software, and not even because they don't realize it's going to hurt later.
When a boss's bonus is tied to quarterly results, quality goes out the window every time immediate revenue or cost cutting comes up.
But it's not just applicable to lousy software. Even great software is never perfect, and tradeoffs about the design or architecture, or even just the timing of when different components, refactorings, etc. are implemented, can reasonably be considered tech debt.
Debt is a liability that you typically need to make payments against; preferably at predictable intervals and of predictable amounts.
Every known (and significant) bug, that isn't fixed, is tech debt. Presumably something has to be done about the effects of the bug and that something has a cost. Fixing the bug then is paying down the associated debt.
Talk to any electrical, mechanical, or civil engineer and they will be focused on delivering on time and under budget, not trying to increase "income" or "revenue".
It's far more prevalent in software though. Those other fields realize that incurring any technical debt is going to involve very expensive repayment.
One of Boehm's "fundamental problems of software" is that it's so easy to change that it's easy to assume you can always change it later at no cost. This is really why the problem is far greater in software.
I would agree with you if the cost bias were only found in her book. But the bias is literally everywhere you go, on every engineering team. It's all we ever talk about.
At the CEO level, the money-making potential of software is clearly understood, so why, at the ground level, is it the exact opposite? Every programmer is paranoid about the costs they're introducing to an organization while totally ignoring the financial upsides. Every discussion is "cost now vs. cost later."
If our only goal at work is to minimize the cost to our companies, then our managers can do that better than us: they can just fire us and cancel the project. There, cost is now zero.
I'm not sure your thinking applies to the term "technical debt". It's simply a way of phrasing the downsides of crufty old code in a way everybody can understand. If you really want to stretch the analogy, you take on debt to get a tangible immediate benefit - wouldn't the benefit be maximized profit or increased revenue?
Technical debt isn't debt, it's an uncovered short sale. The thing is, debt is controllable: it has interest, and you can pay it down. When "tech debt" becomes payable and due, it's payable and due now, at any cost, or the project fails.
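A toy comparison, with invented numbers, of why that's scarier than a loan:

    # A loan: known payments, bounded total.
    loan_payment_per_quarter = 500

    # An uncovered short: you owe shares you don't hold, at whatever the price is
    # when you're forced to cover -- there is no cap on that price.
    borrowed_shares = 100
    price_when_forced_to_cover = 90   # e.g. the day the old hack finally breaks in production

    short_liability = borrowed_shares * price_when_forced_to_cover
    print(f"Loan: {loan_payment_per_quarter} per quarter, on a schedule you chose")
    print(f"Short: {short_liability} due right now, at a price you don't control")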
This whole thing goes back to the accounting process of measuring "throughput" rather than some fixed number such as inventory.
This book "The Goal" was incredibly beneficial for me in understanding the concept of throughput. While the book is a parable focused on manufacturing, it can be easily applied to companies in tech.
https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0...
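For anyone who hasn't read it yet, here's a rough sketch of the three measures the book builds everything on (numbers invented purely for illustration):

    sales_revenue = 1_000_000
    truly_variable_costs = 400_000   # raw materials and the like
    inventory = 250_000              # money tied up in things intended for sale
    operating_expense = 450_000      # money spent turning inventory into throughput

    throughput = sales_revenue - truly_variable_costs
    net_profit = throughput - operating_expense
    print(f"Throughput: {throughput}, net profit: {net_profit}, cash tied up: {inventory}")

The shift is that you judge decisions by their effect on throughput first, and on inventory and operating expense second, rather than by how busy each machine or team looks.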
That book is next on my reading list, but also worth checking out is "The Phoenix Project". It applies the concepts of "The Goal" to IT (operations to start, but it gets extended to development through the book).
Was just going to recommend reading this book, glad someone did already. Being in tech/IT, you will be able to identify each of the characters in your work environment at one point or another.
One of the best things about reading these books, IMO, is that I finally have the vocabulary to speak to management. Before, I'd say the exact same thing, but I didn't know they had a term for what I wanted (value stream maps, for instance) and we couldn't communicate. Now that I can speak in management terms, they're listening to my input because we understand each other.
"However, accounting systems count inventory as an asset, so and any significant reduction in inventory had a negative impact on the balance sheet...successful JIT efforts tended to make senior managers look bad"
This shows a poor understanding of managerial accounting.
High "working capital" requirements, of which inventory is a big part, are a huge drain on free cash flow. Any company that sees its inventory as a % of assets go DOWN would view it as a positive by both its management and investors.
Were you working in managerial accounting in the 1980s?
My own arms-length exposure at that time suggests that, however simplistic it might seem, the assessment in the article is correct. It might seem obvious that lowering inventory as a percentage of assets is good, but only if you actually look at that ratio, rather than just seeing the top-line assets total go down.
This article seems to very accurately represent my experience.
To be fair, no I wasn't working in managerial accounting in the 1980s.
Maybe what would help is if you could explain what metric a reasonable manager would measure that would get worse under JIT versus better.
I guess maybe if your assets go down you could look more highly levered, but financial leverage isn't really something that a manager can affect anyway (more of a CFO level metric).
All of the asset-oriented measures I can think of -- like asset turnover, working capital as % of sales, working capital as % of assets, WIP inventory as a % of total inventory, etc. -- would improve.
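A quick back-of-the-envelope example (invented numbers) of why those ratios move the right way when a JIT effort cuts inventory:

    sales = 1_000_000
    scenarios = {
        "before JIT": {"inventory": 300_000, "other_assets": 700_000},
        "after JIT":  {"inventory": 100_000, "other_assets": 700_000},
    }

    for name, s in scenarios.items():
        total_assets = s["inventory"] + s["other_assets"]
        print(name,
              "| asset turnover:", round(sales / total_assets, 2),
              "| inventory as % of assets:", round(100 * s["inventory"] / total_assets, 1))

With sales held constant, turnover rises from 1.0 to 1.25 and inventory falls from 30% to 12.5% of assets.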
Mary is talking about the transition from cost accounting to throughput accounting, which happened several decades ago. And yes, cost accounting really did resist attempts to decrease machine utilization, which resulted in increased inventory.
Look in the mirror before throwing stones at others' knowledge.
You are confusing what the business leaders of today think with what they thought in the 1920s. In the 1920s and before, nobody could think of a reason why large inventories would be bad: they were a buffer against surges in demand, and you got to build them at today's prices. A large inventory just meant you needed to hire more salesmen (literally men, it was a sexist time), and/or hold a sale.
Of course, management theory now knows many reasons the above is wrong, and it is entirely accepted that you want inventories low because of the advantages that brings.
In the 1980s it was well understood in universities. However, senior management at real companies was still catching up. Some companies understood it and were executing well (mostly Japanese companies, who started early). Other companies had figured out how important it was but were still trying to figure out how to apply it. Still others were just waking up to the fact that competitors doing something different were beating them, without knowing why.
Not to mention the K-factor (carrying cost), and that 200K in stock sitting in your yard is 200K you aren't earning interest on or able to reinvest in your business.
It's a double whammy: you pay (in utilities, rent, etc.) to hold stock, which means you can't use the money elsewhere, and for a lot of stock you also have depreciation (though I work for a company that makes stone products, so at least our raw materials don't expire).
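Roughly, with made-up rates:

    stock_value = 200_000
    opportunity_rate = 0.05    # return you forgo by not investing the cash elsewhere
    holding_rate = 0.08        # rent, utilities, handling, insurance
    depreciation_rate = 0.02   # near zero for stone, much higher for most goods

    annual_carrying_cost = stock_value * (opportunity_rate + holding_rate + depreciation_rate)
    print(f"Holding that stock for a year costs roughly ${annual_carrying_cost:,.0f}")

At those (invented) rates that's about $30K a year just to own stuff that isn't selling.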
IT being a cost center at a modern information/product company is similar to doctors being a cost center for a hospital.
Only a deranged accountant thinks the cost center is the scarce talent that delivers outsize customer value, and that the profit center is the back-office filled with billers who "monetize" a relationship that wouldn't otherwise exist.
In tech, this flipped around in 2004-2008, I think. From 2001-2004, people thought IT was "over". Now, they realize software is "eating the world".
I am hopeful a similar realization will eventually come to medicine. Doctors deserve Google-style work perks and competition for their scarce talent. Insurance, back-office, and billing deserve to be turned into the AWS of medicine -- outsourced commodity.
I think it will actually reverse in the coming years (2-3 decades) as the education programs that mold programmers overcompensate on supply, while the number of people entering medical fields drops to the point where they are valued again.
Let’s face it: unless there are mitigating circumstances, IT departments that started out as cost centers are going to remain cost centers even when the company attempts a digital transformation.
AWS and the Google Cloud are evidence of the possibility of escape. Are there any writeups for how they came about? (Particularly in the case of AWS.)
They came about because any Fortune 500 company has thousands of developers and critical software to run, and it has to manage the hardware to run it.
You will find an in-house equivalent of the cloud in every single big company. Except the software and the UI will be terrible, because they have far fewer resources and less competence than Amazon/Google.
I don’t know any write ups, but in short, AWS was a solution to the variable demand for Amazon’s, well, web services. Demand peaked considerably during holiday shopping months but otherwise remained far too low to justify keeping all of those peak demand servers running Amazon.com full time, so they were rented out to companies that actually had similar problems of peak demand. In particular, the Victoria’s Secret fashion show is a great example of extreme peak demand and scalability for a web service that otherwise attracts relatively little traffic most of the year.
I'm not sure many IT functions are not cost center functions. Almost none of them, including software development, are revenue drivers for most companies.
Like anything else in business this seems like a case where some hard, one size fits all rule is occasionally misapplied.
It's not a clear profit center like sales or direct marketing. But if you can clearly point to good engineering work leading to more sales (release happens, it's really good, lots of people buy the product clearly because of the good work) then it's more on the profit than cost side of the spectrum.
It's really just a matter of how easily can you see that A led to B.
I'm not convinced. Using your description even high quality software releases are still "just" products: products that cost the company resources to build. The technical skills required doesn't change that.
If you're a restaurant chain and you have your own IT, presumably it's for a reason. You're providing your own POS systems, you're providing a website for customers to order from. A new release can improve efficiencies in a restaurant, or wreck them. A new release can mean the difference between me being able to order from Papa Johns (literally 1 mile from my apartment) or not (because of my zip code, they think I'm in an area they don't service).
For banks, it's the same thing. It can be external: my bank added a feature that lets me enter and see projected expenditures and income so I can see my future balance, and it's an effective aid to my budgeting. It can be internal: like a POS system, making it easier or harder for tellers or lending agents or others to do their job, or giving them more accurate and up-to-date information with reports generated daily or down to the minute, instead of the historic weekly or monthly batch reporting.
Car manufacturers have control systems developed by IT, same results as above on a new release.
Nobody says that what the IT department does cannot be useful. And it is not about being internal or user-facing. Marketing and customer support are also considered cost centers.
Which is not reasonable in either case. Also, is marketing really considered a cost center in most businesses? I would see that put more in line with sales.
Customer support is certainly treated as a cost center, but it oughtn't be. Mary Poppendieck (author of this article) has a good example in one of her books where the customer support center improved profits by being able to aggregate and identify the majority of customer complaints. This led directly to improvements in infrastructure and delivered systems that reduced overall corporate costs (less rework, customers were happier, more sales).
Those are valid points. My intention was not to discuss the subject but to reframe the discussion. I can see advantages and disadvantages in considering some departments in the company to be providers of services to the rest of the business. But I guess an appropriate, but cynical, definition would be that cost centers are the departments where you can slash costs while hoping there will be no impact on the business...
A bank can release a well-made new app that lets people apply for mortgages more easily, which gets them 10% more new mortgages worth $100 billion, and that can easily be tracked back to the dev team that made this amazing app.
Zara. A lot of its growth can be attributed to the fast flow of information from retail stores back to the design team, and to operational excellence due to a tech-savvy management.
patio11 covered this in his career advice article.[0] I read that article around the time it was first published. The tidbit about cost centers and profit centers has stuck with me since. I have, for the most part, been able to avoid being attached to a cost center. Primarily by working at "tech" companies rather than more conventional companies which put software engineers in "IT" departments.
I don't see the point of this article. There are accounting mechanisms that allow you to capitalize money put into IT infrastructure as an asset on the balance sheet, at least in my cold and dark part of the world.
Some IT expenses are indeed cost centres and should be just deducted from the profits immediately.
However, other "IT expenses", like creating "back office cost reduction software" and generally IT infrastructure are no different than acquiring some fancy machinery that enhances an industrial process. This also applies to companies whose main revenue model is building software and cloud services.
The point of the article is "accounting metrics drive company culture, which leads to damage." It's an important point that senior-ish managers need to understand.
The proposed “best” solution - changing the accounting/metrics - is also the most difficult one to carry out meaningfully.
Everybody has a disagreement with metrics until it suits them. Metrics justify mid-level management to upper management, which then justify themselves to top management, then the board of directors, then the shareholders. “Look at the numbers!”
It’s similar to a grumbling slogan of fund managers in the 1980s about the then-consistently poor stock performance of IBM: “Nobody gets fired for buying Big Blue.”
Does anyone know of any further reading about how "software capitalization" works, aimed at either a layman or somebody who is only cursorily familiar with real double-entry accounting? Also, do these rules vary significantly by country? It seems like getting the accounting of software right will be a huge structural advantage for countries as software becomes an increasing percentage of the value produced by the world economy.
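Not a writeup, but a toy sketch of the mechanical difference (the rules differ by country and between GAAP and IFRS, and I'm not an accountant, so treat the numbers as purely illustrative):

    dev_spend = 1_200_000
    amortization_years = 3

    expensed_hit_year_1 = dev_spend                          # whole cost reduces this year's profit
    capitalized_hit_year_1 = dev_spend / amortization_years  # only a third hits profit now; the rest
                                                             # sits on the balance sheet as an asset

    print(f"Expensed:    year-1 profit reduced by {expensed_hit_year_1:,.0f}")
    print(f"Capitalized: year-1 profit reduced by {capitalized_hit_year_1:,.0f}")

Same cash out the door, but capitalizing makes the project look much cheaper in year one and spreads the hit over the amortization period.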
A good read. I believe the basics of IT infrastructure (internet, email, connectivity) should be a cost center; development of tools that assist the company in accomplishing its goals should not be.
The "cost centre" concept is so silly, and it only ever seems to be applied to the IT department. Why isn't HR a "cost center"? They don't generate any revenue. Why isn't Accounting a cost centre? How many sales do they bring in? It's just a stick other departments use to beat IT with.
They are, actually. All your operationally necessary but not directly revenue generating business units/functions are cost centers, in this model. And they get beaten with the same stick.
This is why employees of "cost centers" should just go on occasional strikes. Just don't work for 3 days, let the execs understand what the cost is of not having these "cost centers".