Wow, the author of this seems to have an incredibly inflated sense of the value of his or her opinion, and seems to have very hostile preconceived notions of teammates’ errors and shallowness of their reasons for defending their choices.
Not to mention that this part, at least, seems completely wrong,
> “But an even slower way to uncover problems than argument is implementation.”
Nope. Argument and architectural documents invite all manner of bikeshedding, often wasting time when you could just implement things and adjust the parts that end up causing problems.
This is especially critical for taking a YAGNI approach. Architectural designs are a slow waste of time that lead you down a path of building abstractions and extensibility hooks you don’t actually need, for reasons that only become clear after getting a minimal prototype going, looping in business-team feedback, and seeing what works and what fails.
Basically, most arguments (in the way the author means it) are a really irritating form of premature optimization, and just implementing a first bad version is way more efficient.
This reasoning works until you need security, privacy, or any cryptography in the solution at all.
Then it is farcically irresponsible.
There is a technical play I have seen over and over again, which is just this: create a terrible prototype, add stakeholders with promises of functionality - use this momentum to bulldoze appeals to quality and common sense.
It is a way certain kinds of engineers create a crisis that has them in the middle of it, and ensures they maintain control and indispensability in the ensuing chaos.
I've certainly seen this behaviour, years wasted on things that could never work in the long run, then rinse and repeat.
I've only been on board for one cycle, but the pattern is quite clear; even if it might not be completely intentional, the result is the same.
It begins with something like a sunk cost fallacy, and then it just rolls on.
There is a group of people for whom architecture is a synonym for "too much architecture". But whether you know it or not, you are always doing architecture as soon as you start doing anything non-trivial. The only difference is that if you aren't doing it explicitly, then you are doing it implicitly and subconsciously; and if you are doing something trivial, usually someone else has already done the architecture work for you and embedded it in the tools you use.
Not that I need to tell you this, just venting a bit :)
> “This reasoning works until you need security, privacy, or any cryptography in the solution at all.”
This has not been true for me and my team generally, where privacy and security are up-front, primary concerns given the customer data that powers several machine learning features and data ingestion services we maintain. Treating these aspects of the problem as just another thing to throw into the prototype and iterate on has worked extremely well. Meanwhile, other teams in my company have been caught out by a nasty over-commitment to certain security patterns that were then rendered impossible when devops changed the container-based deployments. Had they not over-committed to a fixed, ahead-of-time design, they might not have been set back by months of rework.
> “create a terrible prototype, add stakeholders with promises of functionality - use this momentum to bulldoze appeals to quality and common sense.”
Well, I think you falsely mention quality and common sense at the end in a way that makes it seem like you’ve got an ax to grind against approaches that don’t prioritize long-term design, and that last sentence doesn’t fit in with the rest.
Apart from that, I actually think the approach you describe is great and works really well for creating robust and safe software, because it allows the team to generate buy-in from other business and product teams, and creates chances for the team to solve things in a high-quality, best-practices-focused way.
If instead you start out slowly with architectural debates, product or business people assume the solution can’t be done on their unrealistic timeline and just force the team to pivot to the next unrealistic project. You end up flopping around endlessly, managing the tension between product people who want crappy prototypes that can be diff’d against changing requirements, and engineers who want stable requirements so they can pin down best-practices-compliant architectural documentation.
In the end, overpromising on a threadbare prototype and working iteratively on it is way better, and is not at all some disingenuous attempt to create a fiefdom or a personal walled garden sort of project. It’s a technique for actually delivering the business result.
I often encounter engineering managers who make appeals to the business, as a way to sidestep their accountability to a direct internal customer. Mainly in orgs transitioning from waterfall to iterative, so I hope you will forgive any excessive skepticism.
I've found that either engineering can iterate to find fit, or product can iterate to find fit. If both are iterating to find fit, without either vision or direction, the wheels will and do come off. Architecture provides that direction for engineering if Product is still figuring out market fit.
If Product has fit (even customer traction), engineering can iterate its way forward. The alternative is "game of aeron chairs," where everyone is competing to hitch their wagon to the most prestigious customer they can to drive their internal pet initiatives.
When you deal with security, and in particular cryptography, without a clear architecture, you are taking on the risk of being spike-stripped, because you aren't playing at an enterprise level.
Architecture isn't compliance, it's design, so it's not about best-practices and exogenous check boxes - it's about building something with an eye to maintainability, market fit, and stakeholder acceptance.
I do think the Lean approach with validated learning over iterations is very useful. I also think Agile is applicable in a lot of development. What iterative development has not figured out in general is how to integrate security for complex solutions, and these days security and privacy are the dealbreakers.
So here I will admit one bias, because instead of merely grinding an axe I figured I would bet the farm on a solution to this precise problem of how to do security in iterative development environments, where the value of architects and security analysts has been marginalized by a fundamental change in development culture.
Hustling on an MVP isn't bad at all, but with new freedom come some anti-patterns that it would be to everyone's benefit to articulate and recognize.
> I often encounter engineering managers who make appeals to the business, as a way to sidestep their accountability to a direct internal customer.
I can't help but feel like this is not a fault of being “business focussed” but instead of misunderstanding what or who the business is. Business-focussed (imo) is management speak for a focus on solving the right problem for the right customer within the constraints of that customer.
In the case you describe, the customer is internal. There seems to be an implicit disconnect there, though. Perhaps this is an example of a lack of information usually communicated through a sales process with a customer?
Similar to the issues that arise when you ask a friend for a favour and they do a bad job / ghost you / bail. It's hard to release them from the obligation they signed up for. There is little to no explicit skin in the game (money?). In a similar way, the development team building for an internal customer has an unclear, poorly defined relationship with a convoluted accountability process.
Perhaps it can also be modelled in a game-theoretic way? E.g.:
Let's have 2 sets of players:
A. (c) Customer <-> (p) service provider
B. (c) Internal customer <-> (p) internal service provider
Each set of players plays a set of potentially unbounded games which represent projects. The moves available for each game are:
1. Continue working together
2. Stop working together
Each move has an associated reward and cost, each of which has a simple numeric value but represents the aggregate of things both monetary and socially valuable / costly. This means they are not of the same type, though, so a single score for a move can't be produced just by subtracting the cost from the reward. (This probably needs modelling better.)
Moves are represented in the form:
A.c.1, which means the customer in set A chooses to continue working together.
A.c.1 and A.p.1 has a cost of 2 and a reward of 3. I'm assuming that working on the project only costs the time and money of the participants and rewards them with a product, money, and a happy relationship (socially valuable).
A.c.1 and A.p.2 has a cost of 1 and a reward of 2. I'm assuming it costs a little socially to the relationship, especially if previous games have been played (the project is running), and the reward is the time and money saved.
A.c.2 and A.p.1 is the same as above.
A.c.2 and A.p.2 has a cost of 0 and a reward of 2. I'm assuming that both players agreeing to not work together is amicable and so costs nothing socially or monetarily.
B.c.1 and B.p.1 has a cost of 1 and a reward of 3. I'm assuming that the cost is just time spent on the project and that the reward is a better relationship between the players, and job security for both players.
B.c.1 and B.p.2 has a cost of 3 and a reward of 1. I'm assuming the cost is to the relationship and the job security of B.p, and the reward is the time saved.
B.c.2 and B.p.1 is the same as above, except there is extra cost to B.c.
B.c.2 and B.p.2 has a cost of 2 and a reward of 1. I'm assuming that the cost of deciding not to work together internally is a risk to both players' job security and that the reward is time saved.
The goal is to maximise reward while minimising cost.
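To make that concrete, here is a minimal sketch in Python that just tabulates the payoffs I assumed above. The `net` column naively subtracts cost from reward, which, as noted, isn't really a fair way to combine monetary and social value, but it makes the asymmetry between the external and internal pairs easy to see.

```python
# Rough sketch of the payoff table described above. The numbers are the
# assumed costs/rewards from the comment; "net" is reward - cost, a crude
# single score that glosses over the fact that the two aren't the same type.

from itertools import product

MOVES = {1: "continue", 2: "stop"}

# (player set, customer move, provider move) -> (cost, reward)
PAYOFFS = {
    ("A", 1, 1): (2, 3),
    ("A", 1, 2): (1, 2),
    ("A", 2, 1): (1, 2),   # same as A.c.1 / A.p.2
    ("A", 2, 2): (0, 2),
    ("B", 1, 1): (1, 3),
    ("B", 1, 2): (3, 1),
    ("B", 2, 1): (3, 1),   # same as above; extra cost to B.c not modelled here
    ("B", 2, 2): (2, 1),
}

if __name__ == "__main__":
    for s, c, p in product("AB", MOVES, MOVES):
        cost, reward = PAYOFFS[(s, c, p)]
        print(f"{s}.c.{c} / {s}.p.{p}: cost={cost} reward={reward} "
              f"net={reward - cost} ({MOVES[c]}/{MOVES[p]})")
```

Even in this crude form, the internal pair (set B) is punished much harder for one side walking away than the external pair is, which is roughly the intuition I was trying to express.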
This is a pretty poor model, I think, but it goes some way to expressing my thoughts on the issues with internal projects, and potentially why using a third "arbiter of vision", e.g. the business, to justify things in the project may be motivated.
I composed this on my phone on my hour-and-a-half journey to work. I wish it were easier to compose meaningful responses to comments online. It likely puts other people off even replying.
Although that's a bit different, because it usually refers to it happening by accident (you create a prototype with the best of intentions to throw it away later, but that never happens).
This also changes with more contact points to end users. Good discussions about UX can simplify it and uncover e.g. difficult states that need to be handled. If it proves to be non-trivial, a mockup or click dummy also helps.
In general, I find writing a high-level concept and discussing it with stakeholders and developers super crucial for a smooth implementation.
Does this approach work for all software? I’m trying to think of whether I’d trust an autonomous vehicle if the authors followed this practice.
There are methods to testing out designs that point us away from the diagrams and bikesheds. I’m drawing on my own experiences with various levels of formal methods of software construction.
The first bad version need not be production-facing or operating in a situation with the same costs or risks. In fact, ideally it isn’t, and the feedback from the bad versions you iterate on is what actually ends up saving you from releasing a production beta, or something that injures somebody because you thought you spec’d everything in the architectural documents and the test plan, but of course you didn’t.
I’m not saying arch planning has no place, only that it’s not a substitute for iterating on prototypes, and arguing about arch plans is often a waste of time.
Thank you for the clarification. I think I understand what you mean.
Where it makes sense I work from prototypes up refining the specification as I go. Often a decent type system helps in this regard as it works as a loose, informal specification language. However for some problems I find programming languages incapable of expressing sufficiently high-level concepts and turn to model checking to assist me in shooting holes in my ideas.
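For what it's worth, here is the kind of thing I mean by a type system acting as a loose, informal specification: a hypothetical Python sketch (the domain and names are made up for illustration, not from any real project).

```python
# Hypothetical sketch of "types as a loose spec": encode the states an order
# can be in, and what data each state carries, so many invalid combinations
# simply can't be constructed. Names are illustrative only.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Draft:
    items: tuple[str, ...]

@dataclass(frozen=True)
class Submitted:
    items: tuple[str, ...]
    order_id: str          # only exists once the order has been submitted

@dataclass(frozen=True)
class Cancelled:
    order_id: str
    reason: str

Order = Union[Draft, Submitted, Cancelled]

def submit(order: Draft, order_id: str) -> Submitted:
    # The signature itself documents the rule: only a Draft can be submitted.
    return Submitted(items=order.items, order_id=order_id)
```

None of this replaces a model checker for the genuinely hard properties, but it rules out a surprising number of "states that shouldn't exist" essentially for free.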
Either way; a working model seems more useful than arm-chair debates, hand-waving, and glorious boxes with arrows.
It's an argument for vigorous, non-personal dialectic in design and planning meetings. It should be OK to make your best case. And it should be OK to get skeptical questions about your case, or to hear an equivalent case for an alternative.
It's an aggressive and competitive take on what a healthy collaborative environment should look like. I've seen other approaches, but, to be honest, they sort of exchange the aggressive for the passive-aggressive.
A key line in the post:
> ...an even slower way to uncover problems than argument is implementation.
The thesis is that avoiding argument, or doing it in an incomplete or lackluster way, is expensive.
Yes, peers ideally. I actually edited my post to clarify that the ideal is to be a team member of both groups, but then discarded it to keep the comment short.
My question remains whether there are more conflicts with those who have the ultimate say in high-level things or with those who handle or do the implementation.
Another way to put it might be creating a functional, collective "truth." We are really good at X. Y is the future. We are going to do ABC. 1...2...3... Go Team! It doesn't need to be true, but it does need to create a unified mission, an esprit de corps.
92.6% of all leadership advice is about creating this sort of common truth, so that everyone can cooperate: Team building. Mission statements. Voices of authority. Core values. Crisis of leadership. Everyone sweeps the shed. Extreme ownership...
There is a lot of worry these days, largely from the ideological centre, that we can't achieve a (public-political) consensus on "basic truths" anymore. There is no ground to stand on in any kind of divided discussion. What they're referring to is an absence of "basic truths."
Anyway, truths are "expensive." We have 3 scalable, but highly imperfect, ways of creating truth in modern society: democracy, justice & science.
Democracy is supposed to settle debates with the subjective will of the people. But, it's divisive as well as irrational and convinces half the population that the other half is evil.
Science is genuinely good at determining truth, where the truth is scientifically determinable. But, science has also produced pseudo-science. Basically bad science that can answer questions which good science cannot: Freudianism/psychology, Marxism, liberal economics/metaphysics and all the "social sciences" claiming to have discovered the truth about women's issues, colonialism or whatnot.
Law... Well... This is the most ancient and least believable source of truth. Common law and tiered court systems are almost modelled on cognitive dissonance, making sure new truths do not contradict old ones.