0. This idea is bad.
1. This idea is probably bad, but if someone wants to put together a more compelling argument we will discuss it at a future meeting.
2. This idea needs to be more fully developed before we can decide whether it is good or bad.
3. This idea is probably good, but it will remain in backlog limbo until someone makes a compelling argument that it is a priority.
4. This idea is good, and while it is not a high-enough priority to displace our current tasks, we will actively discuss including it when we plan our next sprint/release.
Depending on who you work with, these may need to be gussied up with manager-speak to let people save face or to prevent people from hijacking the agenda to turn the meeting into a brainstorming session. But treating all of them as synonymous with "no" loses useful nuance.
In most companies I've worked at, in order to actually implement an idea you need to prove a few things, whether the person proposing it is the PM, an engineer, or anyone else involved in the product:
1. The idea is technically feasible
2. The idea aligns with the company's business goals
3. The idea is our team's responsibility and cannot be done by another team
4. The idea is more important than the other things our team plans to work on in the future
5. The idea is more time critical than the other things our team is working on now
If any of these cannot be proven, then it goes on the backlog as a P4 and nobody will realistically ever look at it. It's just the reality of corporate software building. There are always 10-50x more ideas than there are staff/time to work on them.
Of course, all five of those can be, and often are, overridden by the Prime Directive:
0. One of the executives (often one of your grand-bosses high up on the totem pole) wants it.
As a PM, I factor this into prioritization. An engineer who is passionate about a product will produce better engineering output, higher morale, and a feeling of being heard. A motivated, bought-in engineering team is important when it comes to building the 'high impact' products.
Prioritization isn’t always black and white.
These qualitative factors matter and shouldn't be ignored. As always, you weigh them against other trade-offs.
I've been a PM, an engineer, and a designer, and this sort of patronizing attitude sucks.
Look, at the end of the day you should be cultivating fellow thought leaders, because when you grow up you learn that your priorities are, more often than not, just your own egotistical nonsense, and wrong. But you have a lot of phrases to cut others down.
Sure there were some buzz words in there... But the actual core of the post wasn't patronising at all. Might need to take a step back and ask why your response to that was so strong.
Some people don't realize the value of something unless you show it to them. It's a risk for sure but honestly it keeps me sane vs trying to get 10 people aligned before starting something and then running out of time.
People will happily take credit for your work after it works.
Yep, an engineer has the power to directly influence the code. This is a strong power.
Sometimes just making a PR is enough and a good convincer in and of itself.
Use it sparingly, of course, and weigh the time it takes to make the argument, but a PR is an artifact just like convincing research, text, or a plot. Code can be part of the argument.
I once worked on an application that integrated with a third party api. The way it did this was with a large and horrible client library that used a separate db to cache the data.
The data was then fetched by the main application and used to rebuild the pages (in the main db) based on this data once a day.
The library had lots of problems, and one day it stopped working. I was tasked with fixing it - we had the source code; it had been purchased from someone and copied into the repo. I spent most of the week, if not more, trying to figure out what was wrong, but I couldn't. What I did learn was that this library was some of the worst, most pointless code I'd ever seen.
So I told the team that I think I can rebuild this thing from scratch faster than I can fix this bug. The intermediate db is pointless and most of the library code is dead, the rest is garbage. I can make a simple little thing that does exactly what we need better, faster and easier.
Nope. No bueno, fix what we have. So I spent a few hours over the weekend, less than a workday, building the new solution. Come Monday I had it pretty much working, a few things needed to be done but it already supported the use case. The pages were built correctly, they had the necessary content but some things were a bit messed up, nothing difficult to fix.
Showed it to the team, said I want to use this and delete the old stuff - nope.
The only half-decent explanation I got was that the client had paid a way too high amount of money for this garbage library and I guess the team lead didn't want to tell them we wanted to throw it out or something like that.
Sigh. I worked at a shop that was spending months waterfalling a frontend to some background API calls. I finally got annoyed enough to spend a weekend actually implementing the thing as a Django app. There. Done.
I got my ass handed to me by management for not going through the proper processes.
I learned something that day: I never want to work somewhere that engineers serve the processes and not the reverse. There are some that are good and necessary: like “thou shalt deploy via CD and not SSH into prod to edit code”. There are others that only exist to serve bureaucracy, and those try my patience.
Yeah, depending on the particulars of a system. If you're at a startup and report to the CTO, that might be perfectly fine in an emergency. At a company with a few million users, almost certainly not. There's a spectrum of possibilities.
In an emergency, that sort of thing even happens at Google (more for their smaller services, and almost always in the form of auto-LGTM hot-fixes bypassing the normal checks rather than actual live-editing of a script or binary, but even that latter thing happens occasionally). There are checks and controls, but an emergency for a billion users is a big deal.
"I spent most of the week, if not more, trying to figure out what was wrong, but I couldn't. What I did learn was that this library was some of the worst, most pointless code I'd ever seen."
I would probably be skeptical if somebody made these statements. You don't know what's wrong, yet you declare the code to be pointless. Maybe you put a good effort into it, but I have heard "this is all crap, we need a rewrite" too many times. Most times they just didn't put the effort into understanding the current code. And the time to get to "pretty much working" is often only a fraction of what it takes to get to "totally done".
The problem was not that I did not understand the code. I understood it just fine; it wasn't complicated, it was just bad and old. All it did was get some data from an api, change it somewhat, and store it in a db. Then a scheduled job would call a method which would get that data, change it a bit more and return it, where it would be changed yet a bit more and stored as pages for the main web app.
There was no reason all these data mutations couldn't have been in one place instead of all over the place. There was no reason to store it in one db then get it from that one just to store it in another db. Someone said the third party api was slow and unreliable but I don't see how that's relevant - if the api is down then you don't get updated data, it doesn't matter if we have outdated data in an intermediate db. We already have that outdated data in our main db and we'll get updates when the api starts working again. During testing I had absolutely no issues with the performance of the api, it transferred all the data we needed in a completely acceptable amount of time, and this was just in a nightly scheduled job anyway so if it had taken a minute that would have been fine as well. But it didn't, it responded in milliseconds. I never noticed any unreliability on their side either, but if it had been unreliable that would have been totally fine. The app just wouldn't have gotten updates until it started responding again. Nothing can solve that problem.
I honestly can't remember what the actual problem was or how I fixed it in the end. The code had been in production for years and only received the minimum necessary amount of changes. Some dependency or something probably broke from years of nobody wanting to touch that huge piece of crap.
But that's not why I say it was bad and pointless. It was bad because whoever wrote it didn't know about libraries for xml parsing and had implemented all of the parsing from scratch with string operations. We're not talking about real parsing here with lexers and tokenizers and stuff. We're talking about what you might expect if you gave a mediocre first year CS student the task of parsing some specific xml. The db interaction was similarly overcomplicated and outdated, and the code itself was sloppy and full of old messes nobody had bothered to clean up.
All it did was get some data, store it and make it available through some method calls, and for that there were like 50k loc, most of which was dead - and most of the code still in use was that monstrosity of a home-rolled xml parser.
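For contrast, here is roughly what the library-based approach looks like. This is a hypothetical sketch - the schema and field names are invented, not from the original app - using Python's standard-library parser instead of hand-rolled string operations:

```python
import xml.etree.ElementTree as ET

# Hypothetical payload; the real third-party API's schema is unknown.
payload = """
<articles>
  <article id="1"><title>Hello</title><body>World</body></article>
  <article id="2"><title>Second</title><body>Post</body></article>
</articles>
"""

def parse_articles(xml_text):
    """Parse <article> elements into plain dicts using the stdlib parser."""
    root = ET.fromstring(xml_text)
    return [
        {
            "id": el.get("id"),
            "title": el.findtext("title"),
            "body": el.findtext("body"),
        }
        for el in root.iter("article")
    ]

print(parse_articles(payload))
```

A dozen lines like these replace thousands of lines of bespoke string slicing, and the parser handles escaping, nesting, and malformed input for free.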
The things left to do on my new solution were trivial. Some of the columns had html tags and stuff like that in them, it just needed to be cleaned out where necessary. Some other stuff needed to be modified a bit. I did not skip it because it was hard, I skipped it because it was tedious and I didn't want to spend all the effort before I got the green light, which turned out to be a great decision because it didn't get the green light. And they probably still to this day waste man-hours on keeping that piece of crap running.
I guess the correct way to present this is something like "I know how to fix this in the short term but we should consider simplifying things because as far as I can tell the current code is much more complex than it needs to be".
I don't know the exact situation but I just wanted to point out not to fall into the "I have looked at the thing for a little while. I don't understand it and I can't be bothered to understand it because whoever wrote it was an idiot. We need a full rewrite with my favorite shiny tool. The rewrite will be easy" trap. I think that triggers a lot of experienced people.
But maybe you are right. That's also very possible.
I've been on both sides of this table multiple times, as the IC and as the Manager of an eager IC. Here's a list of all the reasons why I as your manager would also flat-out say No to this situation. (These are of course heavily tainted by my own recent experience of trying to coach a mid-level dev through a very similar problem)
- "Pretty much working" means all the fun stuff is done and the actual hard thing is left to wrap up. It's a useless estimate that only accounts for your coding work, which is usually the smallest amount of work performed on an integration feature like this.
- It's a rewrite, so we've gotta do a full regression test on every piece of data that thing pulls back. Since it's old functionality it's not fully covered by our automated tests, so this goes to QA. Our QA team is overloaded, so this unauthorized, not-on-the-roadmap project now needs to jockey for priority with things that Marketing is literally making artifacts for _today_.
- "It's already built" isn't really a justification for a priority change, so now I'm in the awkward position of changing priorities for a non-roadmap task and justifying this to every single stakeholder who is respecting the process, or telling you it'll be 2 months minimum before QA can even think about it. Either way no one is happy and now I have to worry about you going rogue again and trying to work channels around me to get this thing shipped out of band.
- It's a full rewrite going through manual QA, so it's nearly guaranteed that critical but undocumented business-rule fixes were missed. Somewhere in that library is a weird function holding up the world, but it was "obviously cruft" and left out. There's a good chance we won't find the issue until it has already polluted a ton of Prod data. That's why I won't let you do Developer QA. You've only been here a year and this service predates you, me, and the rest of the team; we literally have no context.
- If the client finds out we did a full rewrite, they too are going to do a full regression test on their end. Do you know the size of the shitstorm this is going to bring on us? Every single question, problem, feature change, bug, enhancement, communication, _everything_ we went through over the last XX years since we built this integration is going to resurface. I get to re-litigate every. single. thing. "Since you're working on our integration can we get XX, XX, and XXXX?" (each is a sprint's worth of dev time, minimum), "YYY isn't working, did you guys break it again?" (it's always been broken, but now someone gets to spend 3 hours in Datadog pulling logs to prove this).
- I've been using the "Rewrite This Library" and "Refactor That Service" projects as leverage to negotiate for more budget to bring on 2 more headcount so that we could actually do those rewrites with proper time and space. You talking about getting 80% done over a weekend has completely undermined the work I've put into this effort, and at the same time didn't remove the Refactor issue from my backlog. Now I will essentially have to shit-talk you in my own 1-on-1s in order to regain lost ground. "sfn42 is a decent developer but he just doesn't have a lot of context to what's happening outside his role. Needs more time in the oven before he gets the bump to Sr. Maybe I can pull him into more planning meetings so he can start growing in this area" -- congrats you just got invited to 6 hours of meetings a month regarding work you won't perform.
- In 6 months when our team is planning out some future work that's just way too much for the headcount & timeline we have, and you bring up "we could really use another Sr. Dev or two, any word on our headcount request?", I might reply politely with a "still no word if we can pull that off this quarter", but internally I'm wondering if the pain of bringing a new dev up to speed is less than the pain of working with you.
- Lastly, the most petulant reason, you were told No last week. I'm sorry you lost a weekend to this, but a No is a No and I need you to understand that. Other things are happening at this company outside the scope of your purview.
Again, this is all drawn from my own experience. I had a mid-level dev show me a huge refactor he started on the weekend. He was convinced it was almost done, "just a few small things left" is an exact quote. However I knew that this part was literally the smallest bit of the effort. I was seeing at least 3 months of work across 4 departments before it would actually be Done, in Production, and working to our satisfaction.
If I had the space I would normally be just fine letting the young fella just experience that pain. Make him do the scheduling, put him on point for everything, and just let him spin on it for a month or so. I did not have that time and space, so instead we spent a few hours white boarding out the rest of what needed to happen, and thankfully he mothballed his project of his own volition.
This reply exudes professionalism and experience in the real world of development where it's not just code leaping from a developer's fingertips into prod. I was going to reply myself, but you covered nearly everything I was going to. Cowboy Coders, please read it carefully and reflect on it seriously.
You could also ask the developer to write comprehensive documentation and test cases, not only for the new code but also for the older code, to ensure the new one can replicate the bugs higher level systems depend upon.
You have a lot of good points and some of it may have been applicable in my case.
But this was not complicated. I have underestimated refactors before, this was not one of those times. This was a simple little thing, just getting some data from point A to point B. It would have been easy to verify that the new solution generated the same pages (data in db tables) as the old one.
I didn't undermine anyone. I brought it up in a team meeting, I didn't take it to the department head. Sure, I had been told no, but that no was based on the assumption that I was wrong about being able to replace it easily. My weekend coding was simply to prove that it could be done, which I did, whether anyone believes me or not.
I really like your last paragraph. You didn't just say no, you walked through it with them and helped them see the problem. I am convinced that the only real reason this did not go through was that nobody else understood the problem. None of our team members had worked with that particular component - everyone was about as new as myself, and my concerns were dismissed without consideration. Most of all by the team lead, who hadn't written a line of code in decades and had absolutely no concern for code quality. The review mechanic in that team was: push to test, have the lead click through the website to see if it seems to work, push to prod. The lead did not give a shit what the code was like. The quality of their projects reflected that.
We had over a dozen different apps, and pretty much all of them were chock full of bad code written by unsupervised juniors on a tight deadline. All the apps used the same CMS in the backend, but nearly all of them had a different frontend approach because they just let people pick whatever - one day you're working in Vue, today it's React, now it's Angular, here's a Svelte app, this one's just using jQuery, and here's one using vanilla JS. While I was there they let another guy start using a different CMS for another new app - because we didn't have enough problems with all the different JS frameworks already, let's start using different backend frameworks too!
Hardly a single test suite anywhere except what I'd made. Everywhere I looked I found bugs and terrible code. Every task I got, I had to start by figuring out today's flavour of JS framework, then try to understand how some junior using the project to pad their CV with the newest framework had mangled it together into a somewhat usable website, and then work out how to make the changes I needed to make - which, 90% of the time, was 29 times harder than it should have been, because the entire thing was a complete mess hacked together asap and then duct-taped, jerry-rigged and beaten with a hammer periodically over its years of service.
I moved on from that team pretty quickly, and got into a different team much more in tune with my views. About a year later I was talking with two of my old teammates who had been somewhat annoyed with me and all my nagging about testing and code quality back then; at that point they had worked on some of my solutions and felt the benefits of the tests I'd created and the way I organized my code to make it easier to understand and work with. It took me by surprise when they flat out, unprompted, just told me I was right. When I was working there I had a hard time because everyone disagreed with things I considered basic facts; I started to doubt myself. Luckily the next team was already doing all those things I wanted to do and more. Now I know that good code does exist, the methods I advocate do work, I wasn't just imagining things. That other team was just badly managed.
That's not to say everything I've ever done has been gold, I've made bad decisions and learned from them. But I stand by replacing that old integration library and I still don't believe in this "legacy code" mindset where changing some old pile of crap requires buy-in from multiple different stakeholders and so on.
I might get it when we're talking about large, complex, business-critical systems that really do require weeks or months of work just to replace a small part. But what I'm talking about is a small website developed in a matter of weeks that's hardly much more than a glorified pdf, and where the code behind it has absolutely no business being as complex as it is. Even if my suggested change had broken some requirements, they would have been quick to fix because the new code was clear and simple. And the worst-case scenario would probably be some messed-up formatting in a small article that hardly anyone is going to read anyway.
I guess it also depends on the size of the company and how big an existing system you are working within. If you are at a decent-sized company, then there is no such thing as "not a big deal". I posted this[1] a while ago in response to a similar complaint about how difficult it is to just wing it in a big company, and I think it's also relevant to this thread.
I am very thankful that over the last few years I've built out the headroom on our team to chase "shiny" things that we know customers will want and that we (engineering) want but aren't exactly cookie cutter for our usual planning flow.
A lot of my biggest political successes as an engineer are just building something that I know is important and finding someone higher up who has always wanted it done but everyone tells them it's going to take multiple quarters and it never gets planned.
We need to do something; my manager thinks it is too complex and that we do not have the time, and I have not been able to convince him (I am another manager). So yesterday I told my guy: if it takes you X days, just do it and we will tell him later. He will find out after the coup, and post-facto I can always justify it - "oh, we had so many other things going on, we never got to talk about this".
And my goal is to show that its value is more than the effort we spend with the workaround.
In my experience, the more fine-grained an organization's issue tracking/planning is, the more this is a problem rather than a reasonable process.
If you have to convince someone of all of those things in order to build some reasonably large thing over the space of a few weeks, that's probably reasonable.
If you have to convince someone of all of those things in order to allocate a few hours to fixing some tech debt or minor bug then your codebase is going to slowly deteriorate until the same someone is asking you why there's so many bugs and everything takes so long to develop.
Usually most engineers have some slack time and can pick things up to fix. The problem is not in them doing that. The issue arises in one of two ways:
[1] They either use the fact that they are fixing that thing as an excuse to not work on or deliver on time a different, more important task. If that happens, obvious questions about prioritization occur.
[2] They want a substantial amount of credit or recognition for doing it. Usually such fixes don't receive exec attention (since execs are tracking more important projects) and so don't get the same due as a properly tracked project does.
In my experience this aspect is chronically underestimated by devs. The change needs testing. Change Management might need to create artifacts. Help articles might need updating. Certain clients may need a heads-up. All the PMs need to be briefed (this change is outside of their roadmap so it'll be a fun surprise).
As an org grows the piece of the Effort Pie that is Development gets smaller and smaller. It's not that development gets easier, it's that every other part of the process grows in size and importance faster than the Development piece grows.
It takes about an hour of developer time to incidentally produce 20 hours of work elsewhere in the company.
Or you work at a non-software company where the technical folks ultimately report to a non-technical boss, or are outnumbered by non-technical executives. In which case, there's the real danger of a bunch of 0s getting shoved down your collective throat to the tune of "it's all a priority, get it done."
Reminds me of a product owner I had who abused priority categories by insisting that the majority of his tasks were "top priorities" because he had discovered that any time he didn't mark a task as a top priority it wouldn't get done.
Every team ended up sorting his tasks as a flat list so that when he asked people to "make this a top priority" it was up to him to decide where it went in the list and which of his other requests would get bumped.
That reminds me of a technique in the Slow Productivity book by Cal Newport. The gist is: if you cannot control the flow of work, the best thing to do is to create a buffer and make the workload visible, so people can visualize how any new task is going to disrupt it.
Probably the person who came up with the idea remains the most invested in it, and they should watch for number 5, "the right moment", to bring it up again.
If more than one team could reasonably be the owner of something, -and- the ownership isn't going to get a manager promoted, there needs to be a showdown to see which team takes the impact to their roadmap.
At a higher level, would the revenue it could generate be enough to move the needle, and therefore be worth the attention up, down, and across the management chain (to the degree it's a discrete program)?
A good notion of value is that of option value. Not a lot of product managers understand this notion very well. Engineers tend to intuitively get it, but they can't articulate it. You do work now to give yourself the option to do something later. Very simple, really. PMs really don't get this. All this refactoring, over-engineering, and whatever else you want to call it - they'd prefer you not do any of it. But of course engineers understand that those things can pay back later.
Option value is a notion that Don Reinertsen promotes in his Lean 2.0 work (google that and his name - mandatory stuff for wannabe PMs, IMHO). Very simply put, he draws an analogy with stock options. They give you the option to capture some value in the future, but there's a chance they'll be worthless and you'll lose your money. The point is that the payoff can be much larger than the potential loss, which is why they are popular tools for traders. The loss-profit function is non-linear, which means you only need a few of your options to convert to profit to finance all of them. VC capital is based on this notion as well. Most startups are a write-off, but a handful turn into unicorns and pay for the rest.
In product management, option value is the notion that some idea might be worthless but could end up being worth a lot. Doing a lot of work on something that's just not worth a lot is probably wrong. Doing a little bit of work on something that might have a huge payoff is probably smart. Even if it's slightly risky or uncertain. Doing a lot of work on a lot of things that might be valuable like that at the cost of stuff that you actually should be doing is probably not optimal and very risky. But you should be taking some calculated risks at least part of the time just in case. Worst case you lose some time. Best case you create a lot of value in a way that you never planned to. Balancing risks is your job as a PM.
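The asymmetric payoff described above can be made concrete with a toy simulation. The numbers here are invented for illustration (a 5% hit rate and a 50x payoff, nothing from the source): most small bets expire worthless, but the rare hit more than covers the total cost.

```python
import random

def simulate_option_bets(n_bets=1000, cost=1.0, hit_rate=0.05,
                         payoff=50.0, seed=42):
    """Toy model of asymmetric bets: every bet costs `cost` up front;
    a small fraction "hit" and return `payoff`. Returns net profit."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_bets):
        total -= cost                # every bet costs time/money
        if rng.random() < hit_rate:  # rare, outsized payoff
            total += payoff
    return total

# Expected value per bet is 0.05 * 50 - 1 = +1.5, so the portfolio
# of small bets is profitable on average despite ~95% of bets losing.
print(simulate_option_bets())
```

Tweaking `hit_rate` and `payoff` shows the balancing act a PM faces: if the payoff isn't outsized relative to the cost, the portfolio of speculative bets loses money.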
The point here is that if you only do planned things and don't even entertain doing things that might be valuable outside of that scope, you are probably destroying a lot of value and you are placing a risky bet on your plan instead. If the plan is wrong, you might lose everything.
Which is one of Reinertsen's key arguments against the original lean movement. You throw out the baby with the bath water if you do that. Bad idea for startups because you now make a risky bet on your plan being correct. Which of course it often isn't in startups. Pivoting is the notion of being able to turn a bad plan around. That gets easier if you invested in some option value that gives you the option to do so. A lot of unicorns emerged out of the ashes of failed startups.
Big organizations are notorious for being risk averse and not having the internal capability to innovate even after they've identified the need to do so. As soon as management chains get involved, that's what happens.
>hijacking the agenda to turn the meeting into a brainstorming session
As someone who has reason to call technical meetings on a regular basis, I've always had trouble with this. Do you mention 'brainstorming' in the agenda? Do you use a different header?
Most of the meetings I end up having are only vehicles to get necessary players into the same room and engaging in dialogue about a problem with a technical angle.
There are almost certainly different approaches depending on the environment and situation. But, especially if this is a "culture" problem one of the best fixes I've found is to make shorter meetings with a defined agenda, such that you can always pull out a, "I'd love to take this offline, but we need to get back to..."
I've personally found you should almost never need more than 30 minutes unless you specifically want to get into rabbit holes. And if you do need more than 30 minutes, it's probably better to split it into multiple sessions of no more than 30 minutes anyways to prevent this from happening. If you still have this problem at 30 minutes, shave 5 from either side (or both), which you can even use the excuse of giving time to transition between meetings.
That's not to say you shouldn't genuinely allow room for brainstorming, but if you're going to take an entire room of people's time, make sure it's something the room agrees is worth discussing, and find another time to do it instead of getting sidetracked now. If not, offer some 1-on-1 time and move on.
I like turning negatives into positives, as a way of providing constructive feedback. I'm doing product management on the side (I'm the CTO and also do a lot of the engineering) and the key issue I face is dealing with a large influx of good ideas and dealing with that in a way that minimizes my time having to evaluate things in our backlog. Saying no in a constructive way without pissing people off is key to being a good product manager. You need to be firm and decisive. But also very clear.
The way I deal with this is several custom fields in our issue tracker's kanban board that qualify any idea, no matter how good, crazy, infeasible, etc.
The most important ones are:
- Value & Effort (one field): this is a quadrant of High/Low, High/High, Low/Low, Low/High. It's a reflection of what we would get out of doing something with the idea. High value and Low effort means you need a good reason not to kick an idea further. Low value and High effort pretty much means a no, unless there's a really good reason. Anything in between can be decided on a case-by-case basis. I like the Low effort ones. They may not be that valuable, but sometimes they are nice to do, and you can just squeeze them in.
- Next Step: This is a range of values that provide me an indication of when I should look at it again. Some things need to be fast tracked. Some things need more discussion/elaboration. And the rest is stuff that we might revisit or reject right now. I don't tend to spend time on rejected ideas unless somebody brings those to the table again. Things that linger too long without being actioned are going to end up labeled as rejected. Which just means they stop consuming my time.
I have a few other fields (tags, priority, etc.) that are a bit more standard. But those two are the primary tools I use for deciding where to bucket ideas and how often to look at them. I should spend more time on high value ideas than on low value ones. And I should prioritize actioning items that have a next step that says I should do so. Anything else I can safely ignore indefinitely. If somebody doesn't agree, we can always discuss and change it. But there needs to be a good reason.
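The two fields above amount to a small triage function. This is a hypothetical sketch (the bucket names and `Idea` type are mine, not from any particular tracker):

```python
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    value: str   # "High" or "Low"
    effort: str  # "High" or "Low"

def triage(idea: Idea) -> str:
    """Map the Value & Effort quadrant to a default next step."""
    if idea.value == "High" and idea.effort == "Low":
        return "fast-track"    # need a good reason NOT to do it
    if idea.value == "Low" and idea.effort == "High":
        return "reject"        # a no, unless there's a really good reason
    return "case-by-case"      # discuss, revisit, or let it age out

print(triage(Idea("add export button", value="High", effort="Low")))  # → fast-track
```

The point of encoding a default like this is that only the "case-by-case" bucket consumes decision-making time; the other two quadrants resolve themselves unless someone argues for an exception.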
This isn't perfect, but it's a good mental model for dealing with incoming ideas in a somewhat structured way. I hate overly long and poorly organized backlogs because they suck up a lot of time and energy without delivering much value. And the longer and more unwieldy they get, the less likely it is anything will happen. I sometimes refer to these things as idea shredders. Good stuff goes in, and then nothing happens.
Anyway, I'd be curious to learn what others are doing with their backlogs.
"Add it to the backlog for review; we're not saying it'll be done, but it will at least be looked at and considered when we have bandwidth."
Just be direct and realistic. If it's to a customer, "we'll add it to our backlog for review" and tag it as customer suggestion so it doesn't just sit there forever.