Don't disagree with the article, but to play Devil's Advocate, here are some examples of when IME the cost IS worth it:
1) there are old 3rd party dependency incompatibilities that you can spin off and let live separately instead of doing a painful refactor, rebuilding in house, or kludgy gluing
2) there are deploy limitations on mission-critical, highly available systems that shouldn't hold up deployments of other systems with different priorities or sensitive business-hours windows
3) system design decisions that cannot be abstracted away are at the mercy of some large clients of the company that are unable or unwilling to change their way of doing things - you can silo the pain.
And to be clear, it's not that these things are "cost free". It's just a cost that is worth paying to protect the simpler monolith from becoming crap encrusted, disrupted with risky deploys, or constrained by business partners with worse tech stacks.
Wouldn't this just be "having one or two services"? I don't think that's the same as "microservices".
Correct me if I'm wrong, but isn't "microservices" when you make internal components into services by default, instead of defaulting to making a library or class?
This is why I prefer the old term "service oriented architecture" over "microservices". "Microservices" implies any time a service gets too big, you split it apart and introduce a network dependency even if it introduces more problems than it solves.
It's a pretty common issue. If you have 2-3 services, it's pretty easy to manage. And if you have 1000, you likely have the infra to manage them and get the full benefit.
But if you have 20 engineers and 60 services, you're likely in a world of pain. That's not microservices, it's a distributed monolith, and it's the one model that doesn't work (but everyone seems to do it)
A "microservice" solves scaling issues for huge companies. If you have 60 microservices, you should probably have 600 engineers (10 per) to deal with them. If you're completely underwater and have 10 services per engineer, you're 100% absolutely play-acting "web-scale" for an audience of really dumb managers/investors.
With proper devops tooling and a half decent design, even a junior engineer can manage several microservices without issues. Since microservices are about scaling people as much as they are about scaling tech, 10 people in one service is a lot to me in that world.
The best company I worked at had about 5-10 deployables per engineer on average and it worked really well. They were small, deployed almost instantly, dependencies were straightforward, etc.
Monoliths work fine too, it's just different tradeoffs.
I ended up getting into a few arguments at work with the over-excited engineer in my last place. He wanted microservices. I said it was just going to add complexity. The app was already a mess, adding network calls rather than function calls wasn't going to help. We had a small team - 3 backend devs (one of them doing mostly devops) and two frontend.
It's not clear to me what you mean by dealing here. Do you mean developing? If so, I completely agree. If you mean deployments, a small number of engineers can manage hundreds of them easily.
It depends on how you do it. We have 5 engineers and around 50 services and it’s much easier for us to maintain that than it was when it was monolith with a couple of services on top.
Though to understand why this is, you would have to know just how poorly our monolith was designed. That’s sometimes the issue with monoliths though, they allow you to “cut corners” and suddenly you end up with this huge spiderweb mess of a database that nobody knows who pulls what from because everyone has connected some sort of thing to it and now your monolith isn’t really a monolith because of it. Which isn’t how a monolith is supposed to work, but is somehow always how it ends up working anyway.
I do agree though, that the “DevOps” space for “medium-non-software-development” IT departments in larger companies is just terrible. We ended up outsourcing it, sort of, so that our regular IT operations partner (the ones which also help with networking, storage, backups, security and so on) also handle the management part of our managed Kubernetes cluster. So that once something leaves the build pipeline, it’s theirs. Which was surprisingly cheap by the way.
I do get where you’re coming from of course. If we had wanted to do it ourselves, we’d likely need to write “infrastructure as code” that was twice the size of the actual services we deploy.
> designed. That’s sometimes the issue with monoliths though, they allow you to “cut corners”
I find this hard to relate to: the idea that you'd have the discipline and culture to do microservices well when you couldn't manage it with a monolith.
More likely is you migrate away from the monolith you never invested in fixing, and once you get to microservices you either call it a mistake and migrate back, or you have to eventually finally invest in fixing things.
Perhaps your microservice rewrite goes well because you now know your domain after building the monolith, that is another option.
With the microservice architecture it’s easier to lock things down. You can’t have someone outside of your team just give access to a dataset or similar because excel can’t get a connection directly into your DB. Which is an argument you could rightfully make for monoliths, except in my experience, someone always finds a sneaky way into the data on monoliths, but it’s too hard for them to do so with MicroServices.
If you gave me total control over everything, I’d probably build a couple of monoliths with some shared modules. But every time the data is centralised, it always somehow ends up being a total mess. With MicroServices you’ll still end up with a total mess in parts of the organisation, but at least it’ll be in something like PowerBI or even your datawarehouse and not directly in your master data.
Or to put it differently, for me MicroServices vs monoliths is almost completely an organisational question and not a technical one.
It's not like microservices don't also give you chances to mess your data up. It's hard to do transactions across boundaries, you have to deal with eventual consistency, sometimes there is no single source of truth.
I struggle to see how microservices fix this for people; having worked primarily with them for the past 6 years.
The thing with microservices is that shit doesn't have to infect everything. If someone in another team is clueless, they'll mess up their microservices but not anyone else's. If they are in a monolith, it's 50/50 based on how much clout they have (either they mess it up for everyone, or they get talked down and don't get to mess up their stuff either).
Unless it's a shared library, which is why good microservices architecture limit the shared surface as much as possible.
> it's 50/50 based on how much clout they have (either they mess it up for everyone, or they get talked down and don't get to mess up their stuff either
This still happens with microservices though; people can still make terrible architecture decisions and standup terrible services you depend on.
I have worked with a company that had around 8 developers and 30 'microservices'. They wanted the front end team (fully remote, overseas, different language, culture) to go micro front end. They are awesome at presentations and getting funded tho. A common theme in European startups.
A distributed monolith isn't defined by how many services you have; a better question is how many services you need to redeploy/update to make a change.
Yes, by the time you get to thousands of services you hopefully have moved past the distributed monolith, if you built one.
I don't want to get too tied up in the terminology, but "microservices-first" does not seem to be the problem the post is describing:
One way to mitigate the growing pains of a monolithic backend is to split it into a set of independently deployable services that communicate via APIs. The APIs decouple the services from each other by creating boundaries that are hard to violate, unlike the ones between components running in the same process
Without getting tied up in the whole "every language except assembly, Python and possibly JavaScript already solved this problem by forcing people to adhere to module-level APIs" argument, I think the crux of the issue is that the article just defines microservice architecture as any architecture consisting of multiple services, and explicitly states "there doesn’t have to be anything micro about the services". Which imho is watering down the term microservices too much. You don't have microservices as soon as you add a second or third service.
I think we should start calling this Pendulum Blindness.
We just go from 'one' to 'too many' as equally unworkable solutions to all of our problems, and each side (and each person currently subscribed to that 'side') knows the other side is wrong. The assumption is that this means their side is right, instead of the reality, which is that nobody is right.
The moderates are right, but their answers are wiggly and they're too busy getting stuff done to argue with the purists. But the optics are bad so here we go again on another swing of the swingset.
'Some Services' is probably right, but it varies with domain, company size, and company structure (Conway's Law). And honestly developer maturity, so while 7 may be right for me and my peers today, in a year it might be 6, or 9. There's no clever soundbite to parrot.
Introducing the Revolutionary "Ten Service Applications" – because Ten is the Magic Number!
Tired of the endless debates about how many services your applications should have? Frustrated with the constant struggle to find the "Goldilocks" number of services? Look no further! The future of software design is here, and it's as easy as 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10!
The "Ten Service Applications" model is here to rescue you from your software design woes. We're not messing around with random numbers like 7 or 12 services. No, we've cracked the code, and it's all about that perfect "ten." You don't need any more services, and you definitely don't need any less. Ten is the answer to all your architectural problems!
So, are you ready to embrace the simplicity, predictability, and coolness of the "Ten Service Applications" model? Join the revolution today and experience software development like never before!
Act now, and we'll even throw in a bonus "Top 10 Services" list to inspire your next project. But remember, you only get 10 services, no more, no less—because why mess with perfection?
You might like "Software Architecture: The Hard Parts." Though you already describe some of the points of the book. There isn't a magic bullet and every decision to split something apart or which parts to combine has various trade-offs.
The book isn't perfect. The use of afferent and efferent terminology and some of the arbitrary methods to put numbers on decisions weren't ideal. Most of the concepts are sound. The fact that almost every decision has cost/benefit and real world implications for a living product was refreshing. That a monolith can't be cut over instantly with zero effort to a perfect system is absolutely true.
It's good food for thought for anyone considering slicing up a monolith, but maybe don't follow it to the letter.
Exactly, just use well-factored services of any size and smack anyone saying "micro..." with a wet towel for they are just parroting some barely-profitable silicon valley money sinks.
For some reason, most of the people I've worked with recently are either fully into monoliths or lots of fine grained, interdependent microservices.
They don't seem to understand there's a useful middleground of adding fewer, larger data services, etc. It's like SOA isn't a hot topic so people aren't aware of it.
Actually, you are wrong. Microservices are surely not about defaulting to new microservices, but about capturing a specific context in one service. There is no rule about how big a context is. A context can contain other contexts. There can be technical reasons to split deployments into different microservices, but that's not the norm.
What you describe is what happens, when people get microservices wrong.
In the end, I like the viewpoint that microservices are a deployment pattern, not so much an architecture pattern. Usually, you can draw a component diagram (containing an OrderService and a DeliveryService, etc.) and without technical details (execution environment, protocols), you couldn't tell if it's describing multiple microservices or multiple components in one service.
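To illustrate with a minimal sketch (the interfaces and classes below are invented for illustration, not from any real codebase): an OrderService depending on a DeliveryService interface looks identical at the component level whether the implementation lives in the same process or behind a network call.

    // Illustrative only: the component diagram is the same for both wirings.
    interface DeliveryService {
        void schedule(String orderId);
    }

    // "Monolith" wiring: just another component in the same process.
    class LocalDeliveryService implements DeliveryService {
        public void schedule(String orderId) { /* plain in-process call */ }
    }

    // "Microservice" wiring: a thin client for a separately deployed service.
    class RemoteDeliveryService implements DeliveryService {
        public void schedule(String orderId) { /* e.g. POST /deliveries over HTTP */ }
    }

    class OrderService {
        private final DeliveryService delivery;
        OrderService(DeliveryService delivery) { this.delivery = delivery; }

        void placeOrder(String orderId) {
            delivery.schedule(orderId); // identical from the caller's point of view
        }
    }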
Being able to easily use different programming languages. Not every language is a good fit for every problem. Being able to write your machine learning inference services in Python, your server-side rendered UI in Rails and your IO- and concurrency-heavy services in Go might justify the additional overhead of having separate services for these three.
Yes, but the choice to add a new programming language to your company's profile has to be taken with care and due diligence; you should make sure to have a number of developers that know the new language, offer training, incorporate it into your hiring, etc. It's an added node to your dependency graph, which can quickly become unmanageable.
You should always look into existing languages first. There's a lot of "I rewrote this thing into $language for a 100x performance boost" posts, in a lot of cases the comments are like "Yeah but if you rewrote it in the original language you could make it a lot faster too".
These are almost never pragmatic decisions. Giving teams independence over the stack usually results in resume-driven development, and now your JS developers are forced to maintain a Go server because some jock thought it was a cool thing to do.
I've seen this play out for reals at a place a few years ago. Every team used a different tech, and all of them selected because of resume-driven development. People moving teams to get a particular tech on their resume. No common method for deployment, and endless issues getting things deployed. Everyone's a newbie because we were all learning a new, cool, stack. And everyone making stupid newbie mistakes.
Never again. When I build teams forevermore, I pick the tech stack and I recruit people who know that tech stack, or want to learn that tech stack. And we stick to that tech stack whenever possible.
The worst part about this type of org (speaking as an SRE / DBRE) is that they also don’t do SRE correctly, so these piles of shit get yeeted into prod, fall apart, and then the already over-burdened SREs and DBREs have to figure out how it works so they can fix it.
I didn’t know how to troubleshoot Node, but I did know how to read docs. Suddenly I can troubleshoot Node. Hooray.
I concur with your sentiment, and would also be an absolute dictator for tech stack and workload management. We’re not using Node, we’re avoiding React if at all possible, every dev touching the DB in any way will know SQL, and no one is using Scrum.
Indeed, I am also an advocate of having the organization define a specific set of languages.
Even with just one, it isn't really a single one.
To pick on JS as an example, not only is there JavaScript to learn, TypeScript might also be part of the stack, then there is the whole browser stack, and eventually the node ecosystem as well.
Take the remaining UNIX/Windows related knowledge to get development and deployment into production going, SQL and the related stored procedure syntax for RDBMS backends.
Eventually the need to know either C or C++, to contribute to V8 native extensions or WASM.
Now those folks need to learn about Go, Go's standard tooling, Go's ecosystem, IDE or programmer's editor customizations for Go code, and how to approach all the development workflows they are comfortable with from the point of view of a Go developer.
I don't believe a team should have that much say in their technology, unless they themselves are also responsible for hiring, training, etcetera - so it kinda depends on how autonomous a team is.
That said, as the article also mentions, "micro" can be a bit of a misnomer; you could have a 50 person team working on a single "micro" service, in which case things like hiring and training are much more natural. (to add, I've never worked in a team of 50 on one product; 50 people is A Lot)
There are many ways to architect well that don't involve prematurely introducing microservices.
I’m a fan of microservices btw.
Premature optimization and scaling is almost as bad of a form of technical debt as others when you have to optimize in a completely different manner and direction.
Because of dependency issues like he mentioned. If I am using Library A which depends on version 1 of Library C and I need to start using Library B which depends on version 2 of Library C then I have a clear problem because most popular programming languages don't support referencing multiple different versions of the same library.
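A hypothetical sketch of that clash, with invented library and class names: on a flat Java classpath only one copy of a class can be loaded, so whichever version of Library C "wins", the code compiled against the other version breaks at runtime.

    // Hypothetical: Library A was compiled against libC 1.x, Library B against
    // libC 2.x, but only one com.example.libc.Parser can exist on the classpath.
    import com.example.libc.Parser;

    public class Demo {
        public static void main(String[] args) {
            Parser p = new Parser();
            p.parse("order.xml"); // the 1.x API that Library A expects

            // The 2.x API that Library B expects; if the 1.x jar won the classpath
            // race, this call dies at runtime with NoSuchMethodError:
            // p.parse("order.xml", java.nio.charset.StandardCharsets.UTF_8);
        }
    }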
Too few developers use the facilities available for that kind of in-process isolation, even when it is possible. (Don't tell me Java isn't popular... It may be the new COBOL, but it's still mainstream.)
Good show for mentioning that. If it becomes commonplace for popular runtimes and languages to be able to load modules and their dependencies at that level, then a lot of arguments for service encapsulation go away.
I think in these discussions, a lot of times people are talking past one another. If I start putting a JRE-targeted application together, I know I can eventually reach for isolated modules if I follow good internal practices, whereas if I'm in Python land it's pretty unattainable.
To be fair the problem is that it's not baked into the language.
If you have to do X,Y,Z to get A done, then you'll do X,Y,Z.
If your language gives you a shortcut and now you can get away with X,Y then you'll drop Z.
And even if your language doesn't give you those shortcuts, if Cool Language lets you do just X or X,Y, you'll naturally want to use Cool Language. So, it's a losing game...
KISS is great when applied correctly, but you have to be able to know the level of complexity you'll actually need for the problem at hand, and it is so so very easy to over or under engineer something...
Hold on - the way you are phrasing this implies that Java Modules allow you to solve the version problem mentioned above. As in, your comment implies that Java Modules have a concept of version, and thus, allow you to pick the correct version of a dependency when defining relationships.
Java Modules explicitly DO NOT have that ability. In fact, the creators of modules went out of their way to discourage the idea of using modules this way. They literally introduced a warning that flags your module if you add a number at the end of it - specifically so that they can discourage the concept of introducing versions into the module system.
The way Java Modules (and Java Classpath before it) works is like this - if you have a module ABC, then that is the only version of ABC as far as Java is concerned. The correct version should be decided upon and provided, but once you hand that version over to Java and press compile/link/run/etc, the concept of version is not existent anymore, as far as Java is concerned.
To better explain this, every Java class has a unique identifier -- the module name + the package name + the class name. That's it. So, if I have version 1 of ModuleName.PackageName.ABC and version 2 of ModuleName.PackageName.ABC, there is no way for Java to disambiguate them, and thus, will throw a compilation/linking/runtime error, saying that you have 2 versions of ABC.
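A minimal sketch of how that looks from the consumer's side (module names invented for illustration): the module declaration only ever carries a name, so there is nowhere to say which version you want, and two jars claiming the same module name can't both be resolved.

    // module-info.java for a hypothetical consumer. If both libc-1.0.jar and
    // libc-2.0.jar declare "module com.example.libc", resolution fails before
    // any class is loaded (roughly: "Two versions of module com.example.libc
    // found"). Nothing in this file can express "I want the 2.x one".
    module com.example.app {
        requires com.example.libc;
    }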
And to further expand on the warning point above, some clever developers tried to work around this by putting the version number in the name somehow (for example, module name = ModuleName1). To firmly discourage this behaviour, the Java developers who made modules released the abovementioned warning, so that this problem could be nipped in the bud.
To summarize, dependency versioning is a problem that Java (currently) does not attempt to solve. It's considered an extra-linguistic concern that is left to the ecosystem to solve (which you should interpret it to mean, they're letting Maven/Gradle/etc deal with this problem (for now)).
Finally, a few members of the Java team are giving some thought to maybe dealing with this problem in the language, maybe with a build tool. There is absolutely 0 confirmation that this will even be given serious effort, let alone released as a feature/tool. But the problem is definitely being considered by some members of the Java Team. In fact, a user on this site (pron) may be able to give some helpful context on this. This is my first comment on this site, and I don't know how to use it, so someone else can try and link him.
Your dependencies might be tied to different JVM versions. Less likely, but it happened a few times, especially during the 8 to 9+ transition. You would've been stuck with your entire system on Java 8 for ages.
Also JNI won't be isolated which occasionally might be a factor (more likely for stability).
On the other hand one could make colocated multi-process services to work around this while avoiding all the other complexities of microservices.
OSGi is the one that's deprecated and Java Modules is the one that can't actually provide that functionality yet, right? Or is it the other way round? Either way you get the point.
> Java Modules is the one that can't actually provide that functionality yet, right?
You are correct. As of now, Java Modules have no way to disambiguate versions of the same dependency. This is intentional and by design.
As far as Java is concerned, each class is uniquely identified by ModuleName.PackageName.ClassName. So, if there are 2 versions of the same class that have the same identifier, Java will give you an error at compile/link/run-time.
And if you try to be clever and slap a number at the end of the module name (in hopes of side-stepping this), Java will throw a warning at you, saying that you are likely trying to misuse modules by trying to use them to do dependency management.
That's right. OSGi predated the Java Module system, and in many cases, informed its design. I think it's unfortunate that when they were including the Module system into Java, they didn't just import OSGi whole.
OSGi has an in-VM service registry which allows late binding of "services" by interface. This means you can do a lot of sophisticated mixing and matching of capabilities with OSGi, including the ability to load and use incompatible versions of the same library (if need be) in different parts of your code.
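For anyone who hasn't touched OSGi, a minimal sketch of that registry using the standard org.osgi.framework API (the service interface and implementation here are invented for illustration):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    interface GreetingService { String greet(String name); }

    class DefaultGreetingService implements GreetingService {
        public String greet(String name) { return "Hello, " + name; }
    }

    public class Activator implements BundleActivator {
        @Override
        public void start(BundleContext ctx) {
            // Publish an implementation under its interface...
            ctx.registerService(GreetingService.class, new DefaultGreetingService(), null);

            // ...and bind to it late, by interface, at runtime. The consumer never
            // names the providing bundle, which is what allows the mixing and
            // matching (including side-by-side library versions) described above.
            ServiceReference<GreetingService> ref = ctx.getServiceReference(GreetingService.class);
            GreetingService svc = ctx.getService(ref);
            svc.greet("OSGi");
        }

        @Override
        public void stop(BundleContext ctx) {
            // Services registered in start() are unregistered when the bundle stops.
        }
    }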
More than a decade ago, I built an app server system that used an OSGi base for server-side products and agent systems. I even had a flavor of JBoss running on OSGi (with dramatically faster start up times) before JBoss went OSGi on its own.
But now my teams do their work in Node. Isolation is by process, hopefully, maybe, and we're right smack back in version hell.
Can you actually use them to use two different versions of library C in the same application yet though?
> Eclipse keeps using OSGi just fine
Wasn't the impression I had the last time I tried to fix a bug in an eclipse plugin that touched on the OSGi parts. The codebase felt like a ghost town and I couldn't find any documentation for how it all worked or anyone who knew about it.
You’re not wrong here, but usually this comes down to having one or a small number of version-specific container processes (and I do not mean container in the Docker sense) to host those functions. Not microservices.
That said, almost always, if you are seriously trying to keep old unsupported dependencies running, you’re violating any reasonable security stance.
I’m not saying it never happens, but often the issue is that the code is not understood and no one is capable of supporting it, and so you are deep in the shit already.
If you have two separate processes running in two separate containers... those are two separate services. You need to solve the same problems that would come with running them on two different EC2 instances: what's the method for communicating with the other container? What happens if the other container I'm calling is down? If the other container is down or slow to respond to my API calls, am I dealing with backpressure gracefully or will the system start experiencing a cascade of failures?
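Even the most basic version of "talk to the other container" forces those choices. A rough sketch with the JDK's built-in HttpClient (the hostname, endpoint and fallback are placeholders): without explicit deadlines and a degraded path, a slow peer stalls its callers and the cascade starts.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class OtherContainerClient {
        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2)) // don't wait forever to connect
                .build();

        String fetchStatus() {
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://other-container:8080/status"))
                    .timeout(Duration.ofSeconds(2))    // per-request deadline
                    .build();
            try {
                return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                // Peer down or slow: degrade here instead of letting the stall
                // propagate upstream. (Real systems add retries, circuit breakers, etc.)
                return "unavailable";
            }
        }
    }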
In the Java world, we've developed record/replay across microservice boundaries, so it's effectively possible to step from a failure in one microservice and into the service that sent it bad data.
More moving parts. A network call can fail in more ways than a function call. Also, something no one has mentioned until now: the more "services" you have, the more of a pain in the arse it is to get a development environment running.
The right time to extract something into a separate service is when there's a problem that you can't tractably solve without doing so.
Increasing architectural complexity to enforce boundaries is never a solution to a lack of organizational discipline, but midsize tech companies _incessantly_ treat it like one. If you're having trouble because your domains lack good boundaries, then extracting services _is not going to go well_.
"The last responsible moment (LRM) is the strategy of delaying a decision until the moment when the cost of not making the decision is greater than the cost of making it."
Still trying to unlearn that one. Turns out, most decisions are cheap to revert or backtrack on, while delaying them until Last Responsible Moment often ends in shooting past that moment.
Depends who you are, mostly the ones making the decisions aren't usually listening to their developers (maybe by choice maybe because they are at the whim of a customer), so their cost functions are calibrated towards course changing being more expensive than less.
By the time your devs are saying "this sucks" you've long overshot.
That's a good point. The (estimate of the) cost function is key, it determines whether delaying decision is better or worse than making it eagerly and reverting if it turns out to be wrong. You give a good case for when delaying is a better choice.
In my case however, I ended up applying the "LRM" strategy to my own work, where I'm both the decision maker and the sole implementer. This is where I see my mistake. In my defense, the software development books that argued for delaying decisions did not warn that this applies to larger decisions in projects developed by teams, and may not apply to small-scale design decisions made by an individual contributor or a small team in the scope of a small piece of work. It took me way too long to realize that, for most of my day-to-day choices, the cost function is pretty much flat.
We're moving out of business context and to more general one, as I have more experience there - I learned the idea of postponing decisions from software design, and mistakenly started to apply it to my life in general :).
It's absolutely true that postponing a decision lets you take advantage of more data and experience. But making a decision now also generates new data - and often does it better and faster: you're directly observing how your choice plays out in the real world. For cheaply reversible decisions it means that, when things go bad, you can go back and make a new decision, this time informed by knowledge of what went wrong with your previous choice. If you squint, it almost looks like time travel :).
Not every decision is going to be like that, but e.g. in software design, I would often delay deciding between several possible approaches to seek out more information (and/or wait for it to come from customer side), only to later realize that in that time, I could've prototyped most or all of the options, or I could've picked one and made progress and undo it fast when more information came - and that either of those two approaches would've let me to gain the missing data faster, while also keeping forward momentum.
There's a reason many programmers (myself included) need to repeatedly hear the mantra that goes: "make it work, then make it right, then make it fast". In my case, postponing decisions is often a kind of analysis paralysis, and I'm slowly learning that in many cases, it's better to just pick any option, "make it work" in the dumbest, most straightforward way possible, and then reevaluate.
But as I said, I am learning this slowly. My mind knows better, but my heart still wants to delay by default :).
Agree with all of that, and want to add a shoutout to the Cult of Done [0].
Very few things need to be perfect, or right first try. Waiting until the last possible moment is a version of perfectionism that is often counter-productive. Trying something out before it's needed gives room to experiment and discover new approaches.
> Increasing architectural complexity to enforce boundaries is never a solution to a lack of organizational discipline,
And yet we do this all the time. Your CI/CD blocking your PRs until tests pass? That's a costly technical solution to solve an issue of organizational discipline.
That's technical, and not architectural. I'm _all about_ technical solutions to lack of discipline, and in fact I think technical and process solutions are the only immediate way to create cultural solutions (which are the long-term ones). I'd even consider minor increases to architectural complexity for that purpose justifiable - it's a real problem, and trading to solve it is reasonable.
But architectural complexity has outsized long-term cost, and service-orientation in particular has a _lot_ of it. And in this particular case, it doesn't actually solve the problem, since you _can't_ successfully enforce those domain boundaries unless you already have them well-defined.
Can you explain the salient distinction between a "technical" versus "architectural" solution? Candidly, I'm not convinced that there is one.
> But architectural complexity has outsized long-term cost
As do technical solutions, of course. CI/CD systems are very expensive, just from a monetary perspective, but also impose significant burdens to developers in terms of blocking PRs, especially if there are flaky or expensive tests.
> And in this particular case, it doesn't actually solve the problem, since you _can't_ successfully enforce those domain boundaries unless you already have them well-defined.
Ignoring microservices, just focusing on underlying SoA for a moment, the boundary is the process. That is an enforceable boundary. I think what you're saying amounts to, in microservice parlance, that there is no way to prevent a single microservice from crossing multiple bounded contexts, that it ultimately relies on developers. This is true, but it's also just as true for good monolithic designs around modules - there is no technical constraint for a module to not expand into domains, becoming cluttered and overly complex.
Microservices do not make that problem harder, but SoA does give you a powerful technical tool for isolation.
> Can you explain the salient distinction between a "technical" versus "architectural" solution? Candidly, I'm not convinced that there is one.
Not concisely in the general case, but in this case the difference is fairly straightforward - CI/CD doesn't affect the structure of your executing application at all, only the surrounding context. I don't want to spend the hours it would take to characterize architecture as distinct from implementation, but the vast number of textbooks on the topic all generally agree that there is one, though they draw the lines in slightly different places.
> I think what you're saying amounts to,
Very much no - my point is about the process of implementation. The services.. _do_ enforce boundaries, but the boundaries they enforce may not be good ones.
In order to successfully extract services from a monolith, you have to go through a process that includes finding and creating those domain boundaries for the domain being extracted. If it's your first time, you might be doing that implicitly and without realizing it's what you're doing, but under the hood it's the bulk of the mental effort.
The part where you actually _introduce a service_ can be anywhere from a tenth to half of the work (that fraction varies a lot by technical stack and depending on how coupled the domain in question is to the central behavioral tangle in the monolith), but by the time you've gotten the domain boundary created you've _already solved_ the original problem. Now you're facing a trade of "extract this well-factored domain out to a separate service application, to prevent its now-clear boundaries from being violated in the future". And I contend that that's a trade that should rarely be made.
Which is exactly why the real challenge around (micro)services is to a large extent an organisational one, mixed with the question of which boundaries belong to which business capabilities.
The technical part of services is easy enough really but if the organisation leaks its boundaries, which always flow downstream into the actual teams, you are proper fucked.
It takes a different level of maturity, both in organisation and team, to build a service oriented software.
Right! What I'm fundamentally saying is that the majority of orgs trying to adopt SOA are doing it for the wrong reasons, and will deeply regret it. In general, the "right way" to adopt SOA is to extract services one at a time because you have to, to solve various problems you've encountered (scaling problems, technical problems, stability issues), and then realize that you are in fact currently using a service-oriented architecture, and have been for several years.
I would hope that there is more process in place protecting against downtime than code review - for example automated tests across several levels, burn-in testing, etc.
People are not reliable enough to leave them as the only protection against system failure...
Did you mean to reply to somebody else? I'm a huge believer in automated testing, and if I said something that can be interpreted otherwise I'd like to clarify it.
I guess the GP's issue is because automated tests (and every other kind of validation) imposes architectural constraints on your system, and thus are an exception to your rule.
I don't think that rule can be applied as universally as you stated it. But then, I have never seen anybody breaking it in a bad way that did also break it in a good way, so the people that need to hear it will have no problem with the simplified version until they grow a bit.
Anyway, that problem is very general of software development methods. Almost every one of them is contextual. And people start without the maturity to discern the context from the advice, so they tend to overgeneralize what they see.
Hm. I think maybe you're using "system" to mean a different thing than I am? I'm thinking of "the system" as the thing that is executing in production - it provides the business behavior we are trying to provide; there is a larger "system" surrounding it, that includes the processes and engineers, and the CI/CD pipelines - it too has an "architecture", and _that_ architecture gets (moderately) more complex when you add CI/CD. Is that where our communication is clashing?
Because the complexity of that outer system is also important, but there are a few very major differences between the two that are probably too obvious to belabor. But in general, architectural complexity in the inner system costs a lot more than it does in the outer system, because it's both higher churn (development practices change much slower than most products) and higher risk (taking production systems offline is much less permissible than freezing deployments)
> I think maybe you're using "system" to mean a different thing than I am?
No, I'm not. Are you overlooking some of the impacts of your tests and most of the impact of static verification?
Those do absolutely impact your system, not only your environment. For tests it's good to keep those impacts at a minimum (for static verification you want to maximize them), but they still have some.
I don't think I'm overlooking any major ones, but we are probably working in quite different types of systems - I'm not aware of any type of static verification I'd use in a rails application that would affect _architecture_ in a meaningful way (unless I would write quite terrifying code without the verifier I suppose).
I'm not sure about the tests - it depends on what you'd consider an impact possibly; I've been trained by my context to design code to be decomposable in a way that _is_ easily testable, but I'm not sure that is really an 'impact of the tests' (and I'm fairly confident that it makes the abstractions less complex instead of more).
Would you mind explaining what you mean in more detail?
No, not at all. CI/CD blocking pull requests is in place because large systems have large test suites and challenging dependencies which mean that individual developers literally can't run every test on their local machine and can often break things without realising it. It's not about organisational discipline, it's about ensuring correctness.
I can run every test on my machine if I want. It would be a manual effort, but wouldn't be hard to automate if I cared to try. However it would take about 5 days to finish. It isn't worth it when such tests rarely fail - the CI system just spins them off to many AWS nodes and gets me results in a few hours (some of the tests are end-to-end integration tests that need more than half an hour), and if something fails I run just that test locally.
Like any good test system I have a large suite of "unit" tests that run quick that I run locally before committing code - it takes a few minutes to get high code coverage if you care about that metric. Even then I just run the tests for x86-64, if they fail on arm that is something for my CI system to figure out for me.
The other problem is that these self-imposed roadblocks are so engrained in the modern SDLC that developers literally cannot imagine a world where they do not exist. I got _reamed_ by some "senior" engineers for merging a small PR without an approval recently. And we're not some megacorp, we're a 12 person engineering startup! We can make our own rules! We don't even have any customers...
Your 'senior' engineer is likely right: they are trying to get some kind of process going and you are actively sabotaging that. This could come back to haunt you later on when you by your lonesome decide to merge a 'small PR' with massive downtime as a result of not having your code reviewed. Ok, you say, I'm perfect. And I believe you. But now you have another problem: the other junior devs on your team who see vrosas commit and merge stuff by themselves will see you as their shining example. And then they, by their lonesomes, decide to merge 'small PRs' with massive downtime as a result.
If you got _reamed_ you got off lucky: in plenty of places you'd be out on the street.
It may well be that you had it right but from context as given I hope this shows you some alternative perspective that might give you pause the next time you decide to throw out the rulebook, even in emergencies - especially in emergencies - these rules are there to keep you, your team and the company safe. In regulated industries you can multiply all of that by a factor of five or so.
> Your 'senior' engineer is likely right: they are trying to get some kind of process going and you are actively sabotaging that
Why? Because it's a "good practice"? They have 12 people and no customers, they can almost certainly adopt a very aggressive developer cycle that optimizes almost exclusively for happy-path velocity. You'd never do that at 50+ engineers with customers but for 12 engineers who have no customers? It's fine, in fact it's ideal.
> with massive downtime as a result.
They have no customers, downtime literally does not exist for them. You are following a dogmatic practice that is optimizing for a situation that literally does not exist within their company.
You establish a process before you need it, and code review, especially when starting up is a fantastic way to make sure that everybody is on the same page and that you don't end up with a bunch of latent issues further down the line. The fact that they have no customers today doesn't mean that they won't have any in the future and mistakes made today can cause downtime further down the line.
If you're wondering why software is crap: it is because every new generation of coders insists on making all the same mistakes all over again. Learn from the past, understand that 'good practice' has been established over many years of very expensive mistakes. 12 engineers is already a nice little recipe for pulling in 12 directions at once and even if they're all perfect they can still learn from looking at each others code and it will ensure that there are no critical dependencies on single individuals (which can bite you hard if one of them decides to leave, not unheard of in a startup) and that if need be labor can be re-divided without too much hassle.
I'm not advocating for having no processes, I'm advocating for a process that matches their situation. A company with no customers should not be worrying about causing a production outage, they should be worried about getting a demoable product out.
Dogmatic adherence to a process that limits developer velocity and optimizes for correct code is very likely the wrong call when you have no customers.
If it is dogmatic, then yes: but you have no knowledge of that and besides there are always people who believe there is too much process and there are people that there is too little. If you want to challenge the process you do that by talking about it not by breaking the process on purpose. That's an excellent way to get fired.
I don't know the context and I don't know the particular business the OP is talking about. What I do know is that if you feel that your management is cargo culting development methodology (which really does happen) you can either engage them constructively or you can leave for a better company. Going in with a confrontational mindset isn't going to be a good experience for anybody involved. Case in point: the OP is still upset enough that he feels it necessary to vent about this in an online forum.
Note that this is the same person who in another comment wrote:
"On the flip side I’m trying to convince my CTO to fire half our engineering team - a group of jokers he hired during the run-up who are now wildly overpaid and massively under-delivering. With all the tech talent out there I’m convinced we’d replace them all within a week."
Heh, both can be true. Process doesn't make good engineers any better. Bad code gets approved and merged every day. I'd rather have a team I could trust to merge and steward their code to production on their own instead of bureaucracy giving people a false sense of security.
> Case in point: the OP is still upset enough that he feels it necessary to vent about this in an online forum.
With no customers, one of the purposes of code-review is removed, but it's the lesser one anyway. The primary goal of code-review should _not_ be to "catch mistakes" in a well-functioning engineering team - that's a thing that happens, but mostly your CI handles that. Code-review is about unifying approaches, cross-pollinating strategies and techniques, and helping each other to improve as engineers.
Your attitude towards code-review on the other hand is one I've seen before several times, and I was glad when each of those people were fired.
We did post-merge code reviews. But half our team was on the other side of the planet from the other half (4 people on the team: US, EU, APAC, and AU).
If we waited for reviews before merging, we’d be waiting weeks to merge a single PR. Thus, you wrote your code, opened a PR, did a self-review, then deployed it. We had millions of customers, downtime was a real possibility. So you’d watch metrics and revert if anything looked slightly off.
You would wake up to your PR being reviewed. Sometimes there would be mistakes pointed out, suggestions to improve it, etc. Sometimes it was just a thumbs up emoji.
The point is, there are many ways to skin this cat and to “ream” someone for merging without deploying is incredibly immature and uncreative. You can still review a merged PR.
That process sounds fine to me, especially in a context with either good integration coverage or low downtime cost.
> to “ream” someone for merging without deploying is incredibly immature and uncreative.
I'd agree, but I _highly_ doubt that description was an accurate one. Read through the other comments by the same person and you'll get a picture of their personality pretty quickly.
It's likely that there was already an ongoing conflict either in general or specifically between them about this issue. They probably got a moderately harsh comment to the effect of "hey, you're expected to wait for code-reviews now, knock it off"
I suggested it's possible to write, commit and own code without others' approval to increase productivity and people get _extremely_ defensive about it. It's so odd. It happened in real life and it's happening in this thread now, too. They even attack your character over it.
Yes. Some people get personally attached to code. It’s incredibly frustrating. Some people use reviews to push dogmatic approaches to architecture and/or exert some kind of control over things. Whenever I meet these people in a code review, and they make unnecessary suggestions or whatever, my favorite phrase to say is, “I can get behind that, but I don’t think it’s worth the time to do that right now,” or, “I disagree, can you give an argument grounded in computer science.” With the latter only being used twice in my career, when someone left a shitload of comments suggesting variable name changes, and then again, when someone suggested rewriting something that was O(n) to O(n^2) and claimed it was better and wouldn’t give up.
You want to get the team to a point where you can disagree and commit, no code will ever be perfect and there is no reason spending 3-4 rounds of change requests trying. I think the worst code review I ever had, ended with me saying, “if you’re going to be this nitpicky, why don’t you take the ticket?” (It was extremely complex and hard to read — and there wasn’t any getting around it, lots of math, bit shifting, and other shenanigans. The reviewer kept making suggestions that would result in bugs, and then make more suggestions…)
He came back the next day and approved my PR once he understood the problem I was trying to solve.
Even these days, where I work on a close team IRL, I’ve been known to say, “if there are no objections, I’m merging this unreviewed code.” And then I usually get a thumbs up from the team, or they say something like “oh, I wanted to take a look at that. Give me a few mins I got sidetracked!” And I’ve even heard, “I already reviewed it, I just forgot to push approve!”
Communication is key in a team. Often, if the team is taking a long time to review, give them the benefit of the doubt, but don’t let yourself get blocked by a review.
If the code works/it's tested, review is for sanity checking/looking for obvious bugs.
Anything else is unneeded grooming that's more about the other developer's ego, not about good code (sometimes it's to follow some other constraint, but it's a good sign the person has a personality issue).
Well, partly that was a mistaken impression because I thought that your comment was also from vrosas. But I think there's enough in there to assess your attitude toward code-review at least a _bit_:
> They have 12 people and no customers, they can almost certainly adopt a very aggressive developer cycle that optimizes almost exclusively for happy-path velocity. You'd never do that at 50+ engineers with customers but for 12 engineers who have no customers? It's fine, in fact it's ideal.
12 engineers churning out code with no code-review at all? That'll produce velocity, for sure. It'll also produce not just an unmaintainable mess, but an interesting experiment, in which you get to find out which of your engineers are socially capable enough to initiate technical communication independently and construct technical rapport _without_ that process helping them to do so. Hope none of them hold strong technical opinions that clash!
No, because you're not infallible, you'll merge some crap, some things that are outright wrong, that your reviewer might have caught, and that slight delay is less painful than dealing with that committed mistake - whether it be an incident in production or 'just' confusion when the next person in that area has to work out if your bug was for some reason intentional and what might break if they fix it.
I'm challenging my team to actually think about that process, why it's in place, how it's helping (or actively hurting!) us. Comparing ourselves to companies that have regulatory requirements (spoiler: we don't and likely won't for a long, long time) just furthers my point that no one really thinks about these things. They just cargo cult how everyone else does it.
You can challenge them without actually violating established process. I wasn't comparing you to companies that have regulatory requirements, I was merely saying that all of the above will factor in much, much stronger still in a regulated industry.
But not being in a regulated industry doesn't mean there isn't a very good reason to have a code review in your process, assuming it is used effectively and not for nitpicking.
Not having a code review step is usually a bad idea, unless everybody on your team is of absolutely amazing quality and they never make silly mistakes. I've yet to come across a team like that, but maybe you are the exception to the rule.
Then... the people responsible for this should have blocked PRs without a review. Or protected the target branch. Or... something. If it's sacrosanct to do what OP did, but the 'senior' folks didn't put in actual guardrails to prevent it... OP is not entirely at fault.
Really man. I have almost two decades of developing software and yet, I feel a lot more comfortable having all my code reviewed. If anything I get annoyed by junior developers in my team when they just rubber-stamp my PRs because supposedly I am this super senior guy that can't err. Code reviews are supposed to give you peace of mind, not be a hassle.
During all this time, I've seen plenty of "small changes" having completely unexpected consequences, and sometimes all it would have taken to avoid them was someone else seeing it from another perspective.
At 30 years of coding professionally in great engineering-focused organizations, and that on top of nearly a lifetime of having coded for myself, I’ve concluded code reviews barely work.
I agree with everything you say here, but honestly it’s quite ineffective. I wish we had found something better than unit tests (often misleading and fragile) and the ability of a qualified reviewer to maintain attention and context that a real review entails.
IMO catching bugs is a nice side-effect of code reviews.
The primary value I've seen across teams has been more on having shared team context across a codebase, if something goes bump in the night you've got a sense on how that part of the codebase works. It's also a great opportunity for other engineers to ask "why" and explain parts that aren't obvious or other context that's relevant. We'll find the occasional architectural mismatch (although we like to catch those much earlier in the design process) and certainly prevented bugs from shipping but if that's the primary focus I think a team is missing a lot of the value from regular code reviews.
Yes, I don’t disagree, I just think in practice they do the job very poorly. I’ve tried many things over the years trying to make this work but honestly have found nothing that works at the high end.
At the low end of engineering, yeah, code reviews matter a ton and do catch bugs even if they’re basically just peephole inspections.
I’m not convinced bad code would get merged more often if we didn’t require approvals. I am convinced we’d deliver code faster though, and that’s what I’m trying to optimize for. Your company and engineering problems are not the same as mine.
Indeed, dogmatic adherence to arbitrary patterns is a huge problem in our field. People have strong beliefs, "X good" or "X bad", with almost no idea of what X even is, what the alternatives are, why X was something people did or did not like, etc.
What you think is usually the limit of your experience, which effectively makes it anecdata. I've looked at enough companies and dealt with the aftermath of enough such instances that I beg to differ. It's possible that due to the nature of my business my experience is skewed in the other direction which makes that anecdata as well. But it is probably more important than you think (and less important than I think).
I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified. I think everyone recognizes that it makes sense that domain name resolution is performed by an external service, and very few people are out there integrating a recursive DNS resolver and cache into their monolith. And yet, this long-standing division of responsibility never seems to count as an example.
You're certainly misunderstanding me. Microservices are definitely justifiable in plenty of cases, and _services_ even more often. But they _need to be technically justified_ - that's the point I'm making.
The majority of SOA adoption in small-to-medium tech companies is driven by the wrong type of pain, by technical leaders that can see that if they had their domains already split out into services, their problems would not exist, but don't understand that reaching that point involves _solving their problems first_.
Whenever someone on the projects I’m attached to tries to create a new service, first I ask them what data it works with, and then I ask them what the alternative to making this a service would be. Usually, by the time we start answering the second question, they realize that, actually, adding the service is just more work.
To me, what’s amazing is that in almost no organization is there a requirement to justify technically the addition of a new service, despite the cost and administrative and cognitive overhead of doing so.
_Services_ are obviously a good idea (nobody is arguing something like PostgreSQL or Redis or DNS or what have you should all run in the same process as the web server).
_Microservices_ attract the criticism. It seems to assume something about the optimal size of services ("micro") that probably isn't optimal for all kinds of service you can think of.
It's funny because the term "microservices" picked up in popularity because previously, most "service-oriented architecture" (the old term) implementations in large companies had services that were worked on by dozens or hundreds of developers, at least in my experience. So going from that to services that were worked on by a single development team of ~10 people was indeed a "microservice" relatively speaking.
Now, thanks to massive changes in how software is built (cloud, containers et al) it's a lot more standard for a normal "service" with no prefix to be built by a small team of developers, no micro- prefix needed.
Yeah, my last company had 10 microservices (the entirety of their codebase) managed by a single team when I started. Some of them had fewer than 5 API endpoints (and weren't doing anything complex to justify that).
The smaller the service, the more likely that the overhead of having a separate service exceeds the benefit of doing so. It isn't at all normal for a service to have its own database unless it provides a substantial piece of functionality, for example, and there are non-trivial costs to splitting databases unnecessarily. If you are not very careful, it is a good way to make certain things a dozen times slower and a dozen times more expensive to develop.
> I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified.
Many HN patrons are actually working where the rubber meets the road.
DNS is a poor comparison. Pretty much everything, related to your application or not, needs DNS. On the other hand, the only thing WNGMAN [0] may or may not do is help with finding the user’s DOB.
> I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified.
There's a few things in play IMO.
One is lack of definition -- what's a "microservice" anyhow? Netflix popularized the idea of microservices literally being a few hundred lines of code maintained by a single developer, and some people believe that's what a microservice is. Others are more lax and see microservices as being maintained by small (4-10 person) development teams.
Another is that most people have not worked at a place where microservices were done well, because they were implemented by CTOs and "software architects" with no experience at companies with 10 developers. There are a lot of problems that come from doing microservices poorly, particularly around building distributed monoliths and operational overhead. It's definitely preferable to have a poorly-built monolith than poorly-built microservice architectures.
I've been at 4 companies that did microservices (in my definition, which is essentially one service per dev team). Three were a great development experience and dev/deploy velocity was excellent. One was a total clusterfuck.
It doesn't lack a definition, there's lots of people talking about this. In general you'll find something like "a small service that solves one problem within a single bounded context".
> It's definitely preferable to have a poorly-built monolith than poorly-built microservice architectures.
I don't know about "definitely" at all. Having worked with some horrible monoliths, I really don't think I agree. Microservices can be done poorly but at minimum there's a fundamental isolation of components. If you don't have any isolation of components it was never even close to microservices/SoA, at which point, is it really a fair criticism?
> It doesn't lack a definition, there's lots of people talking about this. In general you'll find something like "a small service that solves one problem within a single bounded context".
How small is small? Even within this comment section there are people talking about a single developer being the sole maintainer of multiple microservices. I'm a strong advocate of (micro?)service architecture but I would never recommend doing the "all behavior is 100-line lambda functions" approach.
A horrible monolith vs horrible microservices is subjective, of course, but IMO having everything self-contained to one repository, one collection of app servers, etc. at least gives you some hope of salvation, often by building new functionality in separate services, ironically. Horrible microservices that violate data boundaries, i.e. multiple services sharing a database which is a sadly common mistake, is a much harder problem to solve. (both are bad, of course!)
"Small" is a relative term, and not an ideal one, but what it generally means is "no larger than is needed" - that is, if you have one concrete solution within a bounded context, "small" is the code necessary to implement that solution. It's not a matter of LOC.
> IMO having everything self-contained to one repository
I highly recommend keeping all microservices in a single repository. It's even more important in a microservice world to ensure that you can update dependencies across your organization atomically.
> Horrible microservices that violate data boundaries, i.e. multiple services sharing a database which is a sadly common mistake, is a much harder problem to solve.
But that's not microservices. Maybe this is in and of itself an issue with microservice architecture, the fact that people think they're implementing microservices when they're actually just doing SoA, but microservice architecture would absolutely not include multiple services sharing a single database; that would not be microservices.
So I think the criticism would be "people find it hard to actually implement microservices" and not "microservice architecture leads to these problems", because microservice architecture is going to steer you away from multiple services using one database.
A little off topic but there are even more sinister patterns than the shared database which some architects are actively advocating for, like
1. The "data service" layer, which (if done improperly) is basically just a worse SQL implemented on top of HTTP but still centralised. Though now you can claim it's a shared service instead of a DB.
2. The "fat cache" database - especially common in strongly event-based systems. Basically every service decides to store whatever it needs from events so it has lower-latency access to common data. Sounds great, but in practice it leads to (undocumented) duplicate data which theoretically should be synchronised; since those service-local mirror DBs are usually introduced without central coordination, they're bound to desync at some point.
DNS resolution is genuinely reusable, though. Perhaps that's the test: is this something that could conceivably be used by others, as a product in itself, or is it tied very heavily to the business and the rest of the "microservices"?
Remember this is how AWS was born, as a set of "microservices" which could start being sold to external customers, like "storage".
qmail probably counts as "microservices" as well, yes. In that case, for security separation of authority in a multi-user environment. Nowadays we don't really do multi-user environments and separation would be by container.
"I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified"
I think the problem is the word "micro". At my company I see a lot of projects that are run by three devs and have 13 microservices. They are easy to develop but the maintenance overhead is enormous. And they never get shared between projects, so you have 5 services that do basically the same thing.
I rarely see anyone claiming that microservices are never justified. I think the general attitude toward them is due to the amount of Resume Driven Development that happens in the real world.
Eh, I think a lot more of it is caused by optimism than RDD - people who haven't _tried to do it_ look at the mess they've got, and they can see that if it were divided into domain-based services it would be less of a mess.
And the process seems almost straightforward until you _actually try to do it_, and find out that it's actually fractally difficult - by that point you've committed your organization and your reputation to the task, and "hey, that was a mistake, oops" _after_ you've sunk that kind of organizational resources into such a project is a kind of professional suicide.
I've always thought of this as the best example of deferred execution. It's surprising how often businesses get it wrong. I think the problem is most "good" workers are over eager to demonstrate value. They do that by over engineering and that leads to hell etc...
grug not experience large teams stepping on each other's domains and data models, locking in a given implementation and requiring large organizational efforts to get features out at the team level. Team velocity is empowered via microservices and controlling their own data stores.
"We want to modernize how we access our FOO_TABLE for SCALE_REASONS by moving it to DynamoDB out of MySQL - unfortunately, 32 of our 59 teams are directly accessing the FOO_TABLE or directly accessing private methods on our classes. Due to competing priorities, those teams cannot do the work to move to using our FOO_SERVICE and they can't change their query method to use a sharded table. To scale our FOO_TABLE will now be a multi-quarter effort providing the ability for teams to slow roll their update. After a year or two, we should be able to retire the old method that is on fire right now. In the meanwhile, enjoy oncall."
Compare this to a microservice: Team realizes their table won't scale, but their data is provided via API. They plan and execute the migration next sprint. Users of the API report that it is now much faster.
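To make the contrast concrete, here's a minimal sketch (Python, with invented names like FooStore and get_foo) of the kind of API boundary being described: other teams only ever call get_foo, so the owning team can swap the storage backend without anyone else changing code.

    from abc import ABC, abstractmethod

    class FooStore(ABC):
        """Storage backend behind FOO_SERVICE; callers never see this."""
        @abstractmethod
        def fetch(self, foo_id: str) -> dict: ...

    class MySqlFooStore(FooStore):
        def fetch(self, foo_id: str) -> dict:
            return {"id": foo_id, "backend": "mysql"}      # imagine a SQL query here

    class DynamoFooStore(FooStore):
        def fetch(self, foo_id: str) -> dict:
            return {"id": foo_id, "backend": "dynamodb"}   # imagine a GetItem call here

    def get_foo(store: FooStore, foo_id: str) -> dict:
        """The only entry point other teams are allowed to use."""
        return store.fetch(foo_id)

    print(get_foo(MySqlFooStore(), "42"))    # before the migration
    print(get_foo(DynamoFooStore(), "42"))   # after the migration; callers unchanged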
our team of 300 - we _can't_ enforce the clear abstractions. New dev gets hired, team feels pressured to deliver despite leadership saying to prioritize quality, they are not aware of all the access controls, they push a PR, it gets merged.
We have an org wide push to get more linting and more checks in place. The damage is done and now we have a multi-quarter effort to re-organize all our code.
This _can_ be enforced via well designed modules. I've just not seen that succeed. Anywhere. Microservices are a pain for smaller teams and you have to have CI and observability, and your pains shift and are different. But for stepping on each other? I've found microservices to be a super power for velocity in these cases. Can microservices be a shitshow? Absolutely, esp. when they share data stores or have circular dependencies. They also allow teams to be uncoupled assuming they don't break their API.
My experience is that "leadership" often finds quality to be expensive and unnecessary overhead.
That's one reason that I stayed at a company that took Quality seriously. It introduced many issues that folks hereabouts would find unbearable, but they consistently shipped some of the highest-Quality (and expensive) kit in the world.
Quality is not cheap, and it is not easy. It is also not really the path to riches, so it is often actively discouraged by managers.
Really, you can't? Then I struggle to see how you'll get anything else right. I've done it by using separate build scripts. That way only the interfaces and domain objects are exposed in the libraries. Then you lock down access to each sub-project at the repository level, restricting it to the team working on it. There you go: modularity, boundaries, all without network hops.
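A rough sketch of what "only the interfaces and domain objects are exposed" can look like in practice, assuming a Python shop (the billing names are invented): the shared artifact contains nothing but abstract interfaces and immutable domain objects, while the concrete implementation stays in the owning team's sub-project behind repository-level access control.

    # billing_api.py -- the only module other sub-projects' build scripts may depend on
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Invoice:
        invoice_id: str
        amount_cents: int

    class BillingService(ABC):
        """Interface only; the implementation lives in the billing team's sub-project."""
        @abstractmethod
        def invoice_for(self, order_id: str) -> Invoice: ...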
sure - if you do that from the start. Most don't. The codebase organically grows and then lines are blurred and then you have to come in and refactor. When this refactor affects several teams, it gets harder in combinatorial fashion.
With an HTTP API boundary, you don't get to reach into my code - it is literally impossible.
But the work required to factor things out behind an HTTP boundary is a superset of the work required to factor things out into a module as GP describes. So if you were going to do microservices, you could do that same factoring and then just stop when you get to the part where you'd add in networking and deployment scripts.
> They also allow teams to be uncoupled assuming they don't break their API.
Presumably you would still need tooling to enforce teams not breaking their API, and also to prevent people just modifying services to expose private internals that should not be part of the public API?
How has the whole open source ecosystem been working fine and delivering software for almost decades, with projects depending upon each other while never being in the same room, and yet having no microservices?
I mean take your pick, anything open source be it desktop or web has a huge and deep dependency tree all the way down to libc.
Someone does the integration work for you, that’s why it works. Try running some distro that just grabs the latest upstream versions and see how often things break.
You can't really retrofit culture and behavior, you can only very gradually move towards a particular goal and usually that's a multi-year effort, if it can be done at all.
If you want to change that model, there's three things that need to happen:
1) engineering needs a reason to embrace abstraction. Because the only thing that stops a new cowboy engineer is their peers explaining to them in what ways this isn't the Wild West. One can assume that if rules would benefit the team, they'd be doing them already, so why don't they? Maybe they perceive feature output velocity to be too high to risk changing methods. Maybe decisionmaking power rests in the hands of one system architect who is holding the whole machine in their head, so things that look complex to others seem simple to them (in that case, the team needs to spread around peer review signoff responsibilities on purpose, so one engineer can't be the decisionmaker and the architecture itself must self-describe). Maybe (this is how I see it usually happen) they were a three-person startup and complexity crept up on them like a boiling frog. Whatever the reason, if you're gonna convince them otherwise, someone's gonna have to generate hard data on how changing the abstraction could make their jobs easier.
2) If management has no idea what "prioritize quality" means (meaning no metrics by which to measure it and no real grasp of the art of software engineering), the engineers will interpret buzzwords as noise and route around them. Management needs to give actionable goals other than "release feature X by Y date" if they want to change engineering culture. That can take many forms (I've seen rewarded fixit weeks and externally-reported issue burndown charts as two good examples).
3) Engineering leadership needs time and bandwidth to do training so they can see outside their prairie-dog hole over to how other teams solve problems. Otherwise, they get locked into the solutions they know, and the only way new approaches ever enter the team is by hires.
And the key thing is: microservices may not be the tool for the job when all is said and done. This is an approach to give your engineering team ways to discover the right patterns, not to trick them into doing microservices. Your engineering team, at the end of the day, is still the best-equipped people to know the machinery of the product and how to shape it.
Found the author of CISCO / Java Spring documentation.
I honestly thought people used passive-aggressive corpo-slang only for work / ironically. But this reads as intentionally obfuscated.
But, to answer the substance: you for some reason assumed that whoever designed the first solution was an idiot and whoever designed the second was, at the least, clairvoyant.
The problem you describe isn't a result of inevitable design decisions. You just described a situation where someone screwed up doing something and didn't screw up doing something else. And that led you to believe that whatever that something else is, it's easier to design.
The reality of the situation is, unfortunately, the reverse. Upgrading microservices is much harder than replacing components in monolithic systems, because in a monolith it's easier to discover all users of a feature. There are, in general, fewer components in monolithic systems, so fewer things will need to change. Deploying a monolithic system is also much more likely to surface problems created by an incorrectly implemented upgrade.
In my experience of dealing with both worlds, microservices tend to create a maze-like system where nobody can be sure if any change will not adversely affect some other part of the system due to the distributed and highly fragmented nature of such systems. So, your ideas about upgrades are uncorroborated by practice. If you want to be able to update with more ease, you should choose a smaller, more cohesive system.
I think I view the situation in a similar fashion as you. There's absolutely nothing preventing a well architected modular monolith from establishing domain/module-specific persistence that is accessible only through APIs and connectable only through the owning domain/module. To accomplish that requires a decent application designer/architect, and yes, it needs to be considered ahead of time, but it's 100% doable and scalable across teams if needed.
There are definitely reasons why a microservice architecture could make sense for an organization, but "we don't have good application engineers/architects" should not be one of them. Going to a distributed microservice model because your teams aren't good enough to manage a modular monolith sounds like a recipe for disaster.
Microservices don't magically fix shared schema headaches. Getting abstraction and APIs right is the solution regardless of whether it's in-memory or over the network.
Instead of microservices, you could add a static analysis build step that checks if code in packages is calling private or protected interfaces in different packages. That would also help enforce service boundaries without introducing the network as the boundary.
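For example, here is a rough sketch of such a check, assuming a Python codebase where each top-level package keeps its private code under an _internal/ subpackage (the layout and names are invented): the script walks the source tree and fails the build if any module imports another package's internals.

    import ast
    import pathlib

    ROOT = pathlib.Path("src")   # assumed source layout: src/<package>/...

    violations = []
    for path in ROOT.rglob("*.py"):
        owner = path.relative_to(ROOT).parts[0]          # package this file belongs to
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                targets = [node.module or ""]
            else:
                continue
            for target in targets:
                parts = target.split(".")
                if "_internal" in parts and parts[0] != owner:
                    violations.append(f"{path}: imports {target}")

    if violations:
        raise SystemExit("\n".join(violations))          # fail the build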
I guess I'm confused by the suggestion - doesn't that static analysis step to check that code isn't calling private interfaces already exist, and is called a "compiler"?
Or how about "we want to update a 3rd party library due to a critical security issue, but we can't because that library is used in 500 different parts of the code and no way in hell can we stop development across the entire org for multiple weeks".
With microservices, you deploy the updates to public facing services first, write any learnings down, pass it along to the next most vulnerable tier.
Heck on multiple occasions I've been part of projects where just updating the build system for a monolithic codebase was a year+ long effort involving dozens of engineers trying to work around commits from everyone else.
Compare this to a microservice model where you just declare all new services get the new build tools. If the new tools come with a large enough carrot (e.g. new version of typescript) teams may very well update their own build tooling to the latest stuff without anyone even asking them to!
As with so many software solutions, the success of microservices is predicated upon having sufficient prognostication about how the system will be used to recognize where the cut-points are.
When I hear success stories like that, I have to ask "Is there some inherent benefit to the abstraction or did you get lucky in picking your cleave-points?"
That comes with experience, but you can let time be the judge if you factor your monolith early enough. If the factorization proves stable, proceed with carving it into microservices.
This applies to performance optimizations which leave the interface untouched, but there are other scenarios, for example:
- Performance optimizations which can't be realized without changing the interface model. For example, FOO_TABLE should actually be BAR and BAZ with different update patterns to allow for efficient caching and querying.
- Domain model updates, adding/updating/removing new properties or entities.
This kind of update will still require the 32 consumers to upgrade. The API-based approach has benefits in terms of the migration process and backwards-compatibility though (an HTTP API is much easier to version than a DB schema, although you can also do this on the DB level by only allowing queries on views).
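As a sketch of that versioning benefit (Python, names invented): the v1 handler keeps serving the old FOO shape from the new BAR/BAZ storage, so the 32 consumers can migrate to v2 on their own schedule instead of in one coordinated change.

    def fetch_bar(foo_id: str) -> dict:
        return {"name": "example"}        # stand-in for the new BAR table

    def fetch_baz(foo_id: str) -> dict:
        return {"count": 3}               # stand-in for the new BAZ table

    def get_foo_v1(foo_id: str) -> dict:
        """Old response shape, assembled from the new storage model."""
        bar, baz = fetch_bar(foo_id), fetch_baz(foo_id)
        return {"id": foo_id, "name": bar["name"], "count": baz["count"]}

    def get_foo_v2(foo_id: str) -> dict:
        """New response shape for consumers that have migrated."""
        return {"id": foo_id, "bar": fetch_bar(foo_id), "baz": fetch_baz(foo_id)}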
> Team realizes their table wont scale, but their data is provided via API. They plan and execute the migration next sprint.
... followed by howls of anguish from the rest of the business when it turns out they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
(I'm not saying this is good, but it's not an unlikely scenario.)
> they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
This definitely happens but at some point someone with authority needs to show technical leadership and say "you cannot do this no matter how desperately you need those reports." If you don't have anyone in your org who can do that, you're screwed regardless.
I do agree with that. Microservices are not a good idea whatsoever for organizations with weak senior technical people. Which is probably 90%+ of businesses.
> ... followed by howls of anguish from the rest of the business when it turns out they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
Once you get to this point, there's no path forward. Either you have to make some breaking changes or your product is calcified at that point.
If this is a real concern then you should be asking what you can do to keep from getting into that state, and the answer is encapsulating services in defined interfaces/boundaries that are small enough that the team understands everything going on in the critical database layer.
An approach I like better than "only access my data via API" is this:
The team that maintains the service is also responsible for how that service is represented in the data warehouse.
The data warehouse tables - effectively denormalized copies of the data that the service stores - are treated as another API contract - they are clearly documented and tested as such.
If the team refactors, they also update the scripts that populate the data warehouse.
If that results in specific columns etc becoming invalid they document that in their release notes, and ideally notify other affected teams.
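A minimal sketch of what "treated as another API contract" could mean in the team's test suite (Python; the table, columns, and describe_table helper are all invented): the published warehouse shape is pinned down, so a refactor that drops or renames a column fails the owning team's build instead of silently breaking someone's report.

    EXPECTED_ORDERS_EXPORT = {
        "order_id": "string",
        "customer_id": "string",
        "total_cents": "int64",
        "created_at": "timestamp",
    }

    def describe_table(name: str) -> dict:
        # stand-in for querying the warehouse's schema (e.g. information_schema)
        return dict(EXPECTED_ORDERS_EXPORT)

    def test_orders_export_contract():
        assert describe_table("orders_export") == EXPECTED_ORDERS_EXPORT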
Yeah, having a documented stream of published events in Kafka is a similar API contract the team can be responsible for - it might even double as the channel through which the data warehouse is populated.
Team size is probably the most important factor that should influence the choice about microservices. Unfortunately, there was a period when it looked like every project and every team had to adopt them or be declared a dinosaur.
I just wanted to note that static typing isn't required for autocomplete. JetBrains has IDEs for languages like Ruby and Python that can do it. If you open the REPL in a recent version of Ruby you get much of what you expect from an IDE with a statically typed language (with regards to autocomplete and syntax checking).
What are you replying to? The article is about how DRY shouldn't be over-applied.
DRY is literally "Don't Repeat Yourself" and is definitely pushed for cleaning up redundant code, so it's not unreasonable for people to think that's what it's about. It's only recently that people have pointed out that there's a difference between duplicated code and repeated code.
Redundant code is not code that looks the same. It's only reasonable for people to believe it is about code that looks the same if they have never bothered to learn what it means.
You are correct, but static typing does make it a lot easier. Working with Rider feels like working with an IDE that fully understands the code, at least structurally. Working with PyCharm feels like working with an IDE that makes intelligent guesses.
The REPL in current versions of Ruby is probably a better example of how it should be done. Because it is actually running the code it has much better information.
Network calls are a powerful thing to introduce. It means that you have an impassable boundary, one that is actually physically enforced - your two services have to treat each other as if they are isolated.
Isolation is not anything to scoff at, it's one of the most powerful features you can encode into your software. Isolation can improve performance, it can create fault boundaries, it can provide security boundaries, etc.
This is the same foundational concept behind the actor model - instead of two components being able to share and mutate one another's memory, you have two isolated systems (actors, microservices) that can only communicate over a defined protocol.
> Network calls are a powerful thing to introduce. It means that you have an impassable boundary, one that is actually physically enforced - your two services have to treat each other as if they are isolated.
That is not true at all. I've seen "microservice" setups where one microservice depends on the state within another microservice. And even cases where service A calls into service B, which calls back into service A, relying on the state from the initial call being present.
Isolation is good, but microservices are neither necessary nor sufficient to enforce it.
Well, I'd say you've seen SoA setups that do that, maybe. But those don't sound like microservices :) Perhaps that's not a strong point though.
Let me be a bit clearer on my point, because I was wrong to say that you have to treat a service as being totally isolated; what I should have said is that they are isolated, whether you treat them that way or not. There is a physical boundary between two computers. You can try to ignore that boundary, you can implement distributed transactions, etc., but the boundary is there - if you do the extra work to try to pretend it isn't, that's a lot of extra work to do the wrong thing.
Concretely, you can write:
rpc_call(&mut my_state)
But under the hood what has to happen, physically, is that your state has to be copied to the other service, the service can return a new state (or an update), and the caller can then mutate the state locally. There is no way for you to actually transfer a mutable reference to your own memory to another computer (and a service should be treated as if it may be on another computer, even if it is colocated) without obscene shenanigans. You can try to abstract around that isolation to give the appearance of shared mutable state but it is just an abstraction, it is effectively impossible to implement that directly.
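A toy illustration of that copy semantics (Python standing in for the pseudo-Rust above; everything here is invented): the "remote" side only ever works on a serialized copy, and the caller's own memory changes only when it applies the returned value itself.

    import json

    def remote_increment(state_json: str) -> str:
        state = json.loads(state_json)     # the service deserializes its own copy
        state["counter"] += 1
        return json.dumps(state)           # an update travels back as data, not a reference

    local = {"counter": 0}
    returned = json.loads(remote_increment(json.dumps(local)))
    print(local)      # {'counter': 0} -- untouched; no mutable reference crossed the wire
    print(returned)   # {'counter': 1} -- the caller decides whether to adopt this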
But shared mutable state is trivial without the process boundary. It's just... every function call. Any module can take a mutable pointer and modify it. And that's great for lots of things, of course, you give up isolation sometimes when you need to.
Hmm... I guess I very rarely use shared mutable state in web services anyway. The last job I worked at, all state was either in the database (effectively another service anyway) or stored on the client (e.g. auth tokens). So anything that was mutating a function parameter would already be subject to extra scrutiny during code review (Why is it doing that? What scope / module boundary is the mutation limited to?).
Shared mutable state also goes beyond a mutable reference. If you call a function and that function throws an exception you are tying the caller/callee's states together. In a SoA the callee machine can literally blow up and your caller state is preserved.
If your web service is generally low-state and these problems are manageable for the complexity scale you're solving for, microservices aren't really something to even consider - I mean, you basically have a microservice already, it's solving a single problem within a bounded context, give or take. It's just... one service and one context.
The reality of this situation is that the tool everyone is using to build microservices is Kubernetes. It imposes a huge tax on communication between services. So your aspirations of improving performance fly out of the window.
On top of this, you need to consider that most of the software you are going to write will be based on existing components. Many of these have no desire to communicate over a network, and your micro- or whatever-size services will have to cave in to their demands. Simple example: want to use Docker? -- say hello to UNIX sockets. Other components may require communication through shared memory, filesystem, and so on.
Finally, isolation is not a feature of microservices, especially if the emphasis is on micro. You have to be able to control the size and where you want to draw the boundary. If you committed upfront to having your units be as small as possible -- well, you might have function-level isolation, but you won't have class-, module-, or program-level isolation, to put it in more understandable terms. This is where your comparison between the actor model and microservices breaks: the first doesn't prescribe the size.
Microservices definitely predate k8s, but sure, lots of people use k8s. I don't know what penalty you're referring to. There is a minor impact on network performance for containers measured in microseconds under some configurations. Maybe Kubernetes makes that worse somehow? I think it does some proxying stuff so you probably pay for a local hop to something like Envoy. If Envoy is on your system and you're not double-wrapping your TLS the communication with it should stay entirely in the kernel, afaik.
In no way is this throwing out performance. It's sort of like saying that Kafka is in Java so you're throwing away performance when you use it, when there are massive performance benefits if you leverage partition isolation.
> Many of these have no desire to communicate over network, and your micro- or w/e size services will have to cave in to their demands. Simple example: want to use Docker? -- say hello to UNIX sockets. Other components may require communication through shared memory, filesystem, and so on.
I'm not sure what you're referring to. Why would that matter at all? I mean, ignoring the fact that you can easily talk to Docker over a network.
> Finally, isolation is not a feature of microservices,
Isolation is a feature of any process based architecture, whether it's SoA, actors, or microservices.
> well, you might have function-level isolation, but you won't have class- or module- or program-level isolation,
You get isolation at the service layer. I don't see why that would be contentious, it's obvious. If you're saying you want more isolation, ok, you can write your code to do that if you'd like.
> first doesn't prescribe the size.
Yep, the actor model is very low level. Microservice architecture is far more prescriptive. It's one of the reasons why I think Microservice architecture has been far more successful than actor based systems.
On Kubernetes impact on network performance: well... on one hand, Kubernetes doesn't come with its own networking. So, it's unfair to say that it affects networking, because it simply cannot do it. On the other hand, it requires external networking component to do certain things in certain ways. So, indirectly, it does affect networking.
So, here are some concerns that affect performance; they mostly come from the need for various translations done by iptables, eBPF analogues, arptables, and DNS server(s). If you read the benchmarks of Calico and Cilium, you'll see that they concentrate on the performance of the eBPF code necessary to do all these translations. My claim about a performance tax is based on the idea that w/o Kubernetes you wouldn't need such translations (but, of course, you could build your own version of Kubernetes networking with a lot of software-defined virtualization, in which case you'd be in the same boat).
> Kafka
Is as horrid as it sounds. It's awful performance-wise on all counts. I'm not sure what point you are trying to make. Can you choose a better example? I mean, Kafka performs very poorly in all configurations, so thinking it can be a good example of software that wants to achieve good resource utilization is just bound to give you bad results.
> You get isolation at the service layer. I don't see why that would be contentious, it's obvious. If you're saying you want more isolation, ok, you can write your code to do that if you'd like.
You either genuinely didn't understand what this is about, or pretend to not understand something really simple. "Micro" in microservices means that your services are small. There aren't any meaningful isolation tools or approaches when it comes to microservices, because isolation at the level of service doesn't matter / is trivial to achieve by many other means / is not a problem in real-world programs.
> Yep, the actor model is very low level.
This is simply a nonsense statement. Low on what scale? Your answer reads as if it was generated by a chatbot. I.e. words from the same general domain strung together, but make no sense.
> Microservice architecture has been far more successful than actor based systems
How did you count? How do you even tell if something is microservice-based? This is just as absurd of a claim as saying "78% of enterprises use Kubernetes" (I believe I saw this unfettered inanity on the cloud-native foundation's Web site). How do you tell if it's successful? What if the actor model is a more generic description which, besides other things, also captures microservices?
I mean, in more simple language, this is talking out of your rear. It's not a real argument anyone should pay attention to.
> Kubernetes doesn't come with its own networking. So, it's unfair to say that it affects networking, because it simply cannot do it.
You're the one who brought it up?
> My claim about performance tax is based on the idea that w/o Kubernetes you wouldn't need such translations (but, of course, you could build your own version of Kubernetes networking with a lot of software-defined virtualization, in which case you'd be in the same boat).
You don't need any of those and I don't know why you think otherwise. You can just use the host network and do whatever you want, as with any container.
> I'm not sure what point are you trying to make.
That Kafka's architecture allows you to put data into partitions and route it based on that data, which allows for "shared nothing" architectures. But whatever, you're clearly not going to get this point from this example. To be clearer, your point of "you pay a cost here so you're losing performance" ignores that you can get performance elsewhere.
> There aren't any meaningful isolation tools or approaches when it comes to microservices, because isolation at the level of service doesn't matter / is trivial to achieve by many other means / is not a problem in real-world programs.
Not true at all. Services that own a domain of work are a great place to perform isolation and security boundaries.
> Low on what scale?
As in an actor is a foundational primitive for asynchronous computation. You have to build up protocols on top of actors, hence all of OTP.
> I.e. words from the same general domain strung together, but make no sense.
I think that's because I actually know what I'm talking about and you're finding it hard to keep up?
> How did you count?
Because it's obvious? Like it's not even close. Actor based systems are exceedingly rare, microservices are not.
> I mean, in more simple language, this is talking out of your rear. It's not a real argument anyone should pay attention to.
One of us actually knows what they're talking about, and I doubt we'll agree who it is.
It is trivial to tightly couple two services. They don't have to treat each other as isolated at all. The same people who create tightly coupled code within a single service are likely going to create tightly coupled services.
Sure, but breaking isn't always bad. I realize that sounds a bit crazy, but it's true.
a) Intermittent failures force you to treat systems as if they can fail - and since every system can fail, that's a good thing. This is why chaos engineering is great.
b) Failures across a network are the best kind - they're totally isolated. As I explain elsewhere, it's impossible to share state across a network, you can only send copies of state via message passing.
These things matter more or less depending on what you're doing.
I think people get the modularity wrong. Modularity is important, but I came to the conclusion that there is another important architectural principle, which I call the "single vortex principle".
First, a vortex in a software system is any loop in a data flow. For example, if we send data somewhere, and then we get them back processed, or are in any way influenced by them, we have a vortex. A mutable variable is an example of a really small vortex.
Now, the single vortex principle states that there ideally should be only one vortex in the software system, or, restated, every component should know which way its vortex is going.
The rationale: suppose we have two vortices, and we want to compose the modules that form them into a single whole. If the vortices are oriented the same way, composition is easy. If they have opposite orientations, the composition is tricky and requires a decision on how the new vortex is oriented. Therefore, it is best if all the vortices in all the modules have the same orientation, and thus form a single vortex.
This principle is a generalization of ideas such as Flux pattern, CQRS, event sourcing, and immutability.
This is a very good point, and you could probably write quite a few articles about this particular subject. You may even have a service A that calls service B that calls service C that calls service A. Then you have a problem. Or, you have C get blocked by something happening in A that was unexpected. Ideally, you only have parents calling children without relying on the parents whatsoever, and if you fail in this, you have failed in your architecture.
> And think of the sheer number of libraries - one for each language adopted - that need to be supported to provide common functionality that all services need, like logging.
This is the #1 reason we quit the microservices game. It is simply a complete waste of mental bandwidth to worry about with the kind of tooling we have in 2023 (pure cloud-native / infiniscale FaaS), especially when you have a customer base (e.g. banks & financial institutions) who will rake you over hot coals for every single 3rd party dependency you bring.
We currently operate with one monolithic .NET binary distribution which is around 250 megs (gzipped). Not even the slightest hint of cracks forming. So, if you are sitting there with a 10~100 meg SaaS distribution starting to get nervous about pedantic things like "my exe doesn't fit in L2 anymore", then rest assured - your monolithic software journey hasn't even begun yet.
God forbid you find yourself with a need to rewrite one of these shitpiles. Wouldn't it be a hell of a lot easier if it was all in one place where each commit is globally consistent?
I think this is a false dichotomy. Most places I've worked with microservices had 2 or 3 approved languages for this reason (and others) and exceptions could be made by leadership if a team could show they had no other options.
Microservices doesn't need to mean it's the wild west and every team can act without considering the larger org. There can and should be rules to keep a certain level of consistency across teams.
Not sure why you're downvoted - but you're right. We heavily use microservices but we have a well defined stack: Python/gunicorn/flask/mongodb with k8s. We run these on Kafka or REST APIs. We even run jobs and cron jobs in k8s.
Functional decomp is left to different teams. But the libraries for logging, a&a, various utilities etc are common.
No microservices that don’t meet the stack unless they’re already developed/open source - eg open telemetry collectors.
Edit: I think the article is a path to a book written by the author. It’s more of an advertisement than an actual assessment. At least that’s my take on it.
> Most places I've worked with microservices had 2 or 3 approved languages for this reason (and others) and exceptions could be made by leadership if a team could show they had no other options.
This works well if you have knowledge redundancy in your organization, i.e., multiple teams that are experienced in each programming language. This way, if one or more developers experienced in language 'A' quit, you can easily replace them by rearranging developers from other teams.
In small companies, this flexibility of allowing multiple languages can result in a situation in which developers moving to other jobs or companies will leave a significant gap that can only be filled with recruiting (then onboarding), which takes much more time and will significantly impact the product development plan.
More often than not, the choice between Microservices and Monoliths is more of a business decision than a technical one to make.
> More often than not, the choice between Microservices and Monoliths is more of a business decision than a technical one to make.
I think that, technically, you can use one or the other and make it work.
However management is very different in the two cases, so I completely agree with you. I hadn't thought of the part about moving people between teams.
It's my first job, but I understand why they chose microservices: 6 teams working on 6 "features/apps" can be managed (almost) fully independently of each other if you split your code base.
I think it's fair to say microservices increase the need for governance, whether manual or automated systems. When you start having more than 1 thing, you create the "how do I keep things consistent and what level of consistency do I want" problem
> God forbid you find yourself with a need to rewrite one of these shitpiles.
Actually, this is much easier with microservices as you have a clear interface you need to support and the code is not woven into the rest of the monolith like a French plait. The best code is the easiest to throw away and rewrite, because let's face it, the older the code is, the more hands it's been through, the worse it is, and more importantly the less motivated anyone is to maintain it.
If the monolithic application is written in a language with sufficient encapsulation and good tooling around multi-module projects, then you can indeed have well known and encapsulated interfaces within the monolith. Within the monolith itself you can create a DAG of enforced interfaces and dependencies that is logically identical to a set of services from different codebases. There are well known design issues in monoliths that can undermine this approach (the biggest remedy that comes to mind is favoring composition over inheritance, because inheritance is how encapsulation can be most easily broken as messages flow across single-application interfaces, but then I'd also throw in enforced immutability, and separating data from logic).
It takes effort to keep a monolithic application set up this way, but IMHO the effort is far less than moving to and maintaining microservices. I think a problem is that there's very popular ecosystems that don't have particularly good tooling around this approach, Python being a major example--it can be done, but it's not smooth.
To me the time when you pull the trigger on a different service+codebase should not be code complexity, because that can be best managed in one repo. It is when you need a different platform in your ecosystem (say your API is in Java and you need Python services so they can use X Y or Z packages as dependencies), or when you have enough people involved that multiple teams benefit from owning their own soup-to-nuts code-to-deployment ecosystem, or when, to your point, you have a chunk of code that the team doesn't or can't own/maintain and wants to slowly branch functionality away from.
"composition over inheritance, because that's how encapsulation can be most easily broken as messages flow across a single-application interfaces, but then I'd also throw in enforced immutability, and separating data from logic"
Could you elaborate on this? I see how "separating data from logic" is a problem but what about the other two?
Well now that you mention it I think it does all come down to 'separating data from logic'. I was working backwards from the premise of: "what if we want a monolithic in-process application to have the same cognitive simplicity as an API-based client-server model?"
If you want to enforce a clean interface between an in-process client and server (i.e. a piece of code calling a library interface), then the best model is to think of it as a message passing system, where once a payload is passed from the client to the server and vice versa, the other side should not be able to witness changes to the payload. An immutable payload in this context is the same as the json that goes over the wire between a client and server.
If you really wanted an in-process application to look like a microservice you could take the added step of forcing a serialization / deserialization step on both client and server. I've seen frameworks that do this. But I think immutability, if it can be enforced, is a practical way of solving this problem and far less complex.
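A tiny sketch of that idea (Python, invented names): with a frozen payload, the in-process "server" physically can't mutate the caller's object and has to hand back a new value, which is exactly the behaviour a JSON round trip would force.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Payload:
        user_id: int
        email: str

    def handle(request: Payload) -> Payload:
        # attempting request.email = ... would raise FrozenInstanceError,
        # so the "server" returns a new payload, just like a remote call would
        return replace(request, email=request.email.lower())

    req = Payload(7, "Alice@Example.com")
    resp = handle(req)
    print(req)    # unchanged
    print(resp)   # new value returned to the caller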
Inheritance is a hazier point I was making, in hindsight, because you could use inheritance for data modeling, which is quite fine in some situations... so I think it's effectively subsumed under the "separating data from logic" argument: which is that if you're passing behavioral inheritance from a "server" to a "client", then it gets harder and harder to predict how that client is going to use it and thus it's harder to reason about the functional boundary between two pieces of code. But it's a hazy point because I think the larger and more important point to make, as you point out, is that you simply shouldn't (in most cases) pass any kind of behavior between client and server--just data.
Another approach to look at (besides taking inspiration from Smalltalk) is the actor model, where it's incredibly clear that any communication between modules in an ecosystem is immutable messages. In fact a side effect of the actor model is that it gets pretty easy to move things from in-process to cross-process because most activity in the system is already performed through message passing, so you can just start channeling messages from Actor A to Actor B through a serialization/networking layer rather than in-process if you want to deploy them separately for reasons of convenience or computational needs.
I see, thanks. My original interpretation was that by "separating data from logic" you meant data-oriented programming where you intentionally give up on encapsulation.
"which is that if you're passing behavioral inheritance from a "server" to a "client", then it gets harder and harder to predict how that client is going to use it"
You mean a "server" returning the base class object whose methods are overridden by a successor class known only to the "server"?
> It takes effort to keep a monolithic application set up this way
Yeah, this is the problem and _why_ I think microservices are the way forward: it doesn't take effort because the programmer is forced into doing the right thing. On projects with many different types of coders (and let's face it, we are all different) consistency drops off fast. Of course you can do it with monoliths, but I'm coming from a "real world" scenario where there are many people with different levels of ability and different levels of giving a cr@p about code quality.
Microservices let people who code badly do it in isolation, be the only ones who have to suffer under it, and ultimately learn from it (if they are not fired first).
Also, decoupling in a monolith vs by deployment is really just a question of which git repo the code lives in, and those sit next to each other in the same directory on your hard drive. If there is shared code, factor it out as a library/module and install it into the projects that need it. It's not a big deal.
My instinct from your response is that if we chatted on this for a bit in person we'd see eye to eye and we're probably thinking of different use cases, because I can see how thoughtful you are and usually it comes down to thinking about particular problems in particular companies. So at the risk of going back into crass generalizations, I do persist in thinking that the TCO of a monolith is lower under a lot of the constraints you're describing, assuming (big emphasis) the tooling of the language you're using for the monolith, i.e. a combination of linters and compiler, can enforce the rules. I've been in service-first environments where the equivalent of not caring about code quality in the monolith is firing up a new service without consideration for refactoring the older service, until I am in endless architecture meetings about how to make cross cutting changes around a bunch of services everyone regrets creating.
I suppose in the end it comes down to the classic issue of needing to pay down tech debt, which is true in any software ecosystem.
> If the monolithic application is written in a language with sufficient encapsulation and good tooling around multi-module projects, then you can indeed have well known and encapsulated interfaces within the monolith. Within the monolith itself you can create a DAG of enforced interfaces and dependencies that is logically identical to a set of services from different codebases.
Yes, a good point, I think the more dynamic the language, the more one gravitates towards microservices as a problem solving tool, because you'd give up a lot of the value of the dynamic environment by enforcing strict rules.
Though I'm seeing a lot of convergence between environments over time, which makes me think we're all headed towards a nicer future.
For example in Scala, which does a lot of type inferencing, it's typical to tell the linter to require that public methods have explicit types, even though the compiler will reify them at compile time anyhow, because that makes the interface much more robust to refactoring. Meanwhile in a more dynamic environment, in Python it's getting more typical to use type annotations, and, similarly, to especially use them on functions that define a reusable interface.
I figure that the ideal languages in the future have module-level systems where they can define strict interfaces across modules, but then get as crazy and dynamic as they want inside of the modules.
With microservices, you can also version them independently. In a monolith you can't roll back "a part" of the app to the latest version if you pushed multiple unrelated features at once.
You can do the same with the approach I described. If you set up the modular DAG as I mentioned above, you can now set up service boundaries between the leaves of the DAG. E.g. parts of the code call other parts of the code as a service. You then version and deploy the same codebase separately.
Say you have Libraries A, B, and C, where B and C depend on A and not one another. You can have B call into C via a service, just as you would in a microservice. Now B and C can be versioned and deployed independently. You can also deploy updates to Library A incrementally, tying it to the B and C deployments.
If you are literally pushing different features that are in fact unrelated, you don't even need to worry about B calling into C, you can just partition your app into different modules, deploy them separately, and use a load balancer with routing rules to arbitrate between the deployments.
I like having this type of environment because you can make fairly quick and easily-reversible decisions about whether or not different parts of the codebase are deployed differently: sometimes it's compute and hardware requirements, sometimes it's because you want parts to be stable and other parts more experimental and volatile.
The microservice argument isn't addressing this type of deployment scenario: it's suggesting a shared-nothing or shared-little architecture across services.
Well yes, but if you used a compiled language you have to make a new release based on this commit.
What I mean is, if you have v2 of your software that introduces a bugfix for module A that's perfectly fine and a bugfix for module B that breaks everything, you can roll back just module B directly with your deployment pipeline.
There's no need to go back to the code base and make a new release.
I disagree. The older the code is, the more hands it's been through, which generally means it's stronger, hardened code. Each `if` statement added to the method is an if statement to catch a specific bug or esoteric business requirement. If a rewrite happens, all that context is lost and the software is generally worse for it.
That being said, I agree that maintaining legacy systems is far from fun.
> If a rewrite happens, all that context is lost and the software is generally worse for it.
Unless you have a strong test suite which tests the absence of those bugs. OFC you can never prove the absence of an issue just the continued functionality of your codebase, but re-writes are often prompted by weird "in-between" functionality becoming the norm (or slow/buggy behavior).
Of course a lot of test suites are of dubious quality/many devs have no idea what a good test suite looks like (usually unit tests are some combination of throw-away/waste-of-time, acceptance tests are mostly happy-path and due to recent trends integration tests are all but non-existent).
But in theory, re-writes are fine when you do have a test-suite. Even with a bad one, you learn what areas of the application were never properly tested and have opportunities to write good tests.
> The best code is the easiest to throw away and rewrite, because let's face it, the older the code is, the more hands it's been through, the worse it is, but more importantly the less motivated anyone is in maintaining it.
The more testing that's been done, the more stable it should be.
The argument for new can be flipped because new doesn't mean better and old doesn't mean hell.
This assumes a clear interface. Which assumes that you get the interfaces right - but what's the chance of that if the code needs rewriting?
Most substantial rewrites cross module boundaries. Changing a module boundary is harder with microservices than in a monolith, where it can be done in a single commit/deploy.
> Wouldn't it be a hell of a lot easier if it was all in one place where each commit is globally consistent?
I always find this sentence to be a bit of a laugh. It's so commonly said (by either group of people with a dog in this fight) but seemingly so uncommonly thought of from the other group's perspective.
People that prefer microservices say it's easier to change/rewrite code in a microservice because you have a clearly defined contract for how that service needs to operate and a much smaller codebase for the given service. The monolith crowd claims it's easier to change/rewrite code in a monolith because it's all one big pile of yarn and if you want to change out one strand of it, you just need to know each juncture where that strand weaves into other strands.
Who is right? I sure don't know. Probably monoliths for tenured employees that have studied the codebase under a microscope for the past few years already, and microservices for everyone else.
My first gig out of school was a .net monolith with ~14 million lines of code; it's the best dev environment I've ever experienced, even as a newcomer who didn't have a mental map of the system. All the code was right there, all I had to do was embrace "go to definition" to find the answers to like 95% of my questions. I spend the majority of my time debugging distributed issues across microservices these days; I miss the simplicity of my monolith years :(
Not so much a ball of yarn but more like the cabling coming out of a network cabinet. It can be bad and a big mess if you let it, but most professionals can organize things in such a way that maintenance isn’t that hard.
Well, the point I was making was that the same can easily be true of microservice architectures. If you have people that don't know what they're doing architecting your microservices, you'll have a difficult time maintaining them without clear and strict service contracts.
It's not clear to me that we're ever comparing apples to apples in these discussions. It would seem to me that everyone arguing left a job where they were doing X architecture the wrong way and now they only advocate for Y architecture online and in future shops.
Both have boxes of stuff and the stuff talks to other stuff.
Logistically it's a bit easier to scale the one where the boxes are married closer to the hardware abstraction (microservices on instances) versus the one where boxes are married to the software abstraction (threads with memory), for the same reason one (monolith) is a lot faster (dev/process/latency) (at small scales) than the other (microservice).
You can scale both; really it's mostly about fights over the tooling, the process, and where your sec-ops decided to screw you over the most (did they lock down your environments or did they make it impossible to debug ports/get logs).
Practically, AWS is expensive, and they're bloated. Cluster environments that let you merge 1000 computers into 1 big supercomputer and have 1 million cores/terabytes of RAM come with different technical challenges that not as many people know how to overcome, or expensive hardware bills.
So I'd say if someone tells you it "has to be" one or the other they are blowing smoke. Micro-services were recently the hip-new-thing so it makes sense some really really bad nonsense has been written in them so people are rediscovering monoliths (and realizing the microservice people were snake-oil salesmen). In 10 years we'll realize again that some monoliths are really badly written and some people without a clue will re-write them as microservices...
I've noticed that a lot of industry practices demonstrate their value in unexpected ways. Code tests, for instance, train you to think of every piece of code you write as having at minimum two integrations, and that makes developers who write unit tests better at separating concerns. Even if they were to stop writing tests altogether, they would still go on to write better code.
Microservices are a bit like that. They make it extremely difficult to insert cross-cutting concerns into a code base. Conditioning yourself to think of how to work within these boundaries means you are going to write monolithic applications that are far easier to understand and maintain.
If an organization can't figure out how to factor out clearly-defined contracts within a single codebase and maintain that over time, adding a network hop and multiple codebases into that will not make it any easier.
> you just need to know each juncture where that strand weaves into other strands.
No I don't, that's what computers are for. It's why static analysis is good. Instead of knowing what calls what, you say, "yo, static analysis tool, what calls this?".
The comment you quoted is talking about the non-monolithic situations where static analysis tools cannot help you, e.g. when the callers are external and difficult to trace.
I don't think so? The full quote that I pulled out a piece of was:
> The monolith crowd claims it's easier to change/rewrite code in a monolith because it's all one big pile of yarn and if you want to change out one strand of it, you just need to know each juncture where that strand weaves into other strands.
I'm saying, in the monolith situation, with proper static analysis tooling - which even languages like python and ruby have nowadays - you don't "need to know" how all the strands weave into the other strands, you rely on the tooling to know for you.
And in my experience, static analysis tooling for navigating across service boundaries is, at the very least, far less mature, if not just entirely non-existent.
Absolutely. This is how we operated when we were at the peak of our tech showmanship phase. We had ~12 different services, all .NET 4.x, all built the exact same way.
This sounds like it could work, but then you start to wonder about how you'd get common code into those 12 services. Our answer at the time was NuGet packages, but we operate in a sensitive domain with proprietary code, so public NuGet feeds were a no-go. So, we stood up our own goddamn NuGet server just so we could distribute our own stuff to ourselves in the most complicated way possible.
Even if you are using all the same language and patterns everywhere, it still will not spare you from all of the accidental complexity that otherwise becomes essential if you break the solution into arbitrary piles.
It's the difference between drag-dropping a reference between "projects" or whatnot in your monolith project, or coming up with a publish pipeline after which your merged code gets into the repo so that other people can use it, if you have three projects in three checkouts that all have to be coordinated and merged in order so that you don't accidentally break your environment and worry about security and and and....
One is much simpler than the other. Artifact management is surprisingly complex; it only looks like it works great while you aren't managing many versions and are acting more like a monolith, just spread across repos/deploy points.
> It's the difference between drag-dropping a reference between "projects" or whatnot in your monolith project, or coming up with a publish pipeline after which your merged code gets into the repo so that other people can use it
You already have that pipeline for release/deployment though, don't you? (And it's already hooked up to your SSO or what have you).
> if you have three projects in three checkouts that all have to be coordinated and merged in order so that you don't accidentally break your environment and worry about security and and and....
It's the same as any other library dependency though, which is a completely normal thing to deal with. The cost of separating your pieces enough that you can use different versions of a library in different services is that you can use different versions of a library in different services.
I'm skeptical about microservices as a deployment model, but I'm absolutely convinced that code-level modularisation and independent release cycles for things that deploy independently are worthwhile, at least if your language ecosystem has decent dependency management.
The only way package feed complexity works -- and really microservices in general -- is to be absolutely fastidious about backwards compatibility of your packages and as open as possible with transitive dependency versions.
Nuget does provide mechanisms for obsoleting packages, so it's reasonable to enforce that new packages should allow for a few versions worth of backwards compatibility before deprecation and finally pulling the plug.
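For what it's worth, a minimal sketch of that kind of compatibility window at the API level (assuming C#/.NET; the Publisher class and its methods are made up for illustration). Package-level deprecation on the feed itself is a separate, complementary mechanism:

    using System;

    public class Publisher
    {
        // New surface: callers should migrate to this overload.
        public void Publish(string topic, byte[] payload, TimeSpan timeout)
        {
            // ...actual publish logic...
        }

        // Old surface kept for a few releases so consumers can upgrade on
        // their own schedule; the attribute surfaces a compile-time warning.
        [Obsolete("Use Publish(topic, payload, timeout); this overload goes away in the next major version.")]
        public void Publish(string topic, byte[] payload)
            => Publish(topic, payload, TimeSpan.FromSeconds(30));
    }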
Ideally you would use the same language if it had distributed support. Use Erlang or Elixir for example and you have everything you need for IPC out of the box. Might take a little bit more effort if you're on Kubernetes.
One of my problems with microservices isn't really the services themselves, but the insane amount of tooling that creeps in: gRPC, Kafka, custom JSON APIs, protobufs, etc. etc., and a lot of it exists in some form to communicate between services.
If you do so much IPC that you have to design your language around it you're probably doing it wrong. I don't think that moving to a monolith would really cut down that much on other technologies. Maybe you can do without Kafka, but certainly you will need some other kind of persistent message queue. You will still need API docs and E2E integration tests.
> If you do so much IPC that you have to design your language around it you're probably doing it wrong...
I'm gonna have to disagree with this. Often when languages add features to their core, it allows for a very different (and often better) approach to writing code. Think Lisp with first class functions, or Rust with the borrow checker. Why should concurrency be any different? This feels like a Blub moment and I would highly recommend you give erlang or elixir a try, just to see what it is about.
If you need that much IPC then you aren't doing microservices correctly. Microservices are supposed to be independent, self-contained domains wherever possible. Purely technical boundaries like dedicated caching services that you see in very large companies like Google are more of an exception and should only be used when the non-functional requirements absolutely dictate it. A language that is designed around IPC wants to distribute dynamic parts of one application over multiple interchangeable compute resources (like in an HPC environment). This is a different thing altogether from microservices, which are self-contained applications working independently to each provide a fixed piece of the whole.
The theory is that microservices are supposed to be independent and self-contained, but such a wonderful implementation of DDD is a theoretical fantasy that rarely plays out in practice. It's not just a technical difficulty but an organisational problem where communication between teams also throws a spanner in the works.
If your typical microservice setup is simply distributing your call stack over a network (and oftentimes that's all it is), then you might as well use a language designed to operate in such a manner and reap the benefits of it. That kind of microservice architecture only really exists as a function of the organisation's structure such that teams can work more autonomously.
I can mostly run these on my machine, no need to stand up a cluster to get it running (bonus points if you virt multiple machines on one and still don't need the cluster for complex distributed scenarios).
"you will need some other kind of persistent message queue"
var queue = new Queue();
// then sometime later...
queue.Save();
// or
queue.EmplaceAndSave(); ... queue.Pop();
Trivial and didn't need a server to set it up.
Lots of tech stack stuff simply disappears without server boundaries and the like to get in the way. There is other tech you have to deal with, but at the small scales it mostly doesn't apply. Which, you as a dev are usually only working at small, testing scales, so you don't usually need to support it.
> var queue = new Queue(); // then sometime later... queue.Save(); // or queue.EmplaceAndSave(); ... queue.Pop();
Yeah, no... you'll lose data for sure, you'll have race conditions or no cooperative queuing between multiple instances of the application. This is exactly the kind of half-assery that people resort to when they say monoliths are so much less complex than microservices. A good monolith still requires all the same hard decisions about modularity, high-availability etc.
Default queue implementations come with cooperative queuing out of the box. I suppose you could use some hand-rolled queue with no internal locking, but this is disingenuous.
> you'll have race conditions or no cooperative queuing between multiple instances of the application.
You're thinking micro-service again here. There's no need to have multiple "instances" at all. Any concurrency is handled internally by the application itself with e.g. fibers, threads, etc. A good argument for why monoliths have drawbacks can't be "well, it's not a microservice". :)
> A good monolith still requires all the same hard decisions about modularity, high-availability etc.
Never said it didn't. What it doesn't require is all the tech stack to support that over many instances, because there's one instance. A large majority of µServices is just re-implementing what your runtime gives you for free.
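To make the "runtime gives you this for free" point concrete, here is a rough sketch (not the parent's actual code; assuming C#, a single instance, and a local file as the store, with all names invented):

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Text.Json;

    public sealed class DurableQueue<T>
    {
        private readonly ConcurrentQueue<T> _items = new();
        private readonly string _path;
        private readonly object _saveLock = new();

        public DurableQueue(string path)
        {
            _path = path;
            if (File.Exists(path))                    // reload whatever survived the last run
                foreach (var item in JsonSerializer.Deserialize<T[]>(File.ReadAllText(path)) ?? Array.Empty<T>())
                    _items.Enqueue(item);
        }

        public void EnqueueAndSave(T item) { _items.Enqueue(item); Save(); }

        public bool TryDequeue(out T item) => _items.TryDequeue(out item);

        public void Save()
        {
            lock (_saveLock)                          // writers serialized in-process; no broker needed
                File.WriteAllText(_path, JsonSerializer.Serialize(_items.ToArray()));
        }
    }

It deliberately doesn't coordinate multiple instances, because in this model there is only one.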
Sure, but even a monolith should store data on redundant hardware before launch, because commodity hardware isn’t completely reliable and neither are datacenters.
I was in a meeting to talk about our logging strategy at an old company that was starting micro services and experiencing this problem. In the meeting I half-heartedly suggested we write the main lib in C and write a couple of wrapper libs for the various languages we were using. At the time it felt kinda insane, but in hindsight it probably would have been better than the logging mess we created for ourselves.
We have 2 supported langs across our cloud teams and our core libraries are dual-lang'd where applicable, meaning they're both Ruby gems and PyPI packages (Python 3) in one repo with unified APIs and backed by a shared static config where applicable (capability definitions etc.) Each dual lib is released simultaneously with matching SemVer versions to our various Artifactory instances & S3 buckets (for our lambda "layers"), automatically on every push to mainline by CI/CD.
It works surprisingly well. We're evaluating a 3rd language but won't make that choice lightly (if it happens at all.)
We have 14+ micro services, and it's fairly easy to "rewrite the shit pile" when you actually follow the micro designation. One of our services was originally in perl and we, quite mechanically, rewrote it in ruby in a sprint to align with our other services.
Speaking from personal experience, when monoliths and the teams working on them get big enough, you start having "action at a distance" problems, where seemingly benign changes affect completely unrelated flows, often to catastrophic effect.
An innocuous-looking resource update in the monolith, like a CSS stylesheet change that fixes a bug in a flow your team owns, now breaks 10 other flows owned by teams you've never heard of, because they were relying on the existing structure for Selenium tests or some JS now fails to traverse the DOM because the order of selectors changed, etc.
Microservices are as much a team organizational tool as they are a code one. The idea being those that work on the service "know it", and all it does. They can wrap their head around the whole thing. I think some orgs don't really get this point and _start_ with microservices, completely unnecessarily, for the stage they're at as a company. You always start with a monolith and if you get to the point where everyone is stepping on each other's toes from the lack of enforceable boundaries in the code, you do the obvious and start to create those boundaries.
Microservices aren't the only way to do this of course. Any way of dividing up your service with enforceable contracts will work. Modules get designated with codeowners, assigned to various teams. Resources that were once shared get split up to align with the team structures better. Many frameworks allow multiple "apps" or distinct collections of APIs, so you can still ship your one binary without splitting out the collections into different services. As soon as you have to independently scale one set of APIs but not another, you can start looking at service boundaries again. For the majority, that day will never come.
Speaking from a Release Management point of view, going from a monolith to microservices is often done for the wrong reasons.
The only valid reason for actually doing the change seems to be scaling due to performance bottlenecks. Everything else is just shifting complexity from software development to system maintenance.
Of course, developers will be happy to have that huge "alignment with other teams" burden off their shoulders. But clarity about when and how a feature is implemented, properly tested across microservices, and then activated and hypercared on production will be much harder to reach if the communication between the development teams is not mature enough (which is often the actual reason for breaking up the monolith).
There are many valid reasons (and many wrong reasons). I would say: if you have multiple stakeholders, evolving business needs and many (> 10) developers, there might be a good reason to have independently deployable, testable and releasable units. Having few developers with a well defined context working on multiple microservices is a pain, though.
Regarding "Everything else is just shifting complexity from software development to system maintenance": this sounds reasonable if your software is actively developed. Development is expensive. It may very well be that the cost of maintaining a distributed system is lower than the cost of developing a very large monolith with a large team. In the end, it depends.
"There might be a good reason to have independently deployable, testable and releasable units"
Of course this is the bottom line. But everything you define in the sentence can be achieved with a proper pipeline and repository architecture based on a monolith as well. For example teams could use a branch setup where they own their own team branches capable of merging to master and deploying. Each team could then define their own testing strategy and Definition of Done on their "team master".
Having the ability to release independently is actually a social problem, not a technical one. But the symptom of that social misalignment often shows up as a technical problem (dropping release KPIs, etc.)
So changing from a monolith to microservice will most likely only fight the symptom, not the root cause.
First, you offer technical solutions (pipelines, branching...).
The cost of having multiple teams branching and merging the same code base can be significant. It's often not as easy as you make it sound.
In the end it is exactly the same technical complexity; it is just shifting complexity from one point to another.
A proper branching strategy is not a technical solution but a social alignment of people contributing to a common product. And it doesn't matter if they are contributing in different repositories or on different branches. The technical complexity of aligning interfaces is the same in both cases.
"It's often not as easy as you make it sound."
No, it's of course not easy. But splitting teams up into different repositories and aligning their interfaces is not easy either. My point is that they are both the same - just expressed differently.
This is the sole reason we're considering breaking this out into a separate component of our app. It's become too large to maintain effectively. The rest of the app will remain unchanged.
So where's "the costs of monoliths" post? They don't show up here, because everyone is out there cluelessly implementing microservices and only sees those problems. If everyone were out there cluelessly implementing monoliths, we'd see a lot of "monoliths bad" posts.
People don't understand that these systems lead to the same amount of problems. It's like asking an elephant to do a task, or asking 1000 mice. Guess what? Both are going to have problems performing the task - they're just different problems. But you're not going to get away from having to deal with problems.
You can't just pick one or the other and expect your problems to go away. You will have problems either way. You just need to pick one and provide solutions. If 'what kind of architecture' is your biggest hurdle, please let me work there.
The 8 trillion "monolith bad" posts are why we are now inundated with microservices. This is the blowback as people realize the cost/benefit didn't work out for them.
If a monolith is well-factored, what is the difference between it and co-located microservices?
Probably just the interface - function calls become RPC. You accept some overhead for some benefits of treating components individually (patching!)
What is the difference between distributed microservices and co-located microservices?
Deployment is more complex, but you get to intelligently assign processes to more optimal hardware. Number of failure modes increases, but you get to be more fault tolerant.
There's no blanket answer here. If you need the benefits, you pay the costs. I think a lot of these microservice vs. monolith arguments come from poor application of one pattern or the other, or using inadequate tooling to make your life easier, or mostly - if your software system is 10 years old and you haven't been refactoring and re-architecting as you go, it sucks to work on no matter the initial architecture.
Monolith->microservice is not a trivial change no matter how well-factored it is to begin with -- though being poorly architected could certainly make the transition more difficult!
> Probably just the interface - function calls become RPC.
This sounds simple, but once "function calls become RPC" then your client app also needs to handle:
* DNS server unreachable
* DNS server reachable but RPC hostname won't resolve
* RPC host resolves but not reachable
* RPC host reachable but refuses connection
* RPC host accepts connection but rejects client authentication
* RPC host presents untrusted TLS cert
* RPC accepts authentication but this client has exceeded the rate limit
* RPC accepts authentication but says 301 moved permanently
* RPC host accepts request but it will be x seconds before result is ready
* RPC host times out
Even for a well-factored app, handling these execution paths robustly probably means rearchitecting to allow you to asynchronously queue and rate-limit requests, cache results, handle failure with exponential back-off retry, and operate with configurable client credentials, trust store, and resource URLs (so you can honor 301 Moved Permanently), and log all the failures.
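As a rough illustration (assuming HTTP and C#; the names are hypothetical and this isn't any particular framework), here is the minimum sort of wrapper that tends to grow around each of those former function calls:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class RpcClient
    {
        static readonly HttpClient Http = new() { Timeout = TimeSpan.FromSeconds(5) };

        // Every remote call now gets a timeout, retries with exponential
        // back-off, and reports failure instead of hanging the caller.
        public static async Task<string?> GetWithRetryAsync(Uri url, int maxAttempts = 4)
        {
            for (var attempt = 1; attempt <= maxAttempts; attempt++)
            {
                try
                {
                    var response = await Http.GetAsync(url);
                    if (response.IsSuccessStatusCode)
                        return await response.Content.ReadAsStringAsync();
                    if ((int)response.StatusCode is >= 400 and < 500 and not 429)
                        return null;                 // client error: retrying won't help
                }
                catch (Exception ex) when (ex is HttpRequestException or TaskCanceledException)
                {
                    // DNS failures, refused connections, TLS errors and timeouts all land here
                }
                await Task.Delay(TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt)));
            }
            return null;                             // the caller still has to decide what "unavailable" means
        }
    }

And that still punts on authentication, client-side rate limiting, caching, and redirects.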
You'll also need additional RPC functions and parameters to provide data that had previously been in context for local function calls.
Then the monolith's UI may now need to communicate network delays and failures to the user that were impossible before network segmentation could split the app itself.
Refactoring into microservices will require significant effort even for the most well built monolith.
This is why I say microservices are "closer to the metal", i.e. they depend more on the physical characteristics of their environment than non-microservices.
A function call in a monolith can:
* segfault
* be called with the wrong number of parameters
* call the wrong function
* go through but never return
* substitute itself with another function
All of which are very similar to the RPC situation. However we practically never see them, because of guarantees from the language runtime and OS (memory safety, process isolation, etc.), plus there are standardized ways of handling these problems when they do occur, notably try/catch patterns.
These issues can* be abstracted as well, but the advantage of scaling (being close to the metal) is the disadvantage as well (being very close to the hardware abstractions that let you scale).
E.g., there is no difference between running a microservice on a scaling computer that guarantees memory access across a cluster with interrupts, etc. (some millions of CPUs and terabytes of memory), and running it on a bunch of instances, except the hardware. The former is exotic and abstracts the hardware; the latter does not, and so all these "low level" errors surface with great frequency.
That is all true, but most languages have semi-sane libraries with semi-sane defaults that handle most of that. Sure, you need some config and tuning, but it's not completely uncharted waters.
My first question was re: a monolith on one host vs. many services on one host.
And yes, when going off-host, you'll have these issues. One should not employ network hops unnecessarily. Engineering is hard. Doesn't make it not worth doing.
When discussing microservices, the proponents almost always forget the key aspect of this concept, the micro part. The part that makes this stand out (everyone already heard about services, there's no convincing necessary here, if you feel like you need a service, you just make one, and don't fret about it).
So, whenever someone advocates for this concept, they "forget" to factor in the fragmentation caused by requiring that services be very small. To make this more concrete: if you have a monolith + microservice, you don't have microservices, because the latter implies everything is split into tiny services, no monoliths.
Most of the arguments in favor of microservices fall apart as soon as you realize that it has to be micro. And once you back out of the "micro" requirement, you realize that nothing new or nothing deep is being offered.
From the developer POV, the difference is in the interface. And while we have all kinds of tools to help us keep a monolith in sync and correct, the few tools we have to help with IPC cannot do static evaluation and are absolutely not interactive.
From the ops POV, "deploy is more complex" is a large understatement. Each package is one thing that must be managed, with its own specific quirks and issues.
Absolutely true, but also: Usually, your business transactions happen in a business context, which happens to be in a microservice.
It can be a sign of bad design if you happen to have a lot of those transactional problems.
You will have distributed transactions with a distributed microservice setup, but most transactions will still be contained within a single microservice (and thus be atomic and not distributed).
Similarly with frontend devs thinking a "modern web-app" can only be built with frontend frameworks/libs like React/Vue/Svelte, etc, lately I feel there's an idea floating around that "monolith" equals running that scary big black ball of tar as a single instance and therefore "it doesn't scale", which is insane.
Another observation is that the overall amount of code is much bigger, and most of these services are ~20% business/domain code and ~80% dealing with sending and receiving messages from other processes over the network. You can hide it all you want, but at the end of the day it's there and you'll have to deal with the network in one way or another.
Just like the frontend madness, this microservice cult will only end once the economy goes to crap and there's no money to support all these Babel Towers of Doom.
PS: microservices have a place, which is inside a select few of companies that get something out of it more than the cost they pay.
I think the thing people normally miss about microservices is that the goal of microservices is usually to solve organization and people problems and not technological ones. There's some tech benefits like allowing services to scale independently, but the biggest benefit is clear ownership boundaries and preventing code from becoming overly coupled where it doesn't need to be.
If you're a small team working on a single product, you probably don't need microservices, since you likely don't have the type of organizational problems that microservices solve. Microservices are likely premature optimization, and you're paying the price to solve problems you don't yet have.
There are situations where microservices genuinely add net value. In discussions like this one, people make valid points for microservices and for monoliths. Often, words like "large" and "many" are used without providing a sense of scale.
Here's a heuristic. It's not a hard rule, and there are likely good counterexamples, but it does sketch a boundary for the for/against equation for microservices. I would love to hear feedback about whether "100" is the right number or whether a different heuristic would be more accurate.
Engineering departments with fewer than 100 engineers should favor monoliths and seriously question efforts to embrace microservices. Above that, there’s an increasing chance the microservices could provide value, but teams should stick to a monolith as long as possible.
That's why I included "/libs", for people like you :)
To be clear, for me they are libraries, but realistically people use them as frameworks, as in "standard" ways to think, "frame" and implement something.
But because I'm not dying on any hill, especially front-end ones, I'll take a right, then a left, and go on my way.
While I agree with you that a lot of websites overuse JavaScript and frameworks, can you tell me what else I'm supposed to use if I'm going to build a desktop-class web app without it becoming a huge mess, or without having to end up reinventing the same concepts that already exist in these frameworks?
The meaning behind these is pretty blurry; what's considered a website vs. an app is a spectrum. I consider a web app something that has a similar UI experience to a desktop app, as the word "application" was derived from the desktop.
The article seems to take for granted that your development org is completely broken and out of control. They can't decide what to work on during sprints, they furtively introduce third-party libraries and unknown languages, they silently ship incompatible changes to prod, etc. I guess microservices are easier if your developers aren't bozos.
Unfortunately there are very often bozos on your team, or complete teams of bozos working on the same project as you. I'm sure microservices are easier if you work in a development team of smart, competent, intelligent developers. However, I'm sure everything would be easier then!
Every developer is a bozo for their first few months in a new job, simply because it takes time to absorb all of the information needed to understand how the existing systems work.
In my experience the bozos were absolutely not the newbies. Maybe you work in a job that is dedicated to engineering only, but what happens is that often in a company of non-engineers, some kinda reorg happens where a person ends up on your team who never studied programming in their life, with the assumption that the person is a go-getter who will be able to pick all this stuff up. The experiment is never called a failure when they consistently fail to learn. 2 years later the same underwhelming "engineer" will still be there getting other people to do their work while desperately trying to introduce bugs into production.
Acculturating new developers is one of the main tasks of an organization. I don't think it's very difficult to communicate that some company uses language[s] X[, Y and Z] only.
That depends on the culture of each specific organization. Are there top-down engineering decisions? Is there a push for more team autonomy?
My experience is that many organizations have something of a pendulum swinging between those two positions, so the current state of that balance may change over time.
Also: many new developers, when they hear "microservices", will jump straight to "that means I can use any language I want, right?"
I recently had some discussions and did some research on this topic and I feel like there is a lot people don't talk about in these articles.
Here are some more considerations in the microservices vs. monolith tradeoff. It's also important to consider these two things as a spectrum and not a binary decision.
1. Isolation. Failure in one service doesn't fail the whole system. Smaller services have better isolation.
2. Capacity management. It's easier to estimate the resource usage of a smaller service because it has fewer responsibilities, which can result in efficiency gains. An extension of this is that you can also give optimized resources to specific services: a prediction service can use GPUs while a web server can be CPU-only. A monolith may need compute with both, which could result in less optimized resource use.
3. DevOps overhead. In general, monolithic services have less management overhead because you only need to manage/deploy one or a few services rather than many.
4. Authorization/permissions. Smaller services can be given more narrowly scoped permissions.
5. Locality. A monolith can share memory and therefore has better data locality. Small services use the network and have higher overhead.
6. Ownership. Smaller services can have more granular ownership, and it's easier to transfer ownership.
7. Iteration. Smaller services can move independently of one another and can release at separate cadences.
I work on low code cloud ETL tools. We provide the flexibility for the customer to do stupid things. This means we have extremely high variance in resource utilization.
An on-demand button press can start a process that runs for multiple days, and this is expected. A job can do 100k API requests or read/transform/write millions of records from a database; this is also expected. Out-of-memory errors happen often and are expected. It's not our bad code, it's the customer's bad code.
Since jobs are run as microservices on isolated machines, this is all fine. A customer (or multiple at once) can set up something badly, run out of resources, and fail or go really slow, and nobody is affected but them.
It's not automatic, but it has the potential for more isolation by definition.
If your service has a memory leak or crashes, it only takes down that service. It is still up to your system to handle such a failure gracefully. If the service is a critical dependency, then your system fails. But if it is not, then your system can still partially function.
If your monolith has a memory leak or crashes, it takes down the whole monolith.
But not all sequences. It depends on your dependencies. Some services are critical for some processes. In the monolith design it's a critical dependency for all processes.
Every company I have advised jumped on the microservice bandwagon some time ago...
Here is what I tell them:
1. Microservices are a great tool... IF you have a genuine need for them
2. Decoupling in and of itself via services is not a goal
3. Developers who are bad at writing proper modular code in a monolithic setting will not magically write better code in a microservice environment. Rather, it will get even worse, since APIs (be it gRPC, REST or whatever) are even harder to design
4. Most developers have NO clue about consistency or how to achieve certain guarantees in a microservice setting (fun fact: my first question to developers in that area is "define your notion of consistency", before we get to the fun stuff like Raft or Paxos)
5. You don't have the problems where microservices shine (e.g. banks with a few thousand RPS)
6. Your communication overhead will dramatically increase
7. An application that does not genuinely need microservices will be much cheaper as a monolith with proper code separation
Right now we have a generation of developers and managers who don't know any better and just do what everybody else seems to be doing: microservices and Scrum... and then wonder why their costs explode.
People argue for them from two drastically different points of view:
* Microservices become necessary at some point because they allow you to independently scale different parts of the application depending on load.
* Microservices become necessary at some point because they allow you to create hard team boundaries to enforce code boundaries.
Personal opinion: micro services are always thrown out there as a solution to the second problem because it is easier to split a service out of a monolith than it is to put the genie of bad code boundaries back in the box.
An application goes through a few stages of growth:
1) When it starts, good boundaries are really hard to define because no-one knows what the application looks like. So whatever boundaries are defined are constantly violated because they were bad abstraction layers in the first place.
2) After a while, when the institutional knowledge is available to figure out what the boundaries are, it would require significant rewrites/refactoring to enforce them.
3) Since a rewrite/major refactor is necessary either way, everyone pushes to go to micro services because they are a good resume builder for leadership, and "we might need the ability to scale", or people think it will be easier ("we can splinter off this service!", ignoring the fact that they can splinter it off within the monolith without having to deal with networks and REST overhead).
Unfortunately, this means that everyone has this idea that micro services are necessary for code boundaries because so many teams with good code boundaries are using micro services.
Granular performance and team boundaries are both valid points. But I haven't yet seen (around me) monolith applications so complex that they needed more teams working on them. I've seen instead applications developed by two people where some higher-ups requested splitting them into microservices just because (no, scaling wasn't needed).
Last time I did a survey of my peers, companies that were all in on microservices in the cloud spent about 25% of their engineering time on the support systems for microservices. That number is probably a bit lower now since there are some pretty good tools to handle a lot of the basics.
And the author sums up the advice I've been giving for a long time perfectly:
> Splitting an application into services adds a lot of complexity to the overall system. Because of that, it’s generally best to start with a monolith and split it up only when there is a good reason to do so.
And usually that good reason is that you need to optimize a particular part of the system (which often manifests as using another language) or your organization has grown such that the overhead of microservices is cheaper than the management overhead.
But some of the things in this article are not quite right.
> Nothing forbids the use of different languages, libraries, and datastores for each microservice - but doing so transforms the application into an unmaintainable mess.
You build a sidecar in a specific language, and any library you produce for others to consume is for that language/sidecar. Then you can write your service in any language you want. Chances are it will be one of a few languages anyway that have a preferred onramp (as mentioned in the article), and if it's not, then there is probably a good reason another language was chosen, assuming you have strong technical leadership.
> Unlike with a monolith, it’s much more expensive to staff each team responsible for a service with its own operations team. As a result, the team that develops a service is typically also on-call for it. This creates friction between development work and operational toll as the team needs to decide what to prioritize during each sprint.
A well run platform removes most of the ops responsibility from the team. The only thing they really have to worry about is if they have built their system in a way to handle failures well (chaos engineering to the rescue!) and if they have any runaway algorithms. Otherwise it's a good thing that they take on that ops load, because it helps them prioritize fixing things that break a lot.
>A well run platform removes most of the ops responsibility from the team
It might remove most of the responsibility from the original team but it transfers it somewhere else. Making sure the new "platform team" is well staffed is something I don't see mentioned/discussed very often in the context of microservices
Modern microservices are ridiculous, with few exceptions. IIRC the original idea of microservices was that each team in a large company should provide a documented API for the rest of the company to use, instead of just a web page or a manual process. This is in contrast to there being only one development team that decides what gets worked on. It allows individual employees to automate bigger processes that include steps outside of their own department. Somehow that changed into people spinning up containers for things that could be a function.
In the robotics world it's pretty common to have something that looks a lot like microservices as an organizing principle. ROS[1] would probably be the robotics framework that everybody cuts their teeth on, and in it you have a number of processes communicating over a pub/sub network. Some of the reasons for organizing this way would be familiar to someone doing websites, but you also have factors like vendors for sensors you need providing libraries that will occasionally segfault, and needing to be able to recover from that.
People really underestimate the eventual consistency thing. It's 100% the case your app will behave weirdly while it's processing some sort of change event. The options for dealing with this are real awkward, because the options for implementing transactions on top of microservices are real awkward. Product will absolutely hate this, and there won't really be anything you can do about it. Also fun fact: this scales in reverse, as in the bigger and better your company gets, the worse this problem becomes, because more services = more brains = more latency.
Relatedly, you can't really do joins. Like, let's say you have an organizations service and a people service. You might want to know what people are in which organizations, and paginate results. Welcome to a couple of unpalatable options:
- Make some real slow API calls to both services (also implement all the filters and ordering, etc).
- Pull all the data you need into this new 3rd service, and justify it by saying you're robust in the face of your dependency services failing, but also be aware you're adding yet another layer of eventual consistency here too.
This is the simplest this problem can be. Imagine needing data from 3 or 13 services. Imagine needing to synchronize UUIDs from one service to 20 others. Imagine needing to also delete a user/organization from the 30 other services that have "cached" it. Etc. etc.
I used to think microservices were an adaptation to Conway's law, but now I really think it's a response to the old days when we didn't have databases that scaled horizontally. But we do now (Cockroach, Big Query, etc) so we really should just move on.
The article touches on it a bit, but in my experience microservices multiply operational problems (especially at startups where you don't have big, dedicated infrastructure teams). All of a sudden you have 5-10x things getting built in CI and deployed. You need some way to debug issues so usually distributed tracing comes up. Now that you have 10x as many of everything, you obviously want to try to centralize things like networking, authn/z, service discovery so you introduce some platforms to help with that... but someone has to run and maintain all that. That's fine if you want platform/infrastructure teams to maintain these but many startups only have a very small handful of people (10% or less of eng) handling CI/infra/release/performance/observability/networking/traffic mgmt with some title like "SRE" or "devops"
A typical startup with 20k DAU: "We need scale, microservices, k8s, CDNs, etc!".
Stackoverflow: "We run a single .NET-based multi-tenant web app running across just nine web servers, at 5% to 10% of capacity" [1].
Hacker News: "HN still runs on one core, at least the part that serves logged-in requests, and yes this will all get better someday...it kills me that this isn't done yet but one day you will all see." [2]. HN had ~12M MAU by the end of 2022.
In all fairness, HN and SO are fairly straightforward CRUD apps (esp. HN) with a limited number of 3rd party integrations.
On the other end, you have companies like Zapier that integrate with 6k+ external services. With something like that in a monolith you'd be constantly redeploying to fix changes and you'd have a pretty wild dependency tree if you wanted to use official SDKs
No. A number of startups don't either. I hazard to say that even giants like Tinder or Uber likely can have a 30 min outage and lose some revenue and goodwill, but not be hit by some exorbitant liabilities.
Also, microservices add both resilience (by running many copies) and fragility (many loosely coupled moving parts). Which effect prevails, depends on many factors.
My current job hasn't been a great experience with microservices. It's an industry where I've worked with a monolith that did a lot more, but having everything split between like 17 different services makes managing anything not so fun. The other big blocker is a small team, and only one of us met the original staff who designed this.
Also, call stack can go pretty deep. You can't go deep with microservices. Everything has to be first hop or you risk adding latency.
Also, transactional functionality becomes a lot more challenging to implement across services and databases, because it's blasphemous to share a database among microservices.
Surely you can deliver 15 metric tons of wood with either of them. You will just need a lot of motorcycles to match capacity.
Somehow a big part of industry became convinced that of course that approach is better, since motorcycles are cheaper*, easier to scale*, and solve some managerial problems*.
* They don't, IMHO.
>its components become increasingly coupled over time
>The codebase becomes complex enough that nobody fully understands every part of it
>if a change introduces a bug - like a memory leak - the entire service can potentially be affected by it.
I believe the only real solution to those challenges is Competency, or Programmer Professionalism. Uncle Bob had some great points on this topic.
A bit over a decade ago, I was sold on the concept of microservices. After implementing, maintaining, and integrating many microservices; I have realized how incredibly exhausting it is. Sure microservices have their place, but understanding the often higher costs associated with microservices should be considered when developing a new service. Focus on the costs that are less tangible: siloed knowledge, integration costs, devops costs, coordinating releases, cross-team communication, testing, contract versioning, shared tooling. I could go on, but that's just a sample of some of the less obvious costs.
I feel that the collective opinion of microservices has shifted towards trepidation, as many have experienced the pains associated with them.
As a contractor, having worked with a lot of teams, I get the feeling that the kubernetes and microservices hype a few years ago resulted in a lot of unnecessary refactors. It can make sense, of course, but often doesn't, and running a monolith on k8s is perfectly fine.
The article mentions Development Experience, but doesn't mention what I think is an overlooked huge cost.
A bad development experience results in an unhappy and/or frustrated developer, and an unhappy and/or frustrated developer usually performs considerably worse than their happier self would.
I'm as much of a "build a monolith until you can't" person as any, but one motivation for using microservices that I haven't seen mentioned here is differing resource/infra requirements + usage patterns.
Throw the request/response-oriented API with bursty traffic on something serverless, run the big async background tasks on beefy VMs (maybe with GPUs!) and scale those down when you're done. Run the payments infra on something not internet-facing, etc.
Deploying all those use cases as one binary/service means you've dramatically over-provisioned/underutilized some resources, and your attack surface is larger than it should be.
Anytime these discussions come up I always wonder if there are any great examples in the form of a deep technical analysis of microservice architecture?
I've built things that I've called microservices, but generally have failed to do a complete job or always left with something that feels more like a monolith with a few bits broken out to match scaling patterns.
I know there are talks and papers by Netflix, for example, but if anybody knows of any smaller-scale, easier-to-grok talks or papers that go into common pitfalls and solve them for real (vs. giving a handwavy solution that sounds good but isn't concrete), I'd love to check it out.
Most issues I have seen around microservices circle around bad design decisions or poor implementations. Both are also major issues for monoliths, but since those issues do not surface early on, it is easier to start developing a monolith and take on the technical debt to be dealt with later on.
Microservices architecture takes time and effort, but pays huge dividends in the long run.
> As the number of feature teams contributing to the same codebase increases, its components become increasingly coupled over time.
Nope. Not automatically true. Unless your development team is incompetent. And if your development team is incompetent, switching to micro-services will be even worse.
30+ years of experience here working on very large scale software composed of libraries from many teams. I never ever had the kind of monolith problems described in the article.
What I have seen though is a 30+ years old micro-services system that generates more WTF comments per day than most software gets per month. It is literally the worst written software I have ever seen or heard about. 150+ micro-services all failing in new random ways every single day. Solving a problem that monoliths I have worked on solved way better.
Even if one service needs to be developed separately, it can still live in the monolith. For dev, it's easier to test. For production, deploy the same monolith but have it handle only a single service, depending on how it's deployed. You get all the benefits of a monolith plus a little of the benefit of separate services.
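A toy sketch of how that can be wired (hedged: invented module names, C#, no particular framework): the same binary registers every module, and the deployment decides which ones actually run.

    using System;
    using System.Linq;

    interface IModule { string Name { get; } void Start(); }

    class Billing : IModule { public string Name => "billing"; public void Start() => Console.WriteLine("billing up"); }
    class Search  : IModule { public string Name => "search";  public void Start() => Console.WriteLine("search up"); }

    class Program
    {
        static void Main()
        {
            IModule[] all = { new Billing(), new Search() };

            // MONOLITH_ROLE unset: run everything (classic monolith).
            // MONOLITH_ROLE=billing: same binary, deployed to handle one service.
            var role = Environment.GetEnvironmentVariable("MONOLITH_ROLE");
            foreach (var m in all.Where(m => role is null || m.Name == role))
                m.Start();
        }
    }

Dev runs everything locally; production can point a box at just one module without a separate codebase.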
Microservices are okay, but "kick up" a lot of issues that can be slow/awkward to solve. Testing being the big one. If the business decides it's worth the exponential complexity then that's fine.
Goldilocks went inside the Git Hut. First she inspected the git repo of the great, huge bear, and that was 2TB, which was far too large and monolithic for her. And then she tasted the 281 repos of the middle bear, and they were far too small and numerous for her. And then she went to the 10 repos of the little, small wee bear, and checked out those repos. And they were neither too large nor too small and numerous, but just right; each had its own reason to be, and she liked it so well that she sat down and cloned it all.
We started with a microservices architecture in a greenfield project. In retrospect we really should have started with a monolith. Every 2-3 days we have to deal with breaking changes to API schemas. Since we’re still pre-production it doesn’t make sense to have a dozen major versions up side-by-side, so we’re just sucking up the pain for now. It’s definitely a headache though.
We also are running on AWS API Gateway + lambda so the availability and scalability are the same regardless of monolith or not…
Having an API gateway sounds nice, but that’s not how I’ve seen it. Usually the FE has to query and synchronize multiple API endpoints (eg auth, product, billing).
In this case microservices really just means "let the front-end handle the complexity of synchronizing multiple API endpoints."
I still don’t fully understand what makes something a monolith.
For example, I have a big app which is serving 90% of the API traffic. I also have three separate services, one for a camera, one to run ML inference, and one to drive a laser welding process. They are split up because they all need specific hardware and preferably a separate process (one per gpu for example).
Is this a monolithic app or a microservice architecture?
I would call that a micro-service architecture using a mono-repo.
I think a lot of people here are conflating mono-repo/poly-repo with a mono-deployment. You can easily add in extra entrypoint executables to a single mono-repo. That allows initiating parts of the system on different machines, allowing scaling for skewed API rates across your different request handlers.
Similarly, you can create a monolith application building from a poly-repo set of dependencies. This can be easier depending on your version control system, as I find git starts to perform really poorly when you enter multi-million SLOC territory.
At my job we have a custom build system for poly-repos that analyzes dependencies and rebuilds higher level leaf packages for lower level changes. Failing a dependency rebuild gets your version rejected, keeping the overall set of packages green.
I tend to call these things “distributed monoliths” assuming the camera, ML and laser driver are integral to the functionality of the system.
There is no hard rule that I know of but my heuristic is something like “can I bring instances up and down randomly without affecting operations (too much)”. If so, I’d call that a microservice(ish) architecture.
The waters have been muddied though. Microservices were IMO once associated with Netflix-like scalability, bringing up and down bunches of "instances" without trouble. But nowadays what used to be good old service-oriented architecture (SOA) tends to also be called microservices.
SOA can also be about scalability, but it tended to focus more on how to partition the system on (business) boundaries. I guess like you did.
There should be a distinction between doing a migration of a "monolithic" system to "microservices" vs adding functionality to a monolithic system by implementing a "microservice" that will be consumed by the monolithic system.
In some cases, microservices are helpful for separation of concerns and encapsulation.
But teams often get the idea that their monolithic system should be rewritten using "microservice" patterns. It's important to question whether the monolithic system is using the correct abstractions and whether splitting it into different services is the only way to approach the deficiencies of the system.
In most systems there are bits that can (and probably should) be quite decoupled from the rest of the system. Understanding this is a more fundamental aspect of system design and does not apply only to monolithic or microservice approaches.
I feel like this article conflates monoliths with monorepos. You can have a more flexible (though not as "flexible" as microservices, perhaps for the better) engineering by just following the "libraries" pattern mentioned in the article and splitting each of the libraries into their own repos, and then having version bumps of these libraries as part of the release process for the main application server.
Doing it this way gets you codebase simplicity, release schedule predictability, and makes boundaries a bit more sane. This should get you a lot farther than the strawmanned "one giant repo with a spaghetti mess of code" model that the author posits.
I've found that microservices are fine. In fact, they're pretty damn neat!
But what isn't fine are nanoservices, where a service is basically just a single function. Imagine creating a website where you have a RegisterService, LoginService, ForgotPasswordService, ChangePasswordService, Add2FAService, etc.
While I haven't seen it THAT bad, I've seen things pretty close to it. It becomes a nightmare because the infra becomes far more complex than it needs to be, and there ends up being so much repeated boilerplate.
I tried to use an eBPF sampling profiler to produce a flame graph/marker chart of our C/C++ desktop application. I struggled to get it going; it seems like the existing tools are for kernel-level profiling and more complicated stuff.
Can anyone recommend a simple technique to produce a quality profiler report for a desktop application? Honestly the Chrome/Firefox profiler is so great that I do wasm builds and use those. Ideally I'd like a native profiler that can output .json that Chrome/FF can open in its profile analyzer.
> The term micro can be misleading, though - there doesn’t have to be anything micro about the services. In fact, I would argue that if a service doesn’t do much, it just creates more operational toll than benefits. A more appropriate name for this architecture is service-oriented architecture, but unfortunately, that name comes with some old baggage as well.
What baggage is that?
Because in my experience 90% of the argument revolves around the word "micro."
If "micro" is indeed irrelevant to "microservices," let's name things better, yes?
I look forward to 2030 when microservices are all the rage. Anyone who avoids the temptation to tear down and rebuild their entire org as a monolith will be way ahead of the curve.
I’ve always thought the monolith vs. micro service debate to miss the point.
Taking an RPC call has costs. But sometimes you need to. Maybe the computation doesn’t fit on any of your SKUs, maybe it is best done by a combination of different SKUs, maybe you need to be able to retry if an instance goes down. There are countless reasons that justify taking the overhead of an RPC.
So, do you have any of those reasons? If yes, take the RPC overhead in both compute and version management and stuff.
Looking at the “costs” section and considering the recent developments, it seems better tooling will increasingly be helpful in running a software company. See for example Nix Flakes. You can now have instant and performant declaratively defined identical environments in production and development, and across teams.
IMHO widespread use of such technologies (as opposed to heavyweight ones like Docker) could relieve one of the biggest costs of microservices: They are expensive to run.
I think most organizations should take a stab at scaling out with in-process services first - i.e., instead of an RPC it's a function call, instead of a process it's a module, etc.
In a well-factored app, stepping on one another's toes should be the exception, not the rule.
Unfortunately, I think modern web frameworks don't do a good job of enabling this. They often encourage "God object" patterns and having code/data very tightly coupled.
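A minimal sketch of that in-process boundary (invented names, C#): the caller depends on an interface, and whether the implementation is an in-process module or, later on, an RPC client is just a wiring decision.

    public interface IPricingService
    {
        decimal QuoteFor(string sku, int quantity);
    }

    // Today: an in-process module owned by one team.
    public sealed class InProcessPricing : IPricingService
    {
        public decimal QuoteFor(string sku, int quantity) => 9.99m * quantity; // placeholder lookup
    }

    // Callers never know or care which side of a network the implementation lives on.
    public sealed class CheckoutFlow
    {
        private readonly IPricingService _pricing;
        public CheckoutFlow(IPricingService pricing) => _pricing = pricing;

        public decimal Total(string sku, int qty) => _pricing.QuoteFor(sku, qty);
    }

If the pricing module ever genuinely needs its own process, an HTTP/gRPC-backed IPricingService gets swapped in at composition time and the call sites don't change.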
> Once the monolith is well matured and growing pains start to rise, then you can start to peel off one microservice at a time from it.
Curious what is considered a growing pain of the monolith vs tech debt that hasn't been tackled. If the issue is the monolith can't scale in performance then a services oriented architecture doesn't necessarily give you that automatically unless you know where the bottleneck of your monolith is.
>As each team dictates its own release schedule and has complete control over its codebase, less cross-team communication is required, and therefore decisions can be taken in less time.
In my experience, it's the opposite. When a feature spans several microservices (and most of them do), there's much more communication to coordinate the effort between several isolated teams.
> Getting the boundaries right between the services is challenging - it’s much easier to move them around within a monolith until you find a sweet spot. Once the monolith is well matured and growing pains start to rise, then you can start to peel off one microservice at a time from it.
I couldn't agree more. "Do not optimize your code" applies here.
I think that chunking up your application layer into smaller parts is always a good idea. But when do you say its a microservice? When its completely isolated, with its own database etc. Are many small applications running as seperate binaries/processes/on different ports talking to one database endpoint also microservices?
> Are many small applications running as separate binaries/processes/on different ports, all talking to one database endpoint, also microservices?
One database endpoint as in what? You can use different schemas and have no relations between the tables used by different services, or, at the other extreme, have services that write to the same table (toy sketch below).
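A toy sketch of that spectrum, using SQLite and made-up service names (nothing here comes from the article; it's just to make the coupling visible in code):

```python
import sqlite3

# (a) Database-per-service: each service owns its own file and can only learn
# about the other's data by calling the other service's API.
orders_db = sqlite3.connect("orders.db")
orders_db.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)"
)

billing_db = sqlite3.connect("billing.db")
billing_db.execute(
    "CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, order_id INTEGER)"
)

# (b) One shared endpoint, separate table sets: weaker isolation, but still
# workable as long as nothing crosses the line (no cross-service foreign keys,
# no service writing another service's tables).
shared_db = sqlite3.connect("shared.db")
shared_db.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)"
)
shared_db.execute(
    "CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, order_id INTEGER)"
)

# (c) The moment two services write the same table, they are effectively one
# service with two deploy pipelines, whatever the architecture diagram says.
```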
I have read in a book that the most important criterion is independent deployability. I forget the name of the book.
Some amount of expensive integration or E2E testing is needed no matter what software you have, and with a microservice architecture it's easy to fall into relying on it heavily. A common alternative is contract tests; Google Fowler's article on them (rough sketch below).
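A minimal sketch of the idea, with no contract-testing framework (such as Pact) involved: the "contract" is just a shared description of a hypothetical orders endpoint's response shape, which the provider and the consumer each test against independently.

```python
# Shared contract: field names and types both sides agree on.
ORDER_CONTRACT = {
    "id": int,
    "status": str,
    "total_cents": int,
}

def satisfies_contract(payload: dict, contract: dict) -> bool:
    """Check that every field in the contract exists with the right type."""
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in contract.items()
    )

# Provider-side test: the real handler's output must satisfy the contract.
def get_order_handler(order_id: int) -> dict:  # hypothetical provider code
    return {"id": order_id, "status": "shipped", "total_cents": 1299}

def test_provider_honours_order_contract():
    assert satisfies_contract(get_order_handler(42), ORDER_CONTRACT)

# Consumer-side test: the consumer is developed against a stub that also
# satisfies the contract, so it never needs the real service running.
def test_consumer_parses_contract_stub():
    stub = {"id": 42, "status": "shipped", "total_cents": 1299}
    assert satisfies_contract(stub, ORDER_CONTRACT)
    assert stub["total_cents"] >= 0
```

Neither side needs the other deployed to run its tests; only the contract file has to be kept in sync.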
I liked this well-balanced approach. What I think is necessary is more capability for hard modularisation within a monolith, so that decoupling is not a reason to introduce microservices. Performance should be the main/only reason to do it. It's a shame few languages support this.
In my experience, if a piece of code is stateful and is called from several different services, then it should be its own service. Otherwise it is usually better off as a library.
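A small illustration of that heuristic (all names invented): pure, stateless helpers can safely be vendored into every service as a library, while stateful code only behaves correctly if a single owner holds its state, which is what pushes it toward being a service.

```python
from typing import Dict

# Library material: pure and stateless, so every service can just import it.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

# Service material: this quota counter is only correct if exactly one process
# owns it. Two services each importing it as a library would each hold their
# own (wrong) copy of the state, so it belongs behind one API instead.
class QuotaTracker:
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.used: Dict[str, int] = {}

    def consume(self, customer: str) -> bool:
        used = self.used.get(customer, 0)
        if used >= self.limit:
            return False
        self.used[customer] = used + 1
        return True
```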
Has anyone approached microservices from a good old Brooks-style modularization perspective? That is, when and how to split your app into modules / services / whatever.
Microservices are necessary and the best way to architect something new that is going to be used at scale. In my experience working with a monolithic architecture with 20+ teams at a large tech company, I have found it takes multiple years to convert to microservices. Rebuilding from scratch is generally possible in half as much time and gives you the opportunity to hire good talent, motivate existing employees, and use the latest tech. Thoughts?
If your company has a microservice architecture but doesn't have proper knowledge of how the services should communicate, how they should share code, etc., then it is the worst thing possible.
Can we just build our localized monolith systems in a modular way so they can easily be decomposed into decentralized microservices at any sensible division when the need arises? Must we always have this debate against an idea that is stupid on its face - sending remote requests for every single service?
One point I'd like to contest, though, is the first "pro": that you can use a different language for each service. We tried this approach and it failed fantastically. You're right about the cons; it becomes unmaintainable.
We had 3 microservices that we maintained on our team: one in Java, one in Ruby, and one in Node. We very quickly realized we needed to stick to one language, in order to share code, cut the context switching, fix logging issues, etc.
The communication piece is something that solid monoliths should practice as well (as is touched on in the article). Calling a 3rd-party API without a timeout is not a great idea (to put it lightly), monolith or microservice (rough sketch below).
Thought-provoking nevertheless, thank you for sharing.
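A minimal sketch of what that looks like in Python, using the requests library with an explicit timeout and a bounded retry so one slow upstream can't hang the caller (the URL and retry policy are placeholders, not anything from the article):

```python
import time
import requests

def fetch_with_timeout(url: str, retries: int = 3, timeout_s: float = 2.0) -> dict:
    last_error = None
    for attempt in range(retries):
        try:
            # timeout applies to connect and read; without it, a stalled
            # upstream can block this call indefinitely.
            response = requests.get(url, timeout=timeout_s)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(0.2 * (2 ** attempt))  # simple exponential backoff
    raise RuntimeError(f"upstream call failed after {retries} attempts") from last_error

# Example (placeholder URL):
# rates = fetch_with_timeout("https://api.example.com/rates")
```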