
I recently had some discussions and did some research on this topic, and I feel like there is a lot that people don't talk about in these articles.

Here are some more considerations in the microservice vs. monolith tradeoff. It's also important to treat these two approaches as a spectrum rather than a binary decision.

1. Isolation. Failure in one service doesn't fail the whole system. Smaller services have better isolation.

2. Capacity management. It's easier to estimate the resource usage of a smaller service because it has fewer responsibilities. This can result in efficiency gains. An extension of this is that you can also give optimized resources to specific services: a prediction service can use a GPU while a web server runs CPU-only. A monolith may need compute with both, which can result in less efficient resource use (see the sketch after this list).

3. DevOps overhead. In general, monoliths have less management overhead because you only need to manage/deploy one or a few services instead of many.

4. Authorization/permissions. Smaller services can be given more narrowly scoped permissions.

5. Locality. A monolith can share memory and therefore has better data locality. Small services communicate over the network and have higher overhead.

6. Ownership. Smaller services can have more granular ownership, and it's easier to transfer ownership.

7. Iteration. Smaller services can move independently of one another and can release at separate cadences.
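
To make the capacity management point (2) concrete, here is a minimal sketch using the Kubernetes Python client's model classes: each service declares only the resources it needs, so only the prediction service pays for a GPU. The image names and resource numbers are made up for illustration, and it assumes the `kubernetes` package is installed.

    # Minimal sketch: per-service resource requirements (hypothetical images/numbers).
    from kubernetes import client

    # CPU-only web frontend: small, predictable footprint, no GPU cost.
    web = client.V1Container(
        name="web",
        image="example.com/web:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "256Mi"},
            limits={"cpu": "1", "memory": "512Mi"},
        ),
    )

    # Prediction service: the only component that requests a GPU.
    predictor = client.V1Container(
        name="predictor",
        image="example.com/predictor:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "2", "memory": "4Gi"},
            limits={"nvidia.com/gpu": "1"},
        ),
    )

In a monolith, both workloads would have to run on nodes that provide the union of these requirements.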




1. Isolation

With a well-built monolith, a failure in one service won't fail the whole system.

For poorly built microservices, a failure in one service absolutely does bring down the whole system.

I'm not sure I'm convinced that by adopting microservices your code automatically gets better isolation.


I work on low-code cloud ETL tools. We provide the flexibility for the customer to do stupid things. This means we have extremely high variance in resource utilization.

An on-demand button press can start a process that runs for multiple days, and this is expected. A job can make 100k API requests or read/transform/write millions of records from a database; this is also expected. Out-of-memory errors happen often and are expected. It's not our bad code, it's the customer's bad code.

Since jobs are run as microservices on isolated machines, this is all fine. A customer (or multiple at once) can set something up badly, run out of resources, and fail or go really slowly, and nobody is affected but them.
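
A rough Python sketch of that isolation idea on a single box: run each customer job in its own process with a hard memory cap, so a runaway job can only kill itself. This uses Linux rlimits for illustration; real deployments would use separate machines or containers as described above, and the job function here is made up.

    import resource
    from multiprocessing import Process

    def run_with_cap(job_fn, mem_limit_bytes):
        # Cap this child's address space; allocations past the limit raise
        # MemoryError (or kill the child), never the host or other jobs.
        resource.setrlimit(resource.RLIMIT_AS, (mem_limit_bytes, mem_limit_bytes))
        job_fn()

    def badly_written_customer_job():
        data = []
        while True:  # customer "bad code" that never stops allocating
            data.append(bytearray(10_000_000))

    if __name__ == "__main__":
        p = Process(target=run_with_cap,
                    args=(badly_written_customer_job, 512 * 1024 ** 2))
        p.start()
        p.join()
        print("job exit code:", p.exitcode)  # nonzero, but the host is unaffected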


It's not automatic, but it has the potential for more isolation by definition.

If your service has a memory leak or crashes, it only takes down that service. It is still up to your system to handle such a failure gracefully: if that service is a critical dependency, then your system fails, but if it is not, your system can still partially function.

If your monolith has a memory leak or crashes, it takes down the whole monolith.
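
A toy Python sketch of what "handle such a failure gracefully" can look like when the dependency is non-critical; the recommendations URL and endpoint are hypothetical.

    import requests

    def get_recommendations(user_id):
        try:
            resp = requests.get(
                f"http://recommendations.internal/users/{user_id}",  # hypothetical
                timeout=0.2,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            return []  # degrade: render the page without recommendations

    def render_product_page(user_id):
        # The page never fails just because a non-critical dependency did.
        return {"product": "...", "recommendations": get_recommendations(user_id)}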


1. Except that a single process usually involves multiple services, and the failure of one service often makes entire sequences impossible.


But not all sequences. It depends on your dependencies: some services are critical only for some processes. In the monolith design, it's a critical dependency for all processes.



