> Developing a monolith for years, but now you have written a 15 line Golang HTTP API that converts PDFs to stardust and put it on a dedicated server in your office? Welp, that's a microservice.
But the 15 lines of Golang are not just 15 lines of Golang in production. You need:
- auth? Who can talk to your service? Perhaps ip whitelisting?
- monitoring? How do you know if your service is up and running? If it's down, you need alerts as well. What if there is a memory problem (because the code is not optimal)?
- how do you deploy the service? Plain ansible or perhaps k8s? Just scp? Depending on your solution, how do you implement rollbacks?
- what about security regarding outdated packages the Go app is using? You need to monitor it as well.
And so on. The moment you need to store data that somehow needs to be in sync with the monolith's data, everything gets more complicated.
Many of the points you're mentioning are exactly why k8s was developed. Yes, it makes deploying simple applications unnecessarily hard, but it makes deploying more complicated applications WAY more manageable.
So in the k8s world:
- auth: service meshes, network policies, ...
- monitoring: tons of tooling there to streamline that
- deploy: this at scale is trickier than you'd think; many seem to assume k8s on its own is the magic dust they need. But GitOps with ArgoCD + helm has worked pretty well at scale in my experience.
- Security is a CI problem, and you have that with every single language, not just Go. See Log4j.
Kubernetes is my bread & butter, but I do realise this has way too much overhead for small applications. However, once you reach a certain scale, it solves many of the really, really hard problems by streamlining how you look at applications from an infrastructure and deployment side of things. But yes - you need dedicated people who understand k8s and know what the hell they're doing - and that's in my experience a challenge on its own.
Let's also dispel a myth that k8s is only suitable for microservices. I have clients that are running completely separate monolith applications on k8s, but enough of those that managing them 'the old way' became very challenging, and moving these to k8s in the end simplified things. But getting there was a very painful process.
1. auth? probably an internal service, so don't expose it to the outside network.
2. monitoring? if the service is being used anywhere at all, the client will throw some sort of exception if it's unreachable.
memory problem? it should take <1 day to ensure the code for such a small service does not leak memory. if it does have memory leaks anyway, just basic cpu/mem usage monitoring on your hosts will expose it. then ssh in, run `top`, and voila: now you know which service is responsible.
3. deployment? if it's a Go service, literally a bash script to scp over the binary and an upstart daemon to monitor/restart the binary.
rollback? ok, check out the previous version in git, recompile, redeploy. maybe the whole process is wrapped in a bash script or assisted by a CI/CD build job.
4. security? well ok, PDFs can be vulnerable to parser attacks. so lock down the permissions and network rules on the service.
Overall this setup would work perfectly fine in a small/medium company and take 5-10x less time than doing everything the FAANG way. i don't think we should jump to calling these best practices without understanding the context in which the service lives.
I mostly agree with 1 and 4. But for monitoring, either you would have to monitor the service calling this microservice or need a way to detect errors.
> if it does have memory leaks anyways, just basic cpu/mem usage monitoring on your hosts
Who keeps on monitoring like this? How frequently would you do it? In a startup there are somewhere in the range of 5 microservices of that scale per programmer, and daily monitoring of each service by running `top` is not feasible.
> 3. deployment? if its a go service, literally a bash script to scp over the binary and an upstart daemon to monitor/restart the binary.
Your solution is literally more complex than a simple Jenkins or Ansible script for the build plus a `kubectl rollout restart`, yet it is a lot more fragile. Anyway, the point stands that you need to have a deployment story.
My larger point is basically just against dogma and “best practices”. Every decision has tradeoffs and is highly dependent on the larger organizational context.
For example, kubectl rollout assumes that your service is already packaged as a container, you are already running a k8s cluster and the team knows how to use it. In that context, maybe your method is a lot better. But in another context where k8s is not adopted and the ops team is skilled at linux admin but not at k8s, my way might be better. There’s no one true way and there never will be. Technical decisions cannot be made in a vacuum.
> Overall this setup would work perfectly fine in a small/medium company and take 5-10x less time than doing everything the FAANG way.
The point was never comparing it to the FAANG way. The point is: it's easier (at the beginning) to maintain ONE monolith (and all the production stuff related to it) than N microservices.
It's easy to be snarky and dismissive about these things, but i've stood in the offices of a governmental org and have looked at people who are unable to receive their healthcare services queueing up because some similarly neglected system refused to work. One that also had a lot of the basics left out.
Figuring out what was wrong with it was hard, because the logging was inconsistent, scattered across different places, and insufficient.
The deployments and environments were inconsistent; the sysadmins on the client's side manually changed stuff in the .war archives, all the way up to library versions, which was horrendous from a reproducibility perspective.
The project was also severely out of date, not only package wise, but also because the new versions that had been developed actually weren't in prod.
After ripping out the system's guts and replacing it with something that worked, i became way more careful when running into amateur hour projects like that and about managing the risks and eventual breakdown of them. I suggest that you do the same.
Don't put yourself at risk, especially if some sort of a liability about the state of the system could land on you. What are you going to do when the system gets breached and a whole bunch of personal data gets leaked?
> ...then don't spend hundreds of thousands on infrastructure.
I find it curious how we went from doing the basics of software development that would minimize risks and be helpful to almost any project out there to this.
To clarify, i agree with the point that you'll need to prioritize different components based on what matters the most, but i don't think that means you can't have a common standard set of tools and practices for all of them. Let me address all of the points with examples.
> Auth: nobody knows IP address of our server anyway, don't bother with that. And for extra security we have secret port number.
Port scanning means that none of your ports are secret.
JWT is trivial to implement in most languages. Even basic auth is better than nothing with HTTPS, you don't always need mTLS or the more complicated solutions, but you need something.
This should take a few days to a week to implement. Edit: probably a day or less if you have an easily reusable library for this.
> Monitoring? Well, we have our clients for that. They'll call us if something happens.
This is not viable if you have SLAs or just enjoy sleeping and not getting paged.
There are free monitoring solutions out there, such as Zabbix, Nagios, Prometheus & Grafana and others. Preconfigured OS templates also mean that you just need the monitoring appliance and an agent on the node you want to monitor in most cases.
This should take close to a week to implement. Edit: probably an hour or less if you already have a server up and just need to add a node.
This is an error-prone way to do things, as the experience of Knight Capital showed: https://dougseven.com/2014/04/17/knightmare-a-devops-caution.... In addition, manual configuration changes lead to configuration drift, and after a few years you'll have little idea about who changed what and when.
In contrast, setting up Ansible and versioning your config, as well as using containers for the actual software releases alongside fully automated CI cycles addresses all of those problems. In regards to rollbacks, if you have automated DB migrations, you might have to spend some time writing reverse migrations for all of the DDL changes.
This should take between one to two weeks to implement. Edit: probably a day or less once per project with small fixes here and there.
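A minimal sketch of the "Ansible + versioned config" idea for a single-binary service (the inventory group, version variable, and unit name are all invented):

```yaml
- hosts: pdf_service
  vars:
    app_version: "1.4.2"   # bump per release; the playbook itself lives in git
  tasks:
    - name: Copy versioned binary
      copy:
        src: "dist/pdfservice-{{ app_version }}"
        dest: /usr/local/bin/pdfservice
        mode: "0755"
      notify: restart pdfservice
  handlers:
    - name: restart pdfservice
      systemd:
        name: pdfservice
        state: restarted
```

A rollback is then just rerunning the playbook with the previous `app_version`, assuming any DB migrations have matching reverse migrations.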
> Outdated packages? We call those stable packages.
Log4j might be stable, but it also leads to RCEs. This is not a good argument, at least for as long as the software packages that we use are beyond our ability to control or comprehend.
This should be a regular automated process that alerts you about outdated and/or insecure packages: at least use npm audit or something, or proactive scanning like OpenVAS. This should take close to a week to implement.
All of the mentioned software can easily run on a single node with 2 CPU cores and about 8 GB of RAM. I know this, because i did all of the above in a project of mine (mentioned technologies might have been changed, though). Of course, doing that over N projects will probably increase the total time, especially if they have been badly written.
In my eyes that's a worthwhile investment, since when you finally have that across all of your apps, you can develop with better certainty that you won't have to stay late and deliver 2 or 3 hotfixes after your release, which goes hand in hand with test coverage and automated testing.
That is hardly hundreds of thousands of dollars/euros/whatever on infrastructure. And if the personnel costs are like that, i'd like to work over there, then.
There are plenty of solutions to authentication. But really, don't implement a user system if it is not needed. There are plenty of other ways to secure applications, which are way out of scope for this discussion.
The main point is that one should never spend "a few days to a week" to implement a feature that at best is useless and at worst is detrimental to the service stood up.
Implement auth, if it is needed, implement monitoring, CI, CD, dependency monitoring, testing, everything, if it is needed.
But don't implement it as dogmatic consequences of doing software development.
And regarding the spend: one week worth of work could be USD 8k. So just the initial implementation of your JWT based authentication system is 4% into the "hundreds of thousands of dollars". Then you need to factor in the extra complexity on maintenance and before you know it we do not talk about hundreds of thousands of dollars but millions...
I feel like "spaghetti garbage now, we'll fix it later" is a big part of why startups fail to execute well. Yeah you saved $8000 by launching your thing in a completely unmaintainable way, but now it's both harder to maintain and more likely to need it. Literally the first time it breaks you will probably lose that cost advantage just because it takes so long to debug.
The point you should have made is that dogmatic approaches usually produce a lot of waste, but the example you gave is exactly why teams end up that way. Otherwise people come up with bullshit hacks like you describe and the entire team pays for it.
> But really, don't implement a user system if it is not needed.
Sure, i'm not necessarily advocating for a full blown RBAC implementation or something like that, merely something so that when your API is accidentally exposed to the rest of the world, it's not used for no good (at least immediately).
> Implement auth, if it is needed, implement monitoring, CI, CD, dependency monitoring, testing, everything, if it is needed.
> But don't implement it as dogmatic consequences of doing software development.
Now this is a bit harder to talk about, since the views here will probably be polarized. I'd argue that if you're developing software for someone else, software that will be paid for (and in many cases even in pro-bono development), most of that is needed, unless you just don't care about the risks that you (or someone else) might have to deal with otherwise.
If there's an API, i want it to at the very least have basicauth in front of it, because of the reasons mentioned above.
If there is a server running somewhere, i want to be alerted when something goes wrong with it, see its current and historic resource usage and get all of the other benefits running a few apt/yum commands and editing a config file would get me, as opposed to discovering that some bottleneck in the system is slowing down everything else because the memory utilization is routinely hitting the limits because someone left a bad JVM GC config in there somewhere.
If something's being built, i want it to be done by a server in a reasonably reproducible and automated manner, with the tasks described in code that's versioned, so i'm not stuck in some hellscape where i'm told: "Okay, person X built this app around 2017 on their laptop, so you should be able to do that too. What do you mean, some-random-lib.jar is not in the classpath? I don't know, just get this working, okay? Instructions? Why would you need those?"
Furthermore, if there is code, i want to be sure that it will work after i change it, rather than introducing a new feature and seeing years of legacy cruft crumble before my eyes, and to take blame for all of it. Manual testing will never be sufficient and integration tests aren't exactly easy in many circumstances, such as when the app doesn't even have an API but just a server side rendered web interface, which would mean that you need something like Selenium for the tests, the technical complexity of which would just make them even more half baked than unit tests would be.
Plus, if i ever stumble upon a codebase that lacks decent comments or even design docs/issue management, i will want to know what i'm looking at and just reading the code will never be enough to understand the context behind everything but if there are at least tests in place, then things will be slightly less miserable.
I'm tired of hating the work that i have to do because of the neglect of others, so i want to do better. If not for those who will come after me, then at least for myself in a year or so.
Do i do all of that for every single personal project of mine? Not necessarily, i cherrypick whatever i feel is appropriate (e.g. server monitoring and web monitoring for everything, tests for things with "business logic", CI/CD for everything that runs on a server not a local script etc.), but the beauty is that once you have at least the basics going in one of your projects, it's pretty easy to carry them over to others, oftentimes even to different stacks.
Of course, one can also talk about enterprise projects vs startups, not just personal projects, but a lot of it all depends on the environment you're in.
As for the money, i think that's pretty nice that you're paid decently over there! Here in Latvia i got about 1700 euros last month (net). So that's about 425 euros a week, or more like 850 if you take taxes and other expenses for the employer into account. That's a far cry from 100k. So that is also situational.
Offtopic, but if you really are paid 425 euros per week then you are seriously underpaid even for Eastern European standards. There are (relatively rare, but still) jobs that pay this much per day.
Yep, i've heard that before. Currently thinking of finishing the modernization of the projects that i'm doing at my current place of work and then maybe looking at other opportunities.
However, i just had a look at https://www.algas.lv/en, one site that aggregates the local salary information, based on a variety of stats. As someone with a Master's degree and ~5 years of experience and whose job description states that i'm supposed to primarily work in Java (even though i do full stack and DevOps now), i input those stats and looked at what information they have so far.
The average net salary monthly figures, at least according to the site, are:
So there's definitely a bit of variance, but it's still in the same ballpark. Of course, there are better companies out there, but the reality for many is that they're not rewarded for their work with all that much money.
USD 8k/w is not the pay an employee will receive, but the cost of the operation of that one employee, and those are ballpark numbers -- I work in Europe also, I am a Danish citizen.
Again, I advocate developing in a timely manner, and not over-engineering (nor under-engineering).
> USD 8k/w is not the pay a employee will receive, but the cost of the operation of that one employee
I made the above response with that in mind.
> And regarding the spend: one week worth of work could be USD 8k.
The original claim was that one week's worth of work could be 8000 USD, or let's say roughly 7094 EUR. That comes out to 28376 EUR per month.
Last month i made around 1700 EUR, so it's possible to calculate approximately how much my work cost to my employer. Let's do that with a calculator here: https://kalkulatori.lv/en/algas-kalkulators
After inputting the data that's relevant to me, i got the following:
- Gross salary: 2654.51 EUR
- Social tax: 278.72 EUR
- Personal income tax from income till 1667 EUR: 277.66 EUR
- Personal income tax (987.51 EUR), from part ...: 227.13 EUR
- Social tax, employer's part: 626.20 EUR
- Business risk fee: 0.36 EUR
- Total employer's expenses: 3281.07 EUR
It should be apparent that 28376 EUR is a far cry from 3281 EUR, which is how much my work cost to my employer.
Thus, per week, 7094 EUR is also a far cry from 820 EUR, which is how much my work cost to my employer.
Also, 820 is actually pretty close to my initial guess of 850 EUR.
Of course, it's possible to argue that either i'm underpaid individually, or that many of my countrymen in Latvia are underpaid in general (on which i elaborated in an adjacent comment https://news.ycombinator.com/item?id=29595158), but then the question becomes... so what?
Does that mean that if you're in a well paid country like the US, then you cannot afford proper development practices due to all of the payroll expenses that would cause? While that may well be, to me that sounds weird and plain backwards - if that were really true, then the US would outsource even more to countries like mine and these outsourced systems would work amazingly well, since you can supposedly afford a team of developers here for what would buy you a single developer over there. And yet, most systems are still insecure, buggy and slow.
Maybe someone else is pocketing a lot of the money they receive in these countries, and is simply charging high consulting rates? The prevalence of WITCH companies here is telling, but that's a discussion for another time.
I really can’t tell how serious you are. I’m too aware that what you describe often is exactly how it works in practice. It’s just that very few admit it in public. :)
Yup. This is why I think that microservices require a stronger operational platform, but then it enables new and more effective ways of developing new functionality.
Our internal software platform is getting to a point where it can answer most of these things - auth via the central OIDC providers, basic monitoring via annotations of the job's services, deployments via the orchestration and some infrastructure around it, including optional checks and automated rollbacks, and automated vulnerability scanning on build servers and for the running systems. It wouldn't be 15 lines of Go; more like 15 lines, plus about 100-200 lines of terraform and/or yaml to get everything configured, and a ticket to register the service in the platform. It's pretty nice and our solution consultants like it very much.
The thing is - this took a team about a year to build and it'll take another half a year to get everything we currently want to do right. And it takes a non-trivial time to maintain and support all of this. This kind of infrastructure only makes business sense, because we have enough developers and consultants moving a lot faster with this.
Back when we were a lot smaller, it made a lot more sense to just push a single java monolith on VMs with chef or ansible, because that was a lot easier and quicker to get working correctly (for one thing).
Well, it can quickly become a mess because of people having different ideas about what a microservice is, and also decrying things as "for microservices only" when, for example, I just want to offload auth and monitoring to a specialized service.
It's also a common trope when I'm dealing with k8s decriers - yes, you might have one application that you can easily deploy, but suddenly there are 15 other medium-weight applications that solve different important problems and you want them all ;)
P.S. Recently a common thing in my own architectures is separate keycloak deployment that all services either know how to use, or have it handled at separate request router (service mesh or ingress or loadbalancer)
The task of the microservice is to convert the PDF to stardust and return it to its sender, so no auth.
Furthermore, it's most likely only reachable through the local network, or at least it should be if you don't want some stranger to also be able to make stardust from PDFs.
Monitoring: are you trying to say that it's a lot easier to pick up one logfile than, let's say, 15? Because they should be aggregated somewhere anyway, no?
Deployment: depending on anything you listed, how do I do anything? Of course I have to define it, but if you want a fancy example: k8s + ArgoCD canary deployments, done. I literally set it up once.
security? Really?
Please don't get this wrong, but this feels to me like whataboutism. But well, here I go:
I implement security just the same way as I would in the monorepo. The thing/person/entity just has to look into more repositories ;)
It comes down to one sentence, i think:
State is not shared, state is communicated.
A microservice runs as some (somewhat) privileged user, so you may want some auth. Can everyone internally create sales tickets? Or can everyone just query them? If a team provides a library to run, and you run it, you still only run as whatever user you have access to.
Monitoring: it's easier to look at a stack trace, including some other team's external library, than a HTTP error code 500.
Deployment is certainly easier when you're just shipping code and a build. You don't have to faff around with the previous instance running, maybe having some active connections/transactions/whatever, needing to launch a new one. Maybe it's not hard overall, but less fun.
For monitoring I’d also say the alerting side of things can be done via the older monolith. That is, catch exceptions and log/re-raise them as “PdfServiceException”.
> But the 15 lines of Golang are not just 15 lines of Golang in production. You need:
> - auth? Who can talk to your service? Perhaps ip whitelisting?
> - monitoring? How do you know if your service is up and running? If it's down, you need alerts as well. What if there is a memory problem (because the code is not optimal)?
> - how do you deploy the service? Plain ansible or perhaps k8s? Just scp? Depending on your solution, how do you implement rollbacks?
> - what about security regarding outdated packages the Go app is using? You need to monitor it as well.
> And so on. The moment you need to store data that somehow needs to be in sync with the monolith's data, everything gets more complicated.
Production stuff is not just about lines of code.