> or they had other requirements that necessitated microservices
Scale
Both in people, and in "how do we make this service handle the load". A monolith is easy if you have few developers and not a lot of load.
With more developers it gets hard as they start affecting each other across this monolith.
With more load it gets difficult as the usage profile of a backend server becomes very varied and performance issues become hard to even find. What looks like a performance loss in one area might just be another unrelated part of the monolith eating your resources.
Exactly, performance can make it necessary to move away from a monolith.
But everyone should know that microservices are more complex systems and harder to deal with, and a bunch of safety and correctness issues come with them as well.
The problem here is that not many people know this. Some people think going to microservices makes your code better, when, as I'm clearly saying here, you give up safety and correctness as a result.
You usually can't simultaneously deploy two services. You can try, but in a non-trivial environment there are multiple machines and you'll want a rolling upgrade, which causes an old client to talk to a new service or vice versa. Putting the code into a monorepo does nothing to fix this.
This is much less of a problem than it seems.
You can use a serialisation format that allows easy backward compatible additions. The new service that has a new feature adds a field for it. The old client, responsibly coded, gracefully ignores the field it doesn't understand.
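A minimal sketch of that idea, assuming a JSON payload with hypothetical field names (not from the thread): the old client extracts only the fields it knows about, so a new field added by the service is harmlessly ignored.

```python
import json

# Hypothetical order payload: the service added a "priority" field in v2.
# A responsibly coded v1 client keeps only the fields it understands, so
# old clients keep working against the new service during a rolling upgrade.

KNOWN_FIELDS = {"id", "amount"}

def parse_order(raw: str) -> dict:
    payload = json.loads(raw)
    # Unknown additions (like "priority") are silently dropped
    # instead of causing a parse error.
    return {k: v for k, v in payload.items() if k in KNOWN_FIELDS}

old_response = '{"id": 1, "amount": 9.99}'
new_response = '{"id": 1, "amount": 9.99, "priority": "high"}'

# The old client sees the same thing either way.
assert parse_order(old_response) == parse_order(new_response)
```

Formats like Protocol Buffers bake this behavior in: unknown fields are skipped by default, which is what makes additive changes backward compatible.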
You can version the API to allow for breaking changes, and serve old clients old responses and new clients newer responses. This is a bit of work to start, and sometimes overkill given the first point.
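A toy sketch of version-dispatched responses, under the assumption of a hypothetical field rename ("total" becoming "amount_cents"); the names and the version mechanism are illustrative, not a specific framework's API:

```python
# Versioned responses: old clients request v1 and get the old shape,
# new clients request v2 and get the renamed field. In a real service the
# version usually comes from the URL path or a request header.

def order_response(version: int) -> dict:
    order = {"id": 42, "amount_cents": 999}  # current internal shape
    if version == 1:
        # Serve old clients the old field name so the rename isn't breaking.
        return {"id": order["id"], "total": order["amount_cents"] / 100}
    return order

assert order_response(1) == {"id": 42, "total": 9.99}
assert order_response(2) == {"id": 42, "amount_cents": 999}
```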
If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.
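The "deploy tolerant clients first" step can be sketched like this, with hypothetical field names: the client is shipped accepting both the old and new shape, and only once that rollout is complete does the service switch over.

```python
# Expand/contract sketch: step 1 ships a client that tolerates both the old
# field ("amount", a float) and the new one ("amount_cents", an int);
# step 2 flips the service once every client instance is upgraded.

def read_amount_cents(payload: dict) -> int:
    # Tolerant client: prefer the new field name, fall back to the old one.
    if "amount_cents" in payload:
        return payload["amount_cents"]
    return round(payload["amount"] * 100)

assert read_amount_cents({"amount": 9.99}) == 999        # old service response
assert read_amount_cents({"amount_cents": 999}) == 999   # new service response
```

Once the new service is fully rolled out, the fallback branch can be deleted, completing the contraction.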
> You usually can't simultaneously deploy two services
Yeah, it's a roundabout solution to create something that deploys two things simultaneously. Agreed.
> Putting the code into a monorepo does nothing to fix this.
It helps mitigate the issue somewhat. With a polyrepo you suffer from an identical problem in the type checker and the integration tests. The checkers basically need all services to be at the same version to do a full and valid check, so if you have different teams and different repos, the checkers will never know if team A made a breaking change that will affect team B, because the integration tests and type checker can't stretch to another repo. Even if they could stretch to another repo, you would need to do a "simultaneous" merge… in a sense, polyrepos suffer from the same issue as microservices at the CI verification layer.
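To illustrate the monorepo advantage being described, here is a toy layout (module names and teams are hypothetical): because team B's code imports team A's shared type directly, a breaking rename by team A fails team B's type check and tests in the same CI run, rather than going unnoticed across repos.

```python
from dataclasses import dataclass

# shared/types.py -- owned by "team A" in this sketch
@dataclass
class User:
    id: int
    email: str  # if team A renames this field, the consumer below breaks

# services/mailer.py -- owned by "team B"
def notify(user: User) -> str:
    # In a monorepo, a static checker (e.g. mypy) run over the whole tree
    # flags this attribute access the moment the shared type changes.
    return f"mail to {user.email}"

assert notify(User(id=1, email="a@example.com")) == "mail to a@example.com"
```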
So if you have microservices and polyrepos you are suffering from a twofold problem. Your static checks and integration tests are never correct: they are either always failing and preventing you from merging, or deliberately crippled so as not to validate things across repos. At the same time, your deploys are also guaranteed to break if a breaking API change is made. You literally give up safety in testing, safety in type checking, and working deploys by going microservices and polyrepos.
Like you said, it can be fixed with backward compatibility, but restricting your code that way is a bad thing.
> This is much less of a problem than it seems.
It is not "much less of a problem than it seems", because big companies have developed methods to do simultaneous deploys. See Netflix. If they took the time to develop a solution, it means it's not a trivial issue.
Additionally, are you aware of any API issues in the communication between parts of your local code in a single app? Do you have any problems with this, such that you are aware of them and come up with ways to deal with them? No. In a monolith the problem is nonexistent and doesn't even register. You are not aware this problem exists until you move to microservices. That's the difference here.
> You can use a serialisation format that allows easy backward compatible additions.
Mentioned a dozen times in this thread. Backwards compatibility is a bad thing. It’s a restriction that freezes all technical debt into your code. Imagine python 3 stayed backward compatible with 2 or the current version of macOS was still compatible with binaries from the first Mac.
> You can version the API to allow for breaking changes, and serve old clients old responses, and new clients newer responses. This is a bit of work to start and sometimes overkill, given the first point
Can you honestly tell me this is a good thing? The fact that you have to pay attention to this in microservices, while in a monolith you don't even need to be aware there's an issue, tells you all you need to know. You're just coming up with behavioral workarounds and coping mechanisms to make microservices work in this area. You're right, it does work. But it's a worse solution to this problem than a monolith, which doesn't need these workarounds because these problems don't exist in monoliths.
> If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.
It's only very rare in microservices because they are weaker here. You deliberately make breaking changes rare because of this problem. Is it rare to change a type in a monolith? No. It happens regularly. See the problem? You're not realizing it, but everything you're bringing up is a behavioral action to cope with an aspect that is fundamentally weaker in microservices.
Let me conclude by saying there are many reasons why microservices are picked over monoliths. But what we are talking about here is definitively worse. Once you go microservices you are giving up safety and correctness and replacing them with workarounds. There is no trade-off for this problem; it is a logical consequence of using microservices.
Of course things are easier if you can run all your code in one binary on one machine, without remote users or any need to scale.
As soon as you add users you need to start coping with backwards compatibility, though, even if your backend is still a monolith.
The backend being a monolith is probably easier for a while, yes, but I've also lost count of the number of companies I've been at that were in the painful process of breaking apart their monolith because it doesn't scale and is hard to work with.
Microservices or SOA aren't trivial, but the problems you bring up as extremely hard are pretty easy to deal with once you know how; and they buy you independent deploys and scale, which are pretty useful things at a bigger place.
Very few companies reach a scale that requires microservices. It's not even about raw scale, either, as monoliths can also scale. Microservices serve only a specific kind of scale. Think about it: load balancing across multiple servers? You can scale that monolith.
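The "you can scale that monolith" point in concrete terms: identical monolith instances behind a load balancer. A toy round-robin sketch (real setups use nginx, HAProxy, or a cloud load balancer, but the principle is the same; the addresses are made up):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hands out backend addresses in rotation; each backend runs
    the same monolith binary, so adding capacity means adding instances."""

    def __init__(self, backends: list[str]):
        self._ring = cycle(backends)

    def pick(self) -> str:
        return next(self._ring)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
assert [lb.pick() for _ in range(4)] == [
    "10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080", "10.0.0.1:8080",
]
```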
Most companies that switch to microservices do it because of hype. The most common excuse is that microservices prevent spaghetti code, as if carving it into different services is what creates modularity, forgetting that folders and functions do the exact same thing.
It's generally weak reasoning. Better to go with reasoning that is logically invariant, like what I brought up in this thread.
Monoliths can scale to handle tons of users. Microservices are only needed for a specific type of scaling. For example, at Netflix you need HTTP servers, but you also need servers to handle video streaming. Or at Google, the search engine must be different from Gmail. Most companies provide one or a few services that can be handled and scaled as a monolith to handle anything.
To add to this conversation from our other thread: you could solve a bunch of problems that are nearly just as bad by not using microservices, yet you still use them. And that is the same reason people use JavaScript despite the issues it introduces. It's not like you're the only person in the industry who hasn't used a technology that irrationally introduces horrible consequences.
Why do all services need to understand all these objects though? A service should as far as possible care about its own things and treat other services' objects as opaque.
... otherwise you'd have to do something silly like update every service every time that library changed.
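Treating another service's objects as opaque can look like this in code (service and field names are hypothetical): the service reads and writes only the fields it owns, and passes everything else through untouched, so upstream schema changes don't force a redeploy here.

```python
# This service owns only the "processed_by" field. It forwards the rest of
# the upstream event opaquely, without deserializing it into a shared type,
# so new upstream fields flow through without this service understanding them.

def enrich(event: dict) -> dict:
    # Touch only what this service owns; pass the rest through unchanged.
    return {**event, "processed_by": "billing"}

upstream = {"id": 7, "new_upstream_field": [1, 2, 3]}  # unknown to us
out = enrich(upstream)
assert out["new_upstream_field"] == [1, 2, 3]  # forwarded, not understood
assert out["processed_by"] == "billing"
```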
From a definition point of view that might be right, and it's no doubt a good step up compared to continuing with tainted data. In practice, though, that is still not enough; these days we should expect a higher degree of confidence from our code before it's run, especially with the mountains of code that LLMs will pour out over us.
I've been at an embarrassing number of places where turning off server-side rendering improved performance, because the number of browsers rendering content scales with the number of users, but the server-side rendering provisioning doesn't.
I don't need my boss to care about me. I need them to care about the team succeeding, and the mission I signed up to.
I need them to show some very baseline decency and honesty so that I can somewhat trust what they say. I need them to not drive their own career at the expense of the company or team.
If the company needs to do layoffs, I want them to pick the right people to stay, so we have a good shot at still doing a good job, not to pick emotionally.
You don't have to care about people to understand it's better not to burn them out. Staff turnover is expensive and bad for team performance. Quality and innovation start suffering long before people implode.
Caring and showing you care can be independent. Some people care and don't show it. Some people don't care but pretend to. If you don't care, showing you care is harder, and your acts might betray your true feelings.