The other big advantage is you can monitor and scale your services independently and decouple outages.
If one endpoint needs to scale to handle 10x more traffic, it's woefully inefficient to 10x your whole cluster.
Ideally you write the code as services/modules within a monolith, imo. Then you can easily run those services as separate deployments later down the line if need be.
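A minimal sketch of what I mean, with made-up service names (BillingService, OrdersService are hypothetical): each service is a plain module/class that depends on the other's interface, not on where it runs, so the same code can later be wired up in separate processes.

```python
# Hypothetical sketch: "services" as modules inside one process.
# The service names here are invented for illustration.

class BillingService:
    def charge(self, user_id: str, cents: int) -> str:
        return f"charged {user_id} {cents}"

class OrdersService:
    # Depends on the billing interface, not on where billing runs.
    # Swapping this for an HTTP client later doesn't change this class's logic.
    def __init__(self, billing: BillingService):
        self.billing = billing

    def place_order(self, user_id: str, cents: int) -> str:
        receipt = self.billing.charge(user_id, cents)
        return f"order ok ({receipt})"

# Monolith wiring: both services composed in the same process.
orders = OrdersService(BillingService())
print(orders.place_order("u1", 500))
```

The point is that the seams are already there; splitting into deployments is then a wiring change, not a rewrite.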
You also have to determine how much of your traffic and monitoring overhead comes from the microservices themselves - messaging and logging that used to happen in memory now has to be written out, sent over the wire, and read back, in much more expensive executions.
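To illustrate that overhead (a toy sketch, not a benchmark): in a monolith an "event" is just an object handed to a function; across services the same event pays for serialization, a network hop, and deserialization on every pass.

```python
import json

# In a monolith, "messaging" is a function call passing a dict in memory.
def handle(event: dict) -> str:
    return event["type"]

event = {"type": "user.created", "id": 42}

# Monolith: the object is handed over directly.
in_process = handle(event)

# Microservices: the same event is serialized, shipped over the network
# (elided here), and parsed again on the other side - extra CPU and I/O
# on every single hop.
wire_bytes = json.dumps(event).encode()        # producer side
over_the_wire = handle(json.loads(wire_bytes))  # consumer side

print(in_process == over_the_wire)  # same result, very different cost
```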
There's no single silver bullet. It's not 100% monoliths or 100% microservices for everyone.
Learning from the things you haven't done yet, in ways you haven't yet thought of, also helps expand your skills.
This is because clever architecture will always beat clever coding.
I'm not sure what you mean. What's the difference between messaging and logging? What do you mean by messaging?
Like over the network versus code running on the same machine? Because that should already be distributed, unless you can really fit your whole workload on a single machine.
There isn't that much difference between an application and a library. You can always create multiple deployments of the same code configured differently.
We have an app with two different deployments. One is serving HTTP traffic, and the other is handling kafka messages. The code is exactly the same, but they scale based on different metrics. It works fine.
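A sketch of that pattern (the `ROLE` env var name and the stubbed HTTP/Kafka startup are assumptions, not your actual setup): one entrypoint, and the deployment config decides which role the process plays.

```python
import os

# Hypothetical entrypoint: the same artifact runs as either deployment.
# "ROLE" is an assumed env var name; real HTTP/Kafka wiring is stubbed out.

def pick_role(env: dict) -> str:
    role = env.get("ROLE", "http")
    if role not in ("http", "kafka"):
        raise ValueError(f"unknown ROLE: {role}")
    return role

def main(env=os.environ) -> str:
    role = pick_role(env)
    if role == "http":
        # e.g. start the web server here
        return "starting HTTP server"
    # e.g. start the Kafka consumer loop here
    return "starting Kafka consumer"

print(main({"ROLE": "kafka"}))
```

Each deployment then just sets a different `ROLE` and scales on its own metric (request latency vs. consumer lag), while the business logic stays one codebase.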