> We propose moving towards a three-tier architecture where presentation (client), business logic and data are separated. This has also been called a service-based architecture. The applications (clients) would no longer be able to access the database directly, but only through a well-defined interface that encapsulates the business logic required to perform the function.
It is really interesting to see a recent(ish) trend away from this three-tier design and back towards tighter coupling between application layers, usually in the name of convenience and developer ergonomics.
We've got tools that 'generate' business layers from/for the data layer (Prisma, etc.).
We've got tools that push business logic to the client (Meteor, Firebase, etc.).
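For the flavour of that collapse, here's a minimal sketch assuming a Prisma schema with a `User` model and an Express route; the model and field names are illustrative, not from any particular codebase:

```typescript
// Minimal sketch: the route handler talks to the generated data layer
// directly, so presentation, business logic and data access all land in
// one place. Assumes a Prisma schema with a `User` model (illustrative).
import express from "express";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const app = express();

app.get("/users/:id", async (req, res) => {
  const user = await prisma.user.findUnique({
    where: { id: Number(req.params.id) },
  });
  if (!user) {
    res.status(404).end();
    return;
  }
  res.json({ id: user.id, email: user.email });
});

app.listen(3000);
```

The convenience is obvious; the trade-off is that the storage schema leaks straight into the handler, which is exactly the coupling the 1998 memo argues against.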
For what it's worth, Amazon's architecture for the core retail business has, if anything, moved even further up in abstraction. Tighter coupling is something that simple use cases can afford: large-scale but low-complexity systems can be closely coupled; high-complexity ones can't be.
The thing about Amazon's systems is that they are horrendously complex. In ~2016 I was working on the warehousing software, which was a set of some hundreds of microservices in that space, which also communicated (via broad abstractions) with other spaces (orders, shipments, product, accounting, planning, ...) that were themselves abstractions over hundreds of other microservices.
So what I observed at the time was a broad increase in abstraction horizontally, rather than vertically. This manifesto describes splitting client-server into client-service-server; the trend two decades later was splitting <a few services, one for each domain> into <many services, one for each slice of each domain>, often with services that simply aggregated the results of their subdomains for general consumption in other domains.
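To make the shape of that concrete, here's a hedged TypeScript sketch of the "aggregate the subdomain slices" pattern; the service names and URLs are invented for illustration:

```typescript
// Hedged sketch: each subdomain slice exposes its own narrow service, and a
// domain-level aggregator composes them for consumption by other domains,
// hiding how the domain is sliced internally. Names/URLs are invented.
type Shipment = { id: string; status: string };
type Inventory = { sku: string; onHand: number };

async function fetchShipments(orderId: string): Promise<Shipment[]> {
  const res = await fetch(`https://shipments.internal/orders/${orderId}`);
  return res.json();
}

async function fetchInventory(sku: string): Promise<Inventory> {
  const res = await fetch(`https://inventory.internal/skus/${sku}`);
  return res.json();
}

// The aggregation endpoint other domains actually call.
export async function orderFulfillmentView(orderId: string, sku: string) {
  const [shipments, inventory] = await Promise.all([
    fetchShipments(orderId),
    fetchInventory(sku),
  ]);
  return { orderId, shipments, available: inventory.onHand > 0 };
}
```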
I'm sure things have only gotten more complicated since then. In particular, a large challenge at the time was the general difficulty of producing maintainable asynchronous workflows, so a lot of work that should have been done in long-running or even lazy workflows was instead being done synchronously, or only very slightly asynchronously.
A big part of the difference has to be that if you have a small number of developers (esp. n=1) and you can deploy everything at once, then those layers just get in the way of fast change. It seems Amazon was optimising for the ability to distribute data, because of their volume, and to hide its form so they could change it without having to change lots of applications.
Of course, there’s some cargo culting around services where people jump to that architecture before they need it, and for most apps YAGNI. It’s cool that their architecture was driven by clear needs “just in time”, allowing them to continue to scale.
Nowadays you separate services by business capability and not by "layer". Layers just lead to dependencies, and dependencies lead to bad reliability and terrible development speed.
What Amazon were describing here is simply the division between a frontend web gateway service (or, in modernity, client-delivered SPAs); an API backend service to serve the XHRs of the web-gateway / SPA; and some kind of DBMS where user-visible query schema is separable from storage architecture via e.g. views. I don't think there's any modern system that doesn't have those things, no?
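As a rough sketch of that last piece (a query schema separated from storage via views), assuming Postgres accessed through node-postgres (`pg`); the table and view names are illustrative:

```typescript
// Sketch: the storage schema can be reshaped (tables split, columns renamed)
// as long as the view keeps presenting the same shape to the API layer.
// Assumes Postgres via node-postgres; table/view names are illustrative.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

async function main() {
  await pool.query(`
    CREATE OR REPLACE VIEW customer_orders_v AS
    SELECT c.id AS customer_id, c.email, o.id AS order_id, o.total_cents
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
  `);

  // The API backend queries only the view, never the underlying tables.
  const { rows } = await pool.query(
    "SELECT * FROM customer_orders_v WHERE customer_id = $1",
    [42],
  );
  console.log(rows);
  await pool.end();
}

main();
```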
Certainly you can build a server-side rendered web application without a strict separation between frontend and backend, and you absolutely should do so if you can. The common separation into frontend and backend microservices exists mainly because JavaScript is so terrible that it's worth the effort to use a different language for the backend, but at the same time you can't go all the way, because frontend tooling for backend languages (e.g. Java) is even worse. Introducing this technical separation generally only causes more complexity, inefficient network communication and a bad developer experience. It is a historic wart that will hopefully go away over time.

As for "a DB where query schema is separated from storage via views": the usual pattern nowadays is to not share data wherever possible (by building self-contained microservices that are aligned with business capabilities instead of layers), to have a private database per microservice (in which case the view indirection is pointless), and then to provide a stream of business events to other microservices, which build their own replicated data model from it, thus decoupling their own data model from external influences. I haven't seen any modern company use views to decouple the schemas, but I suppose it is the obvious solution in a 1998 world where everyone shares the same database. If you add asynchronous replication to that, it is basically identical to the modern event-based replication.
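As a rough illustration of that event-based replication, here is a hedged sketch of a consumer that maintains its own private read model from another service's business events; the event shapes and the in-memory store are illustrative stand-ins:

```typescript
// Sketch: a microservice with its own private store consumes another
// service's business events and maintains a locally-owned read model,
// shaped for *this* service's needs. Event shapes are illustrative.
type ProductEvent =
  | { type: "ProductListed"; sku: string; title: string; priceCents: number }
  | { type: "PriceChanged"; sku: string; priceCents: number }
  | { type: "ProductDelisted"; sku: string };

// Private read model (would be a real database in practice).
const catalog = new Map<string, { title: string; priceCents: number }>();

export function applyEvent(event: ProductEvent): void {
  switch (event.type) {
    case "ProductListed":
      catalog.set(event.sku, { title: event.title, priceCents: event.priceCents });
      break;
    case "PriceChanged": {
      const existing = catalog.get(event.sku);
      if (existing) existing.priceCents = event.priceCents;
      break;
    }
    case "ProductDelisted":
      catalog.delete(event.sku);
      break;
  }
}
```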
It sounds like you're talking mostly about CRUD OLTP systems. Amazon in 1998 didn't actually have very many of those!
Consider instead what 1998 systems engineering looks like in the context of a Big Data OLAP data-warehouse (one where having denormalized replicas of it per service would cost multiples of your company's entire infra budget), where different services are built to either:
1. consume various reporting facilities of the same shared data-warehouse, adding layers of authentication, caching, API shaping, etc., and then expose different APIs for other services to call. Think: BI; usage-based-billing reporting for invoice generation; etc.
2. abstract away Change Data Capture ETL of property tables from partners' smaller, more CRUD-y databases into your big shared data warehouse (think: product catalogues from book publishers), where the service owns an internal queue for robust, at-least-once, idempotent upserts into append-only DW tables (a sketch of that idempotent upsert loop follows the list).
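Here's that sketch, a hedged illustration of at-least-once, idempotent upserts into an append-only warehouse table; the message shape and the in-memory dedupe store are stand-ins, not any particular product's SDK:

```typescript
// Sketch: at-least-once delivery means the same message can arrive twice;
// a dedupe check on a stable message ID makes the append idempotent, so
// replays are harmless. In practice both stores would be durable.
type CdcMessage = {
  messageId: string; // stable ID from the partner feed / CDC source
  table: string;
  row: Record<string, unknown>;
};

const seenMessageIds = new Set<string>();
const appendOnlyLog: Array<{ id: string; table: string; row: unknown }> = [];

export function handleMessage(msg: CdcMessage): void {
  if (seenMessageIds.has(msg.messageId)) return; // duplicate delivery: drop it
  appendOnlyLog.push({ id: msg.messageId, table: msg.table, row: msg.row });
  seenMessageIds.add(msg.messageId);
}
```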
At scale, an e-commerce storefront is more like banking (everything is CQRS; all data needs to be available in the same place so that realtime(!) use-cases can be built on top of joining gobs of different tables together) than it is like a forum or an issue-tracker.
There's a reason Amazon was the company to define the Dynamo architecture: their DW got so big it couldn't live on any one vertically-scaled cluster, so they had to transpose it all into a denormalized serverless key-value store (and do all the joins at query time) to keep those Big Data use-cases going!
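As a hedged illustration of what "joins at query time" means once the data lives in a denormalized key-value store: the application fetches from two key spaces and stitches the result together itself. The `kv` interface below is hypothetical, not a specific SDK:

```typescript
// Sketch: the database no longer joins orders to customers; the service does.
// The key-value client is a hypothetical partition-key lookup interface.
interface KeyValueStore {
  get(pk: string): Promise<Record<string, any> | undefined>;
}

export async function orderWithCustomer(kv: KeyValueStore, orderId: string) {
  const order = await kv.get(`ORDER#${orderId}`);
  if (!order) return undefined;
  const customer = await kv.get(`CUSTOMER#${order.customerId}`);
  return { ...order, customer }; // the "join", performed in application code
}
```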