
Certainly you can build a server-side rendered web application without a strict separation between frontend and backend, and you absolutely should do so if you can. The common separation into frontend and backend services exists mostly because JavaScript is so terrible that it's worth the effort to use a different language for the backend, yet you can't go all the way, because frontend tooling for backend languages (e.g. Java) is even worse. Introducing this technical separation generally just adds complexity, inefficient network communication, and a bad developer experience. It is a historic wart that will hopefully go away over time.

As for "a DB where query schema is separated from storage via views": the usual pattern nowadays is to avoid sharing data wherever possible (by building self-contained microservices aligned with business capabilities rather than layers), to give each microservice a private database (in which case the view indirection is pointless), and to provide a stream of business events that other microservices consume to build their own replicated data model, thereby decoupling their data model from external influences. I haven't seen any modern company use views to decouple schemas, but I suppose it is the obvious solution in a 1998 world where everyone shares the same database. Add asynchronous replication to that and it is basically identical to modern event-based replication.
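A minimal sketch of that consumer side, under assumptions of my own (the event shape, topic name, and SQLite as the service's private store are all hypothetical; the only real requirement is an at-least-once event stream):

    import sqlite3

    db = sqlite3.connect("replica.db")  # this service's private database
    db.execute("""CREATE TABLE IF NOT EXISTS products_replica
                  (id TEXT PRIMARY KEY, name TEXT, price REAL,
                   version INTEGER)""")

    def consume(topic):
        # Stand-in for a real stream client; note the redelivery.
        yield {"id": "b1", "name": "Dune", "price": 9.99, "version": 1}
        yield {"id": "b1", "name": "Dune", "price": 9.99, "version": 1}
        yield {"id": "b1", "name": "Dune", "price": 8.49, "version": 2}

    for event in consume("product-events"):
        # Upsert guarded by a version number, so redelivered or
        # out-of-order events are idempotent no-ops.
        db.execute(
            """INSERT INTO products_replica (id, name, price, version)
               VALUES (:id, :name, :price, :version)
               ON CONFLICT(id) DO UPDATE SET
                 name = excluded.name,
                 price = excluded.price,
                 version = excluded.version
               WHERE excluded.version > products_replica.version""",
            event,
        )
        db.commit()

The point is that the replica's schema is owned by the consuming service; the producer only promises the event contract.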


It sounds like you're talking mostly about CRUD OLTP systems. Amazon in 1998 didn't actually have very many of those!

Consider instead what 1998 systems engineering looks like in the context of a Big Data OLAP data warehouse (one where keeping a denormalized replica of it per service would cost multiples of your company's entire infra budget), where different services are built to either:

1. consume various reporting facilities of the same shared data warehouse, adding layers of authentication, caching, API shaping, etc., and then expose different APIs for other services to call. Think: BI; usage-based-billing reporting for invoice generation; etc.

2. abstract away Change Data Capture (CDC) ETL of property tables from partners' smaller, more CRUD-y databases (think: product catalogues from book publishers) into your big shared data warehouse, where the service owns an internal queue for robust at-least-once, idempotent upserts into append-only DW tables; a sketch of that ingest loop follows below.
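A minimal sketch of the ingest loop from point 2, with hypothetical names throughout (the queue contents, change ids, and SQLite standing in for the warehouse). Each change carries a unique id from the source, so at-least-once redelivery is safe: duplicates collide with the primary key and are ignored, and the table stays append-only.

    import sqlite3

    dw = sqlite3.connect("warehouse.db")
    dw.execute("""CREATE TABLE IF NOT EXISTS catalog_changes
                  (change_id TEXT PRIMARY KEY,  -- dedup key from the source
                   source TEXT, isbn TEXT, payload TEXT)""")

    def drain(queue):
        # Stand-in for the service's internal CDC queue.
        for change in queue:
            dw.execute(
                """INSERT OR IGNORE INTO catalog_changes
                   VALUES (:change_id, :source, :isbn, :payload)""",
                change,
            )
            dw.commit()  # in a real queue: commit, then ack

    drain([
        {"change_id": "pub1-42", "source": "pub1",
         "isbn": "9780441013593", "payload": '{"title": "Dune"}'},
        # Redelivered duplicate: silently ignored via the primary key.
        {"change_id": "pub1-42", "source": "pub1",
         "isbn": "9780441013593", "payload": '{"title": "Dune"}'},
    ])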

At scale, an e-commerce storefront is more like banking (everything is CQRS; all data needs to be available in the same place so that realtime(!) use-cases can be built on top of joining gobs of different tables together) than it is like a forum or an issue-tracker.

There's a reason Amazon was the company to define the Dynamo architecture: their DW got so big it couldn't live on any one vertically-scaled cluster, so they had to transpose it all into a denormalized serverless key-value store (and do all the joins at query time) to keep those Big Data use-cases going!
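To make the "joins at query time" part concrete, here's a toy sketch (plain dicts standing in for the key-value store; the key scheme is made up, not Dynamo's actual API): each entity is denormalized under its own key, and the application stitches records together per request with point lookups.

    orders = {"order#1001": {"customer": "cust#7", "items": ["sku#3"]}}
    customers = {"cust#7": {"name": "Ada"}}
    products = {"sku#3": {"title": "Dune", "price": 8.49}}

    def order_report(order_id):
        # Application-side join: three point lookups instead of one
        # relational join, trading query flexibility for horizontal scale.
        order = orders[order_id]
        return {
            "customer": customers[order["customer"]]["name"],
            "items": [products[sku] for sku in order["items"]],
        }

    print(order_report("order#1001"))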



