
This is how client-server applications have been done for decades, it's basically only the browser that does the whole "big ole requests" thing.

The problem with API + frontend is:

1. You have two applications you have to ensure are always in sync and consistent.

2. Code is duplicated.

3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.
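To make that concrete, here's a minimal sketch of the server-driven model in TypeScript, in the spirit of LiveView/Blazor Server but not their actual APIs: state and rendering live on the server, and the client only forwards events and swaps in the markup it gets back.

    // Minimal sketch of "the server runs the show" (not LiveView's or
    // Blazor's real API). State lives server-side; the client only sends
    // events and patches in the HTML it receives.
    import { WebSocketServer } from "ws"; // npm install ws

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket) => {
      let count = 0; // the single source of truth for this session

      const render = () =>
        `<button data-event="increment">Clicked ${count} times</button>`;

      socket.send(render());

      socket.on("message", (raw) => {
        if (raw.toString() === "increment") count++; // all logic stays here
        socket.send(render()); // client just swaps the markup into the DOM
      });
    });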

I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even vulnerabilities like unauthorized access usually come down to this. If you can eliminate or mitigate that 80%, that's huge.

Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.
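To illustrate (endpoint names invented), the contortion usually looks something like this:

    // Hypothetical illustration: stitching generic endpoints together and
    // discarding almost all of the payload, because adding a purpose-built
    // endpoint is treated as too risky.
    async function getOrderSummaries(userId: string) {
      const orders = await fetch(`/api/v1/users/${userId}/orders`)
        .then((r) => r.json());
      const summaries = [];
      for (const order of orders) {
        // One extra round trip per order, just to read two fields of it.
        const detail = await fetch(`/api/v1/orders/${order.id}`)
          .then((r) => r.json());
        summaries.push({ id: order.id, total: detail.total, status: detail.status });
      }
      return summaries; // nearly everything fetched was thrown away
    }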



No, it's not. I've built native Windows client-server applications, and many old-school web applications. I never once sent data to the server on every click, keydown, keyup, etc. That's the sort of thing that happens with a naive "livewire-like" approach. Most of the new tools do ship a little JavaScript, and make it slightly less chatty, but it's still not a great way to do it.

A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.


Yes, as I've stated, the "big ole requests" thing is the new Web stuff.

When I say traditional client-server applications, I mean the type of stuff like X or IPC - the stuff before the Web.

> A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

There's really no reason it "should" be either one or the other because BOTH have huge drawbacks.

The problem with the first approach (SSR with JS sprinkled in) is that certain interactions become very, very hard. Think, for example, of a node editor. Why would we have a node editor? We're actually doing this at work right now, building out a node editor for report writing. We're 95% SSR.

Turns out it's super duper hard to do with this approach, because it's so heavily client-side interactive that you need lots and lots of sync points, and ultimately the SERVER will be the one generating the report.

But actually, the client-side approach isn't very good either. Okay, maybe we just serialize the entire node graph, send it over the pipe once, and then save it now and again. But what if we want to preview what the output is going to look like in real time? Now this is really, really hard - because either we incrementally serialize the node graph, send it to the server, generate a bit of report, and get it back, OR we redo the report generation on the front-end with some front-loaded data - in which case our "preview" isn't a preview at all, it's a recreation.

The solution here is, actually, a chatty protocol. This is the type of thing that's super common and trivial in desktop applications - it's what gives them superpowers. But it's so rare to see on the Web.
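For what it's worth, the shape of that chatty protocol looks roughly like this (all names invented, purely a sketch): the client streams small graph edits, and the server keeps the authoritative graph and streams back preview fragments.

    // Sketch of a chatty node-editor protocol. The client sends incremental
    // graph ops; the server applies them to the authoritative graph and
    // streams back re-rendered preview fragments.
    type GraphOp =
      | { kind: "addNode"; id: string; nodeType: string }
      | { kind: "connect"; from: string; to: string }
      | { kind: "setParam"; id: string; key: string; value: unknown };

    declare function updateLocalGraph(op: GraphOp): void; // assumed elsewhere

    const socket = new WebSocket("wss://example.test/editor"); // hypothetical

    function applyEdit(op: GraphOp) {
      updateLocalGraph(op);            // optimistic local update
      socket.send(JSON.stringify(op)); // sync point: server applies same op
    }

    socket.onmessage = (msg) => {
      // The server re-renders only the affected slice of the report, so the
      // preview is the real output, not a front-end recreation of it.
      const { fragmentId, html } = JSON.parse(msg.data);
      document.getElementById(fragmentId)?.replaceChildren(
        document.createRange().createContextualFragment(html)
      );
    };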


> You have two applications you have to ensure are always in sync and consistent.

No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.
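Concretely, the contract can be as small as a shared type plus a versioned route (field names here are illustrative):

    // The frontend codes against this contract, not the backend's schema.
    interface OrderSummaryV1 {
      id: string;
      total: string; // pre-formatted; the backend's decimal type stays hidden
      status: "open" | "shipped" | "cancelled";
    }

    interface OrderSummaryV2 extends OrderSummaryV1 {
      trackingUrl?: string; // additive change: v1 clients keep working
    }

    // Versioned routes let old and new clients overlap during a migration.
    async function fetchOrders(version: "v1" | "v2"): Promise<OrderSummaryV1[]> {
      const res = await fetch(`/api/${version}/orders`);
      return res.json();
    }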

> Code is duplicated.

Not if the frontend isn't trying to model the internals of the backend.

> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.


> No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.

This is the idea, an idea which can never be fully realized.

The backend MUST understand what the frontend sees to some degree, for the sake of efficiency, performance, and user experience.

If we build the perfect RESTful API, where each object is an endpoint and their relationships are modeled by URLs, we have almost realized this vision. But the cost was our server catching on fire. It trashed our user experience. Our application sucks, it's almost unusable. Things show up on the front-end but they're ghosts, everything takes forever to load, every button is a liar, and the quality of our application has reached new depths of hell.
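From the client side, that "perfect" API looks like this (URLs invented): every screen becomes a request waterfall.

    // Hypermedia-style client: every object is a URL, every relationship is
    // more URLs, and one screen turns into N+1 round trips.
    async function loadInvoiceScreen(invoiceUrl: string) {
      const invoice = await fetch(invoiceUrl).then((r) => r.json());
      const customer = await fetch(invoice.customerUrl).then((r) => r.json());
      // One request per line item; the UI shows placeholder "ghosts" until
      // the whole waterfall drains.
      const items = await Promise.all(
        invoice.lineItemUrls.map((u: string) => fetch(u).then((r) => r.json()))
      );
      return { invoice, customer, items };
    }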

And even then, we haven't fully realized the vision. What about authentication? User access? Routing?

> Not if the frontend isn't trying to model the internals of the backend.

The frontend does not get a choice, because the model is the model. When you go against the grain of the model and you say "everything is abstract", then you open yourself up to the worst bugs imaginable.

No - things are linked, things are coupled. When we just pretend they are not, we haven't done anything but obscure the points where failure can happen.

> Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.

No, this is a stark decrease in velocity.

When I need to display a new form that, say, coordinates 10 database tables in a complex way, I can just do that if the application is SSR or Livewire-type. I can just do that. I don't need the backend team to implement it in 3 months and then I make the form. I also don't need to wrangle together 15+ APIs and then recreate a database engine in JS to do it.
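Here's the "just do that" version, sketched with Express and a generic db client (both assumed; two tables standing in for ten):

    import express from "express";

    declare const db: { query(sql: string): Promise<any[]> }; // assumed client

    const app = express();

    app.get("/report-form", async (_req, res) => {
      // One query coordinates however many tables the form needs;
      // there is no API layer to negotiate with.
      const rows = await db.query(`
        SELECT r.id, r.title, u.name AS owner
        FROM reports r
        JOIN users u ON u.id = r.owner_id
        ORDER BY r.updated_at DESC
      `);
      const options = rows
        .map((r) => `<option value="${r.id}">${r.title} (${r.owner})</option>`)
        .join("");
      res.send(`<form method="post"><select name="report">${options}</select></form>`);
    });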

Realistically, those are your two options. Either you have a performant backend API interface full of one-off implementations, what we might consider spaghetti, or you have a "clean" RESTful API that falls apart as soon as you even try to go against the grain of the data model.

There are, of course, in-betweens. RPC is a great example. We don't model data, we model operations. Maybe we have a "generateForm" method on the backend and the frontend just uses this. You might notice this looks a lot like SSR with extra steps...
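A sketch of that RPC shape (only "generateForm" comes from the paragraph above; the wire format is invented):

    // Model the operation, not the data.
    async function rpc<T>(method: string, params: unknown): Promise<T> {
      const res = await fetch("/rpc", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ method, params }),
      });
      return res.json();
    }

    // The frontend asks for the finished form and injects it, which is why
    // this looks a lot like SSR with extra steps.
    const { html } = await rpc<{ html: string }>("generateForm", { formId: "report" });
    document.getElementById("form-slot")!.innerHTML = html;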

But this all assumes the form is generated and then done. What if the data is changing? Maybe it's not a form, maybe it's a node editor? SSR will fall apart here, and so will the clean-code frontend-backend. It will be so hellish, so evil, so convoluted.

Bearing in mind, this is something truly trivial for desktop applications to do. The models of modern web apps just cannot do this in a scalable, or reliable, way. But decades old technology like COM, dbus, and X can. We need to look at what the difference is and decide how we can utilize that.


The problem with all-backend is that to change the order of a couple buttons, you now need buy-in from the backend team. There's definitely a happy medium or several between these extremes: one of them is that you have full-stack devs and don't rigidly separate teams by the implementation technology. Some devs will of course specialize in one area more than others, but that's the point of having a diverse team. There's no good reason that communicating over http has to come with an automatic political boundary.


Communicating over HTTP comes with pretty much as many physical boundaries as possible. The main problem, and power, of APIs is their inflexibility. By their design, and even the design of HTTP itself, they are difficult to change over time. They're interfaces, with defined inputs and outputs.

Say I want to draw a box which has many checkboxes - like a multi-select. A very, very simple, but powerful, widget. In most Web applications, this widget is incredibly hard to develop.

Why is that? Well, first we need to get the data for the box, and ideally just this particular page of the box, if it's paginated. So we have to use an API. But the API is going to come with so much baggage - we really only need identifiers, since we're just checking a checkbox. But what API endpoint is going to return a list of just identifiers? Maybe some RESTful APIs, but not most.

Okay okay, so we get a bunch of data and then throw away most of it. Whatever. But oh no - we don't want this multi-select to be split by logical objects, no, we have a different categorization criterion. So then we rope in another API, or maybe a few more, and we group all the stuff together and try to split it up ourselves. This is a lot of code, yes, and horribly frail. The realization strikes that we're essentially doing SQL JOIN and GROUP BY in JS.
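Spelled out (endpoints and fields hypothetical), the hand-rolled JOIN and GROUP BY look like this:

    // Two generic endpoints, joined and regrouped on the client.
    const items = await fetch("/api/v1/items").then((r) => r.json());
    const tags = await fetch("/api/v1/item-tags").then((r) => r.json());

    // JOIN in JS: index tags by item id.
    const tagsByItem = new Map<string, string[]>();
    for (const t of tags) {
      const list = tagsByItem.get(t.itemId) ?? [];
      list.push(t.label);
      tagsByItem.set(t.itemId, list);
    }

    // GROUP BY in JS: bucket by a criterion the API never modeled, keeping
    // only the id and label, which is all the multi-select ever needed.
    const groups = new Map<string, { id: string; label: string }[]>();
    for (const item of items) {
      const key = tagsByItem.get(item.id)?.[0] ?? "untagged";
      const bucket = groups.get(key) ?? [];
      bucket.push({ id: item.id, label: item.name });
      groups.set(key, bucket);
    }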

Okay, so we'll build an API. Oh no you won't. You can't just build an API, it's an interface. What, you're going to write an API for your one-off multi-select? But what if someone else needs it? What about documentation? Versioning? I mean, is this even RESTful? Sure doesn't look like it. This is spaghetti code.

Sigh. Okay, just use the 5 API endpoints and recreate a small database engine on the frontend, who cares.

Or, alternatively: you just draw the multi-select. When you need to lazily update it, you just update it. Like you were writing a Qt application and not a web application. Layers and layers of complexity and friction just disappear.


There are a lot of different decisions to make with every individual widget, sure, but I was talking about political boundaries, not physical ones. My point is that it's possible for a single team to make decisions across the stack, like whether it's primarily server-side, client-side, or some mashup, and that stuff like l10n and a11y should be the things that get coordinated and worked out across teams. A lot of that starts with keeping hardcore True Believers off the team.


Stop having backend and frontend teams. Start having crossfunctional teams. Problem solved.



