This perspective isn't really making an apples-to-apples comparison. The author is comparing modern framework bloat to the simplicity of a standalone PHP script, but disregarding the underlying stack that it takes to serve those scripts (i.e., the Linux, Apache/Nginx, MySQL/Postgres in LAMP).
Back in those days, it was never really as simple as "sftp my .php file into a folder and call it a day". If you were on a shared host, you may or may not have had access to the PHP config you needed for things such as adjusting memory limits (or your page might not render), and you had little control over which PHP version was available (limiting your available std lib functions) or which modules were installed (and which versions of them, and whether they were built for FastCGI or not). Scaling was in its infancy in those days, and shared hosts were extremely slow, especially those without caching; they would frequently crash whenever one tenant on the machine got significant traffic. If you were hosting your own on a VM or bare metal, things were even worse, since then you had to manage the database on your own, plus the firewall, the SSH daemon, Apache config files in every directory or Nginx rules and restarts, OS package updates, and of course hardware/VM resource constraints.
Yes, the resulting 100-line PHP script sitting on top of it all might be very simple, but maintaining that stack never was (and still isn't). Web work back then was like 25% coding the PHP and 75% sys-admining the stack beneath it. And it was really hard to do that in a way that didn't result in customer-facing downtime, with no easy way to containerize, scale, hot-standby, roll over, roll back, etc.
=====================
I'd probably break down this comparison (of LAMP vs modern JS frameworks) into questions like this, instead:
1) "What do I have to maintain? What do I WANT to maintain?"
IMHO this is the crux of it. Teams (and individual devs) are choosing JS frameworks + heavy frontends because even though there are still servers and configurations (of course), they're managed by someone else. That abstraction and separation of concerns is what makes it so much easier to work on a web app these days than in the PHP days, IMO.
Any modern framework is now a one-command `create whatever app` in the terminal, and there you have it: a functioning app waiting for your content and business logic. That's even easier than spinning up a local PHP stack with MAMP or XAMPP, especially when you have more than one app on the same disk/computer. And when it comes time to deploy, a single `git push` will get you a highly-available website automagically deployed in a couple minutes, with a preconfigured global CDN, HTTPS, asset caching, etc. If something goes wrong, it's a one-click rollback to the previous version. And it's probably going to be free, or under $20/mo, on Vercel, Cloudflare Pages, Netlify, etc. Maybe AWS Amplify Hosting too, but like Lambda, that's a lot more setup (AWS tends to be lower-level and offers nitty-gritty enterprise-y configs that simpler sites don't need or want).
By contrast, to actually set up something like that in the PHP world (where most of the stack is managed by someone else), you'd either have to find a similar PHP-script-hosting-as-a-service like Google App Engine (there aren't many similar services that I know of; it's different from a regular shared host because it's a higher level of abstraction) or else use something like Docker or Lando or Forge or GridPane to manage your own VM fleet. In the latter cases you'd often still have to manage much of the underlying stack and deal with various configs and updates all the time. It's very different from the hosted JS world.
The benefit of going with a managed approach is that you really only need to touch your own application code. The framework code is updated by someone else (not that different from using Laravel or Symfony or Wordpress or Drupal). The rest of the stack is entirely out of your sphere of responsibility. For "jamming" as an individual or producing small sites as a team, this is a good thing. It frees up your devs to focus on business needs rather than infrastructure management.
Of course, some teams want entirely in-house control of everything. In that case they can still run their own low-level VMs (an EC2 instance or similar) and maintain the whole LEMP or Node stack. That's a lot more work, but also more power and control.
A serverless func, whether in JS (anywhere) or PHP (like via Google Cloud Run), is just a continuation of this same abstraction. It's not necessarily just about high availability, but low maintenance. You and your team (and the one after them, and the one after that) only ever have to touch the function code itself, freeing you from the rest of the stack. It's useful the same way that being able to upload a video to YouTube is: You can focus on the content instead of the delivery mechanism.
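To make that concrete, here's a minimal sketch of what such a function can look like, in the Cloudflare Workers style (the `fetch` export shape is Workers' documented convention; the payload fields are made up for illustration). The point is that this one file is the entire deployable unit:

```ts
// Minimal serverless function sketch (Cloudflare Workers style).
// The host owns TLS, routing, scaling, and OS patching; you own this file.
export default {
  async fetch(request: Request): Promise<Response> {
    // The classic serverless chore: transform one object shape into another.
    const body = (await request.json()) as { first: string; last: string };
    const reshaped = { fullName: `${body.first} ${body.last}` };
    return new Response(JSON.stringify(reshaped), {
      headers: { "content-type": "application/json" },
    });
  },
};
```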
2) Serverside resource consumption
It's not really true that "PHP scripts don't consume any resources (persistent processes, etc.) when they're not being used" in any way that a JS site or serverless func doesn't match: neither consumes meaningful resources while idle, and both still require an active server on the backend (or some server-like technology, like a Varnish or Redis cache or similar).
Neither is really an app author's concern, since they are both hosting concerns. But the advantage of the JS stuff is that it's easier and cheaper for hosts to containerize and run independently, like in a V8 isolate (for Cloudflare Workers). It's harder to do that with a PHP script and still ensure safety across shared tenants. Most shared PHP environments I know of end up virtualizing/dockerizing much of the LAMP stack.
3) Serverside rendering vs static builds vs clientside rendering
As for serverside rendering vs static builds, the article doesn't really make a fair comparison there either. This is a tradeoff between delivery speed and dynamism, not between PHP and JS.
Even in the PHP world, the PHP runtime itself offered opcode caching (APC, later OPcache), then frameworks like Wordpress would layer their own caching on top of that, and then you would cache even the result of that in Varnish or similar. That essentially turns a serverside-rendered page into a static build that can then be served over a CDN. This is how big PHP hosts like Pantheon or Acquia work. No medium or big site would make every request hit the PHP process directly for write-rarely, read-often content.
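The JS hosting world does the same thing with plain HTTP caching instead of a Varnish tier. Here's a hedged sketch of the mechanism, assuming a generic fetch-style handler; the header values are illustrative, and `renderPage` is a hypothetical stand-in for whatever your SSR step is:

```ts
// Sketch: making a serverside-rendered response CDN-cacheable.
// `s-maxage` lets the CDN keep the rendered page for an hour;
// `stale-while-revalidate` lets it serve the old copy while re-rendering.
async function renderPage(): Promise<string> {
  return "<html><body>rendered output goes here</body></html>"; // hypothetical SSR step
}

export async function handler(): Promise<Response> {
  return new Response(await renderPage(), {
    headers: {
      "content-type": "text/html; charset=utf-8",
      "cache-control": "public, s-maxage=3600, stale-while-revalidate=60",
    },
  });
}
```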
In the JS world, you can also do serverside rendering, static builds, clientside renders, and (realistically) some combination of all of those. The difference is that it's a lot more deliberate and explicit (but also confusing at first). But this is by design. It makes use of the strength of each part of that stack, as intended. If you're writing a blog post, chances are you're not going to edit that more than once every few weeks/months (if ever again). That part of it can be statically built and served as flat HTML and easily cached on the CDN. But the comments might trickle in every few minutes. That part can be serverside rendered in real time and then cached, either at the HTTP level with invalidations, or incrementally regenerated at will. And some things need to be even faster than that, like maybe being able to preview the image upload in your WYSIWYG editor, in which case you'd optimistically update the clientside editor with a skeleton and then verify upload/insertion success via AJAX. The server can do what it does best (query/collate data from multiple sources and render a single page out of it for all users to see), the cache can do what it does best (quickly copy and serve static content across the world), and the client can do what it does best (ensure freshness for an individual user, where needed).
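In Next.js terms, that blog-plus-comments split looks roughly like the sketch below. `export const revalidate` (incremental static regeneration) and "use client" components are real Next App Router conventions; the file path, CMS URL, and `Comments` component are hypothetical stand-ins:

```tsx
// app/posts/[slug]/page.tsx -- hedged sketch, Next.js App Router style.
import Comments from "./Comments"; // hypothetical "use client" component that fetches fresh comments

export const revalidate = 3600; // re-render the static page at most hourly

async function getPost(slug: string): Promise<{ title: string; html: string }> {
  const res = await fetch(`https://cms.example.com/posts/${slug}`); // hypothetical CMS
  return res.json();
}

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  return (
    <article>
      <h1>{post.title}</h1>
      {/* The post body: statically built, flat HTML, CDN-cached. */}
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
      {/* The comments: rendered fresh on the client, where freshness matters. */}
      <Comments slug={params.slug} />
    </article>
  );
}
```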
It is of course possible (and often too easy) to misuse the different parts of that stack, but you can say the same thing about the PHP world, where misconfigured caches and invalidations cause staleness issues, or security lapses like accidentally sharing secrets between users' cached versions.
4) Serverless as "CGI but it's trendy, [with vendor lock-in and a more complex deployment process]"
What vendor lock-in? Most of the code is just vanilla JS. There might be a different deployment procedure if you're using Cloudflare vs Lambda vs Vercel vs the Serverless Framework, but those are typically still simpler than having to set up an SFTP connection or a git repo on a remote server. With a framework like Next, a serverless function is just another file in the API folder, managed in the same repo as the rest of your app. Even without a framework, you can edit and deploy a serverless function in Cloudflare's online sandbox with a few clicks and no special tooling. If you later want to move it to another serverless host (what an ironic term), you can copy and paste the code and modify maybe 10-15% of it to get it running again. And the industry is trying to standardize that part of it too.
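That 10-15% is mostly the vendor wrapper around otherwise portable logic. A hedged sketch of what moving hosts actually touches (both export shapes below follow the vendors' documented conventions; in practice each wrapper would live in its own file):

```ts
// The portable 85-90%: plain logic with no vendor APIs in it.
async function greet(name: string): Promise<string> {
  return `Hello, ${name}!`;
}

// Cloudflare Workers wrapper (their documented `fetch` export shape):
export default {
  async fetch(request: Request): Promise<Response> {
    const name = new URL(request.url).searchParams.get("name") ?? "world";
    return new Response(await greet(name));
  },
};

// AWS Lambda wrapper (API Gateway proxy event shape) -- the part you'd
// actually rewrite when switching hosts:
export const handler = async (event: {
  queryStringParameters?: { name?: string };
}) => ({
  statusCode: 200,
  body: await greet(event.queryStringParameters?.name ?? "world"),
});
```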
And I think this directly relates to #1: It's not so much that serverless is high availability (which is nice), but more that it is, well... server-less. Meaning maintenance-less for the end user. You don't have to manage a whole LAMP stack just to transform one object shape into another. If you already have a working app setup, yes, you can just add another script into your cgi-bin folder. But you can do the same in any JS framework's API folder.
5) Framework bloat
I feel like what this author really doesn't like is heavy frameworks. That's fine, they're not for everyone. But in either the PHP or JS world, frameworks are optional.
I guarantee you Drupal is heavier and more bloated than any popular JS framework (it's also a lot more powerful). Just like the PHP world has everything from Drupal to Wordpress to Symfony to Laravel, JS has Next, Remix, Astro, Svelte, Vue, etc. The HTML-first crowd has HTMX and Alpine. Ruby has Rails. Etc.
And contrary to the article's framing, you can certainly write a few paragraphs of HTML as a string and render it in any of those frameworks, either as a template literal (the JS cousin of a PHP heredoc) or using JSX-like syntax.
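For example, a trivial sketch of both styles (names made up):

```tsx
// 1) Plain template literal -- the JS cousin of a PHP heredoc:
const aboutPage = (name: string) => `
  <h1>About</h1>
  <p>Hi, I'm ${name}. This site is mostly just text.</p>
`;

// 2) The same thing as JSX, if you're inside React/Next/etc. anyway:
const AboutPage = ({ name }: { name: string }) => (
  <>
    <h1>About</h1>
    <p>Hi, I'm {name}. This site is mostly just text.</p>
  </>
);
```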
That's not really what the frameworks try to solve, though. They exist to address certain business needs. In the case of the heaviest framework of them all, Next, it goes back to #3 and #4: optimally dividing work between the server, cache, and client. If your app is simple enough that you don't need that complexity, then don't use that framework, or use its older "pages" mode, or use another framework or none at all. If you don't need deterministic component rendering based on state, don't use React. If you don't need clientside state, don't use Javascript at all.
Similarly, you can write a dead-simple PHP page with a few server-side includes and heredocs, or maintain a labyrinthine enterprise Drupal installation for a few blog posts and marketing pages (not recommended... no, really, don't do that to yourself... ask me how I know).
In either case, it's again a question of "what do I want or need to maintain". Choosing the right level of power vs simplicity, or abstraction vs transparency perhaps, is an architectural question about your app and business needs, not about the language or ecosystem underneath it.
6) Vendor lock-in
You can host PHP anywhere. You can also host JS anywhere these days. In fact I'd argue there are more high-quality, low-cost JS hosts now than there ever were comparable PHP hosts. Shared PHP hosts were a nightmare, because PHP was not easy to containerize for shared tenancy. JS hosting is cheap in comparison.
Most of the frameworks in either world are open-source. Not many are not-for-profit (Drupal is; Laravel/Forge and Next/Vercel have similar business models of an open-source framework coupled with for-profit hosting).
In either case, though, it's really your application logic that's valuable (and even then, questionably so, since it'll likely end up completely rewritten in a few years anyway).
Ultimately we're all at the mercy of the browser developers. Google singlehandedly made shared hosting much more difficult for everyone by forcing the move to HTTPS a few years back. It singlehandedly made the heavy JS frontend possible with its performant Javascript engine. WASM is still recent. WebGPU is on the horizon. New technologies will give rise to new practices, and today's new frameworks will soon feel old.
But JS is here to stay, because it's the only language that can natively interact with the DOM clientside. If your core business logic is written in almost-vanilla JS (or even JSX, by now), portability between JS frameworks isn't as hard as porting between languages (like PHP to JS, or PHP to Ruby). Using it for both the client and the server and everything in between just means fewer languages to keep track of, a shared type system, etc. In that sense there's probably less vendor lock-in with JS than there is with PHP, which fewer and fewer companies and hosts support over time. PHP is overwhelmingly just Wordpress these days, and even Wordpress has moved to more dynamic React-based elements (like the Gutenberg editor).
I think the problem with the JS ecosystem is actually the opposite: not lock-in, but too many choices. Between the start and end of this post, probably five new frameworks were released =/ It's keeping up that's hard, not portability. You can copy and paste most of the same code and modify it slightly to make it work in another framework, but there is rarely any obvious gain from doing so. For a while there Next seemed like it was on track to becoming the standard JS framework, but then the app router confused a lot of people and now simpler alternatives are popping up again. For that much, at least, I can agree with the article: everything old is new again.
I couldn't agree with the article on almost any point.
One additional point: you can get "mildly dynamic" websites by using services. I have a completely static web site that's 100% on a CDN and that I've written zero lines of code for... but it has a full dynamic comment section thanks to Disqus integration. My "how many people have visited my page" counter is handled by Google Analytics. Other embedded services can provide many of the most common "mildly dynamic" features.
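For flavor, here's the general shape of such an embed: a placeholder element plus one vendor script that fills it in. This is a hedged sketch, not the vendor's official snippet; the shortname in the URL is a placeholder, so grab the real embed code from the vendor rather than copying this:

```ts
// Sketch of a third-party comments embed on an otherwise static page.
// Disqus's embed script looks for an element with this id:
const mount = document.createElement("div");
mount.id = "disqus_thread";
document.querySelector("article")?.appendChild(mount);

// One async script tag; the vendor's servers do all the dynamic work.
const s = document.createElement("script");
s.src = "https://YOUR-SHORTNAME.disqus.com/embed.js"; // placeholder shortname
s.async = true;
document.body.appendChild(s);
```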
I'm using Astro on a newer project. It lets you statically generate pages however you like, but if you want to run just one component as clientside JavaScript, you can, without the inherent danger of running code on a server every time someone hits your web site. For fully dynamic pages, you can render on the server as well. It's a nice compromise IMO.
That and I never want to use PHP again. Especially Drupal. I liked Drupal at first, but I never want to see it again.
> What vendor lock-in? Most of the code is just vanilla JS.
That runs in a specific environment with vendor-specific IAM configurations, vendor-specific DNS configurations, vendor-specific network configurations, vendor-specific service integrations, vendor-specific runtimes and restrictions, vendor-specific...
That sounds like an AWS thing? There's a lot of frameworks that can deploy straight to Vercel, Cloudflare Pages, Netlify, etc. without all that.
And if you really do want to manage all that, the same applies to PHP sites, JS sites, and anything else. That's really a discussion of fully vs partially managed cloud solutions, not of PHP or JS or any framework in particular.
> There's a lot of frameworks that can deploy straight to Vercel, Cloudflare Pages, Netlify, etc. without all that.
All of them need all that. And those frameworks exist for a reason: they sweep a lot of these things under the rug, and past a certain complexity you will have vendor lock-in, simply because you will end up depending on certain policies that other vendors don't provide. Or on certain services that other vendors don't provide. Or guarantees that other vendors don't provide. Or pricing that... Or...