So nobody will ever start another successful software project? People will, what, just stop creating software? I understand people's apprehension because of the pace of change, but this is just silly.
You're overstating the case, but I think there's a strong possibility people will prompt AIs to produce bespoke apps that solve their niche use-case rather than paying a developer to do it.
I pay for a SaaS app that tracks my finances, but it's not that great and missing some features I would like. Very soon I expect I'll be able to get a better, local-first replacement tailored to my needs by prompting Claude & Friends.
There are two big advantages to using a 3rd party system.
1) There are a lot of cases where aggregated user data, even if anonymized, allows for insights that you can't get using just your own data.
2) The software is really just a stand-in for a process: a way of doing something, like record keeping or tax filing. A lot of the time it makes sense to follow an already-established process rather than creating your own. You are less likely to encounter unexpected pitfalls that way.
I don't see how you can overcome those just by having an AI that can build simple CRUD apps at will.
I think developers overestimate the number of people who want to create apps. My friends are lawyers, doctors, musicians, PR and sales people, and they really don't care about creating their own apps or software. They use their iPhones for calls and Instagram.
I’m publishing a very simple app with very little human-written code, and so far 90% of the actual work has had nothing to do with development. Most of it has been the “business” stuff, especially since the app stores have a lot of compliance and setup requirements.
Your example of a financial app is perfect: maybe one day grandma will be able to vibecode a budget app but then how is she going to set up the integration with banks? Publish it to the App Store? Keep it updated with bug fixes and resolve security issues? Is the AI going to handle security and incident response too?
Maybe you’ll say that one day the AI will just handle all this automatically with zero input or setup, but I think we have to assume that we are still asking grandma to spend time writing down what she wants and interfacing with the AI a pretty substantial amount to get it finished.
The thing is, we are also talking about competing with a SaaS product that is already available for around $5/month, and the professional software developers working on that product also have access to AI (and a whole lot of other skills).
Even making grandma put in a few prompts here and there is going to result in enough wasted time to say “screw this, I’ll just pay $5 a month for Simplifi.”
I can't even think of what #2 is. If the technology gets better at writing code perhaps it can start to do other things by way of writing software to do it, but then you effectively have AGI, so...
I don't know if a bunch of sloppy jQuery modules were ever really a viable option for an SPA. People tried to do it, sure, but I'd say the SPA era really started with Backbone.js.
It's still the best RDB schema creation/migration tool I know of. It has a crazy number of plugins to handle all sorts of unusual field types and indexing. I usually add Django to any project I'm doing that involves an RDB, just to handle migrations. As long as you avoid any runtime use of the ORM, it's golden.
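For a sense of what that looks like, here's a minimal migrations-only sketch (the schema package, dbsettings module, and DB credentials are placeholders, not anything Django prescribes):

    # dbsettings.py -- just enough configuration for makemigrations/migrate
    SECRET_KEY = "only-needed-to-satisfy-startup-checks"
    INSTALLED_APPS = ["schema"]  # a plain Python package holding models.py
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "me",
            "PASSWORD": "secret",
            "HOST": "localhost",
        }
    }
    DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"  # silences the PK warning

    # schema/models.py -- declare tables here; Django diffs them against
    # earlier migrations to generate the next migration file
    from django.db import models

    class Account(models.Model):
        name = models.CharField(max_length=200)
        created = models.DateTimeField(auto_now_add=True)

    # Then, from the shell:
    #   DJANGO_SETTINGS_MODULE=dbsettings python -m django makemigrations schema
    #   DJANGO_SETTINGS_MODULE=dbsettings python -m django migrate

Nothing else in the project ever needs to import Django at runtime; it's only there to own the schema and its history.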
I kinda consider it a P != NP type thing. If I need to write a simple function, it will almost always take me more time to implement it than to verify whether an implementation of it suits my needs. There are exceptions, but overall when coding with LLMs this seems to hold true. Asking the LLM to write the function and then checking its work is a time saver.
I think this perspective is kinda key. Shifting attention towards more and better ways to verify code can probably lead to improved quality rather than degraded quality.
I see it as basically Cunningham's Law: it's easier to see the LLM's attempt at a solution and how it's wrong than to write a perfectly correct solution the first time.
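A made-up example of the "verify" half: if I ask the model for a slugify helper, the part I write myself is just a few checks on the behavior I actually want (slugify and the module it lives in are hypothetical):

    # slugify() is the hypothetical LLM-written function under review;
    # these assertions are the cheap-to-write verification side.
    from mytextutils import slugify  # hypothetical module the LLM filled in

    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  multiple   spaces  ") == "multiple-spaces"
        assert slugify("") == ""  # edge case: empty input stays empty

Writing those three lines takes seconds; writing a correct slugify from scratch takes longer, and that asymmetry is the whole point.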
Yeah, I was wondering about that too. Even small-cap PoW chains have dedicated mining hardware that is orders of magnitude faster than a GPU. I guess in theory it could work if you cobbled together enough hacked AWS accounts, but the scale required to make any sort of real profit would be gigantic. It just doesn't seem worthwhile.
Exactly. The problem is volume on the exchanges where you'd unload what you've mined. Some of these tokens only trade a few thousand a day, and any selling risks dumping the entire market. If you can steal the compute, sure, that is one thing, but it is very risky for not a huge payout.
Yeah, the brain is a sparse MoE. There is a lot of overlap in the hardware of the "language brain" and the "math brain". That being said, I can discuss software concepts in a foreign language, but struggle with basic arithmetic in anything but English. So while the hardware might be the same, the virtualization layer that sits on top might have some kind of compartmentalization.
I think the fundamental problem is that next.js is trying to do two things at once. It wants to (a) load fast for content that is sensitive to load speed (SEO content, landing pages, social-media-shareable content, etc.) and (b) support complex client-side logic (single-page apps, navigation, state stores, etc.). Doing those two things at the same time is really hard. It is also, in my experience, completely unnecessary.
Use minimal HTML/CSS with server-side rendering (and maybe a CDN/edge computing) for stuff that needs to load fast. Use React/Vue/whatever heavy framework for stuff that needs complex functionality. If you keep them separate, it's all very easy (rough sketch below). If you combine them, it becomes really difficult to reason about.
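Roughly the split I mean, sketched with a Python backend (Flask here; the routes, templates, and static/app.js bundle are made-up names):

    # Load-speed-sensitive pages get plain server-rendered HTML/CSS;
    # only the functionality-heavy page pulls in the SPA bundle.
    from flask import Flask, render_template

    app = Flask(__name__)

    @app.route("/")
    def landing():
        # No JS bundle at all: just a template of minimal HTML/CSS.
        return render_template("landing.html")

    @app.route("/dashboard")
    def dashboard():
        # An empty shell whose template includes
        # <script src="/static/app.js"></script> to boot the React/Vue app.
        return render_template("spa_shell.html")

    if __name__ == "__main__":
        app.run(debug=True)

Each side can then be optimized for what it actually is, instead of one framework trying to be both at once.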
This is my approach. My website tyleo.com is mostly classic HTML/CSS webpage stuff. If a page needs a small amount of JS, I just bundle it ad hoc. More complex pages get the full React/SPA treatment, but that doesn't mean the whole website needs to be that way.
As an aside, I reuse code by using React as the template engine for HTML. Each page essentially has a toggle for whether to ship it in dynamic mode (with the full JS bundle) or static mode (with no JS at all).
When I was first getting into Deep Learning, learning the proof of the universal approximation theorem helped a lot. Once you understand why neural networks are able to approximate functions, it makes everything built on top of them much easier to understand.
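For reference, the statement in its classic one-hidden-layer form: for any continuous function f on a compact set K in R^n, any non-polynomial activation sigma, and any epsilon > 0, there exist a width N, weights w_i in R^n, and scalars b_i, c_i such that

    \sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} c_i \, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon

In other words, a single hidden layer, made wide enough, can get uniformly within epsilon of any continuous target on a compact set.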
I've used NEAT for a few different things. The main upside is that it requires a lot less hyper-parameter tuning than modern reinforcement learning options, but that's really the only advantage. It really only works on a subset of reinforcement learning tasks (online, episodic ones). It is also a very inefficient search of the solution space compared to modern options like PPO, and it only works on problems with fairly low-dimensional inputs/outputs.
That being said, it's elegant and easy to reason about, and it's a nice intro to reinforcement learning. So definitely worth learning.
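If it helps, the whole loop with the third-party neat-python package is tiny (run_episode and the neat.cfg file are placeholders for whatever environment and NEAT settings you use):

    import neat  # third-party package: pip install neat-python

    def run_episode(net) -> float:
        # Placeholder: roll out one episode, calling net.activate(observation)
        # each step and returning the accumulated reward.
        raise NotImplementedError

    def eval_genomes(genomes, config):
        # The only training signal NEAT needs: one scalar fitness per genome.
        for _, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = run_episode(net)

    config = neat.Config(
        neat.DefaultGenome, neat.DefaultReproduction,
        neat.DefaultSpeciesSet, neat.DefaultStagnation,
        "neat.cfg",  # population size, mutation rates, etc. live here
    )
    population = neat.Population(config)
    winner = population.run(eval_genomes, 50)  # evolve for up to 50 generations

No gradients, no replay buffers, no learning-rate schedules: just a fitness function and a config file, which is why the tuning burden is so light.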