What's it called when you describe an app with sufficient detail that a computer can carry out the processes you want? Where will the record of those clarifying questions and updates be kept? What if one developer asks the AI to surreptitiously round off pennies and deposit them into their own bank account? Where will that change be recorded, and will humans be able to recognize it? What if two developers give it conflicting instructions? Who's reviewing this stream of instructions to the LLM?
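The penny-rounding scheme above is the classic "salami slicing" attack, and part of what makes it scary is how small its footprint is in code. A minimal sketch, with an entirely hypothetical `apply_interest` function and a made-up siphon ledger, just to show how innocuous the change looks:

```python
from decimal import Decimal, ROUND_DOWN

def apply_interest(balance: Decimal, rate: Decimal) -> tuple[Decimal, Decimal]:
    """Credit interest, rounding down to whole cents.

    Returns (credited_amount, leftover_fraction). In an honest system the
    leftover fraction stays on the institution's own ledger; the attack is
    simply routing it somewhere else.
    """
    exact = balance * rate
    credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    leftover = exact - credited  # a fraction of a cent per transaction
    return credited, leftover

# The "tampered" version is a one-line change that looks like bookkeeping:
siphon_account = Decimal("0.00")  # hypothetical attacker-controlled ledger
credited, leftover = apply_interest(Decimal("1234.56"), Decimal("0.0375"))
siphon_account += leftover  # fractions of a cent, millions of times over
```

One extra `+=` in a plausible place is all it takes, which is exactly why the question of where such changes get recorded and reviewed matters.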
"AI" driven programming has a long way to go before it is just a better code completion.
Plus, coding (producing a working program that fits some requirement) is the least interesting part of software development: every line of it adds complexity, bugs, and maintenance burden.
> What's it called when you describe an app with sufficient detail that a computer can carry out the processes you want?
You're wrong here. The entire point is that these are not computers as we used to think of them. These things have common sense: they can analyse a problem, including all its implicit aspects, and suggest and evaluate different implementation methods, architectures, and interfaces.
So the right question is: "what's it called when you describe an app to a development team, and they ask questions back, come back with designs and discuss them with you, and finally present you with an MVP, and then you iterate on that?"
Bold of you to imply that GPT asks questions instead of making baseless assumptions every five words, even when you explicitly instruct it to ask when it doesn't know, and when it constantly hallucinates command-line arguments and library methods instead of reading the fucking manual.
It's like outsourcing your project to [country where programmers are cheap]. You can't expect quality. Deep down you're actually amazed that the project builds at all. But it doesn't take much to reveal that it's just a facade for a generous serving of spaghetti and bugs.
And refactoring the project into something that won't crumble in 6 months requires more time than just redoing the project from scratch, because the technical debt is obscenely high, because those programmers were awful, and because no one, not even them, understands the code or wants to be the one who has to reverse engineer it.
Of course, but who's talking about today's tools? They're definitely not able to act like an independent, competent development team. Yet. But if we limit ourselves to the here and now, we might be like people talking about GPT-3 five years ago: "yes, it does spit out a few lines of code, which sometimes even compile. When it doesn't forget halfway through and start talking about unicorns".
We're talking about the tools of tomorrow, which, judging by the extremely rapid progress, I think are only a few (3-5) years away.
Anyway, I've had great experiences with Claude and DeepSeek.
"AI" driven programming has a long way to go before it is just a better code completion.