Exactly this - show me something and I can tell the AI what I don't like or what it is missing.
Equally, you can ask the GenAI to keep asking you questions to broaden its knowledge of the problem you are solving, and also ask it to research the issues customers are having with a current solution.
Some engineers seem to imagine that any non-coder using AI will only ever type something simplistic like 'make me a new search engine'. Lots of very clever people (who just don't know how to code, or don't want to learn) will be picking up the skills to use AI as it gets better and better.
I can see AI being used to write far better requirements and produce amazing prototypes - but if you work at a megacorp, chances are (for now) they will want that code rewritten by a 'human' developer.
True, I'm no doubt being too cautious. Where I work it is mainly used for unit testing and prototyping. As I understand it, developers use it, but always with a human review; we never have a product owner writing code with an AI tool and deploying it to live. Yet.
Oh dear God, if you think product owners are going to be committing code you must be missing something. The AI is there to make devs' work easier, not to replace them. We are nowhere near that. There's too much involved.
I have had it build out an entire API in Node/Express. Shit needed help for sure. The point of AI is to have it do the low-value work like scaffolding and boilerplate. The AI goes off the rails a lot; you have to have the skills to recognize when it is and change course.
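To be concrete, the sort of scaffolding I mean is the kind of Express skeleton below. It's a rough sketch; the routes and field names are invented for illustration, not from any real project. Tedious to type, easy to review, which is exactly where the AI earns its keep:

    // Minimal Express CRUD skeleton; hypothetical /items resource.
    const express = require('express');
    const app = express();
    app.use(express.json());

    const items = []; // in-memory stand-in for a real database

    // List all items
    app.get('/items', (req, res) => res.json(items));

    // Create a new item from the request body
    app.post('/items', (req, res) => {
      const item = { id: items.length + 1, ...req.body };
      items.push(item);
      res.status(201).json(item);
    });

    app.listen(3000, () => console.log('listening on 3000'));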
It doesn't matter; it's still likely true. Many programmers use it, and many of those generated lines will in fact become part of commits. (Maybe not millions, but ... it depends on the definition of large orgs.)
The issue with LLM-driven development is that it's often as hard to verify the model's output as it would have been to write the code myself. It's basically the programming equivalent of a Gish gallop.
But now you need to learn how the LLM interprets your natural-language phrasing, which is very context-dependent and will change with the next LLM update. I don't think that takes less time, at least if you are going to write something non-trivial.
No, you just need to vaguely know what you want, get the LLM to produce something, examine it, and crawl towards the end goal.
LLMs could potentially allow fast iteration from a layman's description of what they want.