> But when you delegate work to a junior developer, you still need to understand the problem deeply to communicate it properly, and to recognize when their solution is wrong or incomplete
You really don't. Most delegation to a junior falls under the training guideline: something trivial for you to execute, but that will push the boundary of the junior. There are also a lot of assumptions you can make, especially if you're familiar with the junior's knowledge and thought process. And because the tasks are trivial for you, you're already refraining from describing the actual solution.
> AI tools work similarly. You still hit edit-compile-test cycles when output doesn't compile or tests fail.
That's not what edit-compile-test means, at least IMO. You edit by formulating a hypothesis in a formal notation, you compile to check that you've followed the formal structure (and to get a faster artifact), and you test to verify the hypothesis.
The core thing here is the hypothesis, and Naur's theory of programming broadly describes the mental model you build when all the hypotheses hold. Most LLM prompts describe the end result and/or the process. Formulating the hypothesis requires domain knowledge, and writing the code requires knowledge of the programming environment. Failures in the latter steps (the compile and the test) point out the remaining gaps not surfaced by the first one.
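To make that concrete, here's a minimal sketch of the distinction (Python assumed; parse_price and the prices-in-cents convention are hypothetical, just for illustration):

    # Hypothesis (domain knowledge): prices in the feed are quoted in cents,
    # so "12.50" should come back as 1250, not 12.5.
    def parse_price(raw: str) -> int:
        # Knowledge of the programming environment: string/float/int conversion.
        return int(round(float(raw) * 100))

    def test_price_is_in_cents():
        # "Compile": the parser / a type checker confirms the formal structure
        # is well-formed before the hypothesis is ever exercised.
        # "Test": running this verifies (or falsifies) the hypothesis itself.
        assert parse_price("12.50") == 1250

A syntax or type error at the "compile" step exposes a gap in your knowledge of the environment; a failing assertion exposes a gap in the domain hypothesis itself.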
Well put and I concur with your points (for what that is worth :-)).
And thanks for referencing "Naur's theory of programming". For those like myself previously unaware of the paper ("Programming as Theory Building"), it can be found below and is well worth a read: