TBH I'm not sure how he arrived at "won’t replace software engineers anytime soon"
The LLM solved his task. With his "improved prompt," the code is good. The LLM in his setup was never given a chance to actually debug its own code. It took him only five "improve this code" commands to reach the final optimized result, which means the whole thing was solved (in LLM execution time) in under a minute.
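For concreteness, the iteration described above can be sketched as a simple loop: send the task, then repeatedly feed the model's answer back with an "improve this code" follow-up. This is a minimal sketch; `call_llm` is a hypothetical stand-in for whatever chat-completion client is actually used, not a real API.

```python
# Sketch of the iterative "improve this code" workflow described above.
# `call_llm` is a hypothetical placeholder, NOT a real client library:
# a real version would send `messages` to a chat model and return its reply.

def call_llm(messages):
    # Placeholder reply; labels each revision by how many user turns it has seen.
    user_turns = sum(1 for m in messages if m["role"] == "user")
    return f"<code revision {user_turns}>"

def iterate_code(task_prompt, rounds=5):
    """Ask for code, then issue `rounds` follow-up 'improve this code' commands."""
    messages = [{"role": "user", "content": task_prompt}]
    revisions = []
    for _ in range(rounds):
        reply = call_llm(messages)
        revisions.append(reply)
        # Feed the model's own output back and ask for an improvement.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "improve this code"})
    return revisions

revisions = iterate_code("Write Python code that solves the task.")
```

Note that nothing in this loop runs or debugs the generated code; the model only ever sees its own previous output, which is exactly the limitation of the setup being discussed.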
A non-engineer by definition would not be able to fix bugs.
But why does it matter that they won't be able to interpret anything? Just like with real engineers, you can ask the AI to provide an explanation digestible by an Eloi.
That statement is not being discussed as it is obvious. The question is "can AI be a developer", not "am I a developer if I use an AI who is a developer".
This doesn't make any sense: it's a question, not an answer. I no longer see the relevance of doctors to the current topic. Your initial reference worked as an analogy (even if the analogy itself was beside the point), but the new reference doesn't follow at all.
Did you read the two paragraphs written above and the one where he made that statement?
My comment on what you are "not sure" about is this: Max is a software engineer (a good one, I'm sure), and he kept iterating until the code was close to 100x faster precisely because he knew what "better code" looked like.
Now ask yourself this question: is there any chance a no-code/low-code developer would reach the conclusion Max deduced (and he is not the only one), the very conclusion you are not sure about?
An experienced software engineer/developer is capable of improving LLM-written code into better code with the LLM's own help.