How people derive utility from programming varies from person to person, and I suspect that variation is the root cause of most debates about AI generation pipelines, both creative and code-related. There are two camps that are surprisingly mutually exclusive:
a) People who gain value from the process of creating content.
b) People who gain value from the end result itself.
I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction with the development process is less relevant. Getting frustrated with configuration and writing boilerplate is not personally gratifying.
Recently, I have been experimenting more with Claude Code and 4.5 Opus and have had substantially more fun creating utterly bizarre projects that I suspect would have involved more frustration than fun if implemented the normal way. It still requires brainpower to QA the output, identify problems, and identify potential fixes: it's not all vibes. Counterintuitively, the code quality has none of the issues or bad code smells expected of LLM-generated code, and with my approach it actually runs substantially more performantly. (I'll do a full writeup at some point.)