I'm struggling to understand how the two ideas are different in any way other than intent. Sure, I'm not likely to throw an <|endoftext|> into a tailored context, but anybody who, for example, lies about what "assistant" says in the API calls is surely attempting to coerce behavior out of the model that isn't in line with OpenAI's intentions.
I thought you were suggesting renaming "prompt engineering" - the activity of designing prompts to solve specific problems - to "prompt injection", which means deliberately attacking prompts using input designed to subvert their planned behaviour.
To me, that's like rebranding "software engineering" to "exploit engineering" - sure, one is a subset of the other but they are not the same thing.
I don't think "prompt engineering" was ever a clearly-defined practice. The way I see it, it's just some over-eager noobs both prompting and prompt-injecting until they get results close to what they want, and then subsequently pretending like they're engaging in some new branch of mathematical reasoning. Hence why I called the moniker "pretentious".
Personally, I've never liked the title of "software engineer" or even "data engineer" (my own title). However, those are more rooted in engineering-like practices than any of this "prompt engineering" nonsense.