The problem with your idea is that a programming language is, by design, meant to be replicable. If you hide part of the language inside a black box, you defeat the purpose of creating it in the first place. Is your goal to create a programming language that AI cannot analyze, or is it information security?
You can use many obfuscation techniques and sleight-of-hand tricks (like those described below) to make it very hard to analyze superficially, but if you over-obfuscate, you risk making it unintelligible to humans as well. The problem is that conventional programming languages follow a predictable structure precisely so that other humans can learn and replicate them.
If that pattern is figured out, I'm sure it can be used to train an LLM to 'comprehend' the language. Think of it like designing a cipher or a puzzle: you can create a very complex cipher understood only by you or those you choose to share it with, but once the 'trick' is revealed, the entire cipher is broken.
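To make the analogy concrete, here's a toy sketch (the key and names are my own invention, not anything from a real system) of a substitution cipher whose only protection is the secrecy of its table. It looks opaque until the table leaks, at which point inverting it is mechanical:

```python
import string

# A secret, arbitrary mapping of the alphabet. The cipher's entire
# "security" rests on this table staying secret.
SECRET = "qwertyuiopasdfghjklzxcvbnm"

ENCODE_TABLE = str.maketrans(string.ascii_lowercase, SECRET)
# Anyone who learns SECRET can build the inverse table instantly:
DECODE_TABLE = str.maketrans(SECRET, string.ascii_lowercase)

def encode(text: str) -> str:
    return text.translate(ENCODE_TABLE)

def decode(ciphertext: str) -> str:
    # Once the 'trick' (the table) is known, decoding is trivial.
    return ciphertext.translate(DECODE_TABLE)

message = "hello world"
assert decode(encode(message)) == message
```

An obfuscated language is in a similar position: once enough example programs reveal the underlying mapping, a model (or a human) can learn to invert it.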
AI adoption and investment are incestuous. Many major customers and investors of Microsoft are themselves heavy proponents of AI, so they need increased AI adoption to justify and sustain the extent to which they've already bought in.
> And even if there are, the largest pool of candidates, the better.
More competition is not inherently "better", nor does it necessarily yield greater innovation. Imposing arbitrary competition as some abstract principle is just masochism.
VeraCrypt is tried and tested, but it appears to perform poorly on modern NVMe devices due to inherent limitations in its TrueCrypt-based architecture. As a FOSS alternative on Windows, you can consider DiskCryptor. Unlike VeraCrypt, however, it has not been audited (to my knowledge) and lacks many quality-of-life features. Use it at your own discretion.