Somehow the transformer architecture does quite well at this task, and other architectures do not. You could say a transformer has "innate grammar", while other architectures do not.
That an LLM does well at grammar neither proves nor disproves this possibility. A more pointed criticism of "innate grammar" would be that it is not a hypothesis that can be disproven, and as such is not really a scientific statement.