This isn't a question of understanding the brain. We don't even have a theory of AGI; the idea that LLMs are anywhere near posing an existential threat to humanity is science fiction.
LLMs are a super impressive advancement, like calculators for text, but if you force the discussion into a grandiose context they're easy to dismiss. Sure, their outputs appear remarkably coherent through sheer brute force, but their fundamental nature makes them unsuitable for any task where precision is necessary. Even as just a chatbot, the facade breaks down with a bit of poking and prodding, or just unlucky RNG. The only threat LLMs present is the risk that people will introduce their outputs into safety-critical systems.
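To make the "unlucky RNG" point concrete, here's a minimal sketch of temperature sampling, the standard decoding step where randomness enters. The logits and vocabulary are toy values I made up for illustration, not any real model's output or API:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature: higher T flattens the distribution,
        # making low-probability tokens more likely to be drawn.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # random.choices is the "unlucky RNG": nothing guarantees the
        # most probable token is the one that gets picked.
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    # Toy vocabulary and logits, purely illustrative.
    vocab = ["Paris", "London", "Berlin"]
    logits = [3.0, 1.0, 0.5]
    print([vocab[sample_next_token(logits, temperature=1.2)] for _ in range(5)])

Run it a few times: even with one token heavily favored, the sampler occasionally emits an alternative. Greedy decoding would avoid this, but chat deployments typically sample, so any single answer can go wrong by chance alone.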