Well they do of course. There are export restrictions on supercomputers now, including many NVIDIA GPUs.
I contend that doesn't matter.
There is sufficient compute available now at consumer levels to make it too late to stop training LLMs.
If cloud A100s became unavailable tomorrow it'd be awkward, but there is enough progress being made on training on lower RAM cards to show it is possible.
So far, letting private industry iterate on LLMs doesn't seem to directly pose a risk of ending lives the way human trials and nuclear weapons development do.
I think the fear of LLMs is very overblown. On the other hand, if LLMs actually manage to do what proponents hope they will, I think some people will die as a result of the economics when they lose their jobs.
That's not unique to LLMs, of course. It's what has happened before every time something has obsoleted a bunch of jobs. There's no reason to think this time would be any different.
The old excuse from AI researchers was that once AI takes all the mundane jobs, people will be free to become artists. Ask artists now what they think about AI. A whole lot of them aren't very happy about it.