The arguments for decelerating AI development rely on fear-mongering and appeals to “existential risk”, which is a fallacy. We’re talking about LLMs, not AGI here (which is quite a ways out, realistically). If anything - we should be accelerating development toward the goal of AGI, as the implications for humanity are profound.
Okay, but imagine someone strips ChatGPT of its safeguard layers, asks it to shut down MAERSK operations worldwide without leaving tracks, connects its outputs to a bash terminal, and feeds the stdout back to the chat API.
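The wiring for that is trivial, by the way. A rough sketch in Python (purely hypothetical; call_model is a stub standing in for whatever chat completion API you'd point it at):

    import subprocess

    def call_model(history):
        # Stub for a chat-completion call to whatever LLM is wired up;
        # in the scenario above it would return the model's next shell command.
        return 'echo "hello from the model"'

    history = [{"role": "system", "content": "You control a bash shell."}]
    for _ in range(3):  # bounded here; the scenario implies an open-ended loop
        command = call_model(history)
        history.append({"role": "assistant", "content": command})
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        # stdout/stderr get fed straight back to the model as its next input
        history.append({"role": "user", "content": result.stdout + result.stderr})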
It is still an LLM, but if it can masquerade as an AGI, is that not enough to qualify as one? To me, this is what the Chinese Room thought experiment [1] is about.
>not AGI here (which is quite a ways out, realistically)
You don't know how far away it is.
>If anything - we should be accelerating development toward the goal of AGI, as the implications for humanity are profound.
Given that we don't know how far away it is, that current models are matching human performance on lots of tasks, and that we don't know any way to ensure their safety, it's entirely reasonable to be scared.