I think you might be focusing too much on AI risk as presented in fantasy (killer robots), meanwhile we can already clearly see how LLMs negatively impact society: disruption of popular opinion and politics (recommendation algorithms), and rapid, uncontrolled scientific discovery. Such disruptions could plausibly result in nuclear war, human-made plagues, etc. You might be getting downvoted without comment because your message comes across as cheerleading that only examines one distant future risk. Your framing of it as "existential fear" is particularly dismissive and doesn't seem to be in good faith for such a serious subject.
It's hard to take doomers seriously when they claim that technologies that already exist, and are clearly not causing nuclear war or plagues, are only a few steps away from doing so. If you want to talk seriously, talk about the actual problems AI is causing right now, like job loss or malpractice.