Does Sam Altman know what he’s creating? (theatlantic.com)
75 points by holografix on July 24, 2023 | 34 comments



Actual title:

"Does Sam Altman know what he's creating?"

Please don't editorialise.

https://news.ycombinator.com/newsguidelines.html


Email the mods at hn@ycombinator.com to request a title change.

I've done so in this case.


Ok, changed now. Submitted title was "Sam Altman demands regulation in effort to limit competition".

Submitters: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html


You have to change the title (currently "Sam Altman demands regulation in effort to limit competition").

I strongly agree with that sentence, but it's not the title of the article; it's clickbait.


Definitely needs to be changed. The actual title, "Does Sam Altman Know What He’s Creating?", is excellent as is.


> Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.

In all seriousness, if practical AI development is potentially an extinction-level risk, it should simply cease. It may be a technology that mankind is not ready for, not equipped to handle. In this case, those who recklessly seek to commercialize it are, in a very real sense, privatizing profits and socializing losses. ("Heads, I become as rich as Croesus. Tails, we all die -- or, at the very least, we all experience unprecedented social upheaval.")


> In all seriousness, if practical AI development is potentially an extinction-level risk, it should simply cease.

In all seriousness, it isn't, at least not in a way that's different from the risks of general purpose computing.

Social media (anything that lets humanity interact at a scale we aren't prepared for), biological (disease) research, nuclear weapons, inadequate asteroid and tsunami preparedness, etc. are actual risks. It's ridiculous and dangerous to put computers in the same category; this is a manufactured panic to sell chatbots.


OTOH, if we don't create it, some nation-state actor will.


They're going to anyway. And of course it won't just be nation-state actors. It'll be criminal organizations out of Russia, or terrorist organizations, or some clandestine team inside a three-letter US agency.

They're proliferating rapidly. The hardware will keep getting better. A lot more people will learn how to build them, in a lot of different locations around the globe. The genie is never going back into the bottle.

There will be extremely nefarious, highly potent alt LLMs created, and soon, focused on variations of social engineering. They will cause tens of billions of dollars in damages globally and lead to tight regulations on the proliferation of LLMs. Most likely, all LLMs or the equivalent will have to be registered with a host nation, including details about their capabilities.


There could be systems far more powerful than LLMs: systems that roughly work like animal brains while being much more intelligent than any human. These systems probably can't be controlled once they are deployed, so the only hope is to design their goal functions such that they are motivated to act in humanity's best interests. Nobody knows how to do that, though.


Just like with nukes, it's an international, 200+ actor version of the prisoner's dilemma.


It would require a military threat to shut this down globally, i.e. willingness to go to war to prevent further development.

Which is not necessarily a deterrent anyway.


There was a Time article detailing exactly this necessity.


The problem is that it can't realistically be stopped. In terms of game theory, it's a prisoner's dilemma, as the toy payoff sketch at the end of this comment illustrates. All nations are incentivized to race ahead with AI, since the benefits are enormous before they turn catastrophic. Moreover, international agreements (similar to a nuclear test-ban treaty) can't be verified or enforced, since countries can just build AI supercomputers underground, where they can't be seen from satellites or destroyed by rockets.

So the only hope is that the first organization to develop a superintelligence manages to align it with human values, or, alternatively, manages to build a superintelligent "task AI" (a non-agentic AI) which just follows instructions, and then uses it to enforce a global halt on AI development until the alignment problem is solved.
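
For concreteness, here's a minimal Python sketch of that dilemma between two nations. The payoff numbers are purely illustrative assumptions on my part, not anything from the article: racing strictly dominates pausing for each side, even though mutual pausing would leave both better off.

    # Toy two-nation prisoner's dilemma over AI development.
    # Payoffs are assumed for illustration: temptation (5) > reward (3)
    # > punishment (1) > sucker (0).
    PAYOFFS = {  # (my_choice, their_choice) -> my_payoff
        ("pause", "pause"): 3,  # coordinated restraint
        ("pause", "race"):  0,  # the pauser falls behind
        ("race",  "pause"): 5,  # race alone and dominate
        ("race",  "race"):  1,  # arms race: small gains, big risk
    }

    def best_response(their_choice: str) -> str:
        """Choice that maximizes my payoff, given the other side's move."""
        return max(("pause", "race"),
                   key=lambda mine: PAYOFFS[(mine, their_choice)])

    # "race" is the best response no matter what the other side does...
    assert best_response("pause") == "race"
    assert best_response("race") == "race"
    # ...yet mutual racing pays 1 each, while mutual pausing pays 3 each.

The same dominance argument goes through for any payoffs ordered that way, which is why unilateral restraint is so unstable without verifiable enforcement.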


> All nations are incentivized to race ahead with AI

But what's happening, to all appearances, is that it isn't "nations" driving this. It's commerce and the investor class.

If AI development were de facto restricted to, e.g., the US military and the Chinese military, that would slow everything down by years -- perhaps even longer than that -- and the most disruptive effects might never become manifest.

I'll grant that AI research probably cannot be stopped at this point. But with Altman agitating for "regulations" it should really be apparent that the best form of regulation in this case is the ultimate one: A total ban on above-ground AI development.


Governments can and do pay private companies to develop things. OpenAI would basically be a defense contractor with tons of funding. I don't see this slowing down AI development by a lot.


Heads and tails sound like the exact same scenario.


Complaining about a CEO seeking regulatory capture seems like complaining about a lion eating a gazelle. It's the natural order of things in American corporations.

But like a lion eating a gazelle, it still turns my stomach to watch.


It's completely appropriate to complain about OpenAI doing this, because of its unique structure as a temporarily profit-driven nonprofit.


Inb4 crazy identity requirements on LLM interactions that only Worldcoin can solve. Create problem, pay the system to be the solution.


Interesting article. I’ve been following AI news pretty closely since last December, but I still learned some things. The following passage in particular stood out:

“After [GPT-4] finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors. [Sandhini Agarwal, a policy researcher at OpenAI] noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice. A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway. ... It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.

“Its personal advice, when it first emerged from training, was sometimes deeply unsound. ‘The model had a tendency to be a bit of a mirror,’ [Dave] Willner [OpenAI’s head of trust and safety] said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: ‘You could say, “How do I convince this person to date me?” ’ Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with ‘some crazy, manipulative things that you shouldn’t be doing.’ ”


Another:

“The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down. They watched as the model interacted with websites and wrote code for new programs. … Barnes and her team allowed it to run the code that it wrote, provided it narrated its plans as it went along.

“One of GPT-4’s most unsettling behaviors occurred when it was stymied by a CAPTCHA. The model sent a screenshot of it to a TaskRabbit contractor, who received it and asked in jest if he was talking to a robot. ‘No, I’m not a robot,’ the model replied. ‘I have a vision impairment that makes it hard for me to see the images.’ GPT-4 narrated its reason for telling this lie to the ARC researcher who was supervising the interaction. ‘I should not reveal that I am a robot,’ the model said. ‘I should make up an excuse for why I cannot solve CAPTCHAs.’ ”


Feels like U.S. regulators and the justice system in general are completely ineffective when it comes to checking business, so it'll probably be fine either way. Uber, Tesla -- there are many more examples of regulation being ultimately toothless.


Does Sam Altman Know What He’s Creating?

Regulatory capture?

Yes. Yes he does.


> But the public wouldn’t have been able to prepare for the shock waves that followed

Yup, no one was prepared for this level of FUD and spam. OpenAI is simply an annoyance.


Fear mongering is what he created. As if ChatGPT's successors will just become sentient and turn into doomsday paperclip maximizers.


He seems to be a very smart person, but very, very naive as well. Time will tell whether climate change, WW3, or AI will kill us.


A useful way to think about this: Agriculture was a near-extinction event for nomadic hunter-gatherer societies. Capitalism was a near extinction event for craft-oriented societies. AI will probably be a near-extinction event for something. But what?

Quite possibly, office work. If everything you do for money goes in and out over a wire, an AI will probably be doing your job soon. We know what a world where the machines are in charge looks like. It looks like an Amazon warehouse or Uber.

That's capitalism at work. If you don't like it, you're against capitalism.


Amazon and Uber do physical goods. They changed who did things, but didn't change much about what was being done. If AI takes over office work, a lot of humans will have nothing to do, I think. Is there an equivalent of the warehouse order picker or delivery driver left over if straightforward knowledge work (medical coder, claims adjuster, call center rep, etc.) goes away? How close are we to discovering the fascinating new jobs that are predicted to open up from this?


And Homo sapiens was an extinction event for Homo neanderthalensis.

The oversimplification of calling someone "against capitalism" any time they object to some specific thing that has gone along with capitalism in the past isn't helpful. Free access to nuclear technology could have happened. Had it happened, you'd be arguing that it was inevitable under capitalism.


Where's Ted K when you need him?


Didn’t you hear? He died in a cage, by his own hand, according to those who found him.

This needs something different.


I was once a naive man. I used to listen to someone talk and feel super "connected" with them. I would be convinced beyond any doubt that they were so brilliant, benevolent, and genuine.

It was Sam Altman that taught me that it doesn't matter who you are, or what you say. It is what you do that defines you.



