
On Twitter, it was "multi-billionaires [Elon Musk] want to keep us down!" In the HN open letter thread it was "All the companies falling behind want a chance to catch up." Now it's "Gary Marcus".

The letter was signed by a lot of people, including many AI researchers, including people working on LLMs. Any dismissal that reduces to "[one incumbent interest] opposes this" is missing the mark.

A lot of people, incumbents and non-incumbents, relevant experts and non-experts, are saying we need to figure out AI safety. Some have been saying it for years, others just recently, but if you want to dismiss their views, you're going to need to address their arguments, not just offer ad hominem dismissals.



> A lot of people, incumbents and non-incumbents, relevant experts and non-experts, are saying we need to figure out AI safety. Some have been saying it for years, others just recently, but if you want to dismiss their views, you're going to need to address their arguments, not just offer ad hominem dismissals.

Some of us look at patterns and are, understandably, cynical given some of the steps taken (including those that effectively made OpenAI private the moment its potential payoff became somewhat evident).

So yeah. There is money on the table and some real stakes that could be lost by the handful of recent owners. Those incumbent and non-incumbent voices are only being amplified now (as you noted, they DID exist before all this) because it is convenient for the narrative.

They are not being dismissed. They are simply being used.


I don't particularly care about crowds on Twitter or HN. Musk has lots of money but can't stop me spending mine.

Marcus said:

> We must demand transparency, and if we don’t get it, we must contemplate shutting these projects down.

https://garymarcus.substack.com/p/the-sparks-of-agi-or-the-e...

(While at the same time still saying "it's nothing to do with AGI")


Sure, sure, Marcus says one thing and has one set of motives. Elon Musk says another thing and has another set of motives. But if you want to dismiss this by questioning the motives of the signers, you've got a few thousand other people whose motives you have to identify and dismiss.

It would be much more effective, and meaningful, to challenge their arguments.


I think the Scott Aaronson link posted did a pretty good job of that.

I don't think their arguments deserve anything more than that.


Elon Musk is the most ironic signatory of this blog post, considering his decade-long effort to put AI behind the wheel of 100mph+ vehicles. And then he's got the gall to lecture the rest of us on "AI safety" after we finally get a semi-intelligent chatbot? Come on, man.


If we are being fair, even though we might refer to both self-driving capabilities and a self-aware supercomputer overlord as "AI", they aren't the same thing, and you can hold different opinions on the development of each.


Given that FSD is obviously a scam, wouldn't a pause like this be in his best interest? (buys them more time, while keeping the hype machine going)


I don't know, a Tesla got me door to burrito store (6 miles) in the pelting rain the other day without human input. Seems like that's not quite the bar for an outright scam.


There are actually no relevant experts in the field of artificial general intelligence, or the safety thereof. No one has defined a clear path to build such a thing. Claiming to be an expert in this field is like claiming to be an expert in warp drives or time machines. Those calling for a halt in research are merely ignorant, or attention-seeking grifters. Their statements can be dismissed out of hand regardless of their personal wealth or academic credentials.

Current LLMs are merely sophisticated statistical tools. There is zero evidence that they could ever be developed into something that could take intentional action on its own, or somehow physically threaten humans.

LLMs are useful for improving human productivity, and we're going to see some ugly results when criminals and psychopaths use those tools for their own ends. But this is no different from any other tool like the printing press. It is not a valid reason to restrict research.


This is a bit like saying that 1939 Einstein wasn't an expert in nuclear bombs. Sure, they didn't exist, so he wasn't an expert on them, but he was an expert on the thing that led to them, and when he said they were possible, sensible people listened.

A lot of people working on LLMs say that they believe there is a path to AGI. I'm very skeptical of claims that there's zero evidence in support of their views. I know some of these people and while they might be wrong, they're not stupid, grifters, malicious, or otherwise off-their-rockers.

What would you consider to be evidence that these (or some other technology) could be a path to a serious physical threat? It's only meaningful for there to be "zero evidence" if there's something that could work as evidence. What is it?


That is not a valid analogy. In 1939 there was at least a clear theory of nuclear reactions backed up by extensive experiments. At that point building a weapon was mostly a hard engineering problem. But we have no comprehensive theory of cognition, or even anything that legitimately meets the criteria to be labeled a hypothesis. There is zero evidence to indicate that LLMs are on a path to AGI.

If the people working in this field have some actual hard data then I'll be happy to take a look at it. But if all they have is an opinion then let's go with mine instead.

If you want me to take this issue seriously then show me an AGI roughly on the level of a mouse or whatever. And by AGI I mean something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time). By that measure we're not even at the insect level.


> something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time).

DNN RL agents can do that. Of course you'll wave it away as "not general" or "mouse is obviously better". But you won't be able to define that precisely, just as you're not able to prove ChatGPT "doesn't really reason".
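
To make that concrete, here's a minimal sketch (my own toy example, not anyone's production code) of a DNN RL agent learning to reach a goal under a hard step budget: a REINFORCE-style policy gradient on gymnasium's CartPole, where episodes are cut off at 500 steps. The hyperparameters and the 300-episode training budget are arbitrary.

    # Toy REINFORCE agent: "reach a goal within a step budget".
    # Assumes the gymnasium and torch packages; numbers are arbitrary.
    import gymnasium as gym
    import torch
    import torch.nn as nn

    env = gym.make("CartPole-v1")      # episodes truncated at 500 steps
    policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    for episode in range(300):
        obs, _ = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            obs, reward, terminated, truncated, _ = env.step(action.item())
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
            done = terminated or truncated   # truncation = the resource limit
        # Crude return-weighted policy gradient (no baseline, no discounting).
        loss = -torch.stack(log_probs).sum() * sum(rewards)
        opt.zero_grad()
        loss.backward()
        opt.step()

Whether balancing a pole "counts" is exactly the definitional fight here; the point is only that goal-seeking under resource limits, by itself, doesn't separate mice from gradient descent.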

PS. Oh nevermind, I've read your other comments below.


> Current LLMs are merely sophisticated statistical tools.

This is wrong. They have the capability for in-context learning, which doesn't match most definitions of "statistical tools".
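
For anyone wondering what "in-context learning" refers to in practice: the model picks up a rule that is never stated, only demonstrated by a few examples inside the prompt itself, with no weight updates. A minimal sketch, assuming the pre-1.0 `openai` Python client and an OPENAI_API_KEY in the environment (the model name is just illustrative):

    # The doubling rule below is never stated, only shown by example;
    # the model has to infer it from the prompt alone.
    import openai

    prompt = (
        "blip 3 -> 6\n"
        "blip 10 -> 20\n"
        "blip 7 -> 14\n"
        "blip 25 -> "
    )

    resp = openai.ChatCompletion.create(
        model="gpt-4",              # illustrative; any capable chat model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=5,
        temperature=0,
    )
    print(resp.choices[0].message.content)   # expected: 50

Whether "learns new input-output mappings on the fly from a handful of examples" still fits under "statistical tool" is the semantic argument being had here.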


I'd also guess the correct take is to see LLMs as human magnifiers more than human replacers* -- most technology does this, magnifying aspects of the human condition rather than fundamentally changing it.

But that doesn't make me sanguine about them. The printing press was amazing and it required new social conceptions (copyright). Nuclear weapons did "little" other than amplify human destructive capability but required a whole lot of thought on how to deal with it, some of it very strange like MAD and the logic of building doomsday devices. We're in the middle of dealing with other problems we barely understand from the extension of communications technology that may already have gotten out of hand.

We seem like we're limited in our habits of social reflection. We seem to prefer the idea that if we can, we must, and if we don't, someone else will, in an overarching, fundamentally competitive contest. It deprives us of the ability to cooperate thoughtfully in thinking about the ends. Invention without responsibility will have suboptimal and possibly horrifying outcomes.

(* I am much less certain that there isn't some combination system or future development that could result in an autonomous self-directed AGI. LLMs alone probably not, but put an LLM in an embodied system with its own goals and sensory capacities and who knows)


> We seem to prefer the idea that if we can, we must, and if we don't, someone else will, in an overarching, fundamentally competitive contest.

Yes. This line of argument terrifies me not only because it's fatalist, but because its logical endpoint is the worst possible outcomes.

It smells a lot like "we need to destroy society because if we don't do it, someone else will." Before anyone jumps on me about this, I'm not saying LLMs will destroy society, but this argument is almost always put in response to people who are arguing that they will destroy society.


There are researchers working on the specific problem of AI safety, and I consider them to be the experts of this field, regardless of the fact that probably no university is currently offering a master's degree specifically on AI safety. Whether one or more of these researchers are in favor of the ban, I don't know.


The safety people aren't that bright. They are just trying to write tons of requirements that the actual ML model writers follow to make sure race and violence are accounted for somehow. These are not AI experts. They are mostly lay people.


> Current LLMs are merely sophisticated statistical tools.

Yawn. At a minimum, they are tools whose internal workings we do not understand exactly.


You clearly have not read anything regarding the capabilities of GPT-4 and also clearly have not played around with ChatGPT at all. GPT-4 has already displayed a multitude of emergent capabilities.

This is incredibly ignorant and I expect better from this forum.


Bullshit. I have used the various LLM tools and have read extensively about them. They have not displayed any emergent capabilities in an AGI sense. Your comment is simply ignorant.


I'm going to just assume you're not being malicious.

https://www.microsoft.com/en-us/research/publication/sparks-...

https://www.assemblyai.com/blog/emergent-abilities-of-large-...

If you'd like you can define "emergent capabilities in an AGI sense".


I am not being malicious. I do not accept those papers as being actual examples of emergent behavior in a true AGI sense. This is just another case of humans imagining that they see patterns in noise. In other words, they haven't rejected the null hypothesis. Junk science. (And just because the science is bad doesn't mean the underlying products aren't useful for solving practical problems and enhancing human productivity.)

The blowhards who are calling for arbitrary restrictions on research are the ones being malicious.


OK you're moving the goalposts and just flat-out saying that you know better than the actual researchers in the field. That's fine, and it's what I was assuming you were going to say, but I appreciate you being open about it.


Yeah, there aren’t experts in something that doesn’t exist. That means we have to make an educated guess. By far the most rational course of action is to halt AI research. And then you say there’s no proof that we are on the path to AGI or that it would harm us. Yeah, and there never could be any proof for either side of the argument. So your dismissal of AI risk is kind of flaccid without any proof or rational speculation or reasoning. Listen, man, I’m not a cynical commenter. I believe what I’m saying and I think it’s important. If you really think you’re right then get on the phone with me or video chat so we can actually debate and settle this.



