
I know things keep moving faster and faster, especially in this space, but GPT-4 is less than a year old. Claiming they are losing their luster because they aren't shaking the earth with new models every quarter seems a little ridiculous.

As the popularity has exploded, and ethical questions have become increasingly relevant, it is probably worth taking some time to nail certain aspects down before releasing everything to the public for the sake of being first.



Given how fast the valuation of the company and the scope of their ambition (e.g. raising a trillion dollars for chip manufacturing) have been hyped up, I think it's fair to say "You live by the hype, you die by the hype."


Just time your exit correctly!


"This year I invested in pumpkins. They've been going up the whole month of October, and I've got a feeling they're going to peak right around January and BANG! That's when I'll cash in!" -Homer Simpson


Homer obviously was smart, a nuclear scientist, car developer and Junior Vice President in his own tech start-up! So he should know!

Edit: I forgot, NASA trained astronaut!


RIP vim users


Beautifully said.


You don't lose your luster only by not innovating.

The Altman saga, allowing military use, and other small things step by step tarnish your reputation and push you toward mediocrity or worse.

Microsoft has many great development stories (read Raymond Chen's blog to be awed), but what they did in the end to competitors and how they behaved removed their luster, permanently for some people.


At the end of the day the US.mil is spending billions to trillions of dollars. I'm not exactly sure what you mean by lose your luster, but becoming part of the military industrial complex is generally a way to bury yourself in deep piles of gold.


I think you answered it yourself. The main way from cool to not cool is to be buried in "piles of gold".


> a way to bury yourself in deep piles of gold

Unfortunately, there are no deep piles of gold without deep piles of corpses. It is inevitable, though. Prompted by the US military, other countries have also always pioneered or acquired advanced tech, and I don't see why AI would be any different: "Never send a human to do a machine's job" is as ominous now as it is dystopian, as machines increasingly become more human-like.


There will always be corpses.

Do you want American corpses? Or somebody else's?


"allowing military use"

That would actually increase their standing in my eyes.

Not too far from where I live, Russian bombing is destroying homes of people whose language is similar to mine and whose "fault" is that they don't want to submit to rule from Moscow, direct or indirect.

If OpenAI can somehow help stop that, I am all for it.


On the other hand, Israel is using AI to generate its bombing targets and pound the Gaza Strip with bombs non-stop [0].

And, according to the UN, Turkey has used AI-powered, autonomous loitering drones to hit military convoys in Libya [1].

Regardless of us vs. them, AI shouldn't be a part of warfare, IMHO.

[0]: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...

[1]: https://www.voanews.com/a/africa_possible-first-use-ai-armed...


> AI shouldn't be a part of warfare, IMHO.

Nor should nuclear weapons, guns, knives, or cudgels.

But we don’t have a way to stop them being used.


Sure we do. We enforce it through the threat of warfare and subsequent prosecution, the same way we enforce the bans on chemical weapons and other war crimes.

We may lack the motivation and agreement to ban particular methods of warfare, but the means to enforce that ban exists, and drastically reduces their use.


"We enforce it through the threat of warfare and subsequent prosecution, the same way we enforce the bans on chemical weapons and other war crimes."

Do we, though? Sometimes, against smaller misbehaving players. Note that it doesn't necessarily stop them (Iran, North Korea), even though it makes their international position somewhat complicated.

Against the big players (the US, Russia, China), the "threat of warfare and prosecution" does not really work to enforce anything. Russia rains death on Ukrainian cities every night, or attempts to do so while being stopped by AA. Meanwhile, Russian oil and gas are still being traded, including in the EU.


We lack the motivation precisely because of information warfare that is already being used.


This is literally the only thing that matters in this debate. Everything else is useless hand-wringing from people who don't want to be associated with the negative externalities of their work.

The second that this tech was developed it became literally impossible to stop this from happening. It was a totally foreseeable consequence, but the researchers involved didn't care because they wanted to be successful and figured they could just try to blame others for the consequences of their actions.


> the researchers involved didn't care because they wanted to be successful and figured they could just try to blame others for the consequences of their actions

Such an absurdly reductive take. Or how about, just like nuclear energy and knives, they are incredibly useful, society-advancing tools that can also be used to cause harm. It's not as if AI can only be used for warfare. And like pretty much every technology, it ends up being used 99.9% for good and 0.1% for evil.


I think you're missing the point. I don't think we should have prevented the development of this tech. It's just absurd to complain about things that we always knew would happen as though they're some sort of great surprise.

If we cared about preventing LLMs from being used for violence, we would have poured more than a tiny fraction of our resources into safety/alignment research. We did not. Ergo, we don't care, we just want people to think we care.

I don't have any real issue with using LLMs for military purposes. It was always going to happen.


You say 'we' as if everyone is the same. Some people care, some people don't. It only takes a few who don't, or who feel the ends justify the means. Because those people exist, the people who do care are forced into a prisoner's dilemma and end up developing the technology anyway.


Safety or alignment research isn't going to stop it from being used for military purposes. Once the tech is out there, it will be used for military purposes; there's just no getting around it.


If it ever happens again, they'll develop the lists in seconds from data collected from our social media and intercepts. What took organizations warehouses and thousands of agents will be done in a matter of seconds.


Why not? Maybe AI is what is needed to finally tear Hamas out of Palestine root and branch. As long as humans are still in the loop vetting the potential targets, it doesn't seem particularly different from the IDF just hiring a bunch of analysts to produce the same targets.


There is no "removing Hamas from Palestine". The only way to remove the desire of the Palestinian people for freedom is to remove the Palestinian people themselves. And that is what the IDF is trying to do.


Hamas isn't the only path to freedom for Palestinians. In fact, they seem to be the major impediment to it.


If we're going to be reductive, at least include the other main roadblock to a solution which is the current government of Israel.


That doesn't explain why deals weren't reached with the previous governments of Israel.


Sure it doesn't explain that. Would be nice if things were that easy wouldn't it?


Generally if a main roadblock is removed, you can get a little farther down the road.


Hamas doesn't exist in a vacuum where you can just remove it and then it's gone. You have to offer a life that's better than Hamas.


Considering the incredible amount of civilian casualties, I don't think the target vetting is working very well.


I would be very surprised if Turkey were capable of doing that. If they did, that's all Erdoğan would be talking about. Also, it's a bit weird that the linked article's author has a Turkish name (and is an economics and theology major, too).

I am not saying this is anything, but it's definitely tingling my "something's up" senses.


Voice of America generally employs nationals of the country in question for its reporting. There are some other resources:

    - NPR: https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d
    - Lieber Institute: https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/
    - ICRC: https://casebook.icrc.org/case-study/libya-use-lethal-autonomous-weapon-systems
    - UN report itself (Search for Kargu): https://undocs.org/Home/Mobile?FinalSymbol=S%2F2021%2F229&Language=E&DeviceType=Desktop&LangRequested=False
    - Kargu itself: https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav
From my experience, the Turkish military doesn't like to talk about all the things they have.


The major drone manufacturer is Erdoğan's son-in-law. He's being groomed as one of his possible successors to the throne. They looove to talk about those drones.

I will check out the links. Thanks a lot.


You're welcome.

The drones in question (Kargu) are not built by his company.


True. I had been reading about how other drones are in service but they never get mentioned anymore.


>If OpenAI can somehow help stop that, I am all for it.

I got some bad news for you then.


Agreed. It's the most important and impactful use case. All else are a set of parlor tricks in comparison.


Yep. AI is, and will be used militarily.

These virtue signaling games are childish.


It is indeed tragic that virtue is a childish trait among adults.


That assumes that being a pacifist when living under the umbrella of the most powerful military in the world is, in fact, a virtue.

I don't think so. In order to be virtuous, one should have some skin in the game. I would respect dedicated pacifists in Kyiv a lot more. I wouldn't agree with them, but at least they would be ready to face pretty stark consequences of their philosophical belief.

Living in Silicon Valley and proclaiming yourself a virtuous pacifist comes at negligible personal cost.


That's kind of like saying that not being a murderer only has moral value if you're constantly under mortal threat yourself.


I don't really see the comparison. Not being a murderer isn't a virtue, it is just normal behavior for 99,9 per cent of the population.


First of all no one declared themselves a virtuous pacifist.

People don't participate in murder and they think others shouldn't either.

People don't participate in wars (which are essentially large scale murder) and they think others shouldn't.

Murder happens anyway. War happens anyway.

Yet if someone says 'war bad' people jump and say 'virtue signaling', but no one does that when people say 'murder bad'.

There's some really weird moral entanglement happening in the minds of people that are so eager to call out virtue signaling.


Virtue isn't childish, shooting telegraphed signals to be perceived as virtuous regardless of your true nature is childish. Also, using a one dimensional, stereotypical storybook definition of virtue (and then trying to foist that on others) is also childish.


I don’t think a lot of companies care whether they lose their luster to techies since corporations and most individuals will still buy their product. MSFT was $12 in 2000 (when they had their antitrust lawsuit) and is $400 now.


I never bought into ethical questions. It's trained on publicly available data as far as I understand. What's the most unethical thing it can do?

My experience is limited. I got it to berate me with a jailbreak. I asked it to do so, so the onus is on me to be able to handle the response.

I'm trying to think of unethical things it can do that are not in the realm of "you asked it for that information, just as you would have searched on Google", but I can only think of things like "how to make a bomb", suicide related instructions, etc which I would place in the "sharp knife" category. One has to be able to handle it before using it.

For example, it's been increasingly giving the canned "As an AI language model ..." response for stuff that's not even unethical, just dicey.


One recent example in the news was the AI generated p*rn of Taylor Swift. From what I read, the people who made it used Bing, which is based on OpenAI’s tech.


This is more sensationalism than ethical issue. Whatever they did they could do, and probably do better, using publicly available tools like Stable Diffusion.


Or just Photoshop. The only thing these tools did was make it easier. I don't think the AI aspect adds anything for this comparison.


An argument can be made that "more is different." By making it easier to do something, you're increasing the supply, possibly even taking something that used to be a rare edge case and making it a common occurrence, which can pose problems in and of itself.


It's more dangerous if it's uncommon. It's knowledge that protects people and not a bunch of annoying "AI safety" "researchers" selling the lie that "AI is safe". Truth is those morons only have a job because they help companies save face and create a moat around this new technology where new competitors will be required to have "AI safety" teams & solutions. What have "AI safety" achieved so far besides making models dumber and annoying to use?


Put in a different context: The exploits are out there. Are you saying we shouldn't publish them?

Deepfakes are going to become a concern of everyday life whether you stop OpenAI from generating them or not. The cat is out of the proverbial bag. We as a society need to adjust to treating this sort of content skeptically, and I see no more appropriate way than letting a bunch of fake celebrity porn circulate.

What scares me about deepfakes is not the porn, it's the scams. The scams can actually destroy lives. We need to start ratcheting up social skepticism asap.


You probably don't care about the porn cause I'm assuming you're a man, but it can ruin lives too.


It can only ruin lives if people believe it's real. Until recently, that was a reasonable belief; now it's not. People will catch on and society will adapt.

It's not like the technology is going to disappear.


I mean, the same applies to scams, scams only work if people believe them.


Right - as I said, we need to ramp up social skepticism, fast. Not as in some kind of utopian vision, but "the amount of fake information will be moving from a trickle to a flood soon, there's nothing you can do about that, so brace yourselves".

The specific policies of OpenAI or Google or whatnot are irrelevant. The technology is out of the bag.


You are talking like it's something bad. Kids are learning AI and computing instead of drugs and guns. And nobody is hurt.


> Claiming they are losing their luster, because they aren’t shaking the earth with new models every quarter, seems a little ridiculous.

If that's the foundation your luster is built on - then it's not really ridiculous.

GPT popularized LLMs to the world with GPT-3, not too long before GPT-4 came out. They made a lot of big, cool changes shortly after GPT-4 - and everyone and their mother announced LLM projects and integrations in that time.

It's been about 9 months now, and not a whole lot has happened in the space.

It's almost as if the law of diminishing returns has kicked in.


GPT-3 came out 3 years before 4.


GPT-3.5 is when LLMs start to get "mainstream". That's about 4.5 months before the GPT-4 release.

Keep in mind GPT-3.5 is not an overnight craze. It takes months before normal people even know what it is.


>GPT-3.5 is when LLMs start to get "mainstream".

To the general public sure but not research which is what produces the models.

The idea that diminishing returns have hit because there hasn't been a new SOTA model in 9 months is ridiculous. Models take months just to train. OpenAI sat on 4 for over half a year after training was done, just red-teaming it.


It sure is, but the theme in this sub-thread was whether OAI in particular can afford to do that (i.e. wait) while there are literally dozens of other companies and open-source projects showing they can solve a lot of the tasks GPT-4 does, for free, so that the OAI value proposition seems weaker and weaker by the month.

Add to that a company environment that seems to be built on money-crazed, stock-option-piling engineers and a CEO who seems to have gotten power-crazed... I mean, they grew far too fast, I guess.


Perhaps GPT-4 is losing its luster because the more people actually use it, they go from "wow that's amazing" to "amazing, yes, but..."? And the "but" looms larger and larger with more time and more exposure?

Note well: I haven't actually used it myself, so I'm speculating (guessing) rather than saying that this is how it is.


I've got a feeling this is beginning to happen all over the place. I'm really curious to see where the hype train ends up at the end of this year.


This space is growing by leaps and bounds. It's not so much the passage of time as it is the number of notable advancements that is dictating the pace.



