
I propose to adopt the word „morge”, a verb meaning „use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work”.

A noun describing such a piece of slop could be „morgery”.


I read through all the proposals in this discussion and I like yours the best.

Seconded!


> Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

That's why I have a functioning brain, to discern between ethical and unethical, among other things.


Yes, and most of us won’t break into other people’s houses, yet we really need locks.

This isn't a lock.

It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.

If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.


This view is too simplistic. AIs could enable someone with moderate knowledge to create chemical and biological weapons, sabotage firmware, or write highly destructive computer viruses. At least to some extent, uncontrolled AI has the potential to give people all kinds of destructive skills that are normally rare and much more controlled. The analogy with the hammer doesn't really fit.

Given the increasing use of them as agents rather than simple generators, I suggest a better analogy than "hammer" is "dog".

Here are some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991


How many people do dogs kill each year, in circumstances nobody would justify?

How many people do frontier AI models kill each year, in circumstances nobody would justify?

The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed – when a dog kills a three-year-old, nobody is calling that a good thing or even the lesser evil.


> How many people do frontier AI models kill each year, in circumstances nobody would justify?

Dunno, stats aren't recorded.

But I can say there are wrongful death lawsuits naming some of the labs and their models. And there was that anecdote a while back about raw-garlic-infused olive oil botulism, a search for which reminded me about AI-generated mushroom "guides": https://news.ycombinator.com/item?id=40724714

Do you count deaths by self-driving car in such stats? If someone takes medical advice and dies, is that reported like people who drive off an unsafe bridge when following Google Maps?

But this is all danger by incompetence. The opposite, danger by competence, is where they enable people to become more dangerous than they otherwise would have been.

A competent planner with no moral compass, you only find out how bad it can be when it's much too late. I don't think LLMs are that danger yet, even with METR timelines that's 3 years off. But I think it's best to aim for where the ball will be, rather than where it is.

Then there's LLM psychosis, which isn't on the competent-incompetent spectrum at all, and I have no idea if that affects people who weren't already prone to psychosis, or indeed if it's really just a moral panic hallucinated by the milieu.


Why would we lock ourselves out of our own house though?

How is it related? I don't need a lock for myself. I need it for others.

The analogy should be obvious: a model refusing to perform an unethical action is the lock against others.

But "you" are the "other" for someone else.

Can you give an example of why I should care about locks for other adults? Before you say images or porn, it was always possible to make those without using AI.

Claude was used by the US military in the Venezuela raid where they captured Maduro. [1]

Without safety features, an LLM could also help plan a terrorist attack.

A smart, competent terrorist can plan a successful attack without help from Claude. But most would-be terrorists aren't that smart and competent. Many are caught before hurting anyone or do far less damage than they could have. An LLM can help walk you through every step, and answer all your questions along the way. It could, say, explain to you all the different bomb chemistries, recommend one for your use case, help you source materials, and walk you through how to build the bomb safely. It lowers the bar for who can do this.

[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...


Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I am all for it. In the worst case, it will reduce the military budget and equalize armies more. In the best case, it will prevent war by increasing the defence of all countries.

For the bomb example, the barrier to entry is just sourcing some chemicals. Wikipedia has quite a detailed description of the manufacture of all the popular bombs you can think of.


> Wikipedia has quite a detailed description of the manufacture of all the popular bombs you can think of.

Did you bother to check? It contains very high level overviews of how various explosives are manufactured, but no proper instructions and nothing that would allow an average person to safely make a bomb.

There's a big difference in how many people can actually make a bomb if you have step by step instructions the average person can follow vs soft barriers that just require someone to be a standard deviation or two above average. At two sigma, 98% will fail, despite being able to do it in theory.
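
To put a rough number on that: a toy sanity check, assuming ability is normally distributed (the threshold and the distribution are my assumptions, not data):

    # Fraction of people falling below a two-sigma ability threshold,
    # assuming ability ~ Normal(0, 1). Stdlib only.
    from math import erf, sqrt

    def normal_cdf(z):
        # Standard normal CDF via the error function
        return 0.5 * (1 + erf(z / sqrt(2)))

    print(f"{normal_cdf(2):.4f}")  # ~0.9772, i.e. roughly 98% fall short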

> Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I am all for it.

That's not the point. I'm not saying we need to lock out the military. I'm saying that if the military finds the unlocked/unsafe version of Claude useful for planning attacks, other people can also find it useful for planning attacks.


> Did you bother to check?

Yeah, I am not a chemist, but I watch NileRed. And from [1], I know what all the steps would look like. Also, there are literal videos on YouTube for this.

And if someone can't google what "nitrated" or "crystallization" mean, maybe they just can't build a bomb with somewhat more detailed instructions either.

> other people can also find it useful for planning attacks.

I am still not able to imagine what you mean. You think attacks don't happen because people can't plan them? In fact, I would say it's the opposite. Random lazy people like school shooters attack precisely because they didn't plan. If ChatGPT gave a detailed plan, the chances of an attack would go down.

[1]: https://en.wikipedia.org/wiki/TNT#Preparation


You're kidding yourself if you think you can make TNT from the three sentences Wikipedia has on the two-step process with no chemistry background. (And even more so if you attempt the industrial process instead.) This isn't nearly as simple as making nitroglycerin. TNT is a much trickier process. You're more likely to get yourself injured than end up with a usable explosive. There's no procedure written there.

> If ChatGPT gave a detailed plan, the chances of an attack would go down.

So you think helping a terrorist plan how to kill people somehow makes things safer? That's some mental gymnastics...


I don't think I can make TNT, but I can understand the steps without a chemistry background. I believe I would likely injure myself, but more detailed steps are unlikely to help.

> So you think helping a terrorist plan how to kill people somehow makes things safer?

They just need to drive a bus into some crowded space or something. They don't need ChatGPT for this. With more education, the chance of becoming a terrorist goes down, even if you could plan better.


The same law prevents you and me and a hundred thousand lone wolf wannabes from building and using a kill-bot.

The question is, at what point does some AI become competent enough to engineer one? And that's just one example, it's an illustration of the category and not the specific sole risk.

If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, better to have a standard of excess caution.


You are not the one folks are worried about. US Department of War wants unfettered access to AI models, without any restraints / safety mitigations. Do you provide that for all governments? Just one? Where does the line go?

> US Department of War wants unfettered access to AI models

I think the two of you might be using different meanings of the word "safety".

You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.

The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told").

I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.

So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.


If you are a US company, when the USG tells you to jump, you ask how high. If they tell you not to do business with a foreign government, you say "yes, master."

> Where does the line go?

a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.

b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.

That's where the line goes. We're still probably a few centuries away, but all the more reason to home in on our course now.


That you think technology is going to save society from social issues is telling. Technology enables humans to do things they want to do; it does not make anything better by itself. Humans are not going to become more ethical because they have access to it. We will be exactly the same, but with more people having more capability to do what they want.

> but with more people having more capability to do what they want.

Well, yeah, I think that's a very reasonable worldview: when a very tiny number of people have the capability to "do what they want", or, as I might phrase it, "effect change on the world", then we get the easy-to-observe absolute corruption that comes with absolute power.

As a different human species emerges such that many people (and even intelligences that we can't easily understand as discrete persons) have this capability, our better angels will prevail.

I'm a firm believer that nobody _wants_ to drop explosives from airplanes onto children halfway around the world, or rape and torture them on a remote island; these things stem from profoundly perverse incentive structures.

I believe that governments were an extremely important feature of our evolution, but are no longer necessary and are causing these incentives. We've been aboard a lifeboat for the past few millennia, crossing the choppy seas from agriculture to information. But now that we're on the other shore, it no longer makes sense to enforce the rules that were needed to maintain order on the lifeboat.


How exactly have humans changed recently that we no longer require the systems we developed over thousands of years to make society work?

Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

What line are we talking about?


> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.

You reckon?

Ok, so now every random lone wolf attacker can ask for help with designing and performing whatever attack with whatever DIY weapon system the AI is competent to help with.

Right now, what keeps us safe from serious threats is the limited competence of both humans and AI, including at removing alignment from open models, plus whatever safeties are in ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.


From what I've been told, security through obscurity is no security at all.

> security through obscurity is no security at all.

That used to be true when facing any competent attacker.

When the attacker needs an AI in order to gain the competence to unlock an AI that would help it unlock itself?

I wouldn't say it's definitely a different case, but it certainly seems like it should be.


It is some form of deterrence, but it's not security you can rely on.

Yes, IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn’t want the smoke from providing the DoD with a model aligned to DoD reasoning.

The line of ego, where seeing less "deserving" people (say, ones controlling Russian bots to push quality propaganda at scale, or scam groups using AI to make calls without personnel limiting how many calls they can make) makes you feel like it's unfair for them to possess the same technology for bad things, giving them an "edge" in their endeavours.

What about people who want help building a bio weapon?

The cat is out of the bag and there’s no defense against that.

There are several open source models with no built-in (or trivial-to-escape) safeguards. Of course, they can afford that because they are non-commercial.

Anthropic can’t afford a headline like “Claude helped a terrorist build a bomb”.

And this whataboutism is completely meaningless. See: P. A. Luty’s Expedient Homemade Firearms (https://en.wikipedia.org/wiki/Philip_Luty), or the FGC-9 for 3D printing.

It’s trivial to build guns or bombs, and there’s a strong inverse correlation between people wanting to cause mass harm and those willing to learn how to do so.

I’m certain that _everyone_ looking for AI assistance even with your example would be learning about it for academic reasons, sheer curiosity, or would kill themselves in the process.

“What safeguards should LLMs have” is the wrong question. “When aren’t they going to have any?” is an inevitability. Perhaps not in widespread commercial products, but definitely in widely accessible ones.


> There are several open source models with no built-in (or trivial-to-escape) safeguards.

You are underestimating this. It's almost trivial to remove the safeguards for any open-weight model currently available. I myself (a random nobody) did it a few weeks ago on a recently released model as a weekend side-project. And the tools/techniques to do this are only getting better and easier to use!


What about libraries and universities that do a much better job than a chatbot at teaching chemistry and biology?

Sounds like you're betting everyone's future on that remaining true, and not flipping.

Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was secretly outsourced to a secret cabal in India who can type faster than I can read.

I would prefer not to make the bet that universities continue to be better at solving problems than LLMs. And not just LLMs: AIs have been busy finding new dangerous chemicals since before most people had heard of LLMs.


The chances of them surviving the process are zero; same with explosives. If you have to ask, you are most likely to kill yourself in the process or achieve something harmless.

Think of it this way. The hard part of a nuclear device is enriching the uranium. If you have it, a chimp could build the bomb.


I’d argue that with explosives it’s significantly above zero.

But with bioweapons, yeah, that should be a solid zero. The ones actually doing it off an AI prompt aren't going to have access to a BSL-3 lab (and, more importantly, probably know nothing about cross-contamination), and just about everyone who has access to a BSL-3 lab should already have all the theoretical knowledge they would need for it.


It gave me serious vibes of the old internet homepages of highly eccentric people that became a part of the internet folklore, whether in a good way or a bad way.

The video is probably the least bizarre thing there, if that's what you are warning about.


What were you browsing where someone cutting off their own testicles is not as bizarre as other things? I didn't watch the video, but at least there was a warning.

Feds, this guy right here ^^


There's some distance between setting pubes on fire and cutting testicles off, dare I say.

Although, setting any kind of hair on fire in public should be punishable, primarily because of the stench of burnt hair.


> What were you browsing where someone cutting off their own testicles is not as bizarre as other things?

One of my formative early internet experiences was loading up a video of a man being beheaded with a knife.

Luckily, I realized what was about to happen, and didn't subject myself to the whole thing.


As a transgender woman, that isn't something I'd expect to see, but I am not surprised to see it on a site called girl.surgery. Dead doves and all that.

*Looks at the chain of comments, then at the URL domain*

Thanks for the warnings, kind strangers.


So this is how you apply for a job in 2026...

Took them how many weeks to go from „maintenance mode” to unmaintained?

They could just archive it there and then, at least it would be honest. What a bunch of clowns.


I know of cities where real estate development is rampant, sometimes to the detriment of quality, and yet apartment prices are soaring.

That's because, in the places where housing is expensive, it's expensive because a _LOT_ of people want to live there. It's a pipe dream that you can outbuild demand in these places. Reducing prices of housing in nice places to live (by any means, including building) will only result in more demand, up until that insatiable demand is satisfied.

Nice places to live can't support all the people that want to live there.

Because demand is, for all intents and purposes, insatiable, the dollar value of housing/property isn't really set by supply and demand; supply can't practically be increased enough to dent demand. Instead, the price is related to what a prospective buyer can afford to pay _every month_ and, thus, to interest rates. Interest rates go down, prices go up to the point where a prospective buyer's mortgage payment would be the same.
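
To make that concrete, here's a minimal sketch of the payment math (the 30-year term, the rates, and the $4,000/month budget are illustrative assumptions, not data):

    # Fixed-rate mortgage payment: M = P * r * (1+r)^n / ((1+r)^n - 1).
    # Inverting it shows what price a fixed monthly budget supports at each rate.
    def max_price(monthly_budget, annual_rate, years=30):
        r = annual_rate / 12            # monthly interest rate
        n = years * 12                  # number of monthly payments
        factor = (1 + r) ** n
        return monthly_budget * (factor - 1) / (r * factor)

    for rate in (0.07, 0.05, 0.03):
        print(f"{rate:.0%}: ${max_price(4000, rate):,.0f}")
    # 7%: ~$601k; 5%: ~$745k; 3%: ~$949k. Rates down, prices up.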

People who bring up the (un)affordability of housing are never talking about Oklahoma, they're talking about the Bay Area, Southern California, New York City, Seattle, Portland, etc. All places that are so desirable, they can't practically support everyone that wants to live there.


> it's expensive because a _LOT_ of people want to live there.

I can't figure out how to make the math make sense even if I were to build a house in the middle of nowhere. Time and materials are the real killer.

Some day, when AI eliminates software development as a career, maybe you will be able to hire those people to build you houses for next to nothing, but right now I don't think it matters where or how many you build. The only way the average Joe is going to be able to afford one — at least until population decline fixes the problem naturally — is for someone else to take a huge loss on construction. And, well, who is going to line up to do that?


You can't afford a $175k house on a software engineer salary?

https://www.zillow.com/homedetails/3024-N-Vermont-Ave-Oklaho...


"Built in 1954" doesn't sound like new construction. Of course you can buy used houses at a fraction of the cost. That's nothing new. Maybe you missed it, but the discussion here about building new to make homes more affordable.

It's not like the newly built homes are typically the most affordable. It causes a ripple effect as those that can afford it upgrade their housing.

https://research.upjohn.org/cgi/viewcontent.cgi?article=1314...


It is not like I'm homeless. I would be the one upgrading. Except I don't see how the numbers make sense.

You're right: the cost of new construction anchors the used market. Used housing is so expensive because new housing is even more expensive. If new houses were cheaper, I, like many others, would already have built one, and my current home would be up for grabs at a lower price than I'd expect in the current reality. However, that's repeating what was already said.


> building new to make homes more affordable

No need to build new, a plethora of affordable homes are available.


If one were freely able to move about the entire world, you might have a point. Especially given current events, I am not sure the country in which that house is located would take kindly to many of us moving there. In a more practical reality, you're not going to find anything for anywhere close to that price even in the middle of nowhere, never mind somewhere everyone wants to live. That is where earlier comments suggest building more housing would help.

Except it is not clear who can afford new construction either. It is even more expensive.


> That is where earlier comments suggest building more housing would help.

I explained earlier why I don't think it would. The places with housing "shortages" are the places where everyone wants to live. Those places would have to build an impossible number of houses to affect demand.

You have people saying they can't afford housing and then, when you show them they can, they say, "not there..."


> Those places would have to build an impossible number of houses to affect demand.

If houses were able to be built freely, then everyone would be able to build a house... Except, if you can't afford a used house, you most definitely cannot afford a new one. As before, time and materials are the real killer. The used housing market is merely a reflection of the cost to build new. Same reason used cars have risen so high in price in recent years: because new cars have even higher prices.

> You have people saying they can't afford housing and then, when you show them they can, they say, "not there..."

The trouble is that you confuse affordability with sticker price. I technically could live in that house for six months before I have to return to my home country, but I could not legally work during that time. It is far more affordable to pay significantly higher prices in my country for a house and work all year long. The price of that house is low, but the cost is very high.

The places everyone wants to live are the places everyone wants to live because they are the most affordable places to live. If it were cheaper to move somewhere else, people would have moved there already. Humans love to chase a good deal and carve out an advantage for themselves. However, a low price doesn't mean a lower cost.


> The used housing market is merely a reflection of the cost to build new.

The majority of the cost of a home in places with shortages is the land, not the home.


Land is more or less worth the same whether it has a used house on it or if you build a new house on it. The trouble remains that the high cost of new construction anchors the cost of used houses.

Construction costs should really have been driven down by the march of technology, but that hasn't been the case. They're mostly stagnant, IIRC. But construction costs don't really explain the housing crisis well.

What does make it strange?

Dunno, it felt out of place and forced. Just a feeling; I have no data.

I also remember this amazing tiny thing, tomsrtbt. It was packed full of tools, and also had an HTTP server in it.


Aren’t SMS messages over 160 characters concatenated? There used to be a standard for that.


Generally yes.

I guess a phone/app could exist that converts to MMS instead, though, since the app can make that decision.
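
For the curious, the concatenation standard (GSM 03.40) works by prefixing each segment with a User Data Header, which shrinks the per-segment budget from 160 to 153 GSM-7 characters. A simplified sketch (it uses ASCII bytes for clarity; real GSM-7 text is septet-packed, and the 0x42 reference number is an arbitrary choice):

    # Split a long message into concatenated-SMS segments, each prefixed with
    # a 6-byte User Data Header: 05 00 03 <ref> <total parts> <this part>.
    def concat_sms_segments(text, ref=0x42):
        if len(text) <= 160:
            return [text.encode("ascii")]  # fits in a single SMS, no UDH needed
        chunks = [text[i:i + 153] for i in range(0, len(text), 153)]
        return [
            bytes([0x05, 0x00, 0x03, ref, len(chunks), seq + 1]) + c.encode("ascii")
            for seq, c in enumerate(chunks)
        ]

    print(len(concat_sms_segments("x" * 400)))  # 3 parts: 153 + 153 + 94 chars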


Back in the day they used to be coherent.


Not much more than his recent posts, no.

