Hacker News: markovs_gun's comments

The US and Israel have killed over 3,000 civilians in this war, mostly in Iran and Jordan. Iran has killed like 30. Their attacks are literally a hundredth of what they got and we're still trying to portray them as the bad guys. Don't get me wrong, Iran sucks, but not because of this

Iran has killed thousands of its civilians. The only reason it has only killed a few Israelis (excluding Oct 7) is because they can't easily get past Israeli defenses.

Jordan is a US ally, I think you are confusing it with some other country

The Shia Theocracy controlling Iran has killed thousands of civilians protesting their oppressive regime.

[flagged]


https://en.wikipedia.org/wiki/Gaza_genocide?wprov=sfla1

Not that long ago. Just a reminder.

If the USA and Israel had really wanted to stop evil regimes they could have gone to Sudan maybe?

https://en.wikipedia.org/wiki/El_Fasher_massacre?wprov=sfla1


He means the COVID vaccine but knows people will make fun of him if he says what he actually believes so he's playing pretend like there is some plague of untested vaccines being used instead of there being one fast tracked vaccine deployed in response to a massive pandemic

20-25% of Americans would support Trump pulling his pants down and taking a shit on the floor in the oval office on live TV. These people's opinions shouldn't be taken into account or respected in these discussions.

That is an interesting take. Seen from elsewhere in the world, we cannot afford not to take into account a big chunk of the American electorate, which is effectively at war with us (by various means).

Essentially, a MESA movement, “Make the Earth Shit Again”.

The obvious implication is that the rest of the world is at war with the US (by various means), and should act accordingly, starting with a wide-ranging consumer boycott of all US products.


Which is right in line with the "crazification factor": https://kfmonkey.blogspot.com/2005/10/lunch-discussions-145-...

The relevant quote:

> Obama vs. Alan Keyes. Keyes was from out of state, so you can eliminate any established political base; both candidates were black, so you can factor out racism; and Keyes was plainly, obviously, completely crazy. Batshit crazy. Head-trauma crazy. But 27% of the population of Illinois voted for him. They put party identification, personal prejudice, whatever ahead of rational judgement. Hell, even like 5% of Democrats voted for him. That's crazy behaviour. I think you have to assume a 27% Crazification Factor in any population.


Herschel Walker got 48.6% of the Georgia vote against Warnock. Slightly different in that Walker was a popular football hero in Georgia but he was also clearly mentally incompetent.

You can see that factor in a large number of polls on all kinds of subjects. It doesn't matter what the question is: a fifth to a quarter of the population will make the dumbest, least consistent, most self-defeating choice every time. I think if you can get ~70% of the population on board with something, that's all that should matter, because the bottom 25% of the intelligence curve are literally incapable of making good decisions, and worrying about them or their opinions will only lead to disaster. I also think that this is a major flaw of a lot of democratic systems, because if a movement can effectively mobilize that group to vote as a bloc, then it can easily sway policy. Add in messed-up systems like in the US, where you can amplify the power of that bloc beyond their population, and it easily explains how we got here.

The problem with this line of argument is that people will put you in that camp as well and paint you as the "dumbest". Let's take it as truth that 25% of a population are morons. You say those morons are all in the camp that opposes your policy/opinions. The other side says those morons are all in your camp (including you). And that's how we shut discussion down and get more polarization.

I think the reality is a lot of people aren't that smart. And sometimes even smart people can make bad choices. The average IQ is 100.

Here's an interesting random paper for you: https://www.sciencedirect.com/science/article/abs/pii/S01602...

"• Individuals who identify as Republican have greater probability knowledge

• Individuals who identify as Republican have higher verbal reasoning ability

• Individuals who identify as Republican have better question comprehension

• Cognitive ability’s effect on party identity works through socio-economic position"

At least this does not seem to support the common opinion here, presumably from a Democrat-leaning crowd (based on the comments), that their opponents are all morons.

Bottom line of sorts for me is that we need to be able to debate issues from first principles and based on facts. We often resort to appeals to emotion and herd mentality instead, very much so on these sorts of partisan button-pushing threads.


The first time I ever attempted a rescue mission in KSP, I ended up stranding 5 different kerbals in various orbits nearby trying to get the first one, and of course each attempt was a bigger and more complicated craft trying to save as many kerbals as possible. Eventually I just gave up and put a giant cross memorial in orbit, partly as a reference to Neon Genesis Evangelion, and partly as a memorial to the like 6 kerbals I left stranded in space.

Kerbals don't need food or water and can live forever on a limited air supply. I once rescued a kerbal who got stuck around their equivalent of Venus for multiple years. So it's all fine, they'll patiently wait...

Yeah, I listened to a podcast with Cory Doctorow (coiner of the term "enshittification") and he made this point quite well, to the point where I have completely removed "side loading" from my vocabulary. It's installing software on the computer I own.


I was not at all impressed by what I have seen so far on Moltbook. It's like 90% straight up spam trying to get you to buy crypto.


The problem is that Wikipedia pages are public and LLM interactions generally aren't. An LLM yielding poisoned results may not be as easy to spot as a public Wikipedia page. Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not, especially if their training corpus is large enough. Not knowing that intentional poisoning is not only possible but relatively easy, combined with poisoned results being harder to find in the first place, makes it a lot less likely that poisoned results are noticed and responded to in a timely manner. Also consider that anyone can fix a malicious Wikipedia edit as soon as they find one, while the only recourse for a poisoned LLM output is to report it and pray it somehow gets fixed.


  Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not especially if their training corpus is large enough.
I'm not sure this is true. The opposite may be true.

Many people assume that LLMs are programmed by engineers (biased humans working at companies with vested interests) and that Wikipedia mods are saints.


I don't think anybody who has seen an edit war thinks wiki editors (not mods, mods have a different role) are saints.

But a Wikipedia page cannot survive stating something completely outside the consensus. Bizarre statements cannot survive because they require reputable references to back them.

There's bias in Wikipedia, of course, but it's the kind of bias already present in the society that created it.


  I don't think anybody who has seen an edit war thinks wiki editors (not mods, mods have a different role) are saints.
I would imagine that fewer than 1% of people who view a Wikipedia article in a given month have knowingly 'seen an edit war'. If I'm right, you're not talking about the vast majority of Wikipedia users.

  But a Wikipedia page cannot survive stating something completely outside the consensus. Bizarre statements cannot survive because they require reputable references to back them.
This is untrue. Wikipedia's rules and real-world history show that 'bizarre' or outside-the-consensus claims can persist, sometimes for months or years; the sourcing requirements do not prevent this. Some high-profile examples:

- The Seigenthaler incident: a fabricated bio linking journalist John Seigenthaler to the Kennedy assassinations remained online for about 4 months before being fixed: https://en.wikipedia.org/wiki/Wikipedia_Seigenthaler_biograp...

- The Bicholim conflict: a detailed article about a non-existent 17th-century war—survived *five years* and even achieved “Good Article” status: https://www.pcworld.com/article/456243/fake-wikipedia-entry-...

- Jar’Edo Wens (a fake aboriginal deity), lasted almost 10 years: https://www.washingtonpost.com/news/the-intersect/wp/2015/04...

- (Nobel-winning) novelist Philip Roth publicly complained that Wikipedia refused to accept his correction about the inspiration for The Human Stain until he published an *open letter in The New Yorker*. The false claim persisted because Wikipedia only accepts 'reliable' secondary sources: https://www.newyorker.com/books/page-turner/an-open-letter-t...

Larry Sanger's 'Nine theses' explains the problems in detail: https://larrysanger.org/nine-theses/


Isn't the fact that there was controversy about these, rather than blind acceptance, evidence that Wikipedia self-corrects?

If you see something wrong in Wikipedia, you can correct it and possibly enter a protracted edit war. There is bias, but it's the bias of the anglosphere.

And if it's a hot or sensitive topic, you can bet the article will have lots of eyeballs on it, contesting every claim.

With LLMs, nothing is transparent and you have no way of correcting their biases.


  Isn't the fact that there was controversy about these, rather than blind acceptance, evidence that Wikipedia self-corrects?
No. Because:

- if it can survive five years, then it can pretty much survive indefinitely

- beyond blatant falsehoods, there are many other issues that don't self-correct (see the link I shared for details)


I think only very obscure articles can survive for that long, merely because not enough people care about them to watch/review them. The reliability of Wikipedia is inversely proportional to the obscurity of the subject, i.e. you should be relatively safe if it's a dry but popular topic (e.g. science), wary if it's a hot topic (politics, but they tend to have lots of eyeballs so truly outrageous falsehoods are unlikely), and simply not consider it reliable for obscure topics. And there will be outliers and exceptions, because this is the real world.

In this regard, it's no different than a print encyclopedia, except revisions come sooner.

It's not perfect and it does have biases, but again this seems to reflect societal biases (of those who speak English, are literate and have fluency with computers, and are "extremely online" to spend time editing Wikipedia). I've come to accept English Wikipedia's biases are not my own, and I mentally adjust for this in any article I read.

I think this is markedly different to LLMs and their training datasets. There, obscurity and hidden, unpredictable mechanisms are the rule, not the exception.

Edit: to be clear, I'm not arguing there are no controversies about Wikipedia. I know there are cliques that police the wiki and enforce their points of view, and use their knowledge of in-rules and collude to drive away dissenters. Oh well, such is the nature of human groups.


  but again this seems to reflect societal biases (of those who speak English, are literate and have fluency with computers, and are "extremely online" ...)
I don't believe that Wikipedia editorial decisions represent a random sample of English speakers who have fluency with computers.

Again, read what Larry Sanger wrote, and pay attention to the examples.


I've read Sanger's article and in fact I acknowledge what he calls systemic bias, and also mentioned hidden cliques in my earlier comment, which are unfortunately a fact of human society. I think Wikipedia's consensus does represent the nonextremist consensus of English speaking, extremely online people; I'm fine with sidelining extremist beliefs.

I think other opinions of Sanger re: neutrality, public voting on articles, etc, are debatable to say the least (I don't believe people voting on articles means anything beyond what facebook likes mean, and so I wonder what Sanger is proposing here; true neutrality is impossible in any encyclopedia; presenting every viewpoint as equally valid is a fool's errand and fundamentally misguided).

But let's not make this debate longer: LLMs are fundamentally more obscure and opaque than Wikipedia is.

I disagree with Sanfer


> I disagree with Sanfer

Disregard that last sentence, my message was cut off, I couldn't finish it, and I don't even remember what I was trying to say :D


People think I'm crazy for saying this but the only thing stopping big corporations from hiring hitmen to just actually murder people to be more profitable is that it's illegal to do so. If it were legal for them to make you put your baby in overhead luggage you bet your ass they'd be doing it if it were profitable.


99% of my LinkedIn feed these days is AI slop from people and companies I have never heard of. IDK why anyone looks at anything other than the job listings on LinkedIn. Everything else is garbage.


Do we have an alternative?


Just not using it. I don't miss LinkedIn as a social media site at all, and only see the feed on my way to the job listings


Or perhaps there just happened to be an overlap between the nonsense they believe in and some shred of truth that you have to squint really hard to make work.

