Facebook, Twitter dismantle global array of disinformation networks (reuters.com)
180 points by Bender on Oct 9, 2020 | 245 comments


I always find it disheartening that people see the propagation of 'bad' information on social-media networks as a bigger risk to society than said networks becoming the gatekeepers of all propagated information. The term 'Orwellian' has become so overused that it seems to have lost all currency. When we're confronted with an actually Orwellian phenomenon, people seem to either not notice or not care. It's not as though the viral propagation of false, misleading and harmful information began with social media, or even with the internet. This phenomenon was alive and well long before modern technology, which has surprisingly done very little to change it.

I'm genuinely shocked how little backlash there's been against big-tech censorship. I grew up as a liberal in the 90s and early 00s. Freedom of speech and fostering an environment for information to propagate freely were ideals held sacred. It's been sincerely disappointing to see how many of my fellow so-called-liberals are willing to accept huge multinational corporations (of all things) censoring information "For their own good".


>actually Orwellian phenomena

This isn't an Orwellian phenomenon. This is just the typical bad reading of Orwell that Americans have tried to squeeze out of the book because of their free speech fetish.

The central issue of the book isn't a big scary guy with a mustache stealing your newspaper; it's reality inversion, the inability of everyone in society to distinguish between what is real and what is fiction, which is the basis for, not the consequence of, totalitarianism. It's the moment when someone blatantly tells a lie and everyone eats it up, when the world starts to feel unreal, like a bad movie.

The real Orwellian phenomenon is not some kind of scary gatekeeper separating truth from fiction, which is actually a productive use of authority and necessary in any civilisation that doesn't want to turn into a clowncar. What's Orwellian is people going to Facebook, reading blatant lies and untruths, and being unable to distinguish that from what is real. And that is actually caused by the disappearance of gatekeepers, who play a vital role in maintaining liberal institutions.


Yeah freedom in America tends to favor 'freedom to' rather than 'freedom from'. Cutting these disinformation sources off gives us freedom from some amount of manipulation. Meanwhile, if we're more concerned with the freedom to say anything anywhere, whatever limits imposed will be perceived by default in a negative light.

There's merit in examining from both perspectives for any given issue. However, if our objective is a better signal-to-noise ratio of truth in the context of slamming every human's stray thoughts, emotions, and urges into each other in realtime, the 'freedom from' perspective is probably a better approach right now.


Who says that this information is bad or untrue?


[flagged]


I've asked you to stop posting ideological flamewar comments to HN


> The real Orwellian phenomenon is not some kind of scary gatekeeper separating truth from fiction, which is actually a productive use of authority and necessary in any civilisation that doesn't want to turn into a clowncar...

From this sentence, I assume that Stalinist Russia's approach to the subject of 'truth' would constitute a 'productive use of authority'?

Do you think that the reporting in the early-2000s mainstream media regarding Saddam Hussein's supposed stockpile of chemical weapons constitutes a good example of the establishment media being good stewards of the truth? This propagation of blatantly false information by the establishment led all the way up to the invasion of Iraq, a war that is these days generally seen as a regime change for the benefit of interested parties, launched under false pretenses.

Or is all of this just 'not true communism' to you? Do you support the ways that governments, intelligence agencies and the media have been proven to have colluded to invert our reality? Or would you support some other, idealised government doing it? I have no idea what world your head lives in.


My mom believes that Obama is a Muslim, and my sister believes Breonna Taylor was a criminal who got what she deserved.

The disinformation in our social networks is beyond a breaking point. It's incredibly difficult to have a reasonable discussion when alternative facts exist... I dare say impossible.

-----

What is your plan to deal with the disinformation here? While my mother is over the age of 60, I don't believe her to be stupid. My sister is literally going for her Ph.D. (and is an accomplished master's graduate with multiple years of experience at the CDC tracking things down); she's certainly not stupid.

It's an issue of disinformation, not stupidity, that's leading people down the wrong path. There's an entire network of corrupted information building up alternative facts.

-------

> The term 'Orwellian' has become so overused that it seems to have lost all currency.

We're not in an Orwellian fantasy; we're in an all-out information war akin to "Ender's Game": Peter vs. Valentine Wiggin (better known by their online personas, Locke and Demosthenes).

Several groups have learned how to weaponize information for their own gain. At first it was just ads, but now it includes a suite of alternative facts for political purposes.


> It's an issue of disinformation, not stupidity, that's leading people down the wrong path...

You're wrong. The difficulty has always been that the most pervasive so-called 'misinformation' is usually fairly plausible, difficult to validate, or simply a matter of opinion or interpretation. The notion that Obama is a Muslim strains credulity, and it's not a widely held belief. What about more complex information, such as religious or political texts? These are even more difficult to validate and cause much more harm. Maybe we should just outright ban the Communist Manifesto? That seems to have caused a large amount of social harm and could be fairly readily classified as 'disinformation'. What about the Christian Bible? The Koran? They're full of difficult-to-validate claims.

> What is your plan to deal with the disinformation here?

What disinformation? The information you don't like, or the information I don't like?

> We're not in an Orwellian fantasy, we're in an all-out information-war...

> Several groups have learned how to weaponize information for their own gain...

The fact that you seem to see that information has been weaponised, but can't seem to see that big-tech's censorship is pursuant to their own political aspirations is alarming. Are you legitimately this biased, or just being disingenuous?


> The notion that Obama is a Muslim strains credulity, and it's not a widely held belief.

It's a belief held in my family: my own mother, my uncles, and more. Whether or not it is "widely held" has little-to-no bearing on how seriously I take it.

Since my mother believes in it, I will take the alternative fact seriously.

> What disinformation? The information you don't like, or the information I don't like?

Disinformation like: Obama is a Muslim (or Obama was born in Kenya). Surely you can agree with me that such a "fact" running through political discussions is unhelpful.

> but can't seem to see that big-tech's censorship is pursuant to their own political aspirations is alarming

I'm prioritizing issues. I see misinformation / alternative facts to be a bigger problem than big-tech's censorship. As such, I'll stand in support of big-tech's moves here, if it helps the misinformation problem.


Did you read anything in my post? What about the widely held belief that Trump colluded with Russia to influence the 2016 Presidential election? The Mueller report was unable to prove collusion. What if big-tech suddenly banned any claims to the contrary on the grounds that they constituted 'misinformation'? Would you call that censorship then?


> What about the widely held belief that Trump colluded with Russia to influence the 2016 Presidential election?

And no one in my family believes that. Myself included (though I wish the Mueller report had been written with stronger language to clarify itself: its conclusion had extremely weak wording).

I talk about the misinformation that I can see personally and in my social circle. Fortunately, none of my family members are getting sucked into Q or some of the more dangerous stuff... but the misinformation across the spectrum online destroys discussion as a whole. Far more than "censorship" of posts by Twitter or whoever.

After all, I don't do Facebook. I honestly don't care about it: I'm practically "self-censored" from Facebook because I don't use it at all. What I do care about is the misinformation my family is getting from it.

---------

I mean, it seems clear to me that you are a user of Facebook, and you seem to agree with me that misinformation is running rampant?

I'm blind to it, since I don't read Facebook at all. I can only tell you what has come out of my mother's and sister's mouths in my discussions with them. But as far as I can tell, you're just confirming my point: that there's a huge amount of misinformation here. And I applaud Facebook's announcement that they're fighting against it.


> I mean, it seems clear to me that you are a user of Facebook, and you seem to agree with me that misinformation is running rampant?

No, I don't use Facebook. I haven't for a long time. While I agree that there is propagation of misinformation happening on the platform, that's not my main concern. My main concern is that Facebook, and other platforms, will use their framework for 'removing misinformation' as a tool for censorship in the pursuit of their political agendas. My chief contention is that this poses a larger risk to society than the propagation of misinformation in the first place.


Disinformation most emphatically is not just information that you or I dislike. You're muddying the waters by claiming such absurdities. Claims like that have become commonplace, but that's all the more reason they need to be fought.


> What is your plan to deal with the disinformation here?

It is disinformation how much disinformation there is. You'd like to believe that people believe everything they read on Facebook, while the truth is people don't care. People don't care about a lot of things and will repeat memes and jokes. We all "believe" things that are more or less false just because we don't care about them. Your relatives believe in things you may think are false, but to them it's just rubbish tidbits of information that do not affect them. Obama being Muslim or not has no effect on her life, so why must we enforce that people now be informed of the "right beliefs to have"? And who picks which things are important?

Just stop. The sad thing is you as a human will not recognize in yourself how much your "knowledge" is faulty as well.


> It is disinformation how much disinformation there is. You'd like to believe that people believe everything they read on Facebook, while the truth is people don't care.

I don't have Facebook. I discussed things with my mother and sister personally. Our relationship is closer than just online crap. We actually talk together on a regular basis.

> Obama being Muslim or not has no effect on her life

This belief of my mother's affects our political discussions. In particular, she doesn't want to vote for a Muslim. It sets the stage for the rest of the discussion to come.

Where do I start? Do I start with "Muslims aren't bad people"? Do I attempt to prove Obama is a Christian? Etc., etc. These facts and beliefs lay the groundwork for the very discussion. And I have to take them seriously if I wish to have an intelligent discussion with my family.

> Just stop. The sad thing is you as a human will not recognize in yourself how much your "knowledge" is faulty as well.

I know that Obama is a Christian, and that Breonna Taylor was innocent. I don't expect to know the truth to everything, but I'll start with the facts I know about and work my way up.

-----------

I don't believe myself to be the "defender of truth" or whatever. I can point out to my mom and my sister that I disagree with their "alternative facts", but I'm not so arrogant to believe that I can change their opinion on just discussion alone.

But when I see such obvious untruths seep into their talking points, it does make me weep inside on behalf of the truth.


Name the groups.


Social network marketing groups, very clearly. In particular, stock pumpers and/or short sellers.

Surely you've seen the multitude of stock posts in both directions for any stock you're interested in? And they're quite correlated with stock prices these days. It's not just political topics; I'm including everything here. Disinformation is the cornerstone of social-media strategies and marketing.

The current focus is on the election, because that's in less than a month. But Facebook / Twitter are still at the center of plenty of other issues relating to this propagation of alternative facts.


> My mom believes that Obama is a Muslim, and my sister believes Breonna Taylor was a criminal who got what she deserved.

That is an interesting example, because while Obama is factually not a Muslim, period, the question of what Breonna Taylor "deserved" is a political one, a matter of opinion. So you're putting objectively false information and opinions you strongly disagree with in the same bucket, and calling for measures that would prevent both from spreading.


Perhaps I was too vague on the Breonna Taylor example. Let me clarify: My sister was arguing with me that Breonna Taylor was a drug dealer. Which is as factually wrong as the "Obama is a Muslim" comments.

EDIT: Note that the warrant in the case was for Jamarcus Glover. I can accept Jamarcus Glover being a drug dealer (or having some connection to drugs), but seeing Breonna Taylor wrapped up in the drug charges is clearly misinformation.


> the question about what Breonna Taylor "deserved" is a political one- a matter of opinions

I don't know anything about her karma or what she deserves, but the allegations that she was a criminal are not based on any information known to the police involved and were mostly invented by people who wanted to defend the police before gathering any facts.


My teen years were the second half of the 80s. Like a lot of kids my age, I’d sometimes seek out weird stuff outside of the pre-internet mainstream. One of my favorite books from back then was “High Weirdness by Mail”. Which was a very funny tongue-in-cheek catalog of crackpots, conspiracy theorists, cults, etc. I won’t detail all the weird stuff I got in the mail, but it suffices to say that most of it was roughly on the same level as Q.

But the key difference between now and then was: I had to seek this crap out. Someone had to put together their zine or cassette or, very rarely, their full-color glossy magazine, all at their own expense, and then send it off to me. Ordinary people would never see this stuff. The mainstream culture was often bland, but it was successful at gatekeeping this kind of stuff. And this material was too labor-intensive and expensive to distribute widely without the mainstream.

So it’s obviously very different now. People like my mom would never have sent a dollar to a post office box in Idaho to get a newsletter about how Ronald Reagan was a microchip-controlled antichrist. But these days I frequently have to alleviate her fears about some conspiracy crap that popped up in her timeline unsolicited. Or tell my aunt that there’s absolutely no evidence Biden is receiving antifa messages through a hidden earpiece.

I don’t know what the answer is. But the situation of widely broadcast misinformation utterly divorced from reality is relatively new. Having something like Q approach mainstream status is new. At least for Americans in the past 50 years. It’s a potentially very dangerous situation. Maybe one day, a vast majority of the population will discount what they read online without corroborating evidence, but we’re not there yet.


> The mainstream culture was often bland but it was successful at gate keeping this kind of stuff...

This is a side-effect of the legacy media organisations no longer having a monopoly on the dissemination of information. Think for a moment about what kind of disinformation you may have unwittingly taken at face value as fact simply because every single news-media outlet reported it.

This is exactly the concern that my original post highlighted. My fear is that if we allow them to become the arbiters of online information, it goes far beyond simply censoring the ridiculous and becomes a highly effective means of pursuing their own political agendas.

> But the situation of widely broadcast misinformation utterly divorced from reality is relatively new...

It's not at all new. Ever heard the conspiracy theory that menthol cigarettes cause infertility in order to target certain demographics? I remember this wild theory being widely propagated long before social media even existed, among other ridiculous conspiracy theories. The idea that the moon landing was faked is as old as the moon landing itself. This is not a new phenomenon.


> It's not at all new. Ever heard the conspiracy theory that menthol cigarettes cause infertility in order to target certain demographics? I remember this wild theory being widely propagated long before social media even existed, among other ridiculous conspiracy theories. The idea that the moon landing was faked is as old as the moon landing itself. This is not a new phenomenon.

Whether the phenomenon is new or not is beside the point. The concern is that conspiracy theories are spreading more than they did in the past, and that groups have learned to exploit them to accomplish their goals.

This has a parallel to what happened with computer malware. It's existed for about as long as home computers have, but up until 2000 or so, the bulk of it was merely childish pranks and wasn't very concerning. Since then, it's exploded in volume and danger, as criminals realized it could be used to make money by sending spam, stealing passwords and payment info, or by holding data ransom. I remember getting a virus on a floppy disk when I was a kid, but that doesn't make me comfortable with getting contemporary malware on my computer.


So, what, we're gonna decide what's "suitable" for normal people to see?

Half of mainstream society, respectable people with respectable jobs, just spent four years engaged in a conspiracy theory about Russians and Cambridge Analytica, which has quietly fallen apart into "never mind, we just biffed the election is all".

But if one had said that in, say, 2017, they'd be accused of being a Russian bot. Bias passes everywhere.


The Russians engaged in a campaign to disrupt the US elections. That is not in dispute. They were effective in this by hacking the DNC and Podesta and by using thousands of troll accounts to pretend there was anything scandalous in that data.

Cambridge Analytica had a negligible role in affecting the election. That was just a privacy scandal for Facebook.


You've identified the only claims that hold up. Compare to the crazy rhetoric and insinuation we saw for years, and it's clear people were way ahead of themselves.


Some people insinuated that Trump worked with the Russians to get their help, but the consistent message that came from his cast-off associates and from journalists with access, like Woodward, was that he was too stupid to set anything like that up. I don't think anybody reasonable ever believed otherwise.

The Russians didn't give a damn what Trump wanted. He was simply a useful buffoon that they could use to disrupt the election, and they succeeded beyond their wildest hopes.


Propaganda and censorship are attributes of the same underlying aspect: monopoly. Centralised control.

Both problems have the same effective solution: break up the monopolies.

Propaganda is a function of amplification, attention, audience, selective promotion, discovery, stealing the air supply of (or acquiring) any competition, and coöption of the platform. Propaganda is an inherent property of centralised control.

Gatekeeping and censorship are functions of excludability, audience gating, distraction, negative selection (obfuscation), stealing the air supply of (or acquiring) any competition, and, again, coöption of the platform. Censorship is an inherent property of centralised control.

Audiences, a public, divided across independent networks, with access to different editorial selection, with access to different input message streams, are far less subject to either propaganda or censorship.

The same goes for surveillance, whether of the state, capitalist, or non-state-actor variety, which is also an inherent property of centralised control.

It's important to realise that the key is not nominal control but actual control, which may be nonobvious or unapparent to many participants. A system with the appearance of decentralisation may well be centralised under the surface.


I also grew up in a liberal society and what made liberalism so valuable was that institutions were locked into a narrow and boring path.

It's becoming clearer that humanity in general has less and less of interest to say and is clamouring for power via outrage. I'm all for maintaining the vital ability to sort signal from noise outside of class and cash mechanisms. There's got to be some signal left to find, though.

Straight commercial censorship is predictable and fightable. The coming danger is thought control from religious people who think their interpretation of what-not-to-do is more important than your liberalism.


> I always find it disheartening that people see the propagation of 'bad' information on social-media networks as a bigger risk to society than said networks becoming the gatekeepers of all propagated information.

Who thinks this? Are you just assuming that anyone who approves of social media networks removing disinformation also thinks that is a bigger problem than the social media networks being powerful gatekeepers?


The two beliefs are mutually exclusive. If you support social networks removing what they decide is 'disinformation', by implication you think that disinformation is the bigger risk. This is the mechanism by which they become the gatekeepers of information.


I can support CDA 230 (which levels the playing field between startups and large social networks) and call for better moderation everywhere; your view precludes me from calling for better moderation, when in fact better moderation has been a primary goal of CDA 230 all along.

It's really just as simple as: do you support CDA 230 or not? Anti-trust concerns are a different conversation, where there should be strong enforcement too.


Not at all mutually exclusive. Whether Facebook removes disinformation does not affect how much power Facebook has to distribute information.


> Not at all mutually exclusive. Whether Facebook removes disinformation does not affect how much power Facebook has to distribute information.

While not mutually exclusive, it is clear that the importance of Facebook removing disinformation is linked proportionally to the power Facebook has to distribute information.

The less power Facebook has, the less important it becomes that they, in particular, remove disinformation.

The more power Facebook has, the more they become a disinformation SPOF.


> it is clear that the importance of Facebook removing disinformation is linked proportionally to the power Facebook has to distribute information

Yes, but not proportionally to the power Facebook should have to distribute information.


> Yes, but not proportionally to the power Facebook should have to distribute information.

Well, yes, though arguments can be made as to whether the two are orthogonal, or if the relationship is an inverse linear one, or something more complex (I'd lean toward an S-curve and that we're currently in the middle of it).
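For what it's worth, the S-curve intuition can be sketched with a toy logistic function. Everything here, including the function name and parameter values, is invented for illustration; this is not a claim about any platform's actual numbers:

```python
import math

def moderation_importance(platform_power, midpoint=0.5, steepness=10.0):
    """Toy S-curve: maps a platform's share of information
    distribution (0..1) to how much its moderation choices matter.
    Purely illustrative; parameters are made up."""
    return 1.0 / (1.0 + math.exp(-steepness * (platform_power - midpoint)))

# A fringe platform's moderation matters little; near the midpoint,
# importance climbs steeply; a dominant platform saturates near 1.
fringe = moderation_importance(0.1)
pivotal = moderation_importance(0.5)
dominant = moderation_importance(0.9)
```

On a curve like this, "we're currently in the middle of it" would mean moderation decisions are near the steep part: small shifts in platform power swing their importance a lot.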


Disinformation content works because it drives huge engagement.

Huge engagement pays the bills for FB and twitter.

If we expect change we need an analog to a carbon or pollution tax for social.

“Show me the incentives and I will show you the outcome.”- Charlie Munger


This is a popular sentiment on HN, but it oversimplifies the situation. Engagement, in the aggregate, does not pay the bills. Advertisers paying us to sponsor tweets does. Advertisers will only do that if paying to sponsor tweets results in either improving brand image or converting sales.

Advertisers won't pay for ads that don't get clicked on. Political pissing matches and disinformation campaigns don't bring the kind of engagement that results in clicking on ads and buying things, or in making positive brand associations.

Engagement costs us money. It increases system load and the resources required to actually run the platform. Raising engagement for the sake of raising engagement is a net negative in the long run, because it results in too much engagement that doesn't sell products (and thus ads).

I suspect there are some here who won't believe this; a few may be tempted to throw out a certain pithy Upton Sinclair quote. I'd like to know from this camp why you think that way.


While I agree the parent is oversimplifying the connection between engagement and ad revenue, I would say that there is also an oversimplification in the way you represent the usefulness of engagement to the company and the people working there.

First, when a company has shitty earnings, what metrics do they typically use in their investor reports to make the company still look palatable? New user acquisitions. Increased engagement. It doesn’t matter that the engagement is not generating money because it still lets them say “well, we lost a billion dollars this quarter, but user engagement is up 80% YOY so things are looking good for the future! (Please keep buying our stock/fund our next round!)”

Second, engineers working at a social networking company can’t directly influence what advertisers spend, but they can influence the behaviour of users, and leverage that for personal gain. “Look at how valuable I am, I developed this enhanced rage generating machine that increased engagement by 20% in A/B testing! Give me the big bonus now!” It’s a much harder sell to say “hey, Boss, I deleted a feature which was causing immeasurable subjective harm to society! It probably caused 100,000 users to delete their accounts and move to Gab! Give me the big bonus now!”


In advance, please excuse the tone of this response: I am angry at the company and the industry you are defending, not really at the human who I'm responding to, but I cannot summon the energy right now to go back through this any more times and to keep stripping out the vitriol while retaining the intent.

> I'd like to know from this camp why you think that way.

Cool, hi.

> Engagement, in the aggregate, does not pay the bills. Advertisers paying us to sponsor tweets does.

half-truth; engagement in the aggregate does not pay the bills alone, but advertisers pay for people to see sponsored tweets, and more people see the sponsored tweets if more people are online for more time, and more people are online for more time if engagement is higher, so... engagement in the aggregate does in fact pay the bills.

> Advertisers will only do that if paying to sponsor tweets results in either improving brand image or converting sales.

false; this is a nice rosy image of advertisers as rational actors operating with perfect information that is not at all realistic.

If an advertisement performs poorly, who is to blame: twitter? the ad copywriter? a graphic designer? the product's dev team? A product owner? The marketer who made a buy on some specific targeting segment? Their manager? The guy who hired those two? The CMO? The CEO? Marketing teams get fired constantly, almost annually, but very few firms are ever going to all-out stop advertising on social media platforms on the basis of poor conversion on their ads, because they cannot know for certain that it's not their fault, and they have to keep playing the game even if it is rigged.

The thinking here is easy to comprehend: "Walmart is still doing it... so we must be doing it wrong."

> Advertisers won't pay for ads that don't get clicked on.

again, false. Advertisers do this all the time, because they're cargo-cult following buffoons and there's money sloshing around fucking everywhere.

Advertising money gets laundered, advertising money greases wheels, advertising money gets used by incompetents and people's nephews and rapacious over-confident investors who are running their own little schemes, and a billion other things, and it even occasionally gets used by savvy nerds who come to suspect it isn't actually working, but I assert: most advertisers will repeatedly pay for a lot of dumb shit that does not work at all. Always has been.

> Political pissing matches and disinformation campaigns don't bring the kind of engagement that results in clicking on ads and buying things, or of making positive brand associations.

This is an interesting academic distinction and I'm sure you know more about specific content -> behaviour readouts than I do, but the theory is simple: more time online results in more impressions and more impressions yields more conversions. Not a higher rate, mind you: just. more.
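To make the "not a higher rate, just more" point concrete, here's a minimal sketch of how total conversions scale with time online even at a fixed per-impression rate. All the numbers and names are invented for illustration:

```python
def total_conversions(minutes_online, impressions_per_minute=2.0,
                      conversion_rate=0.001):
    """Toy model: conversions = impressions x fixed rate.
    More time online means more impressions, and hence more total
    conversions, with no change in the rate itself."""
    impressions = minutes_online * impressions_per_minute
    return impressions * conversion_rate

casual = total_conversions(10)   # brief session
hooked = total_conversions(120)  # all-day scroller
# Twelve times the minutes yields twelve times the conversions,
# at exactly the same per-impression rate.
```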

It does not really matter why or when or how we see the words "coca cola". All that matters is that we see those words more. Every day, more times per day, more places, more people. We need to be thinking about them always.

That is what they're paying for, and it doesn't even have to make sense! That might not even get them anything, but they have more money than they can possibly spend, so they may as well spend it on trying to get us to look at and think about them.

Can it be more optimal? Could you squeeze blood from this stone? Sure, probably, but whether I'm on twitter reading great video-game recommendations or reading flame wars is a pittance compared to whether I'm online or not at all.

I wouldn't be surprised to find out that the optimal situation is to make users cycle in and out of the outrage content: get me mad so I stick around a bit longer, cool me down with something funny/sweet, then show me an ad, then get me mad again. Repeat.

> Engagement costs us money. It increases system load and the resources required to actually run the platform.

this is absurd. It's absurd! It really makes me question your intent and good faith.

The platform is designed to be used, by definition! What, are you telling us twitter does not want its users online? You're saying that engagement is "expensive", is twitter running some kind of other service? Something outside of users looking at and posting tweets? This is nonsensical to the point of being upsetting.

It is like Philip Morris fretting over all the tobacco they're going to have to pay to harvest so that people can keep smoking. If only they didn't have to! You made an advertising platform and people must be present on it to be advertised to. End of story.

Now, of course, there are costs to be optimized. A dumb example: if you know you can't show me an ad between now and when I will naturally end my session, you should maybe try to get me offline right now, before I download any more video data, but that stuff is just sand on the beach for the difference it makes.

Maybe you think this matters because it is the work you do, or it's work you know does get done, but it does not matter in the big picture.

Side-rant: Twitter could fire 95% of its employees tomorrow and probably be more profitable than it is today in the long term, but it won't, again because of cargo cults and social wisdom.

Almost none of the feature work that has been done by twitter in the last 8+ years has materially mattered: the product is done, and you're mostly paid to arrange deck chairs on a ship that's sailing along smoothly and to look pretty. Employee headcount, profit, ad impressions: at the end of the day, all of this is just disembodied marketing metrics for investors to speculate on and to try to extract cash from before the whole thing tanks out someday.

> Raising engagement for the sake of raising engagement is a net negative in the long run, because it results in too much engagement that doesn't sell products (and thus ads).

This recap of your argument is just a rehash, but it restates the basic flawed premise in full:

"Advertisers are rational and have perfect information and will pay only for the optimal conversion strategy up to the optimal point of cost/benefit, so if we ever over or under-shoot at all, we'd lose money and we'd go right out of business!"

So, let me recap as well:

- Twitter is an advertisement platform.

- Twitter relies on dollars spent by advertisers. We agree so far, I think.

- Advertisers pay both per impression and per click.

- Advertisers have bad information, and act on social wisdom as much as or more than anything else. They cannot evaluate your performance by how well their advertisements do on your platform. This right here is the secret sauce!

- Engagement (i.e. time spent paying attention to the platform) may not raise conversion rates (i.e. clicks per impression), but higher engagement is strongly correlated with more conversion overall. People gotta be online.

- Engagement is the core metric of your business.

To put this all very simply: if I am on twitter all day, it's a hell of a lot more time for me to run into and to think about Harry's Razors than if I never went on there at all.

That is the product you are selling to advertisers, and they will pay for it for a long time, even if it doesn't really make any sense or money.


Do you believe the same to be true for Facebook as well? Why is Twitter encouraging outrage-engagement via the (somewhat filtered) Trending section? Why aren't you actively trying to decrease political engagement if it's a net negative?


> Do you believe the same to be true for Facebook as well?

I have no idea about Facebook. I barely even use it so all my information is second-hand.

> Why is Twitter encouraging outrage-engagement via the (somewhat filtered) Trending section?

We're not, at least not intentionally. Do you think we are? If so, why?

The Trending section is still biased towards things that are being talked about frequently. Unfortunately, that means outrage-inducing topics will tend to bubble up there. We're working on making trend identification more sophisticated in order to provide better context, but it's a really hard problem with such short snippets of context.

> Why aren't you actively trying to decrease political engagement if it's a net negative?

"Political engagement" overlaps with, but is not synonymous with, "outrage engagement". Outrage engagement is a net negative; political engagement that does not devolve into misinformation and toxicity is not. We are actively trying to decrease engagement with misinformation and outrage, but it's a hard problem. Also, frankly, it's being hampered by a lack of focus internally that is leaving the teams involved with no clear direction other than "do something, now!".

Non-toxic political engagement is a net positive for us. We don't want to throw the baby out with the bathwater by shutting it all down, assuming we even could (it's fundamentally the same problem, though a little bit easier). I see how other people might see that as the better option to shut down the toxicity and misinformation, though.


> We're not, at least not intentionally. Do you think we are? If so, why?

I haven't been an active Twitter user for years, but whenever I visit it these days, Trending looks like it's at least partially optimized for outrage. I understand it's not actually "what's popular" (because otherwise it would constantly be about Justin Bieber & co), it's curated, or at least topics need to be approved in some form to be allowed to trend, right?

> Non-toxic political engagement is a net positive for us.

I've seen very little non-toxic political engagement on Twitter, but maybe we have a different understanding of what healthy engagement is. I'm not sure there is a baby in that bathwater. I'm not even sure it's bathwater.

From my perspective, it's either a shouting match with a lot of groupthink and -speech or somebody talking to their followers about how evil the outgroup is. And that's not because I'm looking at controversial things, I believe. I visit some developer's Twitter feed and one or two clicks later I'm in the middle. It's like that Wikipedia game where you start on a random article and need to make it to Hitler in the fewest clicks, but it's less challenging because almost all clicks lead to outrage.

You seem to be more focused on whether what engages the users is factually true. Is it that advertisers don't want to be seen next to obvious fake news? Do advertisers not care to have their brand associated with not-actually-honest-but-not-fake-news-either outrage-engagement by, say, Robert Reich?


> I'm not sure there is a baby in that bathwater. I'm not even sure it's bathwater.

Brilliant. I'm swiping it.


> We are actively trying to decrease engagement with misinformation and outrage

Can you talk more about this? On its face, this seems like an explicitly anti-leftist position. Some philosophers even claim that the role of the political Left is the organization of societal rage. For instance, if Twitter had its way in the domestication of political speech on its platform, then it might have suppressed conversations about BLM and prevented it from having a chance to become mainstream.


That it drives huge engagement is an indictment of public education.


The primary driver seems to be a psychological need to feel unique, unlike "the sheep" out there https://www.discovermagazine.com/mind/why-the-pandemic-is-tu...


This is only half of it. Everyone is copying what the unique people are doing because they think this is the way to be unique.


Yes, on the other hand a lot of misinformation tactics hook into built-in human tendencies and require more than just general education to identify and be immune to.


It requires at least education into those tendencies and introspective drills to overcome them. For kids, drills would mean roleplaying with lying games, comparison shopping, that kind of thing.

We've understood those tendencies much better since Edward Bernays, but I do not know of any young-adult curriculum that teaches them. It would be good for everybody.


I think your view of what misinformation looks like is a bit outdated. It's not just dumb chain emails anymore.

One claim I saw passed around recently, for example, was that this particular video on the White House lawn was greenscreened. The people I saw discussing it were saying things like "you can tell cause of how pixelated the background is" or "look, the trees are on a 10 second loop!". I don't know what we could put in the public education curriculum to stop people from falling for this kind of thing.


Healthy skepticism is not only easy to teach but easy to drill, and yet I remember no time in my early education that it was even a topic in class, except maybe tangentially - a short story theme, a blurb about a famous scientist or inventor. I doubt that has changed.


In California at least, skeptical analysis is in the curriculum from sixth grade onwards. I was under the impression it was there in all states, although I could be mistaken.


Now that's interesting. There's a delay between when a curriculum stream is adopted and when its students start participating in civic affairs. My understanding (unconfirmed) is that participation in Twitter and Facebook is dominated by older age groups who may not have learned that material.

How long since this curriculum was adopted state-wide in California? And what's it officially called? I couldn't find any information about it when I searched for "skeptical analysis" on the California Dept of Education website.


It's officially called "expository critique" in the oldest document I'm familiar with (https://www.cde.ca.gov/be/st/ss/documents/elacontentstnds.pd...), but I don't know if it was there before 1997 or not. More recent standards e.g. (https://www.cde.ca.gov/be/st/ss/documents/finalelaccssstanda...) don't appear to use a specific term for it, but they still include things like "Delineate and evaluate the argument and specific claims in a text" and "identify false statements and fallacious reasoning".


My high school in Indiana in the 1990s taught skepticism; there were two required classes called “citizen’s government” and “social criticism”.

We watched Dr. Strangelove in Social Crit and had a two-hour discussion following. It was awesome.


"When a measure becomes a target, it ceases to be a good measure."

Engagement only measures engagement. I find Star Trek engaging, but one thing it isn’t is educational.


If this were true, what would it affect about the argument?


We'd be taxing Twitter and Facebook to make up for everybody's failure in public education, which seems unfair.


There are a lot of educated people who are nevertheless highly partisan.

Social networks prey on deeply rooted instincts that are hard to suppress.


Honestly, I find Twitter's editorially driven hash-tag auto-complete and "trends" to be one of the biggest sources of misinformation out there.

It's laughable and frankly patronising that they even pretend to not have an opinion or a political leaning. The more they try to wiggle out of the web of "disinformation", the more firmly they become lodged in it.


Right, this is my main problem with the big social media networks. They claim objectivity, but then show you things you don't follow. Pick one.

Facebook/Twitter cannot truthfully claim "Whoa, we can't remove holocaust deniers! We're a neutral platform!" and then turn around and show me trending topics, news, and people.

If you're so fucking neutral, then only show me things I follow, and don't insert ad content into the feed like it's real. Put it off to the side like a regular advertisement.

They want to have their cake and eat it too.


One should be wary of promoting the idea that these companies will create a valid and trustworthy tiered system of validity in news media. News media already has serious problems that are generally ignored, go unrecognized, or become part of false partisan narratives. Indicating that they are to be trusted and this other class shouldn't be might sound fine at first, but the fight will be at the line drawn between the two.

Will less popular media outlets be flagged as illegitimate? What if a site pushes a conspiracy theory that's actually well-backed but is considered harmful by one of these companies? Even assuming good faith, look at the coronavirus filtering and compare it to what the "trusted" media sources and CDC were saying back in January and February regarding masks. If you were to independently say, "hey, what they're saying about mask efficacy doesn't make much sense. Everyone should wear some kind of mouth covering just in case", would you be flagged as dangerous misinformation? Because that position would be correct and warranted but also fly in the face of the cited authorities and experts (who were curated in a biased way to begin with, ignoring Chinese scientists' recommendations and the experiences of East Asian countries with other pandemics).

Remember, in January and February, those in the US and often EU were being told that masks didn't really protect you, could even increase your risk, and to specifically not purchase and wear them. This was always an inconsistent narrative, of course, because the point of this misinformation was to reserve stock for healthcare professionals: masks do work and healthcare professionals needed them.

We need better systems for addressing misinformation than placing the burden on platforms who will inevitably work in their own interest and attempt to automate processes that should be manually (and expensively) done on an individual basis.


It is ludicrous to place any trust in any validity vetting these corporations will enact. That's placing the foxes in charge of the hen house. We need independent bodies, with regulatory power, and anything less is simply enabling the oppressors and their democratic destruction.


3500 accounts? A single bot will make those in several weeks.


> Facebook said it had found 10 networks, some of which it had previously identified publicly.

I think this is the more important bit; I don't know how they prevent abuse by the same network again. These guys can just manufacture new identities.


908087's reply is dead but should not be. As someone who has run bots since they were simply macros, this is exactly what is done.

If you find a botnet you can assume they have an additional ~50% or greater 'clean' accounts on standby for exactly this situation. These become easier to obtain over time via purchasing or black-hatting accts, etc. The longer you run undetected, the longer you have to stockpile clean accts, as you know it's simply a matter of time.

I would imagine if 100% of your revenue depends on it you'd be even more prepared.

I know back in the Diablo 2 days some of us were friendly with blizdevs who would warn us of impending banwaves, upcoming updates to server code, and whatnot. I'm quite sure the same still applies today, only at a far larger scale, since back then online communication was still young.

Anyway, all this to say: if they are experienced and know it's coming, they won't be down for long, if at all.


Or switch to the ones they've already manufactured for this purpose.


My thoughts exactly; it will take a dedicated organization less than a day to re-create 3500 accounts.


Why bother with bots when desperate peasants can sign up for free?

governments with fractional reserve banking

Anti-bot detection only affects the 99%. The 1% can simply pay for captchas or mechanical turks.


> single bot will make those in several weeks

Minutes!


But did you see the headline? It was an entire global array!


Repeat after me: social media is not reality.

Social media is the new tabloid salacious media.

Yes, it can be fun, entertaining, informative, and educational; it can also be propaganda, misinformation, astroturfing, and a big high-school-like popularity contest where people are fake and narcissistic but probably insecure.

The truth is, the internet is teaching the biggest lesson ever in critical thinking and in getting your information from many sources across spectrums, countries, divides, and more.

Let's hope that people see it as a lesson and not somewhere they can bask in their confirmation bias all day, or make decisions based on fear; in those cases the populace is easy to manipulate.


> Social media is the new tabloid salacious media

We’re living through a modern rehash of yellow journalism [1]. Both were spurred by free societies adapting to new communication technologies.


I'd also argue that what's left of the corporate media is rapidly losing credibility, highly partisan and rarely objective. This feeds social network frenzy, a lot of which is people parroting and attempting to amplify ideas from their preferred channels. There's always been a huge corporate yellow journalism problem; now it is possible to instantly orchestrate rage and frenzy repeated across countless personal accounts. Fear and hatred are highly corrosive.


> I'd also argue that what's left of the corporate media is rapidly losing credibility, highly partisan and rarely objective.

How so, exactly? This is kind of an odd moment, where the US president is deeply incompetent and a shameless liar, and some seem to think it's biased for the mainstream media to even point those facts out.

There are definitely some aspects of the traditional media that fit your description (all cable opinion shows are best avoided by everyone), but that's by no means all of the media.


> where the US president is deeply incompetent and a shameless liar, and some seem to think it's biased for the mainstream media to even point those facts out.

Deeply incompetent is an opinion, not a fact. So yes, it is a form of bias.

Actual fact-based reporting would be articles like "President did XXXX on YYYY" and leave the opinions and talking heads out. It would also be fairly dry and not generate outrage click traffic which media companies seem to thrive on these days.


So you know what's in that "actual fact-based reporting"? Editorial decision of coverage. "What you choose to talk about" is editorial. That "fact-based reporting" is no less subject to lies of omission. And, beyond that? Contextualization of those "facts". What are these facts? What do they mean? Why are they worth reporting on? Why are they worth reading about? Are they even "facts"?

I don't blame you personally for propagating this misunderstanding of journalism as there is a lot of power and money behind not having actual journalism, but "actual fact-based reporting" is a canard designed to keep the populace dumb. Reportage--journalism--has always contextualized information and explained why you should care. That cannot, repeat cannot, be divorced from the act of reporting. A stock ticker is not journalism. You can get it if you want. But it doesn't mean squat to the overwhelming majority of people, nor is it expected to, and the dull ignorance of a wave of decontextualized "facts-only" reportage is an idea designed to create a dumber and less civically capable society.

It also happens to make the comfortable and the comparatively powerful, a class to whom I belong, feel less challenged about things. Convenient, that. But bad for us, no matter how convenient it is.


It's the same issue as social media. People love to read opinion pages so the news orgs end up providing more and more of it. The eyeballs Fox News gets is like 99% for their opinion programming.


I would find it hard to argue that any of the current POTUS's work has been competent.


We just had a thread on Hacker News where most folks agreed the administration's changes to the H1B program were welcome.


Even a broken clock is right twice a day. Competence is demonstrated over time, not in a vacuum of individual decisions. Someone that is competent at Chess proves it over an entire game. A novice can still make a few good moves over the course of that same match.

Pretty much any competency can be demonstrated that way.


No need to move the goalposts. The original comment wasn't arguing whether POTUS was competent or not, which is what you are trying to argue.

The original comment was arguing whether any of the POTUS work was competent. And that H1B change seems to have been competent, according to the general consensus on HN.

Is the entirety of the rest of POTUS' work incompetent? Maybe. That wasn't the original claim though.


But did POTUS do any of that? Part of his administration may have some competence, but the vast majority don't seem to have the greater good in mind nor any interest in respecting democracy. To me that is part of competence in government.


I think that a very real argument could be made that this PoTUS was elected as a direct expression of normal US citizens' intolerance of the corruption of professional politicians. To expect him to be competent at the job was to be naive (which a lot of people were), but it was a knee-jerk reaction to the options left to them, I think.


Trump does not have the competence to responsibly constrain a pandemic to the same death rates as all the other first-world countries.

He has demonstrated his incompetence with respect to this; it is not subjective.


There was an interesting article the other day how the current president of CNN emphasized Trump coverage prior to the last election because it boosted ratings. Coincidentally, he was also the president of NBC and in charge of entertainment programming when he signed Trump for The Apprentice. https://www.nytimes.com/2020/09/20/business/media/jeff-zucke...


@ardy42 I think you just proved my point. To presume to know anything about what goes on in the rarified power struggles at an intercontinental/global levels requires free speech and giving serious researchers/writers a viewable voice. There are plenty of wonderful investigative reporters (Whitney Webb is a stellar example right now for example) but the editorial employees of oligarch owned media have marching orders and objectives to meet, so voices like Whitney's are invisible in the 'mainstream media', except to be undermined if their work gets through to an audience.

We don't know anything about the political layers of the onion battling for control of the US democratically elected leadership, but we do have a daily torrent of candidate abuse, undermining, and negativity from the two big corporate camps, and zero coverage of anything else (Greens, Libertarians, candidates rejected by their party, etc.). The person currently in power usually gets the worst of the abuse and second guessing as who-knows-who fights to depose and replace. (I'm a registered Dem in Cal FWIW, not arguing a political point here).


> There are plenty of wonderful investigative reporters (Whitney Webb is a stellar example right now for example) but the editorial employees of oligarch owned media have marching orders and objectives to meet, so voices like Whitney's are invisible in the 'mainstream media', except to be undermined if their work gets through to an audience.

I spent about a half hour scanning some of her stuff, and I'm getting a strong conspiracy-theorist vibe from that person:

https://unlimitedhangout.com/2020/09/investigative-reports/b...:

> [Rosa Brooks] was also previously the general counsel to the President of the Open Society Institute, part of the Open Society Foundations (OSF), a controversial organization funded by billionaire George Soros. Zoe Hudson, who is TIP’s director, is also a former top figure at OSF, serving as senior policy analyst and liaison between the foundations and the U.S. government for 11 years.

> OSF ties to the TIP are a red flag for a number of reasons, namely due to the fact that OSF and other Soros-funded organizations played a critical role in fomenting so-called “color revolutions” to overthrow non-aligned governments, particularly during the Obama administration. Examples of OSF’s ties to these manufactured “revolutions” include Ukraine in 2014 and the “Arab Spring,” which began in 2011 and saw several governments in the Middle East and North Africa that were troublesome to Western interests conveniently removed from power.

The Soros stuff is a bit of a red flag. Also, Ukraine wasn't non-aligned, but aligned towards Russia. Anecdotally, the Ukrainians I know weren't fond of Yanukovych and were happy to see him go, both times.

https://twitter.com/_whitneywebb/status/1312730357175902209:

> Good ole sheep herder Bernard!

Referring to people as "sheep"?

https://twitter.com/_whitneywebb/status/1312430264870674438:

> My opinion:

> This is about ramping up fear and distracting everyone from the VERY DISTURBING revelations in just the past few days on something the creepiest part of the military you've never heard of is up to. Stay tuned!

The "this" she's referring to is Trump getting Covid-19. Apparently she thinks it's part of some kind of coverup rather than the very obvious end-result of the choices he's made for himself and his staff.


@ardy42 voices like Whitney need to be heard whether you agree or not, there are vast swathes of 'unreported' things going on right now, my original point. The mainstream media is anodyne in comparison... Separately the term 'conspiracy theorist' is both anti free speech and extremely unhealthy in a democracy. One person's investigative reporter is another person's 'conspiracy theorist'...


> @ardy42 voices like Whitney need to be heard whether you agree or not, there are vast swathes of 'unreported' things going on right now, my original point. The mainstream media is anodyne in comparison...

The mainstream media is usually anodyne in comparison to conspiracy theories, because the latter can have extra drama injected into them and push more emotional buttons.

Take that first article I linked to above. The "clear Trump win" scenario Whitney's article dwells upon and gives a sinister cast to also included a large popular vote win for Biden, which she completely failed to mention and which would have undermined her "planned power grab" narrative significantly if she had. A constitutional crisis is much more understandable if the winner of the electoral college lost the popular vote badly, given it's always been a broken system and it seems illegitimate to a lot of people for the winner of an election not to be the person who got the most votes.

But it's way more dramatic if you can make it sound like the Democrats are planning on stealing the election regardless of what happens.

> Separately the term 'conspiracy theorist' is both anti free speech and extremely unhealthy in a democracy. One person's investigative reporter is another person's 'conspiracy theorist'...

Nope, sorry. I'm not impinging on anyone's free speech rights by using the term "conspiracy theorist." I'd argue it's also rather healthy term to use in a democracy, since a democracy will fail to function effectively if such thinking becomes too dominant.


One person's investigative reporter is another person's 'conspiracy theorist' depending on your beliefs and views. You are promoting your beliefs and views on this thread, and attempting to argue about the credibility of a writer you don't agree with or believe. This is what the broader thread is about; you're proving my point again.


> One person's investigative reporter is another person's 'conspiracy theorist' depending on your beliefs and views. You are promoting your beliefs and views on this thread, and attempting to argue about the credibility of a writer you don't agree with or believe. This is what the broader thread is about; you're proving my point again.

And some people think ancient aliens and UFOs are serious areas of study, but most reasonable people think it's nonsense. That disagreement on this kind of thing is possible doesn't make every perspective equally credible or vital.

I actually agree with your point:

>>>>> To presume to know anything about what goes on in the rarified power struggles at an intercontinental/global levels requires free speech and giving serious researchers/writers a viewable voice.

Where I disagree is that people like Whitney Webb can shed any more light on those power struggles than Bob Lazar. If the article of hers that I read is a good representation of her oeuvre, she offers tendentious and misleading interpretations of already public information. I think the best those can do is offer a false sense of deeper understanding.

A real deeper understanding needs someone to take serious risk, like Edward Snowden did, to bring secret things into the light.


You're all over the map with Bob Lazar & Edward Snowden here. Webb has done amazing work on uncovering Maxwell/Epstein facts and connections ignored by the corporate media. Your idea that 'reasonable people' ignore anything not promoted and discussed in the corporate media (Snowden for example) again proves my point.


I see a "[1]", but I don't see anything referencing it, did you mean to add a link?

If anyone else is unfamiliar with the term, I found a wikipedia entry: https://en.wikipedia.org/wiki/Yellow_journalism


Yes. This.

Thinking is hard and it requires training and conditioning to do. I'm not talking about what to think, but rather how to think.

It's an incredible convenience to delegate thinking, like the difference between spending 3 hours debugging some code versus having the answer readily available on StackOverflow.

Conversely, that convenience comes with a cost of a stunted intellect, where an individual can become so dependent on StackOverflow that they are incapable of doing something new or difficult if it hasn't already been done.

Neither is necessarily better or worse, there are good arguments for the "hard way" and the "easy way". The important thing is recognizing the distinction and appreciating the value of both.


This is a tangent but is the meme about programmers being dependent on Stack Overflow really all that true? It's something I've heard about a lot -- I've been coding for fifteen years, professionally for seven, and certainly I have used Stack Overflow occasionally, but I go to the language documentation a lot more often, and honestly write a lot of code without looking at more than the source of the libraries I'm interacting with -- which are often proprietary so SO wouldn't be any help anyway -- or the standard library.

Who's on Stack Overflow all the time? I've never even made an account. I guess I referenced it more often when I was a younger programmer, but never all that much.


The meme isn't that we're asking questions ourselves on SO, but that SO contains answers to questions that are difficult to ferret out of the documentation. If you don't know what to call something, or need to know common practice rather than every possibility, or comprehend an error message, it's a lot easier to get it from somebody else who has already solved that problem.

Of course somebody has to ask the question first, but with StackOverflow, one person can ask it, and then everybody else can find it via the magic of Google. (And I do mean "magic": beyond matching keywords, a search engine can, on a good day, point you to pages popular with people having similar problems.)

I'm on StackOverflow any time I need to learn a new piece of software, which is pretty common. When I'm "just programming", I can get my work done without it due to decades of experience. But keeping current with new technologies often means getting lost in the implementation weeds, and it's great when others have already been there and can give me a shortcut out.


That is a good question. I think it really depends on your personality. Engineers tend to be conscientious, orderly, industrious types. Being industrious typically means being a self-starter who takes the initiative and is probably self-taught.

That said, intelligence (g factor) isn't simply being conscientious. Being intelligent means that your personality is flexible and you're able to apply different aspects of your personality depending on what the situation calls for. Conversely, less intelligent people are generally stuck with the personality they have and are less able to modulate their behavior depending on the context.

Anyway, I used the StackOverflow analogy because it's more likely to resonate with the HN audience. The point was more about knowledge and relying on others as a crutch to avoid having to think for yourself. It's not inherently a bad thing, but doing it too much has consequences that may not be obvious.


"Let's hope that people see it as a lesson and not somewhere they can bask in their confirmation bias"

I'm not sanguine. Be honest, which would you bet on happening in the majority of cases?

I wonder sometimes if it's almost part of some fundamental psychology?


Do you believe the general public is better at filtering out misinformation than they were 100 years ago? What about 500 years ago?

It takes time, but people do seem to get better at filtering out misinformation. I believe (and we have some evidence) that we can speed this up by teaching critical dissection of articles and content and building the skill set to weigh sources.

The problem will never go away entirely. It's an arms race between people who want to get closer to understanding reality, and the people selling a story to some end (profit, power, etc). Like all arms races, sometimes one side lags behind but they usually adapt and leap ahead.


We've never had yellow journalism powered by trillion dollar companies that are extremely good at manipulating your average Joe to keep consuming more based on their addiction algorithms.


Those were the conditions of the original yellow journalism, including people who would literally chase you down on the streets shouting headlines at you and trying to get you to buy a paper on your way to and from work. You had people in backrooms analyzing sales based on different newspaper features and writers - and figuring out what headlines would sell the most.


That's still population or city level targeting. Not individual.


The problem will never go away entirely because, unless you're discussing the physical state of some matter, there really isn't any truth.


+ I ate two slices of prosciutto and blue cheese pizza for lunch

+ Software engineers get paid a higher average salary than babysitters

+ Yesterday was before today (and any other tautology)

+ My feet are cold

There are plenty of truth value functions we can evaluate for non-physical states including tautologies, personal opinions, statistical norms with broad support, historical events and so on. They may have different truth value functions (how we determine the truth of my opinions on how cold my feet are is different than how we determine the truth of what I ate for lunch) but they still have truth/non-truth outcomes.

Anytime you get into awkward philosophical quandaries about the state of things, state the problem to a small child and see how they answer. This technique would have saved Anselm a whole bunch of time and trouble.


It may very well be the cosmic filter.

Humans have a "truth default" in communication. We take almost everything we hear as fact by default. Bots will have no trouble at all making the majority of people think, see, and do whatever they want.


>The truth is the internet is teaching the biggest lesson ever in critical thinking and getting your information from many sources across spectrums, countries, divides and more.

I'm afraid it's more like Snow Crash, where the Internet is akin to the "snow", the static pattern that crashes most people's brains, and only a few are naturally immune, naturally critically minded.

>Let's hope that people see it as a lesson and not somewhere they can bask in their confirmation bias all day, or make decisions based on fear, in those cases the populace is easy to manipulate.

I think this is unfortunately the case, most people are basking in their confirmation bias and social media dopamine hits. Reason is hard, finding reliable trustworthy data is hard, etc. Path of least resistance is to give in.


I have hundreds of Facebook friends. I read their posts or add my own posts perhaps twice a day. 90% are status updates and/or photos of what they are up to. I find it extremely useful. It's a micro-blogging service. When did this moniker "social media" come into being? What does that even mean? I should say that I have everything except direct posts disabled - including all "shares" and all ads - and that's why to me it's just a micro-blogging site.


Would you mind sharing how you've disabled ads and shares? I could not find such options in facebook settings. Only thing I can think of is adblock or writing my own software to pull feeds and filter them.


I use the F.B. Purity add-on. Can't imagine using Facebook without it.


People will look at you funny if you inject fentanyl in public, but we should have the same negative social reaction to social media users, and the same negative social reaction to the platform makers as one has to dealers of hard drugs. The immunity from consequences written into the law is creating some toxic effects. Give a corporation a grant of legal immunity and you should not be surprised that they do everything they can to shunt negative externalities onto society to the extent that it improves their metrics (financial and otherwise).

If you gave an industrial corporation a grant of immunity for emitting pollutants, would you be surprised if they set every river they could on fire...?


We need a broadsheet social media that is (relatively) healthy for you.


IMO removing clickable links and link previews would go a long way to make social media better.


This! Also remove re-posting functionality.


Maybe posting links could just be paid for?


> Repeat after me, social media is not reality.

> Let's hope that people see it as a lesson and not somewhere they can bask in their confirmation bias all day, or make decisions based on fear, in those cases the populace is easy to manipulate.

I find a tinge of irony in this: HN being a form of social media itself, and this statement also reflecting a general mistrust of social media within HN.


You're "not wrong", but my reading of your comments leads me to believe that you're pointing the finger exclusively (or at least, overwhelmingly and possibly disproportionately) at one side of the dispute.

> Yes it can be fun, entertaining, informative and educational, it can also be propaganda, misinformation, astroturfing, and a big high school like popularity contest where people are fake and narcissistic but probably insecure.

Social media is an epistemic cesspool, there's no doubt. But do not all of your criticisms also apply (if perhaps to a lesser degree) to all media, and politics, and individual and collective activity, and.....well, everything?

> The truth is the internet is teaching the biggest lesson ever in critical thinking and getting your information from many sources across spectrums, countries, divides and more.

I feel rather uneasy about whether this is actually as true as it may seem. The thing about "what's true" is, if you've been fooled, you do not realize it. For sure, there are some particularly bad actors online that are often concentrated in certain communities, but how might one know of other bad actors that are not in your mental inventory, or actors who are in your mental inventory labelled as trustworthy, when they are not actually highly trustworthy?

The variance in the quality of epistemic and logical reasoning that can be observed, in extremely large quantities even here on HN, when comparing discussions of technical topics with political ones is vast - and yet... does it seem likely that the people writing these things here are doing it with conscious and deliberate intent? Does it seem likely that HN folks consciously choose to become peddlers of disinformation, or might it be more likely that this is a consequence of the unusual, often counter-intuitive, and not yet understood innate nature of the human species?

> Let's hope that people see it as a lesson and not somewhere they can bask in their confirmation bias all day, or make decisions based on fear, in those cases the populace is easy to manipulate.

Indeed. Let's hope everyone can truthfully commit to doing this.

I wrote a bit more on this general idea (with respect to conspiracy theorists) here: https://news.ycombinator.com/item?id=24730753


It's wonderful to hear they've decided to shut down their services.


I would love to know the true scope of “inorganic” content on these platforms. It is surely appalling. At some point users will realize that they are wasting their life staring into a screen that presents them with mostly engineered information.


You mean like how every 'reputable' news outlet uncritically repeated the US's Iraq war propaganda for months and fired journalists who criticized it? Or do you mean people you don't like saying things on social media?


Legacy media is in the same boat, and has been for a long time, as you pointed out.

When it comes to social media I think it is sprawling. Even a topic as pedestrian as lawn care is completely overrun by inorganic material.


>Legacy media is in the same boat

And yet only independent journalists on social media are being silenced. Funny how that works out, no?


Isn't this essentially a repeat of 2016? They basically did nothing until voters had made up their minds. The damage has already been done.


I guess this is the world's "Eternal September"[1]. Eventually everyone will understand how this all works and this will no longer be a significant issue.

[1] https://en.wikipedia.org/wiki/Eternal_September


Except Usenet never actually recovered and now is deader than disco. It was replaced for the most part with more aggressively moderated web forums with automated spam controls.


A 2016 study by Dov H. Levin found that, among 938 global elections examined, the United States and Russia (including its predecessor, the Soviet Union) combined had involved themselves in about one out of nine (117), with the majority of those (68%) being through covert, rather than overt, actions. ... According to the study, the U.S. intervened in 81 foreign elections between 1946 and 2000, while the Soviet Union or Russia intervened in 36.

https://en.wikipedia.org/wiki/Foreign_electoral_intervention


I don't trust Facebook or Twitter to distinguish disinformation from information. Why would anyone?


What is the alternative? Think you're the smartest person on the planet, unaffected by misinformation? "Feed me the libraries worth of bullshit from the Internet, and I will sort through this free market firehose to come to a logical, evidence based conclusion", hm?

Have you ever read spam ads on Craigslist, fake comments on news sites, bots on Tinder, etc.? There are patterns, there are foul motives, and with their internal data, there are certainly technical stats to identify related accounts participating in bad-faith campaigns.

I understand the gut feeling to be suspicious of any large org to filter information. People on this very thread are screaming "censorship!" as if they were Orwell in the streets warning us the end is nigh.

However, if you look at Twitter or Facebook on any given day, you can see that a lot of bullshit, emanating from every possible angle of the political spectrum, still exists. Much of it organic, much of it bots, I'm sure. Let's stop hitting the panic button every time they remove Iranian bot networks.


> I understand the gut feeling to be suspicious of any large org to filter information. People on this very thread are screaming "censorship!" as if they were Orwell in the streets warning us the end is nigh.

I mean, these companies have a monopoly or near monopoly on communications, and they have openly said they are going to arbitrarily remove information that they don't like, without any kind of transparency or accountability.

Is the fact that congress is not enacting a law supposed to mean that it is not censorship, and that we can't complain about it?

Would you be ok with other private networks doing it?

Besides, if we must have censorship, could we at least have someone more deserving of our trust do it? These policies of removing "fake news" greatly worsened COVID-19. As late as February, they were removing as "disinformation" claims that it was spreading human to human.

Telling someone to mask up in early March could get you banned.

Posting first party evidence of the chaos in China in January could get you banned. It was fake news and disinformation. They were protecting us.

These companies should be begging us for a second chance to filter the truth for us.

Not just are these companies fallible, but they also have a very obvious neoliberal agenda. I've watched every single progressive podcast I liked get shadowbanned or outright removed from every major platform over the last year.

Why are people so eager for this?


Your first sentence contains the lie, er, problem.

>arbitrarily

What is arbitrary about it? Do you assume that because you don't have a list of their rules of engagement, that you don't have the "warrant" for these takedowns yourself, that it is arbitrary and politically biased in motive?

>Telling someone to mask up in early March could get you banned.

I straight up think that's bullshit. I would want evidence of that occurring or evidence that that was ever laid out as policy.

>Would you be ok with other private networks doing it?

"Private networks" do it all the time. Newspapers decide what to print, and whether they print lies or truths, and whether they do so fairly. The best journalists put truth first. These virtual bulletin boards are, as a matter of fact, massive platforms where any small amount of gamesmanship can lead to the very quick spread of information, good-faith or not. To think they should sit and do nothing is folly.

>Posting first party evidence of the chaos in China in January could get you banned. It was fake news and disinformation. They were protecting us.

What

>Not just are these companies fallible, but they also have a very obvious neoliberal agenda. I've watched every single progressive podcast I liked get shadowbanned or outright removed from every major platform over the last year.

What

Your entire post is filled with so much unsubstantiated statement-as-fact I don't know where to begin.


> What is arbitrary about it? do you assume that because you don't have a list of their rules of engagement, that you don't have the "warrant" for these takedowns yourself, that it is arbitrary and politically biased in motive?

I'm sure that they have some internal logic behind the scenes, of course. But it is effectively arbitrary, since their process is completely opaque to us and subject to 'arbitrary' change on their part.

Even if they were completely transparent though, would that even be any better? They exercise absolute control with no accountability, over a platform that is ubiquitous and essential.

> I straight up think that's bullshit. I would want evidence of that occurring or evidence that that was ever laid out as policy.

Their policy is unchanged since then, and is quite explicit. Facebook defers 100% to the WHO. And the WHO defers to China.

Maybe I can't prove to your satisfaction that the WHO is compromised. It is possible that the WHO is just really incompetent. But it doesn't matter, they were deeply ineffective.

And anecdotally, I know people who fled China in December and January. You could leave China if you left via Wuhan airport.

We tried warning people. I tried warning people. I was banned in February for "fake news". There was a preponderance of evidence of what was happening.

Video after video of Chinese soldiers wearing hazmat suits rounding people up. But according to Facebook, all of it was completely fake, and we shouldn't panic or talk about it.

And I assume you are going to disregard everything I've said, unless I reconstruct a timeline of everything I've said, and spoonfeed it to you.

> "Private networks" do it all the time. Newspapers decide what to print, and whether they print lies or truths, and whether they do so fairly. The best journalists put truth first.

Let me be very clear about something: corporations exist at the pleasure of the public. In theory, anyway. You have no right to incorporate, and you have to be accountable to the people in exchange for the privileges that you enjoy. And most any company their size, especially a provider of infrastructure, is in bed with the government. They did not pull themselves up by their bootstraps. They massively benefit from government subsidy, and they suppress any competitors. They are not a private network.

> These virtual bulletin boards are, as a matter of fact, massive platforms where any small amount of gamesmanship can lead to the very quick spread of information, good-faith or not. To think they should sit and do nothing is folly.

OK, so if disinformation is such a problem, then why should we allow one billionaire creep to be our information policeman?

I should do it. I should be your information police officer. Why not? I've done less creepy things than Mark Zuckerberg.

> What

> What


> What is the alternative?

Improve your informational diet first by consciously discounting whatever you read on Facebook, and second by spending less time on Facebook.

> Think you're the smartest person on the planet, unaffected by misinformation?

There's no need for snark.

The issue being discussed is particular to these sorts of platforms. Reputable news sources still exist.


You're substituting your ideal for reality. Sure, I believe we should have public schooling teach more raw critical thinking, rhetoric and civics. But you're not going to get the existing billion people in the online world, many of whom are some combination of hotheaded or ignorant of the wicked ways of misinformation, to suddenly "improve their informational diet".

The people who are most likely to take that advice are the ones least in need of it.

I am glad Facebook and Twitter are taking what must be very careful, researched actions against clear evidence of powerful forces using social media as a tool against reality.


> Facebook and Twitter are taking what must be very careful, researched action

Sorry but are we talking about the same companies? What track record do they have that helps you to believe this?


Because if they deleted 100% of the bullshit on their sites, tweet volume would be much lower.

My point is, they clearly aren't removing the majority of disinformed and disinforming nonsense. And I don't think every human at the two orgs are evil. If they say "Look, we found these patterns, these correlations, this forensic data that points at coordinated disinformation", my intuition is that they have done just that.


Sure, not every person is evil. You just need evil leaders and employees that need paychecks to be in a bad spot.

The core problem is that they have two conflicting incentives. They don’t want to look bad for spreading misinformation so they try to restrict it. But, they are incentivized to make money, and posts spreading disinformation are lucrative because they appeal to eyeballs. I am extremely, extremely skeptical that this is a resolvable problem for them.


> you're not going to get the existing billion people in the online world

Your original point, and my response, were both on how an individual can escape the misinformation cesspools of Facebook and Twitter.

Of course I agree it's much harder to solve the problem at the societal scale.


As a passer-by: no, I think their original point was that it's ridiculous to depend entirely on individual actions to address misinformation. On that point they've been consistent across all comments.


> Think you're the smartest person on the planet, unaffected by misinformation?

Thank goodness someone is calling this out other than me. Sites like these are filled with people who think they are smart and don't need any help, and worse, expect everyone else in the world to be at their level.

I too am glad social media companies are finally waking up and taking action against this stuff.

Can you imagine a world where Google decided they wouldn't filter spam because it was too controversial?


Here's a little story about one of the smartest people that ever lived, Sir Isaac Newton, and how he went broke along with most of England:

https://www.sovereignman.com/finance/how-isaac-newton-went-f...

Note the graph there showing his purchasing events.

Newton himself was well aware of his folly afterwards. He was able to admit that he was irrational and learn from the experience.

Many people who consider themselves smart would do well to learn the same lesson Sir Isaac did.


>Can you imagine a world where Google decided they wouldn't filter spam because it was too controversial

Absolutely. Another solution would simply fill the vacuum. Google (Alphabet, rather) is a for-profit corporation. Their initial success was the result of outperforming other services in a field so nascent that no one really knew exactly what it was. Google is no one’s friend; they’re not subject to anything except clear demarcations of legal statute and the court system.

It’s important to realize “this stuff” may, someday, very well become your stuff, my stuff and our stuff.

That’s the threat, in my opinion. Information is not harmful. Information we can each investigate and self-audit within our own brains. Google we cannot investigate, and it will never experience anything close to a public-facing audit.

Don’t give your God-given power away. The power to think. To question. To be wrong. To change your mind. To grow.

One man’s opinion.


> Can you imagine a world where Google decided they wouldn't filter spam because it was too controversial?

It would be like email: you can run an email client with a spam filter. If you don't like how it's filtering, you can configure it or even download a different client without abandoning your email address.


How many high-profile "misinformation" email campaigns until there are calls for Google to explicitly include political perspective in its indicia of email spam?

And then your "imagine a world..." would make sense as an analogy. And there would be people calling for Google to either return to neutrality or stop filtering spam altogether.


(or HN)

Things get a bit too Hail Corporate, especially with specific brands.


They can’t and they shouldn’t. The potential good that could come from this does not outweigh the catastrophic failure modes of this approach. Some enlightened people understood this, hundreds of years ago.

It's time for another amendment.

EDIT: since this is related, here's another horrendous idea that recently disgraced my timeline:

https://twitter.com/yelp/status/1314197509623947265?s=21


What do you propose to combat disinformation campaigns?

Look at what’s happening. The end results of allowing anything to happen on social media are not good. We’re not in a good place. So we can ignore that for rigid principles, or admit that those principles don’t stand up to the real world.

No, no one hundreds of years ago had any idea about the scale or effect of social networks. Yes, there was propaganda and brainwashing, but the speed and adaptability and design of these systems are so different as to be categorically unrecognizable.

So. What do you propose?


> So. What do you propose?

Not GP, but I propose doing absolutely nothing.

There have always been tabloid newspapers, underground conspiracy theory magazines and various AM broadcasting stations peddling all kinds of nonsense.

The internet is just a new medium that’s joining the club.

There is no substitute for critical thinking. There are always going to be people who will hold opinions that are anathema to your beliefs. Often, those opinions will be the majority and you will be in the minority.

You learn early in life to ignore the crazy people on the sidewalk holding up signs and screaming about lizard-men running the government. You also learn early in life that politeness and civility will carry you a long way towards being respected in your community.

The internet is no different.

If you can’t function without a school teacher in the room, you aren’t ready for the adult world. We don’t need facebook or twitter or anyone else babysitting us as we go about our lives.


> Not GP, but I propose doing absolutely nothing.

There are multiple places on the internet where you can access completely unmoderated content. 4chan and 8chan are two popular forums, and yet they are far from mainstream.

The fact is that moderation is a feature which people want. People don't want to go through 200 spam emails every day. People don't want to spend tons of time going through bullshit.

Facebook and Twitter have never been about getting "unfiltered" news. Facebook became useful because your friends were there. Twitter became useful because you could read content from famous and skilled people.

> There have always been tabloid newspapers, underground conspiracy theory magazines and various AM broadcasting stations peddling all kinds of nonsense.

And these will continue to exist on the internet. The only thing that's happening here is that Facebook and Twitter are taking a stance that they don't want to become part of this group. Is that so bad?


Again, fine in principle. Doesn’t stand up one second in the real world.

I agree with the ideal of every adult being educated, taking time to check facts, having a critical mind.

But that’s not realistic. It’s not where we are. And the facts of democracy are that uneducated people vote. People unprepared for the internet crazies. They’re not handling Fox News as entertainment or tabloids. They think it’s real.

So, again, we can pretend that’s not happening. We can say, “well they shouldn’t be like that.” But that’s not reality.


> Again, fine in principle. Doesn’t stand up one second in the real world.

The first amendment and the resulting USA would be my prime counter example. The US wasn't an accident, freedom of speech is a hard requirement for free people and letting companies censor people "for our good" is antithetical to the nth degree.


The first amendment allows companies to "censor people," that is part of the people's freedom of speech and freedom of association. It only prevents Congress from doing so.


It ensures the freedom of the press, and had the founders envisioned a day when three corporations miles from each other had more power than the entire press and government combined to censor citizens' speech, I have no doubt that the first amendment would read a bit differently today. For a lot of people, social media has become the press in some ways.


Nevertheless, there seems to be a disturbing trend in these conversations of implying that the first amendment means something it doesn't.

If people want to advocate for nationalizing social media and forcing platforms to publish speech against their will, and making it illegal for them to moderate content, then they shouldn't pretend they're doing so in the spirit of the first amendment or what the founding fathers envisioned free speech to be.


They have that right regardless, they just have to declare themselves publishers.


No they don't.

It's a commonly spread fallacy that only "publishers" are allowed to moderate content, and "platforms" not. The entire purpose of Section 230 is to allow platforms to moderate user-submitted content without facing legal liability for doing so. Social media platforms have always been allowed to do exactly what they're doing.


Flood the internet with more disinformation that is generally harmless but ends up being moderately embarrassing or inconvenient to those who believe it enough to act on it. Teach people by negative reinforcement to not assume what they read is true unless they've vetted the source.

Or: Have a counter-disinformation group that responds to political disinformation by creating more disinformation that points in the opposite direction.

To some extent all of the above is done already, organically. In fact, to the extent that Facebook et al. manage to hide the stupidest crap, that would tend to make people believe that what does stay up must be true (or, at least, must have passed Facebook's fact-checkers).


My concern is that strong campaigns against disinformation might not actually help with this problem. They could make it worse, if people start to believe that Facebook and Twitter are sanitized media outlets that won't let you say the real truth.


I don't necessarily disagree, but one big problem is that there is no "adult world" on the internet. If you can't function without a school teacher and aren't ready for the "adult world," you can nonetheless get a Twitter or Facebook account just like anybody else, and interact with the adults, and maybe even fool some of them. It takes a while to sort out who's an insane asshole. The only way to tell is by listening for a while to what's coming out of their megaphone that unfortunately millions of other people can also hear. Finally you decide they're a metaphorical non-adult and, I guess, block or unfriend them, but meanwhile millions of other non-adults are like "Yesss, finally someone said what my lizard-brain has been thinking all along!!" That's not what happens with the guy on the sidewalk holding the sign.

Edit: Just noticed you said lizard and I said lizard. Any resemblance is entirely coincidental. I meant something about the basal ganglia or limbic system, whereas yours is an exotic & fantastical sci-fi scenario. Although interestingly if you consider all the venality and viciousness in national politics, and the fact that every politician has that same part of the brain, it's actually kind of true that lizard-men are running the government!


Happy to see that at least one guy gets it. Every day I‘m more astounded at how western civilizations got to where they are - it sure as hell wasn’t the average Joe’s idea, and hundreds of years later most still don’t get it.


So basically you are arguing that if all the disinformation published on your site leads to terrorist attacks and, worst case, a war (either civil war, or think of how disinformation was used by Hitler to convince most of Germany he was doing the right thing), the most ethical thing to do is absolutely nothing. Almost everyone wants as much freedom of speech as possible, but ethically, if it leads to huge numbers of people suffering or dying needlessly, there have to be some safeguards. Also, note that FB not publishing something is not against freedom of speech, since the authors can always create a website to publish any disinformation they want. As long as the government doesn't make a blanket rule against publishing something, there is still freedom of speech.


Since you mention Germany: What if I told you that Weimar Germany had laws against certain types of speech, which were actually applied to suppress Nazi propaganda, and that the result was obviously ineffective and possibly counterproductive?

"In my research, I looked into what actually happened in the Weimar Republic and found that, contrary to what most people think, Germany did have hate‐speech laws that were applied quite frequently. The assertion that Nazi propaganda played a significant role in mobilizing anti‐Jewish sentiment is irrefutable. But to claim that the Holocaust could have been prevented if only anti‐Semitic speech had been banned has little basis in reality. Leading Nazis, including Joseph Goebbels, Theodor Fritsch, and Julius Streicher, were all prosecuted for anti‐Semitic speech. And rather than deterring them, the many court cases served as effective public relations machinery for the Nazis, affording them a level of attention that they never would have received in a climate of a free and open debate.

In the decade from 1923 to 1933, the Nazi propaganda magazine Der Stürmer — of which Streicher was the executive publisher — was confiscated or had its editors taken to court no fewer than 36 times. The more charges Streicher faced, the more the admiration of his supporters grew. In fact, the courts became an important platform for Streicher’s campaign against the Jews."

https://www.cato.org/policy-report/mayjune-2015/war-free-exp...


Yes, these laws were enacted after WW2 specifically because of the Holocaust, i.e. the horror that widespread hateful propaganda can cause.


Pure ideology (how it ought to be) vs. immediate reality (shit is melting down in real time). It's easy to choose the former when you're not the one suffering.


Having principles hurts sometimes. Sometimes they require you to do things that are painful with the understanding that the alternative is even worse.

I'd gladly take a world with disinformation over a world in which an unaccountable third party determines what is truth, and therefore okay to say. I can work around the first problem with some effort much easier than the second one.


> a world in which an unaccountable third party determines what is truth, and therefore okay to say

We already have that - and have had for centuries - the people who control the press control what gets presented as "truth". Look at Fox or any other Murdoch property for a perfect example.

(cf "History is written by the winners" for another angle.)


And is Fox successful in determining what is okay to say? Do you see ordinary people afraid to say something because Fox said it was wrong? Facing social, professional, and even legal consequences for expressing disagreement with Fox?

There is competition in establishing the truth. Not as much as I'd like, but a decent amount. Reducing that competition would be bad.


That is exactly the problem, though, isn't it? Unaccountable third parties are literally currently controlling the truth, via disinformation. An alarming number of people are convinced of conspiracy theories and untruths like the deep state, QAnon, anti-vax, flat earth... How is the ability to freely and massively distribute misinformation not a means of controlling the truth?


Is there anything new about an alarming number of people believing in untruths? The trade off seems to be how much you trust normal people to correctly parse information and sort the truth from untruth, and how much you trust organizations to parse and dictate truth.

The downside of the common man having too little oversight is that more people believe things that aren't true (Flat Earth), and the downside of too much oversight is that people believe different things that aren't true (PRISM, Nayirah testimony, Tuskegee Syphilis Study). Which trade-off is worse varies, but generally individuals are fairly powerless yet numerous, while organizations are few but wield much more influence.


Controlling the truth via disinformation? I admit I have horrible comprehension skills, but it only sounds like a smart statement if you repeat it enough; I just can't grep any meaning out of it. Do they know the truth? Throw in the conspiracy groups and you've made yourself half a statement.

They're bad, we get it. I'd rather be able to hear how bad they are out in the open. Who was held accountable when we spread lies about WMDs in Iraq? That caused magnitudes more damage than these groups, yet no one was held accountable, not even public figures. Do you want everyone accountable for misinformation silenced, or just those you disagree with?


But the alternative doesn’t stop at misinformation. This propaganda is helping elect people like Trump who has expressed a desire to silence his critics through whatever means he has. I’d rather be censored by Facebook than the government.


I mean yes having principles hurts but leaving social media as it is doesn’t make you Rosa Parks.

We see a problem, we try to fix it.

They’re not unaccountable. This whole thing is about holding them accountable!

If they were completely unaccountable then they wouldn’t do any of this. Why is Facebook taking action against QAnon now when it hasn’t in the past year? Do you think they haven’t seen the polling data for the president, house, and senate races? They’re preparing to fight for their life as a conglomerated monopoly.

What does holding them accountable look like to you?


Why should Facebook (or any other platform) be accountable to anyone in particular for what the users of the platform say?

I get that, now, they've hoisted themselves by their own petard after years of inconsistent policies and enforcement, but I mean in a general sense.

>What does holding them accountable look like to you?

The opening paragraph of section 230 of the CDA, to wit:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"

There be far worse dragons than disinformation down the other roads. Let's take this to its logical conclusion - you've got people on the left upset about "conspiracy theories" (which is, itself, a pejorative term with problematic origins rooted in thought control), you've got people on the right upset about "bias" against their stances.

When both sides of the political aisle are demanding tighter control over speech for any reason, the intended result should terrify the hell out of you.

The problem you speak of is not "people say false things" (which is not solvable), the problem should rather be seen as "people demand the ability to censor people they disagree with". The solution to that problem is telling both sides to pound sand and take responsibility for their own beliefs, not foisting that responsibility off onto third parties in a way that necessarily restricts freedom of speech.


People tend to over-inflate the potential impact of fake information sources. False information tends to hurt people and we're extremely adept at recognizing false agents, whether they be random tweets, facebook posts or media outlets. Some people walk around so afraid of someone spreading a lie that could hurt them that they insist on authoritarian controls on speech but the fear is completely unfounded.

For intelligent people, false statements mark the source as an untrustworthy agent. For oppressive despots, false statements are always the most expedient shortcut to some desired outcome. What countries like China are doing is playing the despot and preventing their people from thinking critically as much as possible, and Silicon Valley thinks that's a wonderful idea.


Keep the marketplace of information free and let people adapt. In the short term, will people be manipulated? Sure. Is that desirable? No.

In the long term, as has happened before, people adapt to the new reality. Marketing has to change all the time because people wise up to manipulative strategies.

The long term solution is a more intelligent public, not censorship.


Banning ad-based news, or at least ad-funded social media. They only push garbage because it attracts eyeballs, which makes them money. Just like tabloids.


We're far from being in a perfect place, but we're in a relatively good place compared to the rest of history.

This would reinstitute modernized versions of the monarchs and clerics who tell us what we're allowed to know and to think.

I neither want Facebook and a few corporations to declare themselves the gatekeepers of morality and acceptable thought, nor do I want oppressive regimes and dictators to use freedom of expression against us by flooding us with misinformation and lies. But forking over control to a group of tech figures like Zuckerberg, Dorsey, Bezos, or Pichai cannot be the answer to the threat of Putin and Xi Jinping.


Thomas Jefferson wrote the living should not be ruled by the dead, and that the Constitution should expire after 19 years.

James Madison wrote the Senate should protect the moneyed from democracy, and the Constitution was forever.

If you’re going to point at their ideas, don’t cherry pick nebulous middle ground cartoon versions of their actual positions. Don’t lump them all together as believing in the same values.


Those enlightened people never conceived of a world where a lie could be spread to millions of people by anyone at the click of a button or spewed in volume by bots posing as people.


Maybe they did conceive of it and they don't care. This type of action satisfies the objective of enforcing the orthodoxy by chilling any speech or action that could be remotely construed as being antagonistic, which is perhaps the true goal.


You seriously think the authors of the constitution anticipated Twitter and Facebook?


Wait... I replied to the wrong comment by accident and can't delete now. Ignore please.


Understandable. How do you propose we deal with the current wave of propaganda and misinformation that these platforms enable, especially since they’re already having a very real effect on politics all over the world?


Add critical thinking and logic lessons to the school curriculum. The issue would be solved in about a generation, but governments and large companies would hate that.


That will never happen in this country. The truth of the situation is, one political party benefits from this ignorance way more than the other, and they are protecting their investment in keeping the electorate ignorant.

https://www.austinchronicle.com/daily/news/2012-06-27/gop-op...

> "Knowledge-Based Education – We oppose the teaching of Higher Order Thinking Skills (HOTS) (values clarification), critical thinking skills and similar programs that are simply a relabeling of Outcome-Based Education (OBE) (mastery learning) which focus on behavior modification and have the purpose of challenging the student’s fixed beliefs and undermining parental authority."


[flagged]


This is an assertion, not a plan.

Your statement could just as easily apply to publicizing a whistleblower exposing massive corruption as it could to active collaboration with a foreign power to subvert an election.

But beyond that, what are you actually proposing to "start"?


So you're part of the "tear down journalism" crowd that are a part of the problem?


The far right have been attacking journalists as "enemies of the people" for a while now. Extreme leftists tend to hate mainstream journalism, too.


Thomas Jefferson wrote the living should not be ruled by the dead, and that the Constitution should expire after 19 years.

James Madison wrote the Senate should protect the moneyed from democracy, and the Constitution was forever.

If you’re going to point at their ideas, don’t cherry pick nebulous middle ground cartoon versions of their actual positions, and lump them together as on the same page.

Some enlightened people that knew much less about reality than we do and have been dead for 200 years.

Old ideas of American civic life have become a religion unto themselves. Jefferson wrote of that too: laws must change as human awareness grows.

It’s time to change things, I would agree. I’d start by burying old idols. There is little left to be gleaned from men who died without knowing of Einstein, Gödel, and the rest of modern invention.


Don’t let the perfect be the enemy of the good. The catastrophic failure mode of doing nothing is that democracies vote to stop being democratic, and do so with deafening applause. It has happened before.

Let them fight disinformation; also require them to be open about how they are doing it so they can’t become a disinformation provider of their own — there are already enough people, left and right, who think FB et al are censoring their ‘truths’ for nefarious reasons.


They're removing bots. I for one appreciate the lowered noise floor. If a platform (Facebook) is claiming that it's for people, then we should know when a post is from a person or a bot.


> does not outweigh the catastrophic failure modes

Here's the opposite catastrophic failure mode, which the enlightened free-speech defenders fail to notice (or do notice but don't care, because they're not the affected parties, or at least don't think it affects them, or are simply accomplices):

Truckloads and truckloads of unchecked, fabricated BS drowning out the discourse. Mob censorship.


> The potential good that could come from this does not outweigh the catastrophic failure modes of this approach.

Such as?


Such as, say, Fox News buying Facebook? Or Soros? Or RT buying Twitter? Or a Saudi sovereign wealth fund? Now how do you feel about Twitter and Facebook deciding what misinformation is?


> Now how do you feel about Twitter and Facebook deciding what misinformation is?

I don't see how that changes anything except the political leanings of the website operators. Can you explain what the problem is?


One problem is that the power to vet/censor particular things may be beneficial in a certain pair of hands but disastrous in another, especially when you're dealing with a platform that has extremely powerful network effects and controls so much of the information flow in society. To reiterate the point of the previous comment, would you feel comfortable if FB came under control of the Saudis while it was engaged in extensive vetting? I would not. Naval says that one test of a good system is whether you can hand the keys over to your adversary and things don't go wrong, which applies well in this case.

Another problem is that vetting (/censorship) is often supported when it's "your side" doing the vetting, but I don't trust that any politically tilted group of individuals will engage in unbiased, non-partisan vetting. FB's staff is of a certain political demographic that differs substantially to the country at large and this will likely sway the decision making away from a fair and balanced outcome.

If FB or Twitter engages in vetting I would like to see it done in a satellite office set up explicitly for that purpose where great effort is made to select individuals without extreme political leanings.


> One problem is that the power to vet/censor particular things may be beneficial in a certain pair of hands but disastrous in another

What's the disaster exactly? I keep seeing allusions to catastrophe and disaster but I am not seeing any specifics about the material effects of this catastrophe.

> would you feel comfortable if FB came under control of the Saudis while it was engaged in extensive vetting

Why would that make me uncomfortable? Like what are the Saudis going to do with Facebook that would be so terrible compared to what Facebook is already doing under current ownership? The Saudis in particular seem like a weird example considering the already prolific influence of SoftBank in U.S. tech firms.

> Another problem is that vetting (/censorship) is often supported when it's "your side" doing the vetting

I don't have a "side" with respect to what Facebook removes from its platform, I can't fathom a scenario where something being removed from Facebook can be accurately described as a catastrophe or a disaster regardless of who is doing the removing.


You can see the effects in China where state control of the narrative, achieved in large part by oversight of companies, has made their population completely unaware of the Uighur situation.

Granted, FB is a private company so the extent to which it's a concern is not as big, but given the control that FB has over information flow and the network effects of FB, similar risks are present. Think: swaying elections by permitting misinformation that supports only a certain side. It's an extraordinary amount of power that has the potential to be used maliciously. At the moment it remains a hypothetical but given the China example it is not an unreasonable concern.


I think the general concern with respect to foreign ownership of U.S. firms is reasonable to discuss, but I don't see a problem with Facebook removing content from the website regardless of the owner. Regardless of who owns Facebook there is going to be some content that will be removed, same of every platform out there, I just don't see what the harm is that something is taken down from Facebook.


I think my main concern is the potential for politically biased vetting, which could lead to an election being swayed. It's a large amount of power over a democratic outcome by a small group of individuals who are not accountable to the public and are not guaranteed to be acting in good faith.

It's possible that the fact-checking orgs that have been chosen really are neutral but given the political power they hold in their hands, and given my experience with human beings, I still do harbor some residual concern about this. They seem like a remarkably high leverage attack vector for cynical political operatives. I probably have to do some more digging about who these fact checking orgs are before I come to a concrete opinion, however.

I will however admit that the alternative is fraught with the exact same risk: allowing various actors to actively spread misinformation can itself sway an election. Perhaps the best solution is vetting that only targets the most egregious, obviously false information, where whatever was censored (unless it is violent, pornographic, or criminal), especially material relevant to upcoming elections, is catalogued and available for public viewing — simultaneously for accountability, for public trust, and to assuage conspiracy theorists' suspicion that the vetting is political.


> I think my main concern is the potential for politically biased vetting, which could lead to an election being swayed.

That ship has already sailed. The problem of politically biased vetting is a universal fact of life on literally every media platform in existence; people just need to be judicious in their consumption of media.

> It's possible that the fact-checking orgs that have been chosen really are neutral but given the political power they hold in their hands, and given my experience with human beings, I still do harbor some residual concern about this.

I am sure they will be quite flawed, but I don't see the problem with that, of course they will be flawed, why is that such a problem?


Being merely flawed is not the concern. If mistakes were made with roughly equal frequency in both directions on the political axis, that is an example of being flawed but is not a serious concern. It's potential political bias that favours a specific party or election outcome which is the concern. The power to swing elections being bestowed on a small group of unelected, unaccountable people is the concern.

It's not valid to compare vetting by the social media oligopoly with legacy media curation. Social media companies are the new public commons through which the plurality of conversation and debate flows nowadays. It's not practically possible to opt out unless you want to exclude yourself from public discourse. There's also a distinct difference between legacy media bias, which is worn on the sleeve, and the shadowy, undocumented impact of vetters whose bias can't be scrutinized by the public.


> The power to swing elections being bestowed on a small group of unelected, unaccountable people is the concern.

Any power they have is given freely of their own volition, nobody is compelled to use Facebook. People who rely on Facebook to stay informed about the world are responsible for their own choices, the same way someone who chooses to rely on Fox News or MSNBC is freely making their own choices.

> It's not valid to compare vetting by the social media oligopoly with legacy media curation

Yes it is. The issue in question is an organization's power to influence elections by controlling the information that consumers receive; in this respect, legacy media curation is identical to social media curation.

> Social media companies are like the new public commons through which the plurality of conversation and debate flows through nowadays

Privately owned websites aren't a public commons, the internet is a public commons and citizens have total freedom to come and go as they please or even carve out their own slice of the commons for whatever purpose they desire. There is no reason to wrest control of business prerogatives from private website owners just because lots of people share political memes on a website. If this issue is a real concern then the government should provide a 1st amendment protected platform for citizens not arbitrarily violate the free-speech rights of private companies.

> It's not practically possible to opt-out unless you want to exclude yourself from public discourse

Totally false. There are hundreds of different platforms to choose from to participate in public discourse online, and it's very cheap and quite common to self-host one's own blog or platform to that end.


They have to. Doing nothing is so much worse. It won't be an amendment, it'll be the end of the constitution.


Wow, I had not seen that from Yelp. That is a horrible idea, and because they control brick and mortar reputation more than anything else, there is nothing that can be done about it. Hopefully they get sued for libel and realize what a dumb approach this is.


> Today, we’re announcing a new consumer alert to stand against racism.

I read this, laughed, and closed the tab. The world has gone mad.


Temporarily it's fine, I think.

Because no one else has the skills or resources. And things are happening too fast and at too huge a scale to depend on local law enforcement/courts to handle it. You can see them freaking out every day.

There was a time I thought the US and EU would get things under control, specifically after Brexit and Trump, but it's quite obvious governments don't have what it takes right now.

Will be interesting to see what the counter reaction is from bad actors. I highly doubt they are going into retirement now.


Or anyone really. After 2016, I stopped watching and reading the current news completely. Everyone is lying to you now. I just wait 6 months before having an opinion on anything. That gives more than enough time for most of the rumors and mistakes to iron themselves out. As an unexpected side effect, I am happier than I have ever been. The world isn't going to end and I need to stop worrying that it will.


>The world isn't going to end and I need to stop worrying that it will.

Debatable, but even if it is, you still shouldn't waste your time worrying about it.


I don't use Facebook or Twitter as a serious source of news or foundational information. It's fun sometimes. If Facebook or Twitter disappeared tomorrow, society would not regress or be hurt in any way.

Anything that collects a mass number of likely untrained/uneducated people in a common space, where they can post anything with no restriction of content to the world at large, will never be a serious source of news or foundational information. You may sift through it and get something useful or interesting once in a while, but as a whole it is simply incapable of being anything great.

The fact that these networks exist as primarily ad businesses is probably the best thing that can become of them.

The most useful/meaningful thing totally free/open social networks can do in a general sense is provide a venue for people to post raw media of events, such as crimes, etc. which can be useful if you are near that locality, but the comments and subsequent discussion have a very low chance of being anything but emotional expressions.

I am fine with whatever actions Facebook or Twitter take to do this if they feel the need. Facebook and Twitter are not government agencies. People choose to use them. People whine and bitch about how everyone they know is on Facebook and Twitter, but alternatives do exist and it is possible to use them. You can also just text/SMS your friends and family.


People already trust these companies 100% to decide what they can read, de facto:

The evidence is that your FB messenger inbox is not filled with porn spam 400 times per second.

This is all fine and good until these centralized, sometimes-censored systems are commandeered by the state to censor person-to-person communications (whether it be on a certain topic, using certain keywords, or all messages to/from users in a certain geofence).

This is turnkey tyranny, and you wouldn't even know it had happened. The command to the service providers would be gag ordered, it wouldn't be visible, it wouldn't be reproducible, and it wouldn't make the news.

I am convinced that this represents one of the greatest threats to our society: the danger is imminent. If it hasn't happened already, it will soon.

https://sneak.berlin/20200421/normalcy-bias/

https://news.ycombinator.com/item?id=24630900


In general? Not really. On specific issues where there is overwhelming evidence and consensus coming from outside? Yes, it's good that they act on that.

It's true that absolute truths are hard to find. But absolute bullshit is much much easier to identify, and should be curtailed when it's doing harm.


Are you suggesting they shouldn’t identify user accounts in violation of terms of service (e.g. spam bots) and terminate them?


Agreed. Unless they're dismantling themselves I really don't trust this effort.


Because Mark Zuckerberg created a lot of wealth so he has proven himself to be trustworthy and he has a lot at stake. On the other hand, run-of-the-mill elected officials don't have as much money so they are not trustworthy since they don't have much to lose if things go bad.

Why would Mark Zuckerberg want to harm a country that his business depends on? Why would someone destroy their own house? It makes no sense. On the other hand, politicians have plenty of incentives to harm the country; since they can just sell out to foreign powers in order to gain favors and make a quick buck after they leave office.

Politicians cannot be trusted because they have nothing to lose from making things worse. Corporations on the other hand can be trusted because they have a lot to lose if the country goes down the drain.

In times like these, it's important that we all come together and support our corporations in order to protect our democracy.


Twitter still doesn’t have a way to report misinformation. I’ve been seeing an almost 100% success rate of actions being taken against accounts I report.


Who says it's actually misinformation? The media? The locally bribed politician who owns the media?


Facebook and Twitter don’t make the determinations alone. They contract internationally recognized fact checking organizations. Is it perfect? No. Do we need to curtail online disinformation at a platform-level? Yes, clearly we can’t rely on the critical thinking skills of the reader to make a rational determination — if we could, we wouldn’t have this problem in the first place.

This and several posts on this thread seem to think it’s impossible to discern fact from fiction, that every opinion, even if false and presented as factual, deserves equal time.

I fear that if we cannot have factual, cogent policy debates, our democracies are doomed.


>> They contract internationally recognized fact checking organizations. Is it perfect? No

The fact-checking organizations have themselves been independently vouched for by several highly trusted organizations. They are as close to perfect as we can possibly get.

Unfortunately there are too many bad actors in this country who are hell-bent on destroying our democracy. Huge segments of the population are actively engaged in the misinformation campaign so that's why radical steps need to be taken.

We cannot let people with ill intent ruin the country by misleading the public and stirring up controversies and conspiracy theories.

Clearly, everything is fine. The economic recovery has been very quick; everyone made a lot of profit recently and it's all going great now (stock prices have been soaring, in fact). But you wouldn't know it from reading all this fake news on Facebook. These social manipulators need to be stopped.


Everything is not fine. The economic recovery is not quick. Stock prices are controlled by opaque shadow forces and have little basis in reality.

Something that is grounded in reality is being able to pay down your home loan. Now that mortgage forbearance is expiring in the United States, we in the mortgage industry are seeing a trend where mortgage delinquency rates will not return to pre-pandemic levels for 24 months after the first wave of COVID. Compare this to 11 months of recovery after both Hurricane Harvey and Hurricane Irma. And 24 months is the best-case scenario right now. 2021 will be a slow and painful year for economic recovery. We still don't know if we will endure another shock.


Only after they've done their damage and likely reached critical mass to survive through this on other parts of the internet.


Does this mean they're shutting down?


I thought this as well. FB and Twitter are themselves the misinformation networks.


My dream is to work in the engineering divisions of these companies on automation around controlling the flow of information. It seems like such a fascinating problem.


Seems like hell to me. It's obvious they're not willing to actually take action.

For example, Twitter could easily and automatically remove a ton of white supremacist accounts using machine learning but haven't. Many of these accounts follow obvious patterns that ML can easily detect. Why don't they do it?
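For what it's worth, here is a toy sketch of the kind of pattern-based account scoring the comment alludes to. Everything here is invented for illustration: the features, the made-up training examples, and the simple perceptron learner; a real moderation pipeline would need large labeled datasets, far richer signals, and human review rather than automatic removal.

```python
# Toy sketch: learn to flag accounts from a couple of hypothetical
# behavioral features. All data and features below are invented.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a minimal perceptron on (feature-vector, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 nudges the weights
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def score(w, b, x):
    """Return 1 if the account's features cross the learned boundary."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [posts per hour, follower/following ratio]
X = [[9.0, 0.05], [12.0, 0.02],   # coordinated-looking accounts (label 1)
     [0.2, 1.4],  [0.1, 0.9]]     # organic-looking accounts (label 0)
y = [1, 1, 0, 0]

w, b = train_perceptron(X, y)
print(score(w, b, [10.0, 0.03]))  # -> 1, i.e. flagged for review
```

The point of the sketch is only that accounts following "obvious patterns" are linearly separable in a feature space, which is the easy part; deciding which features and labels are legitimate is the hard, political part.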


I'm sure it's not so simple. You do the tasks you are told. Departments you've never heard of tell you, "we need a backdoor in the event of an emergency."

That department doesn't realize they were told to do this by a director who knows the true purpose is propaganda.


That is most certainly not the way engineering at Twitter works.


Good luck. I'm not convinced that's automatable.


Everything around software will eventually be automatable.


See Google support.


They're building a propaganda machine the likes of which Joseph Goebbels couldn't even have dreamed. It will certainly be fascinating, so long as you can ignore the unprecedented levels of damage it does.


Yup. And the worst part is that the techniques used are entirely outside our regulatory environment. The rules that constrain a journalist don't apply to the head moderator with their finger on the trigger at Facebook. Drive engagement today, suppress voters tomorrow, justify anything under the sun before you know it.


Now they just need to dismantle the global array of advertising networks and they would have accomplished something.


Title should be "Facebook, Twitter create global array of disinformation networks"


No China? Does not make any sense.


But but ... They are the disinformation network :(


Let me guess. ShareBlue is unaffected.


"Prosecute politicians who failed to fulfill their poll promises" --Khehar, Ex Chief Justice of India https://archive.vn/x7O4e


I keep seeing this and no-one has ever managed to give an answer to "Sure but how?" Just look at US politics right now - any Democrat who had won in 2016 would have failed to fulfil probably most of their promises but that would be down to McConnell clogging up the Senate (which was basically his poll promise.) Does the Democrat get punished for someone else blocking them? And that's just a simple trivial example...


Who the fuck are twitter and facebook to determine what is true or not. We need to stop this behavior, this is insane.


Not sure why we should trust that this will actually happen since they knew it was going on and let it occur. In the end these fake bots drive engagement and therefore share price - they have no financial incentive to stop it, unless the number of people refusing to use the platform grows. Personally I won't use Facebook, using it feels like I am actively assisting with the destruction of democracy in the United States. I can't imagine that's a common feeling.


I think twitter is dismantling itself. A couple of months ago they were auto-suspending new accounts - create account, spend ~20 mins following people and looking around, then "are you a robot", then a few minutes later "give us your phone number" else suspended. They seem to think new users are going to get so hooked in 20 mins they'll give up their phone number. I returned to an auto-suspended account a couple of weeks ago and found that "suspended" had changed to mean read-only. Bye-bye twitter.


This headline is like saying "border patrol arrests extensive cartel smuggling network" after the feds grab a few dozen drug mules. The incentives are still there. The structure is still there. The opposing operating group is still there, untouched, as is the source of "contraband". We don't know how many networks exist; how many remain; how many operate successfully within Facebook/Twitter's rules while having the same effect as the banned ones (to extend the drug comparison, these would be the causes of the opioid crisis). On a structural level, I don't see how FB/Twitter can fix this, and don't think they would if they knew.


Twitter has shown they have an extreme political bias over and over. It is insane to trust them to identify and censor "misinformation". This is just another censorship campaign hiding behind the current flavor of "think of the children".

I'd rather have these raving lunatics (and to be clear, many of these propagandists are exactly that) spouting nonsense than trust Twitter to curate the truth.


Which extremism exactly?



