Hi! I've done a bunch of trust and safety work and I see this trope a fair bit. Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
I get that it might be different in the heads of the people who have worked very hard to create those platforms. I'm just not seeing any difference in its effects on the world or on the targets of the racial harassment.
> Please help me understand what the difference is between, say, platforming racist harassment because of a "political commitment to free speech" and platforming racist harassment because you just kinda like racist harassment?
The difference is intent. Intent matters. Intent is the difference between murder and manslaughter, or between a conspiracy and mere speech.
Intent matters sometimes. To some people. But here, in either case the intent is to enable terrible people to, e.g., shout the n-word at people. So I don't see much of a difference in those terms.
Well, it obviously mattered enough for the Founding Fathers of the US to enshrine freedom of speech in the Bill of Rights, and in the roughly 200 years since, it has mattered enough that US courts haven't overturned it and politicians of various parties haven't changed it.
Now, where I'm from (Germany), not only is "hate speech" against the law, it's also unlawful to insult another person. It's complicated, but for the latter it's mostly sufficient that the other person feels insulted by what you said to them.
Now, while I don't go around insulting people in person or on the Internet, I personally think, for instance, that it should be allowed to call a person an asshole if they behave like an asshole. Yet if I did that here, or even online to another German, they could go to the police and press charges. If the public prosecutor is sufficiently bored, this very low bar could also be used to dox me in an otherwise reasonably anonymous setting, since the resulting lawsuit could lead to my data being subpoenaed from, say, Twitter and my ISP. This has happened to other people here in the past.
Now, while I'm not in favor of either hate speech or randomly and viciously insulting people online, I consider the German law as outlined above unreasonable in an online setting. I think freedom of speech is fundamentally more important than another person's right not to feel hurt, or than the ability of some powers that be to silence or punish me because I said something inconvenient that they merely claim meets some of the criteria for restricted speech here.
Mind you, this is the case even though freedom of speech is enshrined in the German constitution as well. But I think it is a pretty good example of why freedom of speech should not be curtailed just in the name of another person's feelings about that speech. Even if a person, as you do, doesn't see a direct and tangible benefit in allowing that kind of speech, I would argue that a larger fraction of people are against disallowing it because of the indirect consequences and where that line of lawmaking leads.
Another thing to consider is this: Say you're modestly happy with the current government wherever you live, and you'd be happy for them to have an "easy" way to curtail freedom of speech. Would you also be happy for the opposing political side to do the same thing? What if some extremists came to power?
This kind of reasoning is why free speech absolutists defend freedom of speech so staunchly, even when it is inconvenient or insulting to themselves or others.
You're conflating a lot of things here. One is the free exchange of ideas versus the freedom to harass people. Another is legal versus socially accepted. A third is the difference between "the cops should be able to arrest you for X" and "I am choosing to spend my days creating a platform for X". These are all importantly different.
You're also glossing over exactly who gets free speech. If digital Klansmen get to freely harass Black people, many of those Black people will not participate in public spaces, silencing them. Indeed, that sort of ethnic cleansing is often the goal of racial abuse. See, e.g., Loewen's "Sundown Towns". So whatever "free speech absolutists" think they're up to, in practice the result is often a diminishing of the free exchange of ideas that the Founding Fathers were clearly pursuing.
I don't think this is a very accurate description of how things work in Germany. It's exceedingly rare in any Western jurisdiction for the aggrieved party to press charges. This power is usually left to government prosecutors, who are probably more impartial than the complainant.
How about this example: Twitter silences and deplatforms some guy for using the oh-so-terrible "n-word" ("sticks and stones..." is not a thing anymore). Because this person is deplatformed, I can't find his "hate speech" when doing a quick background check, so I hire him at my company and put him in charge of recruiting. Now he makes sure no "n-words" get employed.
Big win? Whoever votes to silence the guy gets to judge.
If the only possible way you can catch a bigoted manager is by hoping that he spent a lot of time hurling racial abuse at Black people under his own name, then I think you really need to work on your management processes.
Sure, but I find out about it a year later, since I am a small company, and by then three black people were not hired because of it. Is this a big win? Also, I do believe the word "abuse" is being devalued, and that such simple insults do not really qualify as abuse in general. Let's not forget that a great many black people love to use the same words when talking to each other, and black celebrities get famous singing the word and self-describing as such.
That is a very white understanding of what abuse means.
Also, it seems wild to me that you think a small company means you somehow have less ability to supervise your employees. What's your plan if you hire a racist who wasn't dumb enough to post openly? Just let him go to it?
There is probably a line. But you don't know where it is and neither do I. You and I might agree that X is to one side of that line, but if we ban that behavior, then we have initiated a process that we might call line-discovery -- the search for the line that X was to one side of -- and line-discovery is highly prone to outcomes that result in bans on content from the other side of that line. So we don't want to engage in line-discovery, even though there are obvious examples of things to one or the other side of the line.
You may think you can ban the obvious things without ultimately engaging in line-discovery, but, the argument goes, you are mistaken. You will ultimately find yourself doing line-discovery.
You start out with obvious-sounding prohibitions on racism and hate speech, but eventually you're arguing about, say, whether it's racist to report on polling showing that violent protests are unpopular. [0]
And that's because banning any speech always leads to line-discovery.
So it comes down to a question of which scenario is worse:
A. You ban obviously bad stuff while accepting some risk of banning things that aren't actually over the line.
B. You privilege all content to avoid that outcome.
Some people are outraged by this framing and think it's obvious that you would want to risk banning some behavior to the right side of the line if it means eliminating the most obnoxious speech. But, basically, that is not obvious to everyone, no matter how many times they are reminded that there is some really bad stuff out there. [1]
[1] Interestingly, this is really not so different from the argument about evidentiary standards for punishing criminal behavior, except in that case the politics are flipped. There conservatives would rather risk punishing some innocent people if it means the absolute worst actors are guaranteed harsh punishment, but liberals think it's worth risking some amount of literal rape and murder in order to prevent punishing the innocent. So I think, actually, both sides are entirely capable of seeing this from the other side; they just don't want to.
Yes. I am addressing the second-order effects of each motivation.
Let's grant that the harms of the kind of speech you're worried about are exactly the same in either case. [0] Platforming "racist harassment" because of a political commitment to free speech implies that other forms of controversial speech will get the same treatment, preventing the kind of line-discovery I described in my previous comment.
"Platforming racist harassment because you just kinda like racist harassment" leads to who knows what. All we know about that person is that they like racist harassment. Maybe other stuff gets banned. Maybe not. Either way, it's unlikely to be in service of avoiding harmful second-order effects.
So that's an enormous difference between the two motivations. In the first case the position is in defense of an ethic of open dialogue and an attempt to prevent second-order effects that are harmful to that dialogue.
In the second case -- who knows.
It seems to me that the first motivation is much more likely to prevent the kinds of second-order effects I'm worried about, and that is what distinguishes it from the second one.