When confronting the utopian side, by contrast, the relevant question becomes sharper: Should these companies even exist at all?
I am coming towards this conclusion as well. As individuals, the easier and more frictionless our communications with others, the greater our opportunities to live, love, learn, and collaborate. As a species, we have evolved to live in small social groups with radically different norms and life patterns. That's a survival and evolution benefit, not a bug. As long as we can remain peaceful and not harm one another, we want the maximum possible diversity in humanity, not some utopian norm for the entire species.
We have a wonderful example of large numbers of people instantly communicating with one another. They're called mobs. Somewhere in the scaling, probably between 75 and 500 people, powerful emotions become the only thing that can sustain the instant feedback loop we've created.
We want friction in communicating. Not only do we want it, we have to have it.
What you're basically saying, I think, is that there shouldn't be any easy way for ordinary people to communicate directly with each other and organise politically. That it's too dangerous to allow them to do this without the press acting as gatekeepers first. That certainly does seem to be the underlying subtext of a lot of the recent criticism of Facebook. It's also an astounding line of thinking, because it basically says that it's politically unacceptable, even a danger to democracy, for ordinary people to actually have power and not just the illusion that they do.
(Edit: to be fair, I should probably add that the press wouldn't be the only powerful institution that has the ability to enable or gatekeep activism and organising, just one of the most important. The more fundamental difference is whether you need to have one of those powerful institutions backing your cause.)
Thanks for your comment because I am not saying that at all.
All I'm saying is that a certain amount of friction is a necessary part of our survival as a species. I'm not making any comments about the press or people organizing or any of that.
Some message boards, including HN, have a delay between the time you read a comment and when you are allowed to form a reply. This cuts way back on flame wars. It's also a nice example of purposely introducing some friction for a larger good.
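To make that concrete, here's a toy sketch of a reply-delay mechanism. The class, the 60-second delay, and the method names are all my own invention, not how HN actually implements it:

```python
import time

REPLY_DELAY_SECONDS = 60  # hypothetical cooldown; a real site would tune this


class CommentView:
    """Tracks when a user first saw a comment, to enforce a reply delay."""

    def __init__(self):
        self.first_seen = {}  # (user_id, comment_id) -> timestamp of first view

    def record_view(self, user_id, comment_id, now=None):
        now = time.time() if now is None else now
        # setdefault keeps the earliest view; later views don't reset the clock
        self.first_seen.setdefault((user_id, comment_id), now)

    def can_reply(self, user_id, comment_id, now=None):
        now = time.time() if now is None else now
        seen = self.first_seen.get((user_id, comment_id))
        return seen is not None and now - seen >= REPLY_DELAY_SECONDS


views = CommentView()
views.record_view("alice", 42, now=1000.0)
views.can_reply("alice", 42, now=1030.0)  # False: only 30s of "friction" so far
views.can_reply("alice", 42, now=1061.0)  # True: the delay has elapsed
```

The point of the design is that the friction is tiny and invisible in calm conversation, but long enough to interrupt the heat-of-the-moment reply.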
I don't know how much friction is needed or where it should go. I'm just now reaching the conclusion about Facebook and social media. This utopia they've all been preaching is actually a dystopia, a horrible thing indeed. I'm nowhere near having any thoughts about a solution.
One interesting book in this context is Neiwert's "Alt-America". [1] Neiwert is a journalist who spent decades tracking the armed US fringe, including the "patriot" militias, sovereign citizen movements, conspiracists, and white supremacist groups.
It's his view that the alt right didn't come out of nowhere. Instead, the low-friction communication of the Internet allowed these formerly scattered, always-falling-apart groups to connect and recruit in ways that were previously impossible.
I also don't think it's a coincidence that the rise of the alt-right happened when social media companies in some sense took friction negative. In the world of websites and blogs, you at least had to actively seek the next site, the next writer, the next article. It wasn't much work, but it was at least a little.
Youtube, Facebook, and Twitter were eager to give you an endless stream of content, so as to maximize User Active Minutes metrics. So their algorithms would pull up anything that they thought you'd be interested in. The theory being that any engagement must be good, which implies that total obsession would be best.
In retrospect, that theory seems to have some flaws.
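To caricature the objective being described: if any engagement is good, ranking reduces to sorting by a single predicted number. Here is a deliberately crude sketch (all names and scores are invented):

```python
def rank_feed(items, predicted_engagement):
    """Rank purely by predicted engagement -- the one-term objective described
    above. Nothing about accuracy, wellbeing, or downstream effects appears."""
    return sorted(items, key=lambda item: predicted_engagement[item], reverse=True)


scores = {"calm news": 0.1, "cat video": 0.4, "outrage bait": 0.9}
print(rank_feed(list(scores), scores))  # ['outrage bait', 'cat video', 'calm news']
```

The flaw the thread is pointing at lives in the objective itself: whatever content best predicts a click rises to the top, by construction.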
If you'd like to join me in speculating a bit, let's take an imaginary look at ten years from now.
Good news! AI has advanced enough that it is now possible for you to have a customized bot that handles all of the crappy and mundane parts of life: scheduling travel, getting groceries, making sure the house is clean, and so on. The bot learns about you and is able to anticipate your needs.
And because we have this tech, we also have bots that will play your emotions like a finely-tuned fiddle. Think Elvis never died? Afraid that lizard-people in the government are controlling your thoughts? Well, now there's a bot that will find others like you and deliver to you a continuous stream of information confirming your beliefs -- and warning you of the dangers lurking out there.
Now guess which of these bots makes more money for its creators, the one scheduling airline flights or the one keeping you emotionally-connected with 400 people just like you and making sure that all of you are living a dramatic life on the edge where you're fighting the bad guys?
Definitely. It's the kind of manipulative dystopia that might have been designed by timeshare salesmen and propagandists. But the weird part to me is that conscious design seems basically unnecessary. It's dystopia metastasized.
I've interviewed the occasional person looking to leave Facebook, and one of the surprising things to me is how small everybody's job is. They're all optimizing micro-metrics for micro-gains, with very little thought as to the systemic impact. Those micro-metrics in turn roll up to plausible-sounding things like revenue, DAU, time on site, and engagement. Yay engagement! How could that be bad?
But I've never heard anybody from Facebook, employee or exec, say, "Gosh, could we be acting like tobacco company execs or slot machine designers?" They don't even seem to understand addiction as a topic, let alone a concern. It's very much as Upton Sinclair said: "It is difficult to get a man to understand something when his salary depends on his not understanding it."
Clay Shirky pointed out a while ago that the power of the Internet to connect previously isolated individuals with common interests can have the downside of making that common interest seem more normal to those people.
E.g. if I'm the only Nazi I know, there's no one to buck me up and so maybe I consider that there's something wrong with my beliefs. But add a convenient subreddit, facebook page, etc. and I get to think that there are lots of us. (We conveniently forget about the denominator.)
This phenomenon is great for society when it comes to sufferers of rare diseases, the lonely, closeted gay kid in a small town, and cult film lovers. Not so great when it comes to racists etc.
It cuts both ways, though; you can't have your cake and eat it too. A counterexample: if I'm the only guy I know who thinks Jim Crow laws are bad and there's nobody around to buck me up, maybe I conclude that there's something wrong with my beliefs.
If you police speech for some people and not others, you will create problems. Either you think that people are inherently good (and therefore everybody should be able to speak until they reveal otherwise), or some kind of top-down censorship should be in place to keep things "civil", and then the problem is who gets to decide what "civil" means.
So it's better, easier, and more robust to just let people say what they want to say (even if it's bad) and allow people to filter out what they don't want.
Further, "collecting" a group of scattered folks with common interests can be good even if their common interest isn't. E.g. I know that there are "cutting" and self-harm websites, and of course the practice of self-harm is awful, but those websites may be a way to reach people and help them.
As to what to regulate, I think the anti-Holocaust-denial laws in Europe make for a more fraught case than anti-child-pornography laws. Child pornography generally hurts kids even if it's not circulated.
Denying the Holocaust is stupid and offensive and upsetting to many, but it doesn't harm the speaker or the listener in the same way. I don't know, one way or another, whether these laws have led to "slippery slopes".
It does cut both ways, but that doesn't justify your "let people say what they want no matter how awful it is" conclusion.
At the margins it's certainly hard to tell sort-of-bad content from not-so-bad content. But that doesn't mean we can't find pretty clear examples. Banning child pornography, for example, doesn't seem to have harmed freedom of speech much.
You also ignore that what speech we allow influences what people will say at all. For example, if we allow violent threats and harassment (which are both definitely speech), this will be used to silence voices. And in the American context, it's pretty clear which voices get silenced. Which voices have been silenced for decades and decades.
So in practice, somebody will always be silenced on some topics. The question is more who gets silenced, and what kind of environment for discussion we want to create.
I don't think that child pornography falls under speech at all, and I don't see the connection. Child pornography's sole purpose is to serve prurient interests.
Sorry, I don't mean to say that speech should be a free-for-all. Threats would not be tolerated, nor would harassment, but we already have longstanding laws and procedures to deal with those things; they are nothing new.
"it's pretty clear which voices get silenced. Which voices have been silenced for decades and decades"
I don't think it's clear at all; nobody is currently being silenced. However, I can see the pattern of existing laws and policies around harassment and threats being misused to silence political opponents. This is a very bad development, and a mostly recent phenomenon. As far as that's concerned, I think we have crossed the Rubicon, so to speak, and the polarization will only accelerate.
"So in practice, somebody will always be silenced on some topics."
I don't agree. Even child pornography can be discussed; however, children have no agency and can't consent to their images being used in such a way, so it can never be legalized. And there are many subjects (racial issues, social issues) that the left considers unmentionable, even when the points expressed are perfectly respectful and salient.
The US has a long track record of violence, threat of violence, and social power being used to suppress women and black people, for example. The #MeToo campaign happened recently not because women weren't being sexually exploited in the past, but because their power has finally grown enough that some of them can start to speak with less fear of consequence.
Studies show that this power imbalance continues online. Accounts that present as female and/or black get a lot more crap. Crap that acts to suppress not just the people receiving it, but anybody who sees it happen and thinks they might become a target of it.
So if you're running a platform, you have a pretty clear choice. Somebody's getting silenced. The question is whether you let the people who want to silence others run rampant, or whether you protect the voices they're trying to suppress.
And I'd add that the left talks about racial and social issues quite a bit, so I think your characterization there is at best misleading.
Personally, I wonder if it's had the opposite effect to some degree. Namely, because it's so easy to find people with views you find reprehensible, you think those views are more common than they actually are and that everything's getting worse as a result. Groups and individuals who'd never usually encounter each other are now seeing their enemies more than ever...
There are ways in which humans self-organise, amplifying individual intelligence and empathy, and ways in which humans self-disorganise, amplifying the naive ambition, greed, and poor choices of powerful individuals.
The challenge is to distinguish between the two.
The FB memo is self-serving nonsense, because it tries to deny there's a problem. In fact the problem existed outside FB before FB and the rest were created.
Why do we have an economy in which a social network exists primarily to amplify the power of persuasive media? Why do our political fitness functions reward this outcome, instead of the other possible outcomes that social networks could generate?
This isn't a problem that can be fixed with more or better software, because it's not a technical problem - it's about politics and values, and how our political economy rewards certain behaviours while punishing, or at least strongly disincentivising, others.
Yeah. I felt much more free and creative when writing LiveJournal posts, which were guaranteed to be seen by my 10-100ish "friends", than when writing Reddit or HN posts, which must compete with tons of other posts to get seen by 0-infinity people.
If I understand what you are saying, I think I might be able to give a personal anecdote to show that most definitely, some "friction" is required to get people organizing and socializing as they traditionally would (outside of social media), and why social media isn't enough:
In October 2017, my area was hit by the worst wildfires in California history. For days, there was no power, there was no internet (mobile or otherwise), there was no 911. We had AM/FM radio and our neighbors.
For all intents and purposes, we were back in the early 1900s.
What happened? Well, people in my neighborhood (neighbors) who were all on edge, and who had largely only ever interacted via social media (Nextdoor and Facebook), suddenly found that social media wasn't enough.
Within hours, people were organizing, talking, and sharing information in the streets. I met most of my neighbors that week we had to go without internet access, people I had lived next to for 5 years but had never had a conversation with. Literally everyone kept to themselves and used Nextdoor as a proxy for traditional interaction.
Also, social media became toxic during this event: false information was spread rapidly in unofficial groups, evacuation orders that were never issued were reported as fact, etc. Our local PD ended up putting out hourly updates, around the clock, to stem the flow of bad information.
TL;DR: During the October 2017 wildfires, power and internet were out for days. The "friction" of this event (plus limited access) showed me that people in crisis, or who need to organize, can't do so via social media alone. Inevitably, small groups of neighbors organized physically to help each other out, and the disaster was the catalyst for this.
Edit 1 - Added notes on bad parts of social media during natural disaster.
PS: Apologies for the length. This was a traumatic event, and verbalizing this here has been therapeutic.
I didn't read it that way. Organizing a union or a political or social movement occurs through deliberate, thoughtful effort. I took his comment to mean something along these lines: mobs (IRL or digital) are emotion-driven and feed off of (or are even dependent upon) instantaneous, omni-directional communication. Maybe adding friction to communication has historically had the benefit of reducing mob tendencies. But friction does not mean prevented or blocked.
I haven't thought any of this over myself, but if I supported this idea, I'd offer that friction, in reducing mob tendencies, might actually make more room for the types of communications that promote rational democratic ideas.
Basically, I think you read way more into his idea of friction than was there.
The ultimate problem I have with this idea is that it only makes creating structures slow, not actually taking action. (We lost the kind of friction that would slow down the latter around the time TV and radio appeared.) So even without social media we can have hasty, knee-jerk extreme reactions to pretty much anything, they just have to go through the established structure.
Yup. The way I see this is more like adding inertia to a control system: it makes the system a bit more sluggish, but it can prevent it from oscillating out of control.
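For the curious, the control-systems analogy can be made literal with a first-order low-pass filter, which adds exactly this kind of inertia. The numbers below are arbitrary, just to show a spike getting damped:

```python
def low_pass(signal, alpha=0.2):
    """First-order low-pass filter: out[t] = alpha*in[t] + (1-alpha)*out[t-1].
    Smaller alpha means more inertia: more friction, slower response."""
    out = []
    prev = 0.0
    for x in signal:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out


# A sudden spike in "outrage" comes through damped rather than full-strength.
spike = [0, 0, 10, 0, 0]
print([round(v, 2) for v in low_pass(spike)])  # [0.0, 0.0, 2.0, 1.6, 1.28]
```

The spike's peak of 10 is attenuated to 2 and smeared over several steps, which is the "sluggish but stable" behavior the analogy is reaching for.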
Interesting, I would agree with the conclusion without the intermediary.
I don’t trust the press to do a better job of this than anyone else. The press is made of people and subject to the same flaws and desire for power — both to campaign for the freedom to do certain things, and also to restrict other things (I’m thinking of UK issues like fox hunting and one particular moral panic about the drug MDMA that turned out to be a water overdose).
That said, like the two famous Churchill quotes on democracy, I’ll say both that the best argument against democracy is a five-minute conversation with the average voter, yet also that democracy is the worst form of government except for all the others. I can see democracy’s flaws, but not how to fix them. I wouldn’t even want to be in a benevolent dictatorship if I was running it.
I certainly don't trust the press to do a better job of this than anyone else, and they definitely don't do a great job of encouraging deliberation and slowness to act - if anything the exact opposite. This is more of a practical observation of how things actually are and what the actual end result of this would look like. There hasn't been the same kind of push to stamp down on hasty, mob-like action backed by the press as there has been to restrict social media and direct unmediated communication, and I can't see any such push going anywhere given the amount of media pushback that would result.
(Also, to be fair, I should probably have extended this to other powerful institutions and organisations with the ability to mediate what kinds of activism can and cannot happen, not just the press. Meant to but it didn't make it into my original comment.)
>What you're basically saying, I think, is that there shouldn't be any easy way for ordinary people to communicate directly with each other and organise politically.
This isn't an either/or.
There's a world of possibility you're missing.
Like, people should be able to freely communicate online, but maybe not on platforms that psychologically exploit them for ad revenue (and other, potentially nefarious, aims should the wrong person get hands on the levers.)
Facebook is designed to incentivize vitriol and dramatic interactions and actively prevents uniting around common interests. Facebook is a massive obstacle to robust political organization.
No, they've always been able to do what you say, even before these apps. The issue is the speed at which it happens, which turns ordinary, sensible people into members of a mob. That's clearly a problem.
>That certainly does seem to be the underlying subtext of a lot of the recent criticism of Facebook. It's also an astounding line of thinking, because it basically says that it's politically unacceptable, even a danger to democracy
I noticed this too. I presumed it's just the media reacting to its declining influence and facebook's increasing power over what we see.
I've also noticed a lot of hand-wringing about "facebook fake news" and the importance of "responsible journalism" from the likes of the New York Times and Washington Post, as if none of us remember them (among other things) vigorously beating the Iraq war drum with equally fake news in 2003.
On the one hand they've got a point. On the other hand it should be just about anybody else except them making that point.
Gustave Le Bon, the prominent French psychologist, has a copious amount of writing on the concept of mobs and their behaviour:
A crowd thinks in images, and the image itself calls up a series of other images, having no logical connection with the first... A crowd scarcely distinguishes between the subjective and the objective. It accepts as real the images invoked in its mind, though they most often have only a very distant relation with the observed facts... Crowds being only capable of thinking in images are only to be impressed by images.
Arguably, mobs have gotten a bad rap.[1] Bond argues that mobs end up being the way we see them on video largely because of how they are treated (ringed by armored cops with shields and bombarded with tear gas, say).
FedEx, firefighting teams, the Hurricane Harvey response[2], and soldiers in battle offer other examples of people instantly communicating with one another.
I don't have a brief for Facebook, and it does seem that what we (perhaps falsely) think of as "mob rule" is more prevalent in social media than face-to-face.
> We want friction in communicating. Not only do we want it, we have to have it.
I am uncomfortable with this, but I find myself agreeing. All the "problems" with Twitter stem from liking low-friction communication in theory, but hating it in practice.
I think that in some way it is workable, but people approach it in a way that sets everyone up for failure. People want their phone to ding and then make them feel good because their friends are nice to them. Twitter is not that. Twitter can never be that. Twitter is the internet. If you apply the last three decades of common knowledge about interacting on the internet, it's fine. But that reduces the utility (and therefore engagement) of Twitter, so the marketed idea has to be that Twitter is something more.
I understand the discomfort; I feel it myself. However, I believe that friction and added delays make it more likely that people engage System 2 thinking (slow).
When communication is frictionless, we engage more of our System 1 (fast). See: drunk arguments.
Yeah, but at the same time, drunk arguments kind of prove my case too. Drunk arguments are fun, but they're their own thing. People worth knowing know how to have a drunk argument and still get along with everyone during and after it.
It's an abnormal interaction, but as long as the participants know the different ground rules and constraints, you'll be fine. If you use Twitter like it's Facebook, it's not going to be fun.
> We want friction in communicating. Not only do we want it, we have to have it.
  This program posts news to thousands of machines throughout the entire
  civilized world. Your message will cost the net hundreds if not
  thousands of dollars to send everywhere. Please be sure you know what
  you are doing.

  Are you absolutely sure that you want to do this? [ny]
After reading his books Deep Work and So Good They Can’t Ignore You, I feel like Cal Newport has become sort of a guru for me: not pure computer science, but the larger view of how to live a good life.
I am sharing this article with friends and family to maybe help them understand why I don’t want to use social media, rather, I want to talk on the phone, email directly, and travel to see people.
I manage a machine learning/AI team so I am not against technology, but technology truly needs to serve human needs.
Well, Zuckerberg didn’t create the thing because he had some deep belief in high modernism or connecting people. People lie to themselves about their motivations. That’s a messy human thing too.
Am I the only one that remembers that back in, oh 2006ish, one of the core features of Facebook was a hot-or-not picture rating game? Or that one of the other big features was "poking" people?
Whatever Facebook has become, it started out as a MySpace clone, without custom CSS, created by some Ivy League young men. Mythologizing it into some world-changing, higher purpose origin story is asinine.
Agreed. I have a similar take on people who actually get so worked up by something they read on the internet as to harm others or themselves. Before 2006, the response to someone who believed something truly, obviously false from the internet was to laugh: “Wait, you actually believed something you read on the internet?!”
I remember overhearing people laughing at how seriously some people took the relationship status posted on Facebook. People would put all manner of things on FB as one big joke. Like saying they were in a complicated relationship with their clearly platonic best friend.
Back in 2006, the web was still largely something for nerds. The average person wasn't putting in the same time online as, for example, an MMO player, whereas today the average user is online just as much or more than an "early adopter". Average people would have checked the weather, gotten directions somewhere, maybe scrolled through an article or two, checked sports scores, etc. It was always something shallow that could be handled quickly. So yeah, making fun of someone for believing Internet trash was commonplace, because it was barely a source for good information.
These days, the web is basically an essential utility to participate in society. Just look at how many people are walking down the sidewalk with their face glued to their phones. Companies have tried to leverage this connectivity to spread information. Paper news is a dying industry, because they've all gone digital. It makes sense for people to get worked up these days. Communication via the internet (or at least digitally) is the de facto method. Teachers and professors used to chastise Wikipedia since anyone could edit it. However a lot of people realize that Wikipedia's citation system makes it a great research starting point. Connectivity is ubiquitous.
This isn't the same Internet culture from 2006, and it's odd to treat it like it is.
It's not that people can't lie on the Internet. Back then, it was more or less expected that whatever was written on the Internet was no more truthful than someone telling a story. One of the major differences was that people weren't necessarily tied to their online personas like we are today.
What I mean to say is that the Internet is currently more closely linked to reality than it was back then. Digital identities aren't necessarily a separate entity from who we are. It may be a version of ourselves that we want to represent, but they tend to represent a part of us. This is why you may not have cared if some dipshit harassed you online back then. The internet was a mere source of entertainment. Now that it's an arguable necessity, it's no longer just an avatar getting attacked or lied to; it's the person behind the keyboard.
I was a relatively early adopter to Facebook, though not Harvard- or Ivy League-early. It didn't have a hot-or-not style rating at that time. Poking was a feature, but I'm not sure it was ever that popular.
It's a site that predates Facebook: Facemash "used photos compiled from the online facebooks of nine Houses, placing two next to each other at a time and asking users to choose the 'hotter' person".
It was definitely still part of Facebook when it started rolling out beyond colleges to high school students around 2005/2006. I remember people spending way too much time rating randos in study halls back then.
Are you sure that wasn't hotornot.com or something similar? I've been on Facebook since back when it was still segmented by college (connections with people at other colleges were clunky), and I don't recall it ever having a built-in "hot or not" feature.
I think I recall what he might be talking about. I remember several third party apps within Facebook that served a similar function being popular back in the early days of Facebook. It's totally reasonable that someone might fuzzily confuse one of those for core features after a decade or so.
I agree with the initial premise, origin story more likely was nerd bros desperately wanted to get rich and get laid. Connecting the world only came around when they were told they couldn't put that into their business plan template as their mission statement.
But I have a feeling you are confusing things and timelines. Perhaps you are remembering the actual hot or not site that did just what you remember while thinking of the facemash or whatever that Zuckerberg made prior to FB that we probably all learned about when we watched The Social Network.
People change, though. The human condition is one of constantly justifying and explaining your actions and coming up with a story to arrange everything into. I think the story of Facebook as an "idealist" company is one that Zuckerberg and friends have genuinely taken to heart.
The fact that people are taking this memo at face value is insane. Facebook is obviously much more interested in making money off of people's data through advertising than to connect people. Further, it's dangerous to even rely on such a centralized system for so much of our communication. Especially when it's an organization with an astounding amount of power and money.
Again with blaming Facebook and Google. Why not blame ourselves?
We aren't acting as adults: we can't look at something shiny and say, "I want it, but it would be bad for me, so I'll leave it alone." At least some of the dangers of these things are obvious: unnatural levels and kinds of sharing, contrived, manipulative reactions, vast potential for privacy violation and misuse of data, and tons of time lost without compensation of any kind. Why don't we all refuse?
I didn't, and I should have. I've recently dropped Facebook entirely and am paring back my reliance on Google. Looking through my Facebook data download, I'm shocked at my foolishness, especially in the first couple years: I was old enough to know better! I hope I've learned my lesson, and that others do too, before we seriously mess up our society --- if we haven't already.
Adults aren't infallible nor manipulation resistant just because they are adults.
It's true that we can do better but criminalising semi-aware victims doesn't help. Bias and gamification and manipulation and curiosity and herd mentality are a thing.
I agree. They claim that the Boz memo was effectively a straw-man argument. However, having read the memo in full, the fact that they subsequently deleted it does little to dispel the appearance of impropriety around that claim. As an organization, I don't believe Facebook had any corporate values outlining its position either.
This is not so different from the arguments behind what motivated the Unabomber. One key difference is that Ted Kaczynski saw it as necessarily political.
People should read Kaczynski's manifesto. It's rough at parts, but it's really an interesting perspective.
Why is the future optimistic by default? Given the way humanity is progressing, do you see any reason for hope? Do you disagree that we are in a technological runaway?
No, but it's the same argument for something like SpaceX. A quick analysis says being optimistic makes the most sense because it has the best outcomes in all situations.
If we're optimistic and correct, we're building the things that will be useful in the future. We know we have to get off the planet to survive as a species, so we have to hope that we can do so.
If we're optimistic and wrong, we're distracting ourselves until we die. Pessimistic and correct, we wait for death and die.
Pessimistic and incorrect, we watch the future run away from us while we argue over what remains on the planet to be used by the technology we stagnated in. I don't want to know what 2050 looks like if we're still dependent on oil.
The worst case only exists if we're pessimistic as a species. The best case only exists if we're optimistic.
And I'm saying we're fallible. We need mechanisms to avoid fatalism when looking at a downward slope, because history says we don't actually know how things will turn out, no matter how obvious they seem.
I read it long ago and came to the conclusion that he was essentially right with how he framed the problem but the solution he picked out of the two alternatives was the wrong one. Running away from technological progress into the wilderness somewhere might be good for you but the only long term solution that is good for everyone is to alter humanity to be better adapted to its new environment.
Though there are probably a multitude of ways of doing it, the two that stick out as most likely / agreeable to all are Gattaca and AI + Cyborg. Though I could also see strong AI either just replacing humanity[0] or simulating / uploading the minds of the good humans and merging with them.
[0] I'm not convinced that this would be a bad thing; a hypothetical AI that comes with all of the benefits of humanity and none of the rape or war might just be better, ethically, in a global sense.
From this perspective, the user is merely a pawn in the game of revenue projections and market expectations.
I've been in the IT/software game since the mid-80s, when, fresh out of engineering college, I started my first software company writing add-ons to BBSs in C, using Btrieve as our database. I'm not sure a "user" has ever been anything but...
I respect your profit orientation, but it's disingenuous to suggest it's the only path.
Next door to you, the folks in what we're currently calling the FLOSS community were trying to arm and empower those same users, instead of constraining them.
It's funny: even before the Cambridge Analytica scandal broke, I gave a presentation at a local meetup comparing Zuckerberg to Escobar. The title is more attention-bait than anything, but I found it especially relevant in current times.
Wasn't that memo a rhetorical device by the author to raise that kind of opinion (one he does not agree with) as a conversation point inside Facebook?
I dislike Facebook as much as the next person, but being willing to write things in that style to get discussions going sounds really healthy to me.
I have the same thought quite often when Google I/O is happening.
It was, and still is, typical to be shown something cool and useless that works perfectly in San Francisco, and that's it.
I think only a year or three ago they started talking about light versions of Google services, so that countries like India, or other areas with low bandwidth, can use them properly.
No shit, Sherlock.
Android got bigger and fatter, and then after ages they announced Android Go.
Nexus Q? "This device has a high-end CPU, and it's only $300," and it should be in every room? Seriously?
The mindset of a Silicon Valley developer who has everything at hand, trusts his or her environment, and so on, is a very particular experience. I would compare it to a university: you work and live with your pals. A lot of the usual sources of conflict are simply gone: no one has to clean up, no one has to cook meals, there is probably no theft, and everyone earns more than enough money.
Then there is Google I/O, with protesters outside.
While Google has a huge impact, Facebook does only data handling. And yet it carries a huge responsibility, one that arguably shouldn't even be allowed to exist. But who are the people who build it? They are anyone and no one. You don't need to pass any ethical or moral test to become part of it.
Disturbing High Modernism? More like disturbing high-mindedness. The author here is making claims that can be summarized as: "I don't like Facebook, and it's different from how we've usually done things, therefore we need to get rid of Facebook-like services."
The author, in other words, is trying to argue that, because they think other people can't handle Facebook's "frictionless communication", no one should be able to use it, because it'll do something bad to society. What that bad thing is isn't clearly specified; instead the author only claims that "tribalism, authoritarianism, extremism, disinformation, and hyperbolic outrage" have increased on these platforms, and implies that this outweighs the positives the platforms have also provided, or that it is a change relative to how a world without Facebook would operate.
The author makes no concrete or quantifiable argument in this piece, only parroting the opinions of others they consider authorities. This text is the same sort of technopanic rhetoric that luddites have used to oppose things like the printing press and electricity. We can argue about the ethics behind Facebook's method of funding itself, but the author here is conflating that with an argument against free communication.
> What that bad thing is isn't clearly specified, instead the author only claims that "tribalism, authoritarianism, extremism, disinformation, and hyperbolic outrage" has increased on these platforms
Corporations do not want to legislate, let alone enforce, more of humanity’s behavior than they need to. They have better things to do. The universe isn’t going to dent itself. But because they are outpacing traditional governments in their ability to predict & shape human behavior, they walk right into stupidly impossible situations over and over again. Software isn’t eating the world so much as larping it.
The governance problem is not about dealing with governments as such, but about accidentally adopting or creating arenas of human behavior that you then have to govern, with “externalities” that don’t remain external. Starting a ride-sharing service? Congratulations, you’re now an urban planner. Want to disrupt shipping logistics? Get ready to do your part to curb terrorism and slavery.
The problem of governance arises wherever “move fast and break things” runs right into an older saying: “you broke it, you bought it”.
Sometimes things blow up squarely on your turf. Think about what it takes to enforce a social network “real names” policy across hundreds of cultures. One account per person, one person per account, authentic names as used in real life. Zuck’s buddy Calvin can post pictures of himself holding pounds of weed but he can’t use the name he’s known by, Snoop Dogg.
This reminds me of the religion of "dataism" found in Yuval Noah Harari's Homo Deus. We're moving towards a world where the free flow of information is valued above all else.
This religion was foisted upon us... I think we need to consider more carefully whether we choose to accept it.
What's your name / anywhere I can read more from you? Your HN comments seem pretty interesting. Feel free to send me an email arikrockefeller@gmail.com
The quote from the memo that really irritates me is
> The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products.
This is such a load of crap. The natural state of the world is actually very connected (whether we consciously assent to it or not), and specifically, humans are one of the most social species. Humans have deep societal and psychological needs for interpersonal interaction.
What Facebook does through "engagement" is replace richly rewarding and deeply needed face-to-face interaction with a veneer of connection. Actually, Facebook's effects are even more insidious: it addicts us to a "feeling" of connectedness with dopamine hits, which prevents us from even realizing we aren't getting what we need.
Facebook is not a community; it is software. Software can enhance what exists in the real world (i.e., real community), but it cannot replace it. Zuckerberg's new world order of community is a mirage, and cults have been built on lesser falsehoods. The article's link between that mirage and the modernism that gave rise to communism is not mere theater.
“Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.”
Kyle Kallgren's incredibly insightful discussion[1] of sci-fi from Frankenstein to Black Mirror explained one of the mechanisms that create wars out of too much interconnection: metaphor shear.
>> That feeling all users experience when you realize the metaphor you are working in is bogus. When the computer fails you and you remember that there are a hundred translations between input and output. Codes and translations we don't have the time or patience to do ourselves. Intellectual labor that we've surrendered to a device.
The map is not the territory; the force-amplification benefits of technology also include an imperfect model of reality.
> the poor Babel fish
From the same video essay:
>> The joke at the center of Douglas Adams' Hitchhiker's Guide to the Galaxy is about metaphor shear. The answer to an important question lost on its long journey from input to output. A computer glitch so huge, so strange and so embarrassing that its programmers have to make a computer the size of a planet to file a bug report.
>> The most advanced software in the world of Black Mirror is people. So many of Brooker's stories use perfect copies of people rendered as software. Easy to download, and very easy to abuse. Of course, this technology already exists. There are already other people on our devices, rendered as text and avatars. Individual contact abstracted through one hundred layers of metaphor shear. Sometimes, that shear makes it too easy to commit inhuman acts.
Cells have membranes for a reason. Without a membrane every random chemical contaminant will enter the cell and disrupt its metabolic function.
Your CPU has a case for a reason. It won't work very well if you pop it open and touch its surface. Computers also don't work that well if they are not electrically isolated or grounded from things like electrostatic discharge.
Networks have firewalls and/or authentication mechanisms for a reason. Without them they get compromised by black hat hackers and malware.
It's not that this kind of "high modernism" denies human nature. It's that it denies something fundamental to life itself and to the maintenance of any complex system: boundaries. No complex structure can persist without boundaries, usually multiple nested levels of them. Removing all boundaries just causes things to collapse down to a lower level of complexity and structure.
Interesting. Going along with the biological metaphor, maybe what is needed is (a) redefining the boundaries and cohesion of smaller units of organization (e.g. local communities) and (b) a more organized circulatory system, versus the kind of directionless diffusion we have in current social media.
One of the other big criticisms of Facebook and other social networks is that they're "filter bubbles" which make sure people don't see any information or viewpoints from outside their own social circles and perspectives. The main unifying aspect that the "filter bubble" criticism and the "connecting people" criticism seem to have in common is that both involve Facebook taking away some control of the culture from the press, the media, and other powerful groups and giving it to the populace; other than that, they should be contradictory.
> The main unifying aspect that the "filter bubble" criticism and the "connecting people" criticism seem to have in common is that both involve Facebook taking away some control of the culture from the press, the media, and other powerful groups and giving it to the populace
No, it's giving it to a new powerful group, specifically, Facebook. Just as Stalin stated, “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this — who will count the votes, and how,” so, too, is it true of social media; power doesn't lie with the people who can press “Like” (etc.), it lies with the people who select and continuously tweak based on observed effect the algorithm by which Likes and similar actions are used to shape the flow of information to and between users.
>power doesn't lie with the people who can press “Like” (etc.), it lies with the people who select and continuously tweak based on observed effect the algorithm by which Likes and similar actions are used to shape the flow of information to and between users.
Interesting. So no matter how much apparent power people gain by connecting politically on social media, the "(wo)man behind the curtain" who controls the platform and its algorithm will by default always be more powerful?
Yes, if your power comes from someone else and continues only as long as they choose to maintain it, the person directing power to you has the greater power.
Anyway, while I agree with api's comment about losing boundaries (or more accurately: having to find a new way of setting them with the improved ease of communication and tracking), I think that blaming the Babel fish is a bit of a "shooting the messenger, not the message" problem. Removing barriers to communication also reduces the ability to ignore things that have to be addressed.
Well, yes: the printing press ultimately caused Europe to end up in a hundred years of sectarian warfare that killed millions. I'm not eager to repeat that.
Well, no: the printing press didn't ultimately cause that. It enabled easier communication; in fact, it arguably democratised mass communication by making it so cheap. It was what was being communicated that caused the strife that led to warfare.
Part of that must have been the "fake news" of its day, like [blood libel][0], but that already existed before the printing press. The other part was a reduced ability to hush up bad things. Addressing that ultimately led to major upheavals of existing power structures, and a lot of bloodshed.
For me, a more worrisome aspect of the internet is its homogenising effect, as described by McLuhan in The Gutenberg Galaxy[1]:
> McLuhan studies the emergence of what he calls Gutenberg Man, the subject produced by the change of consciousness wrought by the advent of the printed book. Apropos of his axiom, "The medium is the message," McLuhan argues that technologies are not simply inventions which people employ but are the means by which people are re-invented. The invention of movable type was the decisive moment in the change from a culture in which all the senses partook of a common interplay, to a tyranny of the visual.
> He also argued that the development of the printing press led to the creation of nationalism, dualism, domination of rationalism, automatisation of scientific research, uniformation and standardisation of culture and alienation of individuals.
> Movable type, with its ability to reproduce texts accurately and swiftly, extended the drive toward homogeneity and repeatability already in evidence in the emergence of perspectival art and the exigencies of the single "point of view".
Another interesting tangent is (mis)communication's effect on the evolution of trust, but I don't feel like writing out a whole essay about that, so I'll refer to Nicky Case's "Evolution of Trust" and hope that the connection is obvious[2].
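To spell out the connection a little: Case's explorable shows, via an iterated prisoner's dilemma, that even a small rate of miscommunication erodes cooperation between reciprocating players. A rough sketch of that dynamic (my simplified model, not Case's exact one):

```python
import random

def cooperation_rate(noise, rounds=2000, seed=0):
    """Two tit-for-tat players in an iterated game. `noise` is the
    probability that a move is mis-perceived by the opponent
    (cooperate seen as defect, or vice versa). Returns the fraction
    of rounds with mutual cooperation (True = cooperate)."""
    rng = random.Random(seed)
    b_seen_by_a = True  # what A believes B did last round
    a_seen_by_b = True  # what B believes A did last round
    mutual = 0
    for _ in range(rounds):
        a = b_seen_by_a  # tit-for-tat: copy the perceived last move
        b = a_seen_by_b
        if a and b:
            mutual += 1
        # miscommunication can flip how each move is perceived
        a_seen_by_b = a if rng.random() >= noise else not a
        b_seen_by_a = b if rng.random() >= noise else not b
    return mutual / rounds

print(cooperation_rate(0.0))   # no noise: reciprocity sustains cooperation
print(cooperation_rate(0.05))  # a little noise: misreads echo back and forth
```

With zero noise the two copycats cooperate forever; with even a small noise rate, one misread defection gets reciprocated indefinitely, and mutual cooperation collapses.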
Somewhat unrelated, but it's quite aggravating to see the memo-writer (and, by extension, the author of this article) use "de facto" good instead of "ipso facto" good.
If you choose to use flowery expressions, particularly Latin ones, you should at least be accurate.
What does it mean to "connect everyone"? It seems like a meaningless statement. Not only is it not feasible in the literal sense, but it is wholly undesirable.
Anyone curious about the "High Modernism" referred to in this article should definitely read Scott Alexander's review of "Seeing Like a State" [0]. It sums up some of the attitudes of the High Modernists involved in things like architecture and urban planning in the 20th century:
>First, there can be no compromise with the existing infrastructure. It was designed by superstitious people who didn’t have architecture degrees, or at the very least got their architecture degrees in the past and so were insufficiently Modern. The more completely it is bulldozed to make way for the Glorious Future, the better.
>Second, human needs can be abstracted and calculated. A human needs X amount of food. A human needs X amount of water. A human needs X amount of light, and prefers to travel at X speed, and wants to live within X miles of the workplace. These needs are easily calculable by experiment, and a good city is the one built to satisfy these needs and ignore any competing frivolities.
>Third, the solution is the solution. It is universal. The rational design for Moscow is the same as the rational design for Paris is the same as the rational design for Chandigarh, India. As a corollary, all of these cities ought to look exactly the same. It is maybe permissible to adjust for obstacles like mountains or lakes. But only if you are on too short a budget to follow the rationally correct solution of leveling the mountain and draining the lake to make your city truly optimal.
>Fourth, all of the relevant rules should be explicitly determined by technocrats, then followed to the letter by their subordinates. Following these rules is better than trying to use your intuition, in the same way that using the laws of physics to calculate the heat from burning something is better than just trying to guess, or following an evidence-based clinical algorithm is better than just prescribing whatever you feel like.
>Fifth, there is nothing whatsoever to be gained or learned from the people involved (eg the city’s future citizens). You are a rational modern scientist with an architecture degree who has already calculated out the precise value for all relevant urban parameters. They are yokels who probably cannot even spell the word architecture, let alone usefully contribute to it. They probably make all of their decisions based on superstition or tradition or something, and their input should be ignored For Their Own Good.
The result being, of course, the creation of hideous planned cities that no one wanted to live in.
I think the author completely misunderstood the Boz quote. I interpret it as him questioning the core belief that connecting people no matter the consequences is right.
The cited fragment does suggest that interpretation, that the de facto goodness should be scrutinized. But the entire memo frames terrorism and bullying as unfortunate side-effects which must be tolerated for the sake of ever greater connectivity.
--- [Newport quote in brackets] ---
Andrew Bosworth
June 18, 2016
The Ugly
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people.
[That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.] It is perhaps the only area where the metrics do tell the true story as far as we are concerned.
That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.
That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.
I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.
In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.