I came across an interesting piece about cyber-bullying of gay, lesbian, bisexual and transgender teens on Facebook and the steps the social network was taking to combat it. This is nothing new. As long as there are people, there will be hate speech. It’s the human condition. As a Facebook representative correctly pointed out, the network is just as vulnerable to this condition as any other cross-section of society.
What I found interesting was the hate speech/free speech discussion: when does free speech become hate speech in social networks like Facebook? I was at first confused. Isn’t hate speech, for the most part, free speech? That is, unless it incites others to imminent violence, threatens the President of the United States, or falls within some other narrow exception. Why make the confusing distinction? The distinction was valid in this article because it referred specifically to Facebook and not the brick-and-mortar world. Facebook is a community that attempts to emulate 1st Amendment ideals of free speech, but that’s where it ends. It ends there because Facebook is a private company. As such, the 1st Amendment and traditional free speech values have no practical bearing. Free speech and hate speech are just terms of art within the network, to be adjusted by the network to best suit the interests of the network. Translation? Free speech and hate speech are whatever Facebook and other social networks say they are at any given moment.
So what are they? Two years ago, I debated this very issue with Facebook representatives in relation to Holocaust Denial. I felt that Facebook Groups promoting Holocaust Denial were in themselves “hateful content” as outlined in Facebook’s Terms of Service and should have been removed from the site. Facebook felt differently. I even presented my thoughts to a group of employees at Facebook’s corporate offices. While I did not agree with their position, I came to understand their reasoning: they wanted internal standards that let employees evaluate content without making value judgments about particular types of speech. Without such standards, Facebook employees assigned to these issues would be overwhelmed with disputes over content that one person finds “hateful” and another considers legitimate expression. They could not hire enough employees to handle those disagreements. As one employee put it, Facebook was looking for “binary certainty” in making decisions about “hateful content”.
There is the rub. What is the binary standard? To this day, to my knowledge, Facebook has never released or publicly stated how it evaluates “hateful content”. Where and how does it draw the line? I have an idea how they do it because I was privy to internal exchanges with their employees. Why not tell everyone? Why not some transparency? The same transparency many employees acknowledged was lacking when I spoke there. Two years later, not much has changed with regard to “free speech” and “hate speech”. The general user perception is that they are whatever Facebook says they are. At least in the brick-and-mortar world I can pull up the Constitution and Supreme Court opinions to guide me. That standard does not represent the beliefs of all Facebook users across the world, but for better or worse it is the one Facebook models itself on. It’s transparent. Emulate that aspect as well.