Tech companies have long argued they are not responsible for the content posted by others on their platforms. Under this view, a tech company is not responsible if someone posts threats to kill others or instructions for building a bomb or details on how to hack into government computers.
Companies have stood behind this principle, especially with regard to users posting defamatory content to online forums. They have argued that they are not responsible for that content and that it would be impossible for them to police everything posted on their forums.
Now, however, tech companies, including social media companies, are arguing that they can and do police all speech on their platforms. This implies that they do, in fact, control, and bear responsibility for, the speech on their platforms. These admissions are likely to surface in future defamation suits filed against online tech firms, which could find themselves liable for all types of infringing speech conducted on their platforms.
Tech companies can certainly condemn offensive speech. But censoring offensive speech puts tech companies into an area that may have legal ramifications.
Related, from the St. Louis Post-Dispatch:
“A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.” — Justice Anthony Kennedy
Should tech companies censor all “racist” speech? How do you define “racist” speech? Who gets to decide which speech is “racist” and which is not? The answers may sound intuitive, but they are not so simple.
The topic du jour, white supremacists using tech platforms to disseminate their message of hatred, is an easy case. Almost all of us find their messages vile and disgusting. But eventually someone, somewhere will find almost any speech offensive. Should it be regulated and censored? Who decides?
For example, I noted technical defects in the Affordable Care Act (ACA), also known as “ObamaCare.” These defects are so serious that, as of 2017, they have led to the collapse of the unsubsidized individual insurance market (about half the ACA market) throughout much of the United States. I began pointing out these problems in 2009 and 2010, based on a detailed analysis of the ACA, an understanding of economics, and my reading of the peer-reviewed literature on health economics and health policy.
For documenting these problems and fighting for fixes, I was personally labeled a racist by partisan enthusiasts who supported the ACA. I am not joking. Questioning a policy developed, in part, by a member of a minority group is, in their minds, a form of racism (this is a logical fallacy, but they were unable or unwilling to reason logically).
Under the tech companies’ new penchant for speech censorship, should my detailed comments and suggested fixes be censored? Not once does my document mention race, ethnicity, or culture, nor even imply such topics.
These parties claimed that pointing out problems (since confirmed as valid) and proposing solutions (several of which states have adopted) to fix the ACA was racism. In reality, they used a logical fallacy to associate informed analysis with racism, then deployed a form of argument and propaganda known as “name calling” to shut down speech. Their argument also rests on the illogical straw man form: rather than engage the points raised (see the linked paper), they misrepresent the position to create a “straw man” (the fiction that the author is a racist) and then attack the straw man instead.
In this manner, many are quick to shut down speech they do not wish to hear by throwing out the “racist” label (or whatever other label is du jour). So who gets to decide what is “racist” and what is not? Do you see the difficulty when false labels can be deployed haphazardly to shut down informed debate?
The answer to speech is more speech, not censorship and not violence. Censorship and violence do not lead to solutions; indeed, they seem to lead to more violence.