The ability of Big Tech companies to shut out one of the world’s loudest men, Donald Trump, is astonishing. The events of last week, including the mob storming the U.S. Capitol building, force us to ask: how should we regulate Big Tech?
In the Harvard Business Review article “How to Hold Social Media Accountable for Undermining Democracy”, Yael Eisenstat points out that simply silencing Donald Trump is not enough. That response “fails to address how millions of Americans have been drawn into conspiracy theories online”.
Eisenstat argues that social media companies actively curate content: they decide which speech to amplify, which groups to nudge users towards and connect people to, and whom to recommend conspiracy theories to. All to keep users engaged on the platform.
How do we address this?
Eisenstat recommends that we implement regulations and laws forcing social media companies to be transparent about their recommendation engines. Currently, there are no robust laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This is problematic. In some cases, algorithms amplify hate speech and recommend hate groups to users simply to keep them engaged on the platform.
If we insist on real transparency about what these recommendation engines are doing, and about how curation, amplification, and targeting actually happen, companies can be held accountable for how their recommendation engines contribute to the spread of hate and conspiracy theories.
Eisenstat argues that social media companies should be obligated to be more transparent about:
- how they amplify content,
- how their targeting tools work, and
- how they use the data collected on us.
Although I agree with Eisenstat that we need more transparency, the regulations must balance many competing values. The right to free speech is a fundamental ingredient of a properly functioning democracy. But hate speech should not be tolerated, including “dog whistles”. A “dog whistle” is a message whose true meaning is apparent only to a specific subset of people. Forcing the removal of such subtle, hidden messages demands an explanation. “Absent such explanations, any individual could stifle otherwise valid political speech by citing subliminal messages without having to justify that position.” (CHP v. City of Hamilton, 2018 ONSC 3690)
(Views are my own and do not represent the views of any organization. This article was originally posted on slaw.ca)