Cokie and Steven Roberts: Balancing censorship and responsibility

Published 12:00 am Saturday, March 23, 2019

By Cokie Roberts and Steven V. Roberts

After a white nationalist slaughtered 50 Muslims in New Zealand, Margaret Sullivan, media critic of The Washington Post, posed this question about the digital platforms used by the killer to spread his murderous message: “Where are the lines between censorship and responsibility?”

Those platforms — YouTube and Facebook, Twitter and Reddit — must now answer that question with clarity and candor, because their role in the massacre is undeniable. As Neal Mohan, YouTube’s chief product officer, told the Post: “This was a tragedy that was almost designed for the purpose of going viral.”

The shooter was, in effect, playing a deadly video game, live-streaming his attack while encouraging his followers to reproduce and repost the images of carnage faster than social-media platforms could remove them. The platforms tried; Facebook blocked more than 1 million instances of the 17-minute clip in the first 24 hours, but it was hopelessly outmanned. The shooter won the game — and the world heard his hateful cry.

The internet did not create white nationalism or anti-Muslim fervor. And digital tools are used every day for countless positive purposes. But as New Zealand damnably demonstrates, social-media platforms are highly vulnerable to corruption and abuse. “The internet is now the place where the seeds of extremism are planted and watered,” writes Kevin Roose, technology columnist for The New York Times.

Facebook, YouTube and the rest are not merely common carriers like the phone company, neutral pipes transmitting any and all information. They constantly make editorial and ethical decisions that influence what consumers are exposed to, so the question is how those decisions are made and what standards are used. What is the proper balance between responsibility and censorship?

As journalists who cherish the First Amendment, we always tilt against censorship. Social-media outlets — let alone the federal government — should not be the ultimate arbiter of what people know and learn. “There oughtta be a law” is the wrong answer to most problems, and certainly to this one.

One area where social-media companies must improve, however, is crisis management. Even ardent civil libertarians admit that when words and images present a “clear and present danger,” when they threaten to unleash immediate violence, society has an obligation to protect itself and contain that danger.

When the New Zealand shooter’s videos started cascading through the internet, platforms relied on a combination of artificial intelligence and human moderators to thwart their spread, and they failed miserably. Facebook didn’t even know the original video had been posted on its site until local police told the company about it.

Mohan, the YouTube executive, says a new copy of the video was uploaded every second, far too fast for the site to handle. “This incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done,” he conceded.

That work requires more investment and research, better-designed software and better-trained, better-paid human moderators. But crisis management is only a small part of the problem. A much deeper issue facing digital platforms is the way they encourage and enable radicalization online.

“There’s this other piece that we really need to start grappling with as a society,” Roose said on The Daily podcast, “which is that there’s an entire generation of people who have been exposed to radical extremist politics online, who have been fed a steady diet of this stuff.”

Here’s how it works: As users explore a topic, algorithms crafted by the platform suggest new videos that draw them deeper into “rabbit holes” of twisted and tendentious ideologies. The goal is profit. Keep viewers watching, increase the time they spend online and maximize ad revenue.
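As a purely illustrative sketch of that dynamic, the ranking rule can be imagined as a scoring function that favors whatever a model predicts will hold a viewer's attention longest, boosted when it resembles what the viewer just watched. The Python below is hypothetical; the names (Video, predicted_watch_time, recommend_next) are invented for the example and describe no platform's actual system.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float   # model's guess, in minutes, of how long this user will watch
    similarity_to_history: float  # 0 to 1: how closely it matches what the user just viewed

def recommend_next(candidates: list[Video], top_k: int = 3) -> list[Video]:
    # Toy engagement-maximizing ranker: the score rises with predicted watch time
    # and with similarity to prior viewing. Nothing in the score weighs accuracy
    # or harm; it optimizes only time spent on the site.
    scored = sorted(
        candidates,
        key=lambda v: v.predicted_watch_time * (1.0 + v.similarity_to_history),
        reverse=True,
    )
    return scored[:top_k]

if __name__ == "__main__":
    feed = [
        Video("Mainstream news recap", 4.0, 0.2),
        Video("Provocative commentary", 9.0, 0.7),
        Video("Fringe conspiracy deep-dive", 14.0, 0.9),
    ]
    for v in recommend_next(feed):
        print(v.title)

Even in this toy, the list that comes back is ordered by whatever is likeliest to keep the viewer watching; repeat that choice at every click and the suggestions drift toward the extreme, which is the rabbit hole the column describes.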

But this relentless pursuit of eyeballs and earnings has devastating side effects. Not only do users see and absorb increasingly extremist ideas, they bond online with others who are drawn into the same vortex of hate and violence.

“These platforms … are playing a pivotal role in how these extremist groups gather momentum and share their ideas and coalesce into real movements and grow,” says Roose.

Here’s where the balance between censorship and responsibility must swing toward responsibility. The digital platforms should revise the strategies that maximize ad revenue but carry a devastating cost: the radicalization of their consumers, who can then pose a threat to civil society.

If these platforms don’t act on their own, society will fight back in the form of onerous rules and regulations that restrict free speech. The only way to avoid censorship is to accept responsibility.

Steve and Cokie Roberts can be contacted at stevecokie@gmail.com.