Facebook’s counter-terrorism playbook comes into focus



When you look at Facebook, it doesn’t seem like Facebook is doing much. You post your update, scan your feed, and maybe like a few things. But behind the screen lies an immensely complex set of algorithms that determine what you see. And with nearly 1.9 billion users worldwide, some of that content inevitably includes violent images and extremist rhetoric. So Facebook is making its black box a bit less opaque, outlining the tools it will use to rid itself of extremism.

Beyond the steps you’d expect—working closely with law enforcement, consulting terrorism experts, improving content moderation—the report confirms that Facebook is using artificial intelligence to ferret out extremism. “We want Facebook to be a hostile place for terrorists,” two company execs said in a post, the first in a series called “Hard Questions.”

The report, by counterterrorism head Brian Fishman and global policy manager Monika Bickert, illustrates the challenges inherent in containing extremism, and shows that Facebook is still playing catch-up. Still, counterterrorism experts praised the report and said it makes clear that Facebook finally takes the problem seriously.

“I want to be skeptical, partly because I’m a skeptic by nature,” says Colin Clarke, a RAND counterterrorism expert who served in Afghanistan with Fishman. “But I want to give credit where it’s due.”

The report follows increasing calls by WIRED and others for Facebook to provide greater insight into its moderation efforts. “I wish it had happened a little earlier,” says Michael Kenney, a counterterrorism expert at the University of Pittsburgh. “People in the counter-terrorism community have been talking about this for many years now.”

Pieces in Place

The report aside, Clarke says the clearest indication that Facebook wants to get this right came last year when it hired Fishman to lead its counterterrorism efforts. Fishman has a deep understanding of the online strategies deployed by Al-Qaeda and ISIS, and has used that expertise to help governments and nonprofits combat extremism. His background in academia will help Facebook apply policies and technologies backed by strong research.

Artificial intelligence is one of those technologies. Although Facebook already deploys such tools against copyright infringement and child pornography, the company has until now kept mum on how it might use AI to fight extremism.

In their post, Fishman and Bickert say Facebook’s AI team trains its algorithms to identify extremist images and language, automatically delete new accounts created by banned users, and identify terrorist clusters.

“We know from studies of terrorists that they tend to radicalize and operate in clusters,” the authors write. “This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to ‘fan out’ to try to identify related material that may also support terrorism.”
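Facebook hasn’t published how that “fan out” works under the hood, but the description maps naturally onto a graph traversal: start from pages, groups, or profiles already flagged, walk outward a few hops through shared admins, memberships, and interactions, and queue whatever turns up for review. The sketch below is purely illustrative; the graph structure, node ids, and hop limit are assumptions, not anything Facebook has disclosed.

```python
from collections import deque

def fan_out(graph, flagged_seeds, max_hops=2):
    """Breadth-first 'fan out' from already-flagged nodes.

    graph: dict mapping a node id (page, group, or profile) to the ids
           it is connected to (shared admins, co-membership, reshares).
    flagged_seeds: node ids already identified as supporting terrorism.
    Returns candidate nodes for review, mapped to their hop distance.
    """
    candidates = {}
    queue = deque((seed, 0) for seed in flagged_seeds)
    seen = set(flagged_seeds)

    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                candidates[neighbor] = hops + 1  # closer hops, higher priority
                queue.append((neighbor, hops + 1))
    return candidates

# Hypothetical toy graph: "p1" is a flagged page connected to other nodes.
toy_graph = {
    "p1": ["u7", "g3"],
    "g3": ["u9", "u7"],
    "u9": ["p4"],
}
print(fan_out(toy_graph, ["p1"]))  # {'u7': 1, 'g3': 1, 'u9': 2}
```

Keeping the traversal shallow is what makes this workable in practice: each extra hop multiplies the candidate list that the human moderation team described below would still have to review.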

Clarke and Kenney applaud the effort. “The emphasis on clusters shows they’ve clearly studied the background, and these are people who know the empirical evidence on terrorism,” says Clarke. “The amount of time it would take for a human to sift through all this stuff is not feasible.”

Humans do sift through a lot of that stuff, though. Facebook’s global moderation team reviews flagged materials and blocks accounts when necessary. (The company plans to add 3,000 people to the team in coming months.) The grueling job pays little, and comes with psychological and physical risks. The Guardian reported that for one month last year, an error in Facebook’s code revealed the identities of moderators who had banned jihadists from the site. “That is a huge mistake,” Clarke says.

Some of the moderators fled their homes, fearful of retribution. Facebook said it will consider having moderators use administrative accounts rather than their personal profiles.

No Easy Fix

All of which underscores the many ways counterterrorism efforts can go awry. Facebook is hardly alone here—it works with other social media platforms and companies to tackle the problem, using shared technology that fingerprints extremist images and videos. But as these platforms crack down, they risk simply pushing the problem somewhere else.
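The shared fingerprinting technology itself isn’t public, but the general approach is well established: compute a compact perceptual hash of each known extremist image and check new uploads against a shared database of those hashes, so a re-upload matches even after resizing or light re-encoding. Below is a minimal sketch of that idea using the open-source Pillow and imagehash libraries as stand-ins; the file names, distance threshold, and hash list are hypothetical.

```python
# Minimal sketch of hash-based fingerprint matching (pip install Pillow imagehash).
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes for known extremist images,
# shared across participating platforms.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["known_1.jpg", "known_2.jpg"]  # placeholder file names
]

def matches_known_content(path, max_distance=8):
    """Return True if the image's perceptual hash falls within a small
    Hamming distance of any fingerprint in the shared database."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in known_hashes)

if matches_known_content("new_upload.jpg"):
    print("Flag for removal or human review")
```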

“There is a potential cost: as more and more people with bad intentions get pushed to the dark web, does that lower our capacity to follow and monitor and potentially disrupt their activities?” Kenney says.

Facebook understands the risk, because it owns the encrypted chat application WhatsApp—a tool terrorists can use to communicate securely. “Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies,” Fishman and Bickert wrote.

And in the grand scheme of things, online extremism is only one part of the broader story of terrorism, which spreads mostly through face-to-face interactions.

All of which is another way of saying: It’s complicated. And every complication reveals a new complication. Will these efforts be enough to stop terrorism from spreading online? No. Will there be mistakes? Absolutely. But Facebook is addressing the problem. And it’s finally explaining how.