The Internet’s democratizing power has been a blessing for individuals, activists, and small businesses around the world, but bad actors have turned those same advantages to their own ends. In the 1980s, white supremacists used electronic bulletin boards, and the mid-1990s saw the first pro-al-Qaeda website. Online counterterrorism measures are not new, but they must evolve urgently as digital platforms become central to our lives. Facebook’s counterterrorism team understands this urgency and acts accordingly.
Facebook defines terrorism as: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization to achieve a political, religious, or ideological aim.”
This broad definition covers religious extremists, violent separatists, white supremacists, militant environmental groups, and other actors with violent aims.
Facebook’s counterterrorism policies deliberately exclude governments of nation-states, which may legitimately use violence under certain circumstances. Content depicting or promoting state-sponsored violence, or pushing a political agenda through it, may still be removed under other policies, such as the graphic violence policy.
Facebook’s policies do not allow terrorists to use its services, and enforcement is active. Its newest detection technology uses algorithmic tools and focuses on ISIS, al-Qaeda, and their affiliates, the groups posing the broadest global threat.
Thanks to this technology, propaganda can now be detected quickly and at scale, with the detection systems backed by a counterterrorism team of 200 people.
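Facebook has not published the details of this detection pipeline, but one technique it has publicly described for catching known propaganda at scale is media hashing: fingerprinting previously removed images and comparing new uploads against that fingerprint database. Below is a minimal sketch in Python, assuming a simple average-hash fingerprint; the function names and the matching threshold are illustrative, not Facebook’s actual implementation.

```python
from PIL import Image  # Pillow, used only to decode and shrink images

HAMMING_THRESHOLD = 5  # illustrative: max differing bits still counted as a match

def average_hash(path: str) -> int:
    """Fingerprint an image: shrink it to 8x8 grayscale, then record which
    pixels are brighter than the mean as a 64-bit integer."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def is_known_propaganda(path: str, known_hashes: set[int]) -> bool:
    """Flag an upload whose fingerprint is within HAMMING_THRESHOLD bits of
    any fingerprint of previously removed content."""
    h = average_hash(path)
    return any(bin(h ^ k).count("1") <= HAMMING_THRESHOLD for k in known_hashes)
```

Hashing makes the check cheap enough to run on every upload: comparing 64-bit fingerprints costs almost nothing, and near-duplicate copies of a removed image still match even after minor edits.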
As a result, more and more terrorist content is being removed. In Q1 2018, Facebook took action on 1.9 million pieces of ISIS and al-Qaeda content, about twice as much as in the previous quarter, and 99% of it was found before any user reported it, through a combination of advanced technology and internal reviewers. In a small percentage of cases, users report an entire profile, Page, or Group; when the profile, Page, or Group does not violate Facebook’s policies as a whole, it is not removed, but the specific content breaching Facebook’s standards is removed promptly.
Because terrorist content uploaded to Facebook tends to get most of its attention soon after it appears, Facebook has prioritized identifying newly uploaded material. In Q1 2018, the median removal time for such newly uploaded content, whether surfaced by user reports or by Facebook’s own technology, was less than one minute.
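Facebook has not described how this prioritization is implemented. One plausible approach, sketched below under that assumption, is a review queue ordered so the newest uploads are examined first, since fresh content is where fast removal matters most; all names here are hypothetical.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    sort_key: float                          # negated upload time: newest first
    content_id: str = field(compare=False)   # identifier, ignored when ordering

queue: list[ReviewItem] = []

def enqueue(content_id: str, uploaded_at: float) -> None:
    # heapq is a min-heap, so negating the timestamp pops the newest upload first.
    heapq.heappush(queue, ReviewItem(-uploaded_at, content_id))

def next_for_review() -> str:
    return heapq.heappop(queue).content_id

enqueue("post-1", time.time() - 3600)  # uploaded an hour ago
enqueue("post-2", time.time())         # just uploaded, so reviewed first
assert next_for_review() == "post-2"
```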
Facebook also uses specialized techniques to find and remove older content, which helped remove 600,000 such pieces. In Q1 2018, this historically focused technology surfaced content that had been on the platform for 970 days and removed it for violating Facebook’s policies. Facebook recognizes that its current measures are not enough on their own and intends to keep advancing them to counter evolving threats.
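Facebook has not detailed how this historically focused technology works. A plausible reading, sketched here as an assumption, is a backfill job that re-scans archived content with the current detectors, so fingerprints learned today also catch uploads from years ago; the `store` interface below is hypothetical, and `is_known_propaganda` is the matcher from the earlier sketch.

```python
from datetime import datetime, timedelta, timezone

def backfill_scan(store, known_hashes: set[int], older_than_days: int = 30) -> None:
    """Re-check archived uploads against today's fingerprint database.

    `store` is a hypothetical interface yielding (content_id, path, uploaded_at)
    tuples for content older than a cutoff, and exposing a remove() method."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=older_than_days)
    for content_id, path, uploaded_at in store.iter_content(before=cutoff):
        if is_known_propaganda(path, known_hashes):
            store.remove(content_id, reason="terrorist propaganda (backfill match)")
```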