Social Media Companies Scan For Potential Terrorists

Following the politically motivated shooting in Pittsburgh and the mailing of pipe bombs to political officials and journalists across the country, public outcry has risen against social-media companies. 

Suspected pipe-bomber Cesar Sayoc and Pittsburgh shooting suspect Robert Bowers used various platforms to post content indicating their potential for ideological violence. 

Some have asked why social-media companies didn’t do more, sooner, to stop the threat.

It’s a question that Facebook, Twitter, YouTube, and others have faced before, going back to 2014 when the problem was content from extremists of a different sort: violent jihadist groups such as ISIS.

Since then, many social-media giants have developed technological and policy-based ways to help prevent extremist content from proliferating across their sites and even to help law enforcement better track potential violent actors. But those efforts were aimed at foreign Islamic extremists, not domestic threats.

Monika Bickert, Facebook’s head of product policy and counterterrorism, has helped her company move further along in this regard than some others.

What role did Facebook play in the events that unfolded last week? A limited one: Robert Bowers, the charged Pittsburgh shooter, kept his threats and violent posts to a relatively obscure right-wing platform called Gab. Soon after he joined Gab in January, he began to post and spread images and content threatening Jews.

Cesar Sayoc had a Facebook profile that he used to advance conspiracy theories. He also threatened people on Twitter, such as political analyst Rochelle Ritchie. Ritchie reported the threats to the platform. Last weekend, the company apologized for failing to act sooner.

Although Sayoc had a small presence on Facebook, the company might still have had a lot of information about him. Facebook monitors Islamist extremist rhetoric and content on sites other than its own. It employs contractors to watch extremist chat rooms and other venues so they can be ready to identify and tag threatening language, images, and content when it surfaces on Facebook.

Erin Marie Saltman, a Facebook policy manager who oversees counterterrorism efforts in Europe, Africa, and the Middle East, disclosed this at the GLOBSEC security summit in May.

“There are a lot of people in other parts of the world that are not Facebook and not government,” Saltman said. “They are intel providers that sit and squat on a lot of these other sites and they tell us, in as close to real time as possible, when bad content is being released and so we know about it as soon as possible. 

So when the Abu Bakr al-Baghdadi speech was released a little while ago, and it wasn’t in video form, just audio, we were able to hash it before it started hitting our site.”

Predicting potentially violent behavior requires as much digitally collected data as possible, precisely the sort of data that intel vendors watching sites like Gab might notice. But when Defense One asked Facebook representatives whether they monitor sites like Gab for such content, or potential indicators of violence, they declined to say.

“As Erin mentioned, we work with intel and research firms who monitor many platforms, but we prefer not to disclose further details as bad actors actively work to circumvent our detection techniques,” a Facebook spokesperson said. 

“Since the bombing attempts, and the shooting in Pittsburgh, teams across our company have been monitoring developments in real time to understand both situations and how they relate to content on our site,” they added.

In 2011, J. Reid Meloy, a forensic psychologist and consultant to the FBI’s Behavioral Analysis Units at Quantico, identified eight behaviors that can predict lone-wolf attacks based on ideological extremism. Sayoc and Bowers exhibited several of them across multiple social-media sites.

If social-media companies could search for these subtle behavioral indicators of a potentially dangerous person, such as fixation or obsession, in the context of overtly troubling posts and comments such as direct threats, patterns could emerge that predict an individual’s behavior.

Cross-platform analysis of individuals’ data residue is the basis of contemporary micro-targeted advertising. It works to predict whether a person might be open to a specific product pitch, but it can equally be used to predict potentially harmful behavior.

Facebook is already using AI to spot suicidal tendencies signaled by text patterns. The same algorithms could be applied to spot violent extremism, as could network analysis and even semantic text analysis. 
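Facebook has not published those models, but the underlying idea of classifying short texts by their word patterns can be sketched with a toy naive Bayes classifier. Everything below (the miniature training set, the labels, the function names) is illustrative and invented for this sketch, not Facebook's actual system, which would be trained on vastly more data with far more sophisticated features.

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and a count of documents per label (used as the prior)."""
    counts = {}          # label -> Counter of word occurrences
    totals = Counter()   # label -> number of training documents
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability under naive Bayes
    with add-one (Laplace) smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        # Log prior: how common this label is in the training set.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(word_counts.values())
        for word in text.lower().split():
            # Smoothed log likelihood of each word given the label.
            score += math.log((word_counts[word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy, invented training data for illustration only.
samples = [
    ("we will attack them all", "threat"),
    ("they deserve to die", "threat"),
    ("going to hurt you", "threat"),
    ("lovely weather for a walk", "benign"),
    ("see you at dinner tonight", "benign"),
    ("great game last night", "benign"),
]
counts, totals = train(samples)
print(classify("i will hurt them", counts, totals))  # flags as "threat" on this toy data
```

The same scoring machinery, fed richer features (posting cadence, network ties, image hashes), is what allows systems trained on one behavioral signal, like suicidal ideation, to be retargeted at another, like violent extremism.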

That information, coupled with the identification of violent messages or threats spread on other sites, could go a long way toward predicting and preventing violent behavior and the posting of extremist content. And it already is being used that way, but mostly against violent Islamist behavior and content.

Consider the case of Demetrius Nathaniel Pitts, a Cleveland man recently charged with plotting a jihadist-inspired terrorist attack. Authorities monitored Pitts’s Facebook posts carefully after he commented on a photo of an al-Qaida training camp. 

His posts exhorted Muslims to learn how to operate firearms, posts that law-enforcement officials, speaking to USA Today, described as “disturbing.” But pages urging non-Muslims (or people who are not explicitly Muslim) to own and practice with firearms are common on Facebook.

“We continually enforce our Community Standards through a combination of technology, reports from our community, and human review. This includes our hate speech policy that prohibits content that attacks people based on their race, ethnicity, national origin, religious affiliations, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability,” said the spokesperson.

Following the public outcry against the proliferation of jihadist extremist messaging, Facebook and other sites adopted a technique called hashing: essentially, fingerprinting Islamist extremist content so it can be recognized as individuals try to spread it from one site to another. In 2016, Facebook executives led an effort to share data on hashed images across platforms.

“It creates the equivalent of a digital fingerprint so you can know when these things are coming up. We encourage that type of sharing, the hash sharing. Anybody using types of video, photo matching, would be able to use the hashes we are trying to share,” said Saltman.
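The “digital fingerprint” Saltman describes can be illustrated with a cryptographic hash: identical bytes always produce the same short digest, so platforms can share digests rather than the content itself. The sketch below is a deliberate simplification with invented function names; production matching systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and cropping, which a plain SHA-256 does not.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest that identifies this exact sequence of bytes."""
    return hashlib.sha256(content).hexdigest()

# A shared set of known-bad fingerprints, as exchanged between platforms.
shared_hashes = set()

def flag_known_content(upload: bytes) -> bool:
    """True if the upload matches a fingerprint another platform shared."""
    return fingerprint(upload) in shared_hashes

# One platform fingerprints a known extremist file and shares the hash...
known_bad = b"<bytes of a known extremist audio file>"
shared_hashes.add(fingerprint(known_bad))

# ...so another platform can block the identical file at upload time.
print(flag_known_content(known_bad))          # True
print(flag_known_content(b"unrelated clip"))  # False
```

The design choice matters: because only digests cross platform boundaries, companies never have to redistribute the offending content itself in order to cooperate.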

Could hashed images and data from accounts like Bowers’s and Sayoc’s be relevant to law enforcement? Potentially, but the practice of hash sharing doesn’t involve the government, said Saltman. Instead, she said, the goal was to make a “safe tech space” for technology platforms to use whatever tools they saw fit.

“This is a by-industry, for-industry effort; it doesn’t include government or NGOs. It’s really so we can create a safe space so that some of these smaller platforms that are really scared about talking outside of industry, and admitting you have a problem is step one, can come together in a safe tech space and start operationalising around some of this.”

In a conversation with New York Times reporters on Sunday, Gab founder Andrew Torba denied that he or any Gab employee should monitor content on the site.

“Twitter and other platforms police ‘hate speech’ as long as it isn’t against President Trump, white people, Christians, or minorities who have walked away from the Democratic Party,” he wrote. “This double standard does not exist on Gab.”

Source: Defense One
