Facebook’s Shifting Attitude To Controversy

Would you tell Facebook you’re happy to see all the bared flesh it can show you? And that the more gratuitous violence it pumps into your News Feed the better?

Finding out where a person’s ‘line’ lies on viewing controversial types of content is now explicitly on Facebook’s product roadmap, as stated by CEO Mark Zuckerberg in a recent, lengthy blog post, not-so-humbly entitled Building Global Community.

Make no mistake, this is a huge shift from the one-size-fits-all ‘community standards’ Facebook has peddled for years, crashing into controversies of its own when, for example, it disappeared an iconic Vietnam War photograph of a naked child fleeing a napalm attack.

In the wordy essay, Zuckerberg generally tries to promote the grandiose notion that Facebook’s future role is to be the glue holding the fabric of global society together, even as he fails to flag the obvious paradox: that technology which helps amplify misinformation and prejudice might not be so great for social cohesion after all. Amid the grand vision, the Facebook CEO sketches out an impending change to community standards that will see the site actively ask users to set a ‘personal tolerance threshold’ for viewing various types of less-than-vanilla content.

On this Zuckerberg writes:

The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings.

We will periodically ask you these questions to increase participation and so you don’t need to dig around to find them. For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.

With a broader range of controls, content will only be taken down if it is more objectionable than the most permissive options allow. Within that range, content should simply not be shown to anyone whose personal controls suggest they would not want to see it, or at least they should see a warning first.

Although we will still block content based on standards and local laws, our hope is that this system of personal controls and democratic referenda should minimize restrictions on what we can share.
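To make the mechanics of what Zuckerberg describes concrete, here is a minimal sketch of how such per-user filtering might work. Everything in it is an assumption for illustration: the 0–10 severity scores, the category names, the platform-wide ceiling and the majority-vote default are invented, not taken from anything Facebook has published.

```python
# Hypothetical sketch of per-user content thresholds with a regional
# "referendum" default, as described in the quoted passage. All names
# and scales are invented for illustration.
from collections import Counter
from typing import Dict, List, Optional

PLATFORM_MAX = 8  # assumed ceiling: anything scored above this is removed for everyone

def regional_default(region_votes: Dict[str, List[int]]) -> Dict[str, int]:
    """Default threshold per category = the majority choice in the user's region."""
    return {cat: Counter(votes).most_common(1)[0][0]
            for cat, votes in region_votes.items()}

def decide(content_scores: Dict[str, int],
           user_thresholds: Optional[Dict[str, int]],
           region_defaults: Dict[str, int]) -> str:
    """Return 'remove', 'warn_or_hide' or 'show' for one piece of content."""
    thresholds = user_thresholds or region_defaults  # non-voters inherit the regional default
    if any(score > PLATFORM_MAX for score in content_scores.values()):
        return "remove"        # more objectionable than the most permissive option allows
    if any(score > thresholds.get(cat, PLATFORM_MAX)
           for cat, score in content_scores.items()):
        return "warn_or_hide"  # within range, but over this user's personal line
    return "show"

# Example: a user who never answered the questions gets their region's majority settings.
defaults = regional_default({"nudity": [3, 3, 7], "violence": [5, 5, 2]})
print(decide({"nudity": 6, "violence": 1}, None, defaults))  # -> warn_or_hide
```

Note what even this toy version makes visible: the only new signal the scheme needs from each user is exactly the per-category threshold data the rest of this article is concerned about.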

A subsequent paragraph caveats that Facebook’s in-house AI does not currently have the ability to automatically identify every type of (potentially) problematic content. Though the engineer in Zuck is apparently keeping the flame of possibility alive by declining to state the obvious: understanding the entire spectrum of possible human controversies would require a truly super-intelligent AI.

(Meanwhile, Facebook’s in-house algorithms have shown themselves hopeless at correctly identifying some pretty bald-faced fakery. And Zuckerberg is leaning on third-party fact-checking organizations, which do employ actual humans to separate truth from lies, to help fight the spread of Fake News on the platform, so set your expectations accordingly…)

“It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years,” is how Zuck frames Facebook’s challenge here.
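As a companion to the earlier sketch, here is the shape of the output such an AI would have to produce: a severity score per category for each item. The keyword lookup below is deliberately crude, a stand-in for the “major advances in AI” Zuckerberg says are still required; the lexicons and weights are invented for illustration.

```python
# Crude stand-in for the content-understanding AI: maps one piece of text
# to the per-category severity scores a threshold system would consume.
# Real systems would need trained multimodal models; this is illustrative only.
LEXICONS = {
    "profanity": ({"damn", "hell"}, 4),
    "violence": ({"fight", "blood", "gun"}, 7),
}

def score_text(text: str) -> dict:
    """Assign an invented severity score per category based on keyword hits."""
    words = set(text.lower().split())
    return {cat: weight if words & lexicon else 0
            for cat, (lexicon, weight) in LEXICONS.items()}

print(score_text("There was blood everywhere"))  # {'profanity': 0, 'violence': 7}
```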

The problem is that this AI framing, and indeed much else in the ~5,000-word post, is mostly misdirection.

The issue is not whether Facebook will be able to do what he suggests is its ultimate AI-powered goal (i.e. scan all user-shared content for problems; categorize everything accurately across a range of measures; and then dish up exactly the stuff each user wants to see, in order to keep them fully engaged on Facebook and save Facebook from any more content removal controversies). Rather, the point is that Facebook is going to be asking users to explicitly give it even more personal data.

Data that is necessarily highly sensitive in nature, given that the community governance issue he’s flagging here relates to controversial content: nudity, violence, profanity, hate speech, and so on.

Yet Facebook remains an advertising business. It profiles all its users, and even tracks non-users’ web browsing habits, continually harvesting digital usage signals to feed its ad targeting algorithms. So the obvious question is whether any additional data Facebook gathers from users via a ‘content threshold setting’ will become another input for fleshing out the user profiles it uses to target ads.

You might also wonder whether, given the scale of Facebook’s tracking systems and machine learning algorithms, it couldn’t essentially infer individuals’ likely tolerance for controversial content. Why does it need to ask at all?

And isn’t it also odd that Zuckerberg didn’t suggest an engineering solution for managing controversial content, given, for example, that he’s been so intent on pursuing an engineering solution to the problem of Fake News? Why doesn’t he talk about how AI might also rise to the complex challenge of figuring out personal content tastes without offending people?

“To some extent they probably can already make a very educated, very good guess at [the types of content people are okay seeing],” argues Eerke Boiten, senior lecturer in computer science at the University of Kent. “But… telling Facebook explicitly what your line in the sand is on different categories of content is in itself giving Facebook a whole lot of quite high level information that they can use for profiling again.

“Not only could they derive that information from what they already have but it would also help them to fine-tune the information they already have. It works in two directions. It reinforces the profiling, and could be deduced from profiling in the first place.”

“It’s checking their inferred data is accurate,” agrees Paul Bernal, law lecturer at the University of East Anglia. “It’s almost testing their algorithms. ‘We reckon this about you, this is what you say, and this is why we’ve got it wrong’. It can actually, effectively be improving their ability to determine information on people.”
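Neither researcher is describing Facebook’s actual systems, but the two-direction dynamic they point to is easy to sketch. In the toy example below, every feature, weight and data point is synthetic, and scikit-learn stands in for whatever machinery Facebook really uses: users who answer the threshold questions provide labels that let a model both predict thresholds for users who stayed silent and measure how accurate the existing inferences already were.

```python
# Toy illustration of profiling "in two directions": explicit answers act
# both as training labels and as a test of previously inferred values.
# All data here is synthetic; nothing reflects Facebook's real features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Imagined behavioural signals already held per user (engagement with
# borderline posts, hide/report rates, etc.), plus each user's stated line.
X_answered = rng.random((500, 4))
y_answered = X_answered @ np.array([6.0, 2.0, -3.0, 1.0]) + rng.normal(0, 0.5, 500)

model = LinearRegression().fit(X_answered, y_answered)

# Direction 1: infer a likely threshold for users who never answered.
X_silent = rng.random((100, 4))
inferred_thresholds = model.predict(X_silent)

# Direction 2: audit how well inference matches what people actually said,
# i.e. use the explicit answers to test and fine-tune the profiling.
print("mean gap between inferred and stated line:",
      mean_absolute_error(y_answered, model.predict(X_answered)))
```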

Bernal also makes the point that there could be a difference, in data protection law terms, between Facebook users directly handing over personal information about their content tolerances (i.e. when Facebook asks them to tell it) and such personal information being inferred from Facebook’s indirect tracking of their usage of its platform.

“In data protection terms there is at least some question if they derive information, for example sexuality from our shopping habits, whether that brings into play all of the sensitive personal data rules. If we’ve given it consensually then it’s clearer that they have permission. So again they may be trying to head off issues,” he suggests. “I do see this as being another data-grab, and I do see this as being another way of enriching the value of their own data and testing their algorithms.”

Facebook users are able to request to see some of the personal data Facebook holds on them. But, as Boiten points out, this list is by no means complete. “What they give you back is not the full information they have on you,” he tells TechCrunch. “Because some of the tracking they are doing is really more sophisticated than that. I am absolutely, 100 per cent certain that they are hiding stuff in there. They don’t give you the full information even if you ask for it.

“A very simple example of that is that they memorise your search history within Facebook. Even if you delete your Facebook search history it still autocompletes on the basis of your past searches. So I have no doubt whatsoever that Facebook knows more than they are letting on… There remains a complete lack of transparency.”

So it at least seems fair to say that Facebook could take a shot at inferring users’ content thresholds, based on the mass of personal data it already holds on individuals.

TechCrunch
