
5/14/2020 12:18 PM

Facebook Fails to Staunch Coronavirus Misinformation

The social media giant in April affixed warning labels to 50 million pieces of content.

As Mark Zuckerberg this week detailed the results of the company's latest Community Standards Enforcement Report, he also revealed that Facebook is being inundated with coronavirus misinformation and disinformation—and that the company has been struggling to stop it.

Zuckerberg, who said he was proud of the work Facebook's content moderation teams have done, also acknowledged that it's not stopping enough COVID-19 misinformation. 

"Our effectiveness has certainly been impacted by having less human review during COVID-19, and we do unfortunately expect to make more mistakes until we're able ramp everything back up," he said.

The report, released Tuesday, says that improvements in the company's machine learning systems helped Facebook remove 68% more hate posts in the first three months of 2020 than it did from October to December 2019. These same systems detect 90% of hate speech posts before Facebook users report them, the company claims. Between its human moderators and automated systems, Facebook says it acted on 9.6 million pieces of hateful content in the first quarter of 2020, up from 5.7 million pieces in the fourth quarter of 2019.

On Instagram, the detection rate increased from 57.6% in the fourth quarter of 2019 to 68.9% in the first quarter of 2020, with 175,000 pieces of content removed—35,200 more than the previous quarter.
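
Those percentages are easy to sanity-check against the raw counts. A quick Python sketch, using only the figures reported above (the variable names are ours):

    # Sanity-check the enforcement figures reported above.
    fb_q1_2020 = 9_600_000  # pieces of hateful content actioned, Q1 2020
    fb_q4_2019 = 5_700_000  # pieces of hateful content actioned, Q4 2019

    # Quarter-over-quarter increase: (9.6M - 5.7M) / 5.7M, roughly 68%
    increase = (fb_q1_2020 - fb_q4_2019) / fb_q4_2019
    print(f"Facebook hateful-content actions, QoQ increase: {increase:.0%}")

    # Instagram: 175,000 removals, 35,200 more than the prior quarter
    ig_q1_2020 = 175_000
    ig_q4_2019 = ig_q1_2020 - 35_200
    print(f"Implied Instagram removals in Q4 2019: {ig_q4_2019:,}")  # 139,800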

The company also published a separate blog post detailing how it has handled COVID-19 misinformation so far: In April it placed warning labels on 50 million pieces of content, based on 7,500 articles from 60 independent fact-checking organizations, and since March 1 it has removed more than 2.5 million pieces of content that attempted to fraudulently sell face masks, hand sanitizer, disinfecting wipes, and COVID-19 test kits.

But because Facebook does not allow its human content moderators (who just won a $52 million settlement in a lawsuit over mental health problems developed from reviewing that content) to access potentially sensitive Facebook data from their home computers, the company has been relying on its artificial intelligence systems more than before.

Facebook's numbers are impossible to verify because the company does not allow independent audits of its systems. In the age of COVID-19, experts say, that takes on added importance: Medical misinformation can encourage people to take health risks that put their lives at stake.

Look no further than the viral spread of the "Plandemic" video, which asserts a fake COVID-19 narrative and has been reposted to YouTube nearly as fast as that company can take it down for violating its COVID-19 misinformation standards. Facebook explained in a blog post from its AI team what makes automatic detection of misinformation challenging, a point that Bhaskar Chakravorti, senior associate dean of international business and finance at Tufts University's Fletcher School, echoed in an emailed statement.

"Facebook has done more than the other social media companies in controlling misinformation by turning to fact checking organizations," he wrote, but cautioned that the company is still missing "about 40 percent of misinformation" on its platform.

One way to reduce the impact of misinformation and organized disinformation from political sources would be to label advertisements and posts from political organizations, says Pablo Breuer, co-founder and vice president of the Cognitive Security Collaborative. Misinformation stands out because it's often promoted nearly simultaneously by accounts with no connection, and because of the attention it receives: Legitimate news has a baseline signal that "peaks" because it's newsworthy, then "degrades" because people have seen it, he says. 

"What happens in a lot of misinformation is that you get multiple peaks, and you get those peaks because there are bots that are putting it out there to different audiences," Breuer says. "We've known for a long time that the propagation of misinformation is different from regular information."

Ultimately, the challenge Facebook faces is existential: Its services depend on users sharing information, and the more they share, the more ads the social network can show them, he notes.

"Anything that makes you react in a visceral, emotional way, instead of a cognitive way, is good for traffic and good for the bottom line of these companies. It's detrimental to them to limit this."

 

 
Seth is editor-in-chief and founder of The Parallax, an online cybersecurity and privacy news magazine. He has worked in online journalism since 1999, including eight years at CNET News, where he led coverage of security, privacy, and Google. Based in San Francisco, he also ...
 
