Tell someone not to do something and sometimes they just want to do it more. That's what happened when Facebook put red warning flags on debunked fake news: the flags only made the posts more interesting and more likely to be shared.
Facebook ditched the red warnings and replaced them with links to articles debunking the supposed fake news.
Now Facebook has dreamt up another couple of wheezes.
[Screenshot: BBC News -- Amber Rudd claimed she was not aware of Home Office removals targets... but a memo leak suggests otherwise]
First, rather than call more attention to fake news, Facebook wants to make it easier to miss these stories while scrolling. When Facebook's third-party fact-checkers verify an article is inaccurate, Facebook will shrink the size of the link post in
News Feed. Facebook will also downrank the news to make it less likely that it will appear in news feeds at all.
Second, Facebook is now using machine learning to scan newly published articles for signs of falsehood.
Fact-checkers will then prioritise high-scoring articles so as to make more efficient use of their time.
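Facebook has not published how its model works, but the triage step described above can be sketched as follows. Here `falsehood_score` is a hypothetical stand-in (a crude keyword heuristic) for whatever trained classifier a real system would use:

```python
# Sketch of ML-assisted triage for fact-checkers. Hypothetical: Facebook's
# actual model, signals, and thresholds are not public.

def falsehood_score(article: str) -> float:
    """Stub classifier returning a score in [0, 1], where higher means
    more likely to be false. A real system would use a trained model."""
    suspicious = ("shocking", "miracle", "they don't want you to know")
    hits = sum(phrase in article.lower() for phrase in suspicious)
    return min(1.0, hits / len(suspicious))

def triage(articles: list[str], threshold: float = 0.3) -> list[str]:
    """Return articles scoring above the threshold, highest first, so
    fact-checkers spend their time on the most suspect stories."""
    scored = [(falsehood_score(a), a) for a in articles]
    return [a for score, a in sorted(scored, reverse=True) if score >= threshold]
```

The point of the design is economic rather than judicial: the model never decides truth, it only orders the human fact-checkers' queue.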
Facebook now says it can reduce the spread of a false news story by 80%.
Today's headlines are dominated by the role of misinformation campaigns or "fake news" in undermining democracy in the West. From ongoing accusations of Russian meddling in Trump's election to Russian efforts to sway the Brexit and French
Presidential election votes, these countries are confronting "fake news" as an ongoing and urgent threat to democracy. Yet in Latin America, where misinformation campaigns have prevailed throughout the twentieth century, concerns over
"fake news" are hardly new . Latin American media concentration, disinformation campaigns, and biased coverage have long undermined informed civic discourse .
"Fake News" as a pretext for curbing free
expression in Latin America
In 2018, Mexico, Venezuela, Brazil, Colombia and Costa Rica, among others, will undergo electoral processes involving their respective presidencies. These governments are beginning to exploit
concerns over "fake news," as though it were a novel phenomenon, in order to adopt proposals to increase state control over online communications and expand censorship and Internet surveillance. Such rhetoric glosses over the fact that
propaganda from traditional Latin American media monopolies has long been the norm in the region, and that Internet companies have played a critical role in counterbalancing this power dynamic. Frank La Rue, the former UN Special Rapporteur on Free
Expression, remarked at the 2017 Internet Governance Forum on the inherent risks of importing the term "fake news" to Latin America:
I don't like the term "fake news" because I think there is a bit
of a trap in it. We are confronting campaigns of misinformation. So we should talk about information and disinformation.
La Rue believes that when distinctions between fake and real news are drawn, they are done
ultimately to dissuade the public from reading news or thinking independently. He argues that "the problem again is that fake news becomes a perfect excuse to just silence or shut down any alternative or any dissident voice." To respond to this
threat, EFF co-signed an open letter along with 34 other Latin American NGOs at the end of last year.
When Brazil set up a council to counter fake news, the Army and the Brazilian
intelligence agency--entities with a long track record of crushing minority or dissenting voices--were invited to join. The specter of "fake news" has also been a pretext for draconian bills in the Brazilian parliament. The latest, a recent proposal of unknown authorship, sparked controversy when it was submitted for analysis by the National Congress's Communication Council without prior notice. The text defined the creation or sharing of false news as a crime, imposing detention penalties on those who propagate information the government deems false. It also sought to modify a key component of Brazil's civil rights framework, the Marco Civil da Internet, by making companies liable for failing to remove or block reported posts within 24 hours, or for not providing an easy tool by which users can check whether a story is trustworthy. Internet companies would face a staggering fine of up to 5% of their previous fiscal year's revenue if they failed to remove
content. Although the proposal was withdrawn as a reaction to public outcry, other bills with similar content remain in the parliament.
Mexico is also approaching election season; the country is set to hold the largest election in
its history. In July 2018, Mexicans will elect not only a new president but also all federal legislators and nine state governors. The country's National Election Institute (INE) has recently signed an agreement with Facebook Ireland to fight fake news.
The INE is expected to sign similar agreements with Google and Twitter. The agreement, a copy of which was obtained by the newspaper El Universal, includes the use of Facebook's tools to measure civic participation, access to real-time data of
voting results granted by INE, and the provision of a physical space in the Institute's office where, on election day, the company is expected to perform activities such as posting live videos. While neither party is meant to get involved in deciding
what is true or false, transparency is a must. Luis Fernando Garcia, of Mexican NGO Red en Defensa de los Derechos Digitales, told EFF:
We need complete transparency about the nature of the relationship between INE and
Facebook. Facebook should also refrain from adopting measures that discriminate against some media outlets and benefit others in the name of combating "fake news".
We need an Internet where we are free to
meet, create, organize, share, associate, debate and learn. And we also need elections to be free from manipulation. As we have said before , people should be empowered by the tools they use, not left passive by others' use of such technology. But
platforms should remain wary of purporting to validate news even in the face of calls to do so; if they assume this role, it will raise obvious concerns about how they'll respond to political pressures.
Like "fake news,"
policies around hate speech are often used as cover for censorship. It has served as a convenient pretext for advancing a repressive Honduran draft bill on Internet content regulation. After fraud accusations marred 2017 Honduras' presidential elections,
Honduras finds itself in a grave political crisis. Amidst the turbulence, a bill regulating online speech was introduced in the Honduran National Congress in February 2018. The bill, which was widely criticized by civil society , provides broad leeway
for Internet companies to block Internet content in the name of protecting users from hate speech, discrimination, or insults. The bill compels companies to take down third-party content within 24 hours in order not to be fined or even find their
services blocked. This pro-censorship bill has also spurred recent debates on the creation of a national cybersecurity committee assigned to deal with, among other issues, fake news.
Efforts to keep "fake news" in check
are spreading across Latin America. Countering disinformation campaigns must not become an excuse to wreck democracy and free speech. EFF will be monitoring this issue as this year's Latin American elections progress.
Facebook has revealed just how shoddy its 'fake news' censorship process is: it censored an obvious joke, which passed through the censorship system without anyone at Facebook noticing how stupid they were being.
The Babylon Bee set off Facebook's alarm
bells by publishing a satirical piece stating that CNN had purchased an industrial-size washing machine to spin news before publication. This is obviously a joke: it is clearly marked as satire and published on a site entirely devoted to satire.
But the uptight jerks over at Snopes decided to fact check the Bee's claim, to ensure that no one actually thought that CNN made a significant investment in heavy machinery.
The article was duly confirmed as fake news, resulting in Facebook saying that it would censor The Babylon Bee by denying it monetisation.
And as per the normal procedure, when alerted about stupid censorship, Facebook admitted it was a ghastly mistake and apologised profusely. Fair
enough, but in passing it still shows exactly how shoddy the process is behind the scenes.
Greater transparency for users around news broadcasters
Today we will start rolling out notices below videos uploaded by news broadcasters that receive some level of government or public funding.
Our goal is to equip users with additional information to help them better understand the sources of news content that they choose to watch on YouTube.
We're rolling out this feature to viewers in the U.S. for now, and we don't expect
it to be perfect. Users and publishers can give us feedback through the send feedback form. We plan to improve and expand the feature over time.
The notice will appear below the video, but above the video's title, and include a
link to Wikipedia so viewers can learn more about the news broadcaster.
Facebook says it is changing how it identifies 'fake news' stories on its platform to a more effective system.
Facebook had originally put red warning signs on disputed stories that fact-checkers found false.
Instead, it will now bring up related articles next to the false stories, giving context from fact-checkers on the stories' accuracy.
Facebook said that in its tests, fewer hoax articles were shared when fact-checkers' articles appeared next to them than when they were labeled with disputed flags.
Facebook has also changed the criteria for identifying 'fake news'. Previously it required two fact-checkers to concur, but under the new system related articles can be attached on the authority of just one fact-checker.
Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University study finds that fact-checking and then tagging inaccurate news stories on social media doesn't work.
The study, reported for the first time by POLITICO, found that tagging false news stories as disputed by third-party fact-checkers has only a small impact on whether readers perceive their headlines as true. Overall, the disputed tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.
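To see how small that shift is in absolute terms, here is a back-of-the-envelope illustration; the 30% baseline rate is an assumption made for the sake of the arithmetic, not a figure from the study:

```python
# Illustrative arithmetic only: the 30% baseline is assumed, not from the study.
baseline = 0.30   # assumed share of readers judging a false headline accurate
effect = 0.037    # the study's reported drop when a "disputed" tag is shown
tagged = baseline - effect
print(f"Without tag: {baseline:.1%}, with tag: {tagged:.1%}")
```

On that assumed baseline, roughly 26 of every 100 readers would still judge the tagged false headline accurate.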
The researchers also found that, for some groups--particularly Trump supporters and adults under 26--flagging bogus stories could actually end up
increasing the likelihood that users will believe fake news. This is because not all fake stories are fact-checked, and the absence of a warning tends to add to the credibility of an unchecked but fake story.
Researchers Gordon Pennycook &
David G. Rand of Yale University write in their abstract:
Assessing the effect of disputed warnings and source salience on perceptions of fake news accuracy
What are effective techniques
for combatting belief in fake news? Tagging fake articles with "Disputed by 3rd party fact-checkers" warnings and making articles' sources more salient by adding publisher logos are two approaches that have received large-scale rollouts on social media.
Here we assess the effect of these interventions on perceptions of accuracy across seven experiments [involving 7,534 people].
With respect to disputed warnings, we find that tagging articles
as disputed did significantly reduce their perceived accuracy relative to a control without tags, but only modestly (d=.20, 3.7 percentage point decrease in headlines judged as accurate).
Furthermore, we find a backfire effect --
particularly among Trump supporters and those under 26 years of age -- whereby untagged fake news stories are seen as more accurate than in the control.
We also find a similar spillover effect for real news, whose perceived
accuracy is increased by the presence of disputed tags on other headlines.
With respect to source salience, we find no evidence that adding a banner with the logo of the headline's publisher had any impact on accuracy judgments.
Together, these results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed.
Presented with the study, a Facebook spokesperson questioned the researchers' methodology--pointing out that the study was performed via Internet survey, not on Facebook's platform--and added that fact-checking is just one part of the company's efforts to combat fake news. Those include disrupting financial incentives for spammers, building new products and helping people make more informed choices about the news they read, trust and share, the spokesperson said.
The Facebook spokesperson added that the articles created by the third-party fact-checkers have uses beyond creating the disputed tags. For instance, links to the fact checks appear in related-article stacks beside other similar stories that
Facebook's software identifies as potentially false. They also power other systems that limit the spread of news hoaxes and misinformation, the spokesperson said.
What is 'fake news' anyway? Is it news that hides truths that are unpalatable to the politically correct? Is it reports of weapons of mass destruction in Iraq? Is it politicians outlining improvements in the economy?