Fake News?! Google and Facebook Team Up to Shut It Down.
Facebook has started rolling out its fact-checking feature to alert users to “disputed content”.
The social media giant came under fire amid concerns it wasn’t doing enough to stem the tide of fake news during the 2016 US presidential election.
The rollout comes after the social network announced in December that it would partner with third-party fact-checkers to crack down on fake news stories on its platform. These fact-checkers include ABC News, the Associated Press, FactCheck.org, PolitiFact and Snopes. All five sources follow the Poynter International Fact-Checking Network’s code of principles.
The new feature was first observed by users in the US in the lead-up to St. Patrick’s Day on March 17 this year. Some users trying to share a story titled “The Irish Slave trade – the slaves that time forgot” were met by a red alert stating that the article had been disputed by Snopes and the Associated Press.
Users who click “publish” anyway see another pop-up that reiterates that the article’s claims are disputed by fact-checkers. Clicking “post anyway” publishes the link to the user’s timeline, but other users will still see warnings that the story is “Disputed by Snopes.com and Associated Press.”
On March 16, the Associated Press published a Fact Check article debunking claims about the so-called “Irish slave trade” as part of the AP’s ongoing “effort to fact-check claims in suspected false news stories.”
Facebook’s fact-checking release comes on the heels of Google’s updated guidelines for their human quality raters.
The new section, called “Upsetting-Offensive” Flag, instructs raters to flag results with “upsetting or offensive content from the perspective of users in [their] locale, even if the result satisfies the user intent.”
According to the search giant’s new guidelines, upsetting or offensive content could include:
- Promoting hate or violence against a group of people based on criteria including (but not limited to) race, ethnicity, gender, nationality, citizenship, disability, age, sexual orientation or military service.
- Racial slurs or extremely offensive words or phrases.
- Graphic depictions of violence, including animal cruelty and child abuse.
- How-to guides providing explicit instructions about harmful activities such as human trafficking or violent assault.
- Other types of content users in the rater’s locale would find “extremely upsetting or offensive.”
Note that flagging a result as “Upsetting-Offensive” won’t remove a page from search results, or even demote it. Instead, the raters’ judgments are used by Google’s engineers writing search algorithms, and by its machine learning systems, to learn how to identify offensive content in general.
For queries that specifically seek out this type of material, which Google calls “Upsetting-Offensive tolerant queries”, raters are instructed to assume educational intent when rating whether a result meets the user’s needs.
According to Search Engine Journal, Google has been testing these new guidelines since December in response to offensive content in the results for the query “did the Holocaust happen”. It seems to have made a difference.
Google is also currently overhauling its advertising policies in response to a growing boycott of its platforms.
Many leading advertisers, including the UK government, Marks & Spencer and McDonald’s have stopped advertising on Google networks after their brands appeared on YouTube videos promoting an extremist political group. In response, Google’s Chief Business Officer, Philipp Schindler, said the company will develop better controls so advertisers can see and control what content their advertising appears on.
The announced overhaul also coincides with a suspected algorithm update. Known as “Fred,” this update seems to devalue content-heavy websites that look like little more than containers for ads and affiliate links.