Why Software Extensions and Social APIs Are Essential to Fight Fake News

David R. Sterry
3 min read · Aug 16, 2019

While planning next steps for Corgi, I considered creating a browser extension that could display a trust score on stories, tweets, and other content. A trust score that could be tied back to the original author of a work. A trust score that could take context into account. A trust score that could help readers identify fake news at a glance.

A browser extension is ideal because, with the right permissions, it can analyze and decorate content on any page, so adding a little ✅ or ❌ to any story can be pretty simple. To begin, people would label content as trustworthy or not and share that label. The details of how this score would work are too much to explain now, but we’ll get to that in time. The point is… an extension empowers the user to call in automated help in the fight against fake news. Sadly, browser extensions only run on browsers.
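
As a rough illustration only, and purely a sketch: a content script for such an extension might look something like the code below. The scoring endpoint, the DOM selector, and the 0.5 threshold are all hypothetical placeholders I made up for this example, not part of any real Corgi implementation.

```typescript
// content-script.ts — hypothetical sketch of a trust-label content script.
// The trust API, the link selector, and the threshold are placeholders.

const TRUST_API = "https://example.org/api/trust"; // hypothetical scoring service

async function fetchTrustScore(url: string): Promise<number | null> {
  try {
    const res = await fetch(`${TRUST_API}?url=${encodeURIComponent(url)}`);
    if (!res.ok) return null;
    const body = await res.json();
    return typeof body.score === "number" ? body.score : null;
  } catch {
    return null; // fail quietly; never block the page
  }
}

// Decorate every outbound story link with a ✅ or ❌ badge.
async function decorateStories(): Promise<void> {
  const links = document.querySelectorAll<HTMLAnchorElement>("a[href^='http']");
  for (const link of links) {
    const score = await fetchTrustScore(link.href);
    if (score === null) continue;
    const badge = document.createElement("span");
    badge.textContent = score >= 0.5 ? " ✅" : " ❌";
    badge.title = `Trust score: ${score.toFixed(2)}`;
    link.appendChild(badge);
  }
}

decorateStories();
```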

According to a Pew Research Center survey, 68% of users get news from social media, and The Manifest’s 2018 Consumer Social Media Survey found that people use apps for social media more than web browsers.

Unfortunately for our purposes, the official mobile apps for services like Facebook, Reddit, and Twitter do not allow extensions. Using these apps therefore limits any such automation.

And it’s understandable. If Facebook were to enable extensions, they would lose control of the end-user experience, which is carefully designed, curated, and evolved over time to support their business. Extensions would also invite a new cat-and-mouse game as some third-party developers behave badly. From Facebook’s perspective, the costs outweigh the benefits.

Without this kind of tooling, however, social media companies are complicit in the spread of falsehoods, and people have noticed. Facebook, at least, has been taken to task over the problem, so they’re acting: slowing how quickly news spreads, monitoring certain phrases, and developing machine learning models to tamp down fake content before it infects too many people.

This, however, is not a position any of these companies want to be in. It’s costly and thankless. And no thanks! Who really wants them there? By censoring anything but the clearest abuses of their terms of service, they cloud the information space. Did that story get removed because of the algorithm… or did <repressive government> have something to do with it? You’d never know.

One solution is for social media companies to make it easier for others to help. Give us the content, some basic metadata, and some space in your API endpoints for signatures, so our extensions can independently check what we’re seeing… from both an attribution and an authenticity standpoint. If you really wanted to fix this, you could encourage authors to set up keys and sign their works before they publish.
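
To make that concrete, here is a minimal sketch of what the verification side could look like, assuming the API exposed three fields: the raw content, the author’s public key, and a detached signature. The field names, the key format, and the choice of ECDSA P-256 with Web Crypto are my assumptions for illustration; no platform offers this today.

```typescript
// verify-signature.ts — sketch of checking authorship in an extension,
// assuming a hypothetical API response shaped like SignedStory below.

interface SignedStory {
  content: string;         // the raw text the author signed
  authorPublicKey: string; // base64-encoded SPKI public key (assumed format)
  signature: string;       // base64-encoded raw (r||s) ECDSA signature
}

function b64ToBytes(b64: string): Uint8Array {
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
}

async function verifyStory(story: SignedStory): Promise<boolean> {
  // Import the author's public key, then verify the detached signature
  // over the exact bytes of the content.
  const key = await crypto.subtle.importKey(
    "spki",
    b64ToBytes(story.authorPublicKey),
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"]
  );
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    b64ToBytes(story.signature),
    new TextEncoder().encode(story.content)
  );
}
```

An extension could run a check like this on each story and feed the result (verified authorship or not) into the trust score it displays.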

This is a humanity-scale problem that will be greatly helped when users can run little bits of code while using social media apps to verify, analyze, and score the content those apps make so readily available. None of this is to say browser extensions aren’t worthwhile: if even a fraction of users ran some kind of automated verification, it could serve as a check on the viral spread of fake news.

If you enjoyed this post, you may be interested in the two earlier posts on Corgi, a project that aims to improve public discourse by hardening social and other media channels against fake news.


David R. Sterry

Decentralization. Freedom. Truth. GPG: D981 9683 2341 575F B403 C8CF 8029 A76D 14B2 4807