In an age when data has been used to manipulate and mislead, many people are increasingly wary of computer-generated judgments. We embrace this healthy suspicion by giving users a full view of how credibility scores are determined.
Clicking a credibility icon allows Public Editor users to see all the logical and journalistic missteps in an article, and how they were weighted to generate a credibility score.
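The idea of weighting missteps into a single score can be sketched in a few lines. This is a hypothetical illustration only: the misstep names, weights, and penalty formula below are invented for the example and are not Public Editor's actual rubric.

```python
# Hypothetical sketch: combine flagged missteps into one credibility score.
# Misstep names and weights are invented for illustration.
WEIGHTS = {
    "ad_hominem": 3.0,
    "unnamed_source": 2.0,
    "overgeneralization": 1.5,
}

def credibility_score(missteps, base=100.0):
    """Start from a perfect score and subtract a weighted penalty
    for each misstep found in the article."""
    penalty = sum(WEIGHTS.get(name, 1.0) * count
                  for name, count in missteps.items())
    return max(0.0, base - penalty)

# One ad hominem attack plus two unnamed sources:
print(credibility_score({"ad_hominem": 1, "unnamed_source": 2}))  # 93.0
```

Because every weight is visible in a table like this, a reader who clicks through can see exactly which missteps cost an article points, and how many.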
Contributors are trained to spot biases, even their own. The text passages they review are removed from their original article context, so they can focus on the logic and inference of a sentence, not on who wrote or published it. And, of course, every contributor judgment is verified against peers' work, with a keen eye toward avoiding known political biases.
Contributors are trained and tested before their judgments carry weight in our system, and that weight increases over time. So bad actors would have to do hours of good work before they could attack our system. Even then, our system automatically flags changes in contributor behavior and work quality, and suspends a contributor account if necessary.
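The two safeguards above can be sketched as follows. This is a minimal illustration under assumed parameters (the 50-hour ramp and the 0.3 quality-drop threshold are invented for the example, not Public Editor's real values).

```python
# Illustrative sketch, not actual Public Editor code.
# Idea 1: a contributor's judgment weight ramps up with verified good work,
# so new accounts have little influence.
def judgment_weight(hours_of_verified_work, cap=1.0):
    return min(cap, hours_of_verified_work / 50.0)  # assumed 50-hour ramp

# Idea 2: a sharp drop in work quality relative to the contributor's own
# history flags the account for review or suspension.
def flag_quality_drop(recent_scores, history_mean, threshold=0.3):
    """Flag if recent agreement-with-peers scores fall well below
    the contributor's historical average."""
    recent_mean = sum(recent_scores) / len(recent_scores)
    return (history_mean - recent_mean) > threshold

print(judgment_weight(5))   # 0.1 -- a new contributor counts for little
print(flag_quality_drop([0.4, 0.3, 0.5], history_mean=0.9))  # True
```

The key property is that influence is earned slowly but lost quickly: weight accrues over many hours, while a behavioral anomaly can trigger suspension immediately.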
Contributors do not choose which articles they review. So any team of bad actors wishing to overrun our system would first have to dedicate hundreds of hours of good work to gain the weight to affect articles' scores, and even then would have no way of systematically attacking an article they find displeasing.