I Was A Facebook Fact-Checker. It Was Like Playing A Doomed Game Of Whack-A-Mole.
Trying to stem the tsunami of fake news was like battling the Hydra — every time we cut off a virtual head, two more would grow in its place.
Facebook has always struggled to comprehend the scale of its fake news and propaganda problem. Now, it’s struggling to retain the fact-checkers it paid to try to deal with the crisis. Last week, both Snopes and the Associated Press ended their partnerships with the social network, after a tense couple of years of trying, without success, to tackle the epidemic.
But those partnerships should never have existed in the first place, and I say this as the former managing editor of Snopes, whom Facebook first contacted in 2016. When they first emailed me about a potential partnership, I knew it would bring much more attention to the work of our small newsroom — and much more scrutiny.
But what I didn’t realize was that we were entering a full-blown crisis, not just of “fake news,” but of journalism, democracy, and the nature of reality itself — one we’re all still trying to sort out in 2019, and which had more twists and turns than I’d ever thought possible. Looking back, my overwhelming impression of the years since 2016 is how surreal everything became.
It turned out that trying to fact-check a social media service used by a huge chunk of the world’s population is no easy task. We tried to make it easier by tracing where disinformation originated, but there were just too many stories. Trying to stem the tsunami of hoaxes, scams, and outright fake stories was like playing the world’s most doomed game of whack-a-mole, or like battling the Hydra of Greek myth. Every time we cut off a virtual head, two more would grow in its place. My excellent but exhausted and overworked team did as much as we could, but we soon felt like we were floating around in a beat-up old skiff, trying to bail out the ocean with a leaky bucket.
Things soon got worse.
Because of my own history reporting on refugee rights, I had contacts with groups all over the world working on migration and humanitarian crises. Since early 2015, I’d been hearing bits and pieces about Myanmar and the Rohingya Muslims, and how activists on the ground — exhausted, dispirited activists who were begging any reporter they could find to help spread the word — were saying the crisis had been fueled and spread by social media. The people of Myanmar had only experienced unfettered access to the internet since around 2012, and now Facebook, through its Internet.org program that provided free mobile internet access to its site, had quickly become the only source for news for a large portion of the population. Newsfeeds in Myanmar were pushing a narrative that helped justify ethnic cleansing and other human rights violations on a massive scale. I took it to my editorial team and we put out some stories, and then I took it to Facebook.
Nothing happened, and I came to see Myanmar as something of a model for the damage algorithms and disinformation could do to our world. That's when the migraines started. I became obsessed with this connection — I dreamed about it at night, woke up thinking about it, and felt responsible for stopping a problem that few others even knew existed.
In case you’re curious, here’s what it was like to be an official Facebook fact-checker. We were given access to a tool that hooked into our personal Facebook accounts and had to be accessed through them (strike one, as far as I was concerned), and it spat out a long list of stories that had been flagged for checks. We were free to ignore the list, or to mark stories as “true,” “false,” or “mixture.” (Facebook later added a “satire” category after what I like to call “the Babylon Bee incident,” in which a satirical piece was incorrectly labeled false.)
It was clear from the start that this list was generated by an algorithm. It contained headlines and URLs, along with a graph showing their popularity and how long they had been on the site. There were puzzling aspects to it, though. We would often get the same story over and over again from different sites, which is to be expected to a certain degree, because many of the most lingering stories had been recycled again and again. This is what Facebook likes to call “engagement.”
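To make that workflow concrete, here is a minimal sketch of the kind of record the tool surfaced and the verdicts we could attach to it. The names, fields, and structure below are my own illustration of the process described above; Facebook never showed us the tool’s internals, so none of this should be read as its actual code or schema.

```python
"""A hypothetical sketch of the fact-checking queue described above.
Names and structure are illustrative only; Facebook never documented the tool's internals."""

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    MIXTURE = "mixture"
    SATIRE = "satire"  # added later, after a satirical piece was wrongly labeled false


@dataclass
class FlaggedStory:
    headline: str
    url: str
    popularity_over_time: list[int]    # the graph shown alongside each flagged item
    hours_on_site: float               # how long the story had been circulating
    verdict: Optional[Verdict] = None  # checkers were also free to simply skip an item


def mark(story: FlaggedStory, verdict: Verdict) -> None:
    """Record a fact-checker's verdict on a flagged story."""
    story.verdict = verdict
```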
But no matter how many times we marked them “false,” stories would keep resurfacing with nothing more than a word or two changed. This happened often enough to make it clear that our efforts weren’t really helping, and that we were being directed toward a certain type of story — and, we presumed, away from others.
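Part of the problem is that a system keyed to exact headlines or URLs treats a copy with one or two words swapped as a brand-new story, and catching those variants takes a fuzzier comparison. The snippet below is a minimal sketch of one standard near-duplicate check, Jaccard similarity over word shingles; it is my own illustration of the general technique, not anything Facebook’s pipeline is known to use.

```python
"""Illustrative near-duplicate check: Jaccard similarity over word shingles.
A generic technique sketch, not a description of Facebook's actual systems."""

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Break a headline into overlapping k-word windows ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared shingles divided by all distinct shingles."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two headlines with nothing more than a word or two changed still score far above
# the near-zero similarity of unrelated headlines, so a recycled variant of an
# already-debunked story could be flagged instead of treated as a brand-new item.
original = "Woman arrested for leaving her children in the car for hours while she ate at a buffet"
variant = "Mom arrested for leaving her kids in the car for hours while she ate at a buffet"
print(f"{jaccard(shingles(original), shingles(variant)):.2f}")
```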
What were the algorithmic criteria that generated the lists of articles for us to check? We never knew, and no one ever told us.
There was a pattern to these repeat stories, though: they were almost all “junk” news, not the highly corrosive stuff that should have taken priority. We’d be asked to check whether a story about a woman who was arrested for leaving her children in the car for hours while she ate at a buffet was true; meanwhile, a flood of false, anti-Semitic George Soros stories never showed up on the list. I could never figure out why, but perhaps it was a feature, not a bug.
And here we are today, with Snopes and the Associated Press pulling out of their partnerships within days of each other. It doesn’t surprise me to see this falling apart, because it was never a sufficient solution to a crisis that still poses a real threat to our world. If Facebook is serious about undoing some of the damage it has done, here is what it should be doing (Twitter, which is by no means innocent in this, should follow suit):
First, Facebook must jettison the idea of influencing individual emotions or crowd behavior.
Mass communication comes with a huge moral responsibility; so far, the company has shown itself completely incapable of living up to it.
Second, it should make the algorithms that select what shows up in our news feeds absolutely transparent, and require users to opt in rather than opt out. Let us all see the forces that underpin our perception of the world. We have been experimented on for far too long, and that needs to change, now. It may sound like dystopian science fiction to say this, or perhaps the ravings of an overworked woman who has been swimming in the waters of conspiracy theories for far too long, but to the skeptics I will say this: Disinformation isn’t necessarily meant for you.
It’s meant for the people who lean authoritarian, the fearful conformists and the perennially anxious. It’s for weapons hoarders and true believers and the scary uncle that no one in the family talks to any more.
It’s the reason why Americans are still relitigating 2016 and Britons are still arguing over Brexit.
It’s why Kenya had to have an election do-over. It’s why Myanmar’s Rohingya Muslims were ethnically cleansed. It’s how people can look at the misery and suffering of children ripped from their parents and placed into detention camps on American soil, where they’re sexually assaulted and drugged, and simply shrug. It has been redirecting every single important national and international conversation we’ve been having, for years now. It needs to end.
Finally, and most importantly: Social media companies should establish a foundation for journalism to give back some of what they have taken from us. This foundation must be open, transparent, and governed by reputable independent directors. A portion of the profits earned at the expense of the news industry should be dispersed across local newsrooms around the world. In today’s media landscape, Silicon Valley has vacuumed up the news industry’s revenue while simultaneously using its newfound power to push around what’s left of the newsrooms it’s destroying — just look at how Facebook’s wildly false metrics caused organizations to “pivot to video,” with predictable results.
There’s another way the windfall revenues of social media should be invested: hire moderators, armies of them. Facebook should have the capability to beat back the disinformation it spreads, and if it claims this is impossible at the scale it operates at, then it should not be allowed to operate at that scale.
Moderators should have the resources needed to get the job done — not hundreds of low-paid contractors given a few seconds per post to make assessments that can literally mean life or death. Thousands of journalists are currently looking for work; hiring them to enthusiastically root out the lies and propaganda that are ruining so much of public life — and identify who is deliberately spreading it — might be a good start.
Brooke Binkowski is the managing editor of TruthOrFiction.com, and formerly served as managing editor of Snopes. She is a consulting expert witness for the Sandy Hook families in their lawsuit against Infowars.