So-called fake news is an issue du jour in Washington and beyond. While it has undoubtedly been hyped for political reasons, fake news is real, its dangers clear and present—especially in unstable environments.
Even in the United States, the Pizzagate conspiracy theory and the subsequent Comet Ping Pong shooting in Washington, D.C., in December 2016 show how quickly fake news can turn to real violence. Internationally, journalist Stephanie Busari has shown how the "fake news" label lets us passively dismiss real atrocities as hoaxes, documenting how Nigerian authorities' use of the label led to loss of life and prolonged suffering for the Chibok girls of northern Nigeria at the hands of Boko Haram.
Kenya is a case study in the dangers of fake news, and a timely one given the imminent re-run of the national elections there. Although the 2013 elections were relatively peaceful, most Kenyans vividly remember the terrible violence that accompanied the 2007 elections, in which up to 1,500 people were killed and many thousands displaced.
Watching the local nightly news in Nairobi in the run-up to the August election, I could feel why tensions were high. Neither presidential candidate was saying the right things about respecting election results. Local party members were on talk shows night and day threatening to contest the outcome if results didn’t go their way. And fake news was rampant.
A prime example was the resurgence of stories about the Mungiki, an ethnic organization banned in Kenya after being linked to election violence in 2007 and 2008. Falsified videos of a woman claiming her apartment was under siege by Mungiki made the rounds on Facebook and WhatsApp before being discredited and removed. Four National Super Alliance groups claimed that Mungiki in police uniforms were attacking their supporters in Kibera (police eventually removed an individual who was wearing a uniform but had not attacked anyone). While the sources were different, the purpose of these and other stories was the same: to foment unrest during the election.
Fake news was both widely disseminated—including on CNN, BBC, and NTV Kenya—and widely recognized as fabricated. A study by GeoPoll and consulting firm Portland PR ahead of the August ballot found that 87 percent of Kenyans polled had seen information they suspected to be deliberately false.
A number of "top-down" approaches have been taken to combat this threat. Facebook launched a tool to track and combat fake news before the election. The Government of Kenya issued its "Guidelines on Prevention of Dissemination of Undesirable Bulk and Premium Rate Political Messages and Political Social Media Content via Electronic Communications Networks"—in plainer terms, guidelines for countering fake news.
These well-intentioned tools, however, have their drawbacks. The Facebook tool required people to tag news as fake or call out members of their social network for posting unverifiable news. The Kenyan government's guidelines, true to their spectacularly bureaucratic title, imposed a process that took 24 hours to implement: untenable in an era of real-time communication.
Encouragingly, though, these efforts are complemented by local examples of social media being used to ensure factual reporting during fraught political periods. As DAI closely documented during the 2013 elections, grassroots Kenyan organization Sisi ni Amani—Kenya (“We are Peace,” in Swahili) used 682,227 text messages to engage directly with individual citizens and counter misinformation on voting processes, thereby identifying tensions in specific communities and cultivating calm. As we noted at the time, “A survey of subscribers found that the text messages made them feel they were directly part of a local initiative for peace and created a sense of belonging that was lacking in other community-level publicity campaigns.”
Examples of the NIWETU team WhatsApp Group Chat.
In July, I was privileged to watch our Kenya NIWETU project team use the social media tools at their disposal to take on fake news. Given the dispersed locations of the four NIWETU offices across the country, the project WhatsApp account has become a much-used channel for security updates and fact checking in a tense political climate—including fact checking each other (see above images). After seeing suspicious stories on Facebook, Twitter, and WhatsApp alleging riots and protests, for example, one staff member in Nairobi shared a public resource showing how to access traffic camera views to see whether streets were clear. Another staff member shared what turned out to be a fake Facebook post about violence in Nairobi's Central Business District so that better-placed colleagues in the NIWETU network could verify through external sources that the post was inaccurate and no such threat existed.
In Kenya and elsewhere, we will need both top-down tools and grassroots resiliency strategies—tech-enabled and otherwise—that support information sharing, community cohesion, and trust among citizens. And as Kristen Roggemann points out, development donors would do well to fund “critical digital literacy” at levels commensurate with the expanded funding for internet access, so that as citizens we question our increasingly diverse sources of information and think critically about what we’re hearing—and sharing.
“After all,” as Roggemann writes, “what good is Facebook if it’s better at dividing people than uniting them?”