The Challenge of Disinformation for Democratic Societies

Julien Artero
Aug 6, 2021

Fake news and disinformation campaigns are no longer obvious and easy to detect. They are here to stay and can hide in plain sight, woven into the fabric of information shared on social media every second, everywhere. The erosion of public trust in traditional institutions, such as governments and the media, creates fertile ground for the spread of fake news and disinformation. Democratic societies are now heavily challenged by this ground-shaking phenomenon.

Image credit: Orbon Alija

Disinformation can affect democratic decision-making, public opinion, and policy debates in several ways:

1. Disinformation creates narrative distortions and hijacks perceptions.

2. Artificial promotion of fabricated narratives by malicious actors, using sophisticated campaigns involving bots and troll-nets, makes it impossible for citizens to distinguish between what is real and what is fabricated.

3. The question is no longer “why am I seeing this content in my feed?” but “WHO wants me to see this content in my feed?”

Disinformation is widely defined as “the purposeful dissemination of false information intended to mislead or harm” (NED, 2017). It can include authentic material — like a picture or a video — deliberately repurposed with the wrong context or misleading captions to spread false information.

Disinformation campaigns convey messages ranging from distortion of facts and fake news to conspiracy theories, absolute lies, and even hate speech. The objectives can be to manipulate public opinion, sway policy, or spread an ideology.

Disinformation amplifies radical narratives

Disinformation amplifies political polarisation, creating an echo-chamber effect on social media: we keep seeing more of what fits our views and preferences. According to a 2018 MIT study, fake news and false rumours reach more people, penetrate deeper into the social network, and spread much faster than accurate stories. In fact, a false story can reach 1,500 people six times faster than a true story.

Disinformation aggravates social and partisan tensions by demonising political opponents or people who don’t share a certain opinion, denying a balanced and fair debate, and ultimately destroying trust in democratic institutions.

Fake news regularly outperforms accurate stories in every category.

Extremist narratives and conspiracy theories can now freely compete with rational and informed arguments

According to EPC Policy Analyst Paul Butcher, the appeal of disinformation for illiberal politicians is that it is a convenient tool for extremist discourse to compete with and ultimately crowd out rational, informed debate. In a media environment where revenue depends to a great extent on the number of clicks an article can generate, there is demand for ever more dramatic or sensational headlines as news outlets compete for readers. Content that triggers a strong emotional response is prioritised over sensible, fact-based reporting. In this way, the low standards set by fake news cross over into mainstream journalism as well, with negative consequences for the public debate all around.

A new era of widespread disinformation

These trends in the relationship between disinformation and democracy have worsened worryingly in recent years. After a tumultuous year punctuated by Covid-19 and global political unrest, the 2021 Edelman Trust Barometer revealed an epidemic of widespread mistrust of societal institutions and leaders around the world. The politicisation of scientific discourse on topics such as vaccine safety, environmental concerns, and climate change has existed for a number of years, but it clearly intensified in 2020 and 2021, creating an optimal environment for anti-science groups to gain a foothold and propagate marginalised scientific theories.

Social media platforms are not neutral actors

Efforts to fight the spread of disinformation have grown in recent years, mostly through self-regulation by online platforms such as Twitter and Facebook or through government-led initiatives. A number of social media platforms have stepped up their approach to detecting, preventing, and stopping the spread of disinformation, deploying more capabilities and technologies. According to its own security chief, Facebook takes down around one million fake accounts each day.

However, social media platforms are not neutral actors. They can favour certain political agendas over others, they can be lobbied and influenced by activists and third parties, and they can sometimes be conflicted by their own business model, which depends on high user engagement, when fighting disinformation.

Real-time disinformation detection

It should no longer be acceptable to navigate the internet and social media without the appropriate tools to screen for potential disinformation operations at play. Social media platforms, governments, citizens, and businesses should know when certain information is being actively “pumped” to reach a certain level of popularity, grab the attention of the public, and ultimately “influence” perceptions and decisions.

A report by the University of Baltimore and the cybersecurity company CHEQ estimates that fake news and disinformation cost the global economy almost $80 billion annually.

Combatting disinformation campaigns and harmful narratives requires a forensic approach to collecting, processing, and analysing data to provide strong evidence of artificial and malicious online manipulation.
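As an illustration of what such forensic evidence can look like, here is a minimal sketch of one detection signal: flagging messages pushed by many distinct accounts within a short time window. The function name, thresholds, and input format are all hypothetical; real pipelines would add fuzzy near-duplicate matching and far richer behavioural features.

```python
from collections import defaultdict

def flag_coordinated_amplification(posts, min_accounts=3, window_secs=600):
    """Flag messages posted by many distinct accounts in a short burst.

    `posts` is a list of (timestamp_secs, account, text) tuples.
    The thresholds are illustrative, not empirically derived.
    """
    # Group posts by a crudely normalised version of their text.
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[" ".join(text.lower().split())].append((ts, account))

    flagged = []
    for text, hits in by_text.items():
        hits.sort()
        # Slide over this message's posts looking for a dense burst
        # involving enough distinct accounts.
        for i in range(len(hits)):
            accounts = {acc for ts, acc in hits[i:]
                        if ts - hits[i][0] <= window_secs}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

A burst of near-identical posts from several accounts within minutes is of course only one indicator; in practice it would be combined with account-age, posting-cadence, and network-structure signals before anything is called manipulation.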

Social media platforms have increased their knowledge, expertise, and capabilities to address disinformation, detect illegal content, and counter propaganda, using AI-based content filtering and machine learning. However, these methods often fail to proactively detect sophisticated campaigns leveraging networks of accounts, bots, and troll-nets. In-house systems are also vulnerable to AI evasion techniques such as adversarial attacks.
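To see why automated filtering is fragile, consider a hypothetical toy example of an evasion attack: a single visually identical Cyrillic character defeats a naive keyword blocklist. Production classifiers are far more robust than this toy filter, but analogous character- and word-level perturbation attacks exist against them too.

```python
def naive_keyword_filter(text, blocklist=("fake cure",)):
    """Toy content filter: flags text containing a blocked phrase."""
    return any(phrase in text.lower() for phrase in blocklist)

original = "Buy this FAKE CURE today"
# Adversarial perturbation: replace the Latin 'A' with the visually
# near-identical Cyrillic 'а' (U+0430), so the substring match fails.
evasive = original.replace("A", "\u0430")
```

The flagged phrase still reads identically to a human, which is exactly why purely lexical defences lose this race and why behavioural and network-level analysis matters.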

Meanwhile, state-sponsored actors, entities, and groups behind disinformation and influence campaigns have massively improved their performance and their ability to operate undetected for longer periods. The success of such campaigns rests on audience selection, timing, and the scale of message amplification, and they involve serious data science research, significant infrastructure, and state-of-the-art technology.

It’s a never-ending race

Catching up with the latest developments and techniques in the field of disinformation requires combining robust cutting-edge cyber threat intelligence, data science, AI, cyber investigation and forensic techniques. To counter disinformation, artificial online promotion, as well as extremist content, hate-speech, and radicalisation, it is essential to be able to expose the infrastructure and the sponsors of such campaigns.

Democratic societies are now facing a very powerful destructive force and need innovative solutions to detect, investigate, and counter disinformation campaigns.

Article written by Julien Artero with the contribution of Marine Pichon and Pavel Dudko.

Julien Artero is co-founder of ZeNPulsar, the first European Cyber Forensics Solution dedicated to countering disinformation and advancing online integrity.


Julien Artero

Entrepreneur; co-founded Kalita Partners, a leading forensic investigation firm, and ZenPulsar, a tech start-up dedicated to countering online disinformation.