The rising risk of misinformation and disinformation

The Global Risks Report | Article | January 25, 2024

Misinformation and disinformation pose a growing challenge for policymakers, companies, social stability and democracy itself.
In 2018, “misinformation” was selected as the word of the year by Dictionary.com. Since then, misinformation and disinformation have risen quickly through the ranks of the global risk agenda. They have been propelled by dwindling trust in traditional media institutions, the growing prevalence of information wars and, more recently, advances in artificial intelligence (AI).

While often used interchangeably, misinformation and disinformation have distinct definitions and implications. According to the U.N., misinformation is the accidental spread of inaccurate information. Disinformation, on the other hand, is intentional: material that is spread to deceive, with the potential to cause serious harm. 

The World Economic Forum’s Global Risks Report 2024 defines misinformation and disinformation as false information (deliberate or otherwise) widely spread through media networks. The report also notes that information disseminated by public figures, media organizations and states can shift public opinion and erode trust in authority.

These risks are not entirely new. Indeed, the Global Risks Report has been charting the rise of threats from digital misinformation for over a decade. The 2013 report highlighted how hyperconnectivity could allow “digital wildfires” to wreak havoc in the real world. Since then, these risks have become fully fledged real-world crises, and in the 2023 report respondents ranked misinformation and disinformation 11th among long-term risks.

A growing role for AI  

The topic of misinformation and disinformation has implications across politics and business. AI deepfakes shared on social media have the potential to cause both short-term investor panic and long-term reputational damage. In May 2023, an AI-generated photo of an explosion at the Pentagon rattled investors and sent stocks tumbling to a session low.  
Furthermore, misinformation will be exacerbated by the widespread use of chatbots powered by generative AI. Research from Vectara, a software start-up, indicates that when summarizing information, chatbots hallucinate (that is, provide factually inaccurate information) between 3 percent and 27 percent of the time on average. Tackling this kind of misinformation presents a novel challenge for organizations; ChatGPT alone has over 100 million weekly users. The World Economic Forum warns that these tools could accelerate the erosion of social cohesion and destabilize trust in information and political processes.

The growing threat of disinformation and misinformation makes it more important than ever to have a risk mitigation strategy in place. Yet previous surveys have revealed a lack of preparedness: just 25 percent of respondents polled in the Global Risks Report 2022 had effective or established risk mitigation in place for cross-border cyberattacks and misinformation.

Growing distrust in public institutions  

This growing recognition of misinformation is linked with declining trust in public institutions, creating a volatile backdrop for businesses. A report by the U.N. Department of Economic and Social Affairs shows developed countries have experienced a marked decrease in institutional trust. Trust in national government in the U.S., for example, has suffered a decline from 73 percent in 1958 to just 24 percent in 2021. Since the 1970s, Western Europe has seen a similar steady decline in public trust in national government.

And it’s not just governments that are being called into question. The same report by the U.N. Department of Economic and Social Affairs shows trust in financial institutions is also down an average of 9 percentage points, from 55 percent to 46 percent over the same period. A 2022 report from research and consulting firm Forrester found only 2 percent of financial brands were rated as “strong” by U.S. consumers, and 57 percent were considered “weak.”  

An evolving social media landscape 

Companies are also operating in a new social media platform landscape. Conflicts that begin on social media platforms are increasingly spilling into real life: the everyday technologies that connect us can be used for nefarious ends, from rioting in Ireland to violence in Myanmar and India. These threats can only really be tackled by regulation and education. The significance of this challenge for policymakers, democracy and social stability lies in how ideas are presented at scale.

For all the concern at the regulatory level, last year’s Global Risks Report predicts regulation and education “will likely fail to keep pace” with more “widespread usage of automation and machine-learning technologies, from bots that imitate human-written text to deepfakes of politicians.”  

Ultimately, while platforms say they are working to fight the spread of misinformation and disinformation, the threat of users spreading untruths on social platforms is unlikely to go away. Researchers have identified a key driver of the spread of fake news: social platforms reward users who share information more often, and those who share sensational content that generates the most reactions are rewarded most of all.

Greater proactivity needed from companies 

2024 will be a watershed year for elections. Accordingly, it could also be a record-breaking year for misinformation and disinformation. Some 40 countries representing 3.2 billion people will go to the polls, including the U.S., Indonesia, the UK and Taiwan, creating more potential uncertainty and risk for companies everywhere.  


With trust in traditional institutions at an all-time low and AI capabilities at an all-time high, the next 12 months could be harder to predict than ever before. Misinformation and disinformation will likely interact with other short-term risks, exacerbating and compounding other crises. As these tools get smarter, the implications for companies will only grow. Leaders will need to accurately evaluate how misinformation and disinformation can amplify other risks, and make firm plans to combat them. In short, companies will need to be more proactive in anticipating and mitigating the risks posed by disinformation and misinformation.