In recent years, disinformation has become a significant policy issue, particularly since the discovery of attempts to interfere in the 2016 US Presidential election by the St. Petersburg-based Internet Research Agency. Various studies have highlighted the role of disinformation in shaping public understanding and political decision-making across domains such as elections, public health, climate change, counterterrorism, and warfare. Open-source intelligence (OSINT) has emerged as a crucial tool for uncovering and exposing disinformation campaigns.
This article aims to explore the interplay between OSINT and disinformation, highlighting how they drive innovations in each other. While disinformation involves intentionally misleading messages, OSINT refers to the methods and techniques used to gather intelligence from publicly available sources. Despite their prominence in the contemporary information environment, the relationship between OSINT and disinformation has not received much attention. By examining how these two phenomena interact, we can gain insights into the evolving tactics and strategies employed by those involved in disinformation campaigns.
The Evolution of Disinformation and OSINT
Disinformation has a long history, with concerns about misleading public communications dating back centuries. However, the current information environment allows for the rapid transmission and reception of highly persuasive yet misleading messages. As a result, disinformation has become a crucial component of information operations by both state and non-state actors.
On the other hand, OSINT has gained prominence as a means to uncover and expose disinformation. It involves the collection and analysis of intelligence from publicly available sources such as social media, news articles, and websites. OSINT analysts constantly develop new methodologies to identify and attribute sources of misleading information, while those spreading disinformation strive to evade detection and maintain their influence.
This ongoing “arms race” between OSINT analysts and disinformation authors drives constant adaptation and innovation on both sides: disinformation campaigns evolve to exploit new vulnerabilities and opportunities for malign influence, while OSINT analysts refine their methods to counteract these efforts.
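One family of methods analysts use to attribute coordinated campaigns is near-duplicate text detection: accounts amplifying the same talking point often post lightly reworded copies of a seed message. The sketch below is purely illustrative, not any specific tool's algorithm; the sample posts, the shingle size, and the similarity threshold are all hypothetical choices for demonstration.

```python
import re

def shingles(text, k=3):
    """Break a post into overlapping k-word shingles (order-sensitive chunks)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if len(words) < k:
        return {" ".join(words)}
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def coordinated_pairs(posts, threshold=0.5):
    """Return index pairs of posts that look like near-duplicates."""
    sigs = [shingles(p) for p in posts]
    return [(i, j)
            for i in range(len(posts))
            for j in range(i + 1, len(posts))
            if jaccard(sigs[i], sigs[j]) >= threshold]

# Hypothetical sample: two reworded copies of one message plus an unrelated post.
posts = [
    "NATO warships flee Odesa in panic, sources say",
    "Sources say NATO warships flee Odesa in panic",
    "Weather calm over the Black Sea today",
]
```

A real pipeline would pair a check like this with account metadata (creation dates, posting cadence) before drawing any conclusion about coordination; textual similarity alone is only a lead.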
Case Study: AIS Spoofing
One example of the interplay between disinformation and OSINT is AIS spoofing. The Automatic Identification System (AIS) is a radio-based system used to track ships and prevent collisions. Open-source marine traffic aggregators like MarineTraffic.com rely on AIS transponder messages to create real-time maps of ship movements. However, AIS signals can be spoofed, resulting in incorrect or missing data.
In June 2021, two NATO warships were recorded on MarineTraffic.com leaving Odesa and sailing to Crimea, near the Russian naval base of Sevastopol. However, webcams from Odesa showed that the ships never left the port, indicating that false AIS tracks had been created to deceive OSINT users. This incident highlighted the vulnerabilities of open-source tracking websites to disinformation campaigns.
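A simple kinematic plausibility check can help flag spoofed tracks like these: if consecutive AIS position reports imply a speed no surface vessel can sustain, the track deserves scrutiny. The sketch below is a minimal illustration, not a description of how the 2021 incident was actually detected (that was done by cross-checking webcams); the coordinates, timestamps, and 40-knot ceiling are hypothetical.

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two positions, in nautical miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

def implausible_legs(track, max_knots=40.0):
    """Flag consecutive AIS reports whose implied speed exceeds max_knots.

    track: list of (epoch_seconds, lat, lon) tuples, in time order.
    Returns (start_time, end_time, implied_speed_knots) for each suspect leg.
    """
    flagged = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        hours = (t1 - t0) / 3600.0
        if hours <= 0:
            continue  # duplicate or out-of-order timestamps; skip
        speed = haversine_nm(la0, lo0, la1, lo1) / hours
        if speed > max_knots:
            flagged.append((t0, t1, round(speed, 1)))
    return flagged
```

A one-degree jump in latitude (roughly 60 nautical miles) reported over a single hour implies about 60 knots, well beyond warship speeds, and would be flagged; the same jump over two hours implies about 30 knots and would pass. Checks like this catch only crude spoofing, which is why corroboration from independent sensors such as webcams remains essential.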
The broader lesson here is that influential sources of disinformation are not confined to social media platforms. The OSINT community must expand its focus to detect potential vulnerabilities and exploits beyond traditional channels. By doing so, they can better mitigate the impact of disinformation campaigns.
Case Study: Artificial Intelligence and Deep Fakes
Another emerging area where disinformation and OSINT intersect is the use of artificial intelligence (AI) to create deepfakes. Deepfakes are digitally manipulated videos or images that appear authentic but are fabricated: AI and machine-learning models trained on publicly available source material are used to forge an individual's likeness convincingly.
During the conflict between Russia and Ukraine, Ukrainian officials issued warnings about the possibility of adversaries creating a deepfake video of President Zelensky announcing his surrender. Shortly after, a video circulated on social media platforms showing a manipulated version of Zelensky speaking directly to the camera. Although the manipulation was relatively unsophisticated and easy to identify, it marked the first weaponised use of a deepfake during an armed conflict.
Deepfakes pose a significant challenge as they can deceive audiences and spread disinformation at a rapid pace. As AI technologies continue to advance, it becomes increasingly difficult to discern between real and fake content. Malign state actors can use AI-assisted tools to create a high volume of potent disinformation, eroding public trust in news media and exacerbating social tensions.
Conclusion
The relationship between OSINT and disinformation is a dynamic and evolving one. Disinformation campaigns constantly adapt and innovate to exploit vulnerabilities in the information environment, while OSINT analysts refine their techniques to uncover and expose these campaigns. Understanding this interplay is crucial for developing effective strategies to mitigate the impact of disinformation.
In the face of rapidly advancing AI technologies, the OSINT community must keep its tools and methodologies current. By re-equipping itself against future disinformation threats, it can help counter the spread of misleading information and maintain the integrity of public discourse.
With important elections looming, it is urgent to consider how OSINT methods can be further developed and deployed against the evolving tactics of those spreading disinformation. Doing so will help preserve the credibility of democratic processes and support informed decision-making by the public.