
Propaganda and Information Warfare in the Social Media Age
We are witnessing a world in which a tweet can start a revolution, a viral video can end a political career, and a bot army can sway an election, all before lunch. Information warfare is no longer a cloak-and-dagger Cold War affair confined to secret briefings in intelligence agencies. It has gone viral. In the era of social media, propaganda has gone mainstream, militarised, and technologically enhanced, with every phone a potential battlefield.
The Old Art of Propaganda, Reborn
Disinformation is nothing new in warfare: the earliest recorded instance of state-sponsored deceit dates to 1274 BC, when Hittite troops deliberately spread false information during the Battle of Qadesh. But information warfare has never been faster. It used to be the domain of printing presses, radio stations, and big budgets; now it can be waged with just a mobile phone, an open-source artificial intelligence generator, and a social media account. It is not only that the battle has moved from the ground to news feeds; the weapons have changed from bullets and bombs to stories and images.
Bots, Botnets, and the Architecture of Manipulation
Today’s information warfare machinery is powerful and efficient. An investigation into the Russia-Ukraine conflict revealed that from January 2024 through April 2025, over 3,600 Russian botnet accounts on Telegram disseminated more than 316,000 pro-Russian messages in Ukrainian-controlled territories, carefully tailored for relevance to each specific area. In 2022, pro-Russian tweets reached approximately 14.4 million people, and at least 20 percent of the accounts sharing these posts were probably bots. This is state-sponsored, industrial-scale information warfare, not the work of basement tinkerers. Read more about bot networks at the Oxford Internet Institute’s Computational Propaganda Project.
Deepfakes: When Seeing Is No Longer Believing
Perhaps nothing demonstrates the acceleration of information warfare as vividly as deepfakes and synthetic media. According to reports, the number of deepfakes increased by a massive 3,000% in 2023, which reflects not only technological advancement but also widespread malicious use of the technology. Before India’s 2024 general elections, AI-generated videos of celebrities denouncing Prime Minister Modi circulated in viral campaigns. In Slovakia, fake recordings of a political candidate discussing election manipulation appeared just days before the polls. Meanwhile, thousands of American citizens in New Hampshire received phone calls with an AI-cloned voice of President Biden urging them not to go out and vote. These are all very real attempts to sabotage democracy through synthetic means, and they are carefully tracked at brennancenter.org.
Algorithms: The Unwitting Accomplices
Social media didn’t invent information warfare, but its algorithms are its biggest enablers. TikTok, for example, has been shown to serve as a channel for audio-visual propaganda through its recommendation algorithm, delivering up to 80 billion views for coordinated campaigns. The problem is systemic: these platforms are built to maximise engagement, and content that provokes emotion, anger, or fear is highly engaging. So the most manipulative material rises to the surface; the algorithms do not merely host misinformation, they amplify it. NYU’s Center for Social Media and Politics researches how platform design shapes political behaviour.
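The amplification dynamic described above can be made concrete with a toy sketch. This is not any platform’s real ranking code; the weights and reaction types are illustrative assumptions, chosen only to show how a purely engagement-driven ranker surfaces outrage over accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int

def engagement_score(post: Post) -> float:
    # Shares and anger-style reactions are weighted most heavily,
    # mirroring how outrage tends to drive re-sharing.
    return post.likes * 1.0 + post.shares * 3.0 + post.angry_reactions * 5.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing here checks accuracy; only engagement decides the order.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm policy explainer", likes=120, shares=5, angry_reactions=2),
    Post("Outrage-bait rumour", likes=40, shares=60, angry_reactions=80),
])
print(feed[0].text)  # prints "Outrage-bait rumour"
```

Even though the explainer has three times the likes, the rumour wins the feed slot because the scoring function never asks whether a post is true, only how strongly people reacted to it.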
Fighting Back: Media Literacy, Policy, and Platform Accountability
A response to information war must not be limited to technical fixes; it needs to be a human response. Teaching citizens media literacy skills, so that they can detect false information, recognise emotional manipulation, and understand the risks of sharing, is essential. Governments are starting to respond too, with the European Commission citing social networks like X for violations of European anti-disinformation policies, and multiple U.S. states considering laws against political deepfakes. Platform transparency measures, such as labelling of AI-generated content, bot detection, and content provenance, should not be optional; they should be standard. The EUvsDisinfo project is one of the largest public databases of foreign disinformation campaigns against Europe.
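To give a flavour of the bot detection the paragraph above argues should be standard, here is a minimal heuristic sketch. Real integrity systems use far richer behavioural and network signals; the features, thresholds, and weights below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    account_age_days: int

def bot_score(a: Account) -> int:
    """Return a 0-100 suspicion score from simple behavioural signals."""
    score = 0
    if a.posts_per_day > 50:                     # inhuman posting volume
        score += 40
    if a.following > 10 * max(a.followers, 1):   # follow-spam pattern
        score += 30
    if a.account_age_days < 30:                  # freshly created account
        score += 30
    return score

suspect = Account(posts_per_day=200, followers=3, following=800, account_age_days=7)
print(bot_score(suspect))  # prints 100 -> flag for human review
```

A heuristic like this is cheap to run at scale, which is the point: even crude signals catch the industrial botnets described earlier, whose accounts post at volumes no human matches.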
Conclusion: The War for Reality
Today’s wars are fought as much with words as with rockets. Information warfare is the new geopolitical battleground of the social media era: inexpensive to wage, hard to defend against, and corrosive to public opinion, political institutions, and social cohesion. The key to winning this battle is not just better AI detection technology or platform policies, but better information practices among citizens. In an age where even the truth is a weapon, critical thinking has become nothing less than a national imperative.
