
AI Deepfakes During War: Why Meta’s Oversight Board Wants the Company to Rethink Its Approach

Artificial intelligence is rapidly changing how information spreads in wartime. During the recent conflict involving Israel, Iran, and the United States, a second war has been raging in cyberspace alongside the fighting on the ground.

Social media was flooded with videos of missile strikes and destruction that attracted millions of views. Many of these clips were later found to be AI-generated deepfakes.

This surge in AI-generated videos has raised serious concerns among researchers, governments, and technology companies. Meta’s Oversight Board recently recommended that the company revise its policy on AI-generated media, arguing that the current rules are too weak to handle misinformation in wartime.


The conflict between these countries is being fought not only with guns and bombs but also with information and AI algorithms.


What Are AI Deepfakes?

Deepfakes are videos, images, or audio clips created with artificial intelligence to make events appear real even though they never happened.

AI models can now generate realistic scenes showing explosions, military operations or speeches by political leaders. In some cases, synthetic videos can even replicate a person’s voice or facial expressions.

The technology itself is not inherently harmful. It is used in filmmaking, entertainment and digital art. However, when used to spread misinformation, deepfakes can distort public perception of events.

During wartime, the stakes are especially high because misleading content can influence public opinion, diplomatic decisions and even military responses.


How the Iran–Israel Conflict Triggered a Wave of Deepfakes

The recent escalation between Israel and Iran has produced a massive amount of visual content online.

Many users shared videos claiming to show missile strikes, destroyed cities or downed fighter jets. Some clips were shared by individuals seeking attention, while others were spread as part of propaganda campaigns.

Investigations by fact-checking groups found that several viral videos were created using generative AI or taken from unrelated sources such as video games or older conflicts.

In some cases, AI-generated clips were designed to exaggerate military victories or create panic among audiences.

These videos spread quickly because they appear convincing at first glance and are often shared before verification takes place.


Why Meta’s Oversight Board Is Concerned

Meta’s Oversight Board acts as an independent body that reviews the company’s moderation decisions and provides recommendations.

The board recently warned that Meta’s current system for identifying and labeling AI-generated content is not sufficient to deal with the rapid spread of deepfakes during crises.

According to the board, the company relies too heavily on users voluntarily disclosing when content is created using AI.

In reality, most people sharing misinformation do not label it as artificial. As a result, deepfakes can circulate widely before moderators detect them.

The board has urged Meta to introduce stronger policies that make it easier for users to recognize synthetic media.


The Problem With Viral War Content

Social media algorithms reward dramatic and emotionally charged content.

A shocking video showing explosions or missile attacks is more likely to go viral than a careful explanation of events. This creates a powerful incentive for users to share sensational content—even if its authenticity is uncertain.

Some creators intentionally produce fake war videos to attract views and followers. Others share misleading clips without realizing they are artificial.

Once misinformation spreads widely, correcting it becomes extremely difficult. Even after a video is debunked, many viewers continue believing it.

This phenomenon is often referred to as the “persistence of misinformation.”


Deepfakes as a Tool of Digital Warfare

Deepfakes are increasingly becoming part of psychological and information warfare.

State-linked groups, political activists and propaganda networks may use AI-generated media to shape narratives about conflicts.

During the Iran–Israel confrontation, analysts found networks of accounts spreading manipulated videos designed to portray one side as stronger or more successful than the other.

Such campaigns aim to influence both domestic and international audiences.

The result is an online environment where it becomes harder for ordinary people to distinguish between real footage and fabricated events.


Why Detecting Deepfakes Is Difficult

Identifying AI-generated content is technically challenging.

Advanced deepfake systems can produce highly realistic visuals, including accurate lighting, shadows and motion. In many cases, only specialized software or forensic analysis can reveal that a video is synthetic.

Moreover, deepfakes often spread faster than verification efforts.

By the time fact-checkers investigate a viral video, millions of users may already have watched and shared it.

This speed advantage makes misinformation extremely difficult to control.
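One technique fact-checkers do rely on is comparing a suspect frame against known footage using perceptual hashes, which survive re-encoding and mild compression. The sketch below implements a minimal difference hash (dHash) in pure Python over a toy grayscale grid; it is an illustration of the fingerprinting idea, not the forensic tooling Meta or fact-checking groups actually deploy.

```python
# Minimal difference-hash (dHash) sketch for comparing frames.
# Illustrative only: real forensic pipelines use far more
# sophisticated analysis than this fingerprinting idea.

def dhash(gray):
    """Compute a difference hash from a rows x cols grayscale grid.

    Each bit records whether a pixel is brighter than its right
    neighbour, so the hash tolerates small brightness noise.
    """
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same source frame."""
    return bin(a ^ b).count("1")

# Toy 4x4 "frames": the second is the first with slight brightness noise,
# standing in for a re-encoded copy of the same clip.
original = [[10, 20, 30, 40],
            [40, 30, 20, 10],
            [15, 25, 35, 45],
            [45, 35, 25, 15]]
reencoded = [[11, 21, 29, 41],
             [41, 29, 21, 11],
             [16, 24, 36, 44],
             [44, 36, 24, 16]]

print(hamming(dhash(original), dhash(reencoded)))  # 0: same fingerprint
```

Because the hash encodes only brightness gradients, a re-uploaded or recompressed copy of known footage lands at a small Hamming distance from the original, while unrelated frames do not.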


What the Oversight Board Wants Meta to Do

The Oversight Board has suggested several steps that Meta could take to address the problem.

One recommendation is to create a dedicated policy specifically for AI-generated content. This would make it easier for moderators to handle deepfakes and misleading synthetic media.

Another suggestion involves improving AI detection tools so that platforms can automatically identify manipulated videos.

The board also wants clearer labeling systems that inform users when content has been created or altered by artificial intelligence.
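Labeling proposals in the industry generally attach provenance metadata to a piece of media and render a user-facing badge from it, in the spirit of C2PA-style "content credentials." The sketch below shows what such a record might look like; the field names are hypothetical and are not the actual C2PA manifest schema or Meta's internal format.

```python
import json

# Hypothetical provenance record, loosely inspired by C2PA-style
# content credentials. All field names here are illustrative.
label = {
    "asset_id": "video-12345",                    # hypothetical ID
    "ai_generated": True,
    "generator": "unknown-text-to-video-model",   # assumed value
    "disclosure": "Created or altered with AI",   # user-facing text
    "signed_by": "platform-provenance-service",   # who vouches for it
}

def render_badge(record):
    """String a platform might display next to labeled media."""
    if record.get("ai_generated"):
        return "AI info: " + record["disclosure"]
    return ""

payload = json.dumps(label, indent=2)  # what would travel with the asset
print(render_badge(label))
```

The design point is that the disclosure travels with the asset as structured data, so any surface that re-displays the video can regenerate the badge instead of depending on the uploader to volunteer a label.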

Experts believe these measures could help slow the spread of misleading war footage.


The Wider Tech Industry Response

Meta is not the only technology company facing this challenge.

Other platforms have also begun tightening rules around AI-generated war content.

For example, some social media networks now require users to label AI-generated videos. Accounts that repeatedly post unlabeled synthetic war footage may face penalties or loss of monetization privileges.

These steps show that the technology industry is increasingly aware of the dangers posed by AI misinformation during conflicts.

However, the effectiveness of these policies remains uncertain.


The Role of Media Literacy

While technology solutions are important, experts say public awareness is equally critical.

Users must learn to question dramatic videos and verify information before sharing it.

Simple checks—such as examining the source, searching for original footage and consulting reliable news outlets—can help prevent the spread of misinformation.
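Part of this verification work can be automated. One simple check, sketched below, compares a clip's cryptographic fingerprint against a list of videos that fact-checkers have already debunked; the fingerprints here are hypothetical, and an exact-hash match only catches byte-identical re-uploads, not re-encoded copies.

```python
import hashlib

# Hypothetical database of fingerprints for clips that fact-checkers
# have already flagged as fake. Real systems would sync such lists
# from a shared source; these entries are invented for illustration.
DEBUNKED = {
    hashlib.sha256(b"fake missile strike clip").hexdigest(),
}

def already_debunked(clip_bytes):
    """True if this exact file has previously been flagged as fake."""
    return hashlib.sha256(clip_bytes).hexdigest() in DEBUNKED

print(already_debunked(b"fake missile strike clip"))  # True
print(already_debunked(b"original news footage"))     # False
```

Because any re-encoding changes the SHA-256 digest, exact-hash lookup is only a first filter; catching modified copies requires perceptual hashing or manual review.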

Education about digital media literacy may become one of the most powerful tools against deepfake propaganda.


A New Era of Information Warfare

The rise of AI-generated misinformation signals a new phase in the evolution of warfare.

Traditional propaganda relied on manipulated photographs, edited audio recordings or misleading narratives. Today, artificial intelligence allows actors to create highly realistic scenes that never happened.

As technology continues to improve, the challenge of distinguishing truth from fiction may become even harder.

This reality has forced technology companies, governments and researchers to rethink how information should be monitored and verified during crises.


Why This Debate Matters

The debate around AI deepfakes goes beyond social media moderation.

Accurate information is essential during conflicts because it influences diplomatic decisions, humanitarian responses and public understanding.

If deepfakes dominate online discourse, people may lose trust in genuine reporting and verified evidence.

The warnings from Meta’s Oversight Board highlight an urgent need for stronger safeguards in the digital information ecosystem.

As wars increasingly unfold online as well as on the battlefield, protecting the integrity of information has become a global priority.

