How terrorists use Facebook for information jihad


In the rapidly evolving landscape of social media, Meta Platforms Inc.—previously known as Facebook—stands as a dominant force. While the platform has undeniably transformed global communication, it faces formidable challenges, especially in the domain of content moderation. Recent revelations indicate that Meta's existing moderation protocols, executed not by in-house staff but by contracted personnel, may inadvertently serve as a breeding ground for extremist activities. This includes a novel and deeply concerning form of terrorism, colloquially termed "information jihad."

The dark underbelly of Meta’s content moderation: Exploitation by extremist groups

One of the most alarming issues to come to light recently is the exploitation of Meta's content moderation system by militant and terrorist organizations. These groups abuse the platform's reporting mechanisms to have posts, reports, and news articles unfavorable to them flagged and blocked. This activity has largely gone unnoticed due to Meta's lack of effective oversight, raising critical questions about the platform's vulnerability to extremist manipulation.

A closer look at the exploitation

According to a Newsbusters report, Meta has adopted a less aggressive approach to content moderation by allowing users to "opt out" of its fact-checking program. This move has already faced pushback and could be exploited by extremist groups to spread misinformation.

Additionally, a Benzinga article highlighted that Meta has been accused of permitting pressure groups to dictate its policies in exchange for advertising revenue. This raises concerns that extremist organizations could similarly influence Meta’s content moderation policies to their advantage.

Victim testimonies

While specific victim testimonies are often scarce due to the sensitive nature of the issue, mounting anecdotal evidence points to a troubling trend. Activists, journalists, and even anti-terrorism experts have found their posts on extremist activities flagged or removed, effectively silencing crucial voices.

For example, the Editor of Blitz, an internationally acclaimed counterterrorism specialist, had his posts about terrorism held for moderation, and his Facebook profile was subsequently restricted. This incident not only stifles free speech but also hampers efforts to expose the activities of militant and terrorist organizations.

Screenshot of blocked account

Upon checking the account's history, we found the following:

Restriction History of Blitz Editor’s Facebook account

Other instances include a Breitbart report that highlighted a mosque losing its government grant over extremist sermons and a SiliconANGLE article discussing the role of social media in spreading disinformation campaigns. These cases underscore the urgent need for effective content moderation to prevent the manipulation of platforms like Meta for extremist agendas.

The security implications

The unchecked manipulation of Meta's content moderation system poses a significant security risk. It not only silences voices that seek to expose extremist activities but also allows these groups to operate with relative impunity, with far-reaching implications for national and global security.

The issue of "information jihad" and the exploitation of Meta's content moderation system is a grave concern that requires immediate attention. The lack of effective oversight and the potential for manipulation pose significant ethical and security risks that Meta must address to maintain the integrity of its platform.
