A shadowban on Twitter/X is a covert algorithmic restriction that limits an account’s visibility — suppressing content from search results, timelines, and reply threads — without notifying the affected user. Research across millions of accounts confirms the practice is real, statistically non-random, and disproportionately applied to certain user profiles. This report compiles all available quantitative data on shadowban prevalence, impact, enforcement scale, and demographic patterns.
What Is a Shadowban?
Shadowbanning (also called “ghost banning” or “stealth banning”) refers to a range of techniques that artificially limit the reach of targeted users or posts without suspending the account. The user continues to post and interact normally, but their content becomes largely invisible to non-followers and may disappear from search results, hashtag pages, and reply threads.
Twitter/X officially denies using shadowbans as a policy, instead attributing visibility fluctuations to ranking algorithms. However, a 2018 blog post from Twitter acknowledged using machine learning and human review to determine “how Tweets are organized and presented in communal places like conversations and searches” — effectively describing the same mechanism.
Types of Shadowban

| Type | Description | Avg. Engagement Drop |
|------|-------------|----------------------|
| Search Suggestion Ban | Account doesn’t appear in autocomplete when users type a username | ~15% |
| Search Ban | Tweets and profile hidden from all search results | ~50% |
| Reply Deboosting | Replies collapsed under “Show more replies” and deprioritized | ~30% |
| Ghost Ban | Replies invisible to all except the poster; profile nearly unreachable | ~90% |
| Algorithmic Suppression | Subtle feed demotion; impressions reduced from ~10% to ~1% of followers without full hiding | 70–90% (impressions) |
Ghost bans carry the most severe penalties — 100% of ghost-banned users also receive a search ban, and 97% also receive a typeahead (suggestion) ban, indicating a graduated enforcement response.
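This graduated structure suggests a natural way to combine the signals that third-party shadowban checkers collect. Below is a minimal sketch in Python of how per-account probe results might map to a single ban-type classification; the field and function names are hypothetical, and real checkers differ in how they probe each signal.

```python
from dataclasses import dataclass

@dataclass
class BanProbe:
    """Results of per-account visibility probes, as typically gathered by
    third-party shadowban checkers. All field names are hypothetical."""
    in_typeahead: bool       # username appears in search autocomplete
    in_search: bool          # tweets/profile appear in search results
    replies_visible: bool    # replies visible to other (non-poster) users
    replies_collapsed: bool  # replies hidden behind "Show more replies"

def classify_ban(probe: BanProbe) -> str:
    """Map probe results to the most severe ban type, reflecting the
    graduated hierarchy reported above: ghost bans imply search bans,
    which in turn almost always imply typeahead bans."""
    if not probe.replies_visible:
        return "ghost ban"
    if not probe.in_search:
        return "search ban"
    if probe.replies_collapsed:
        return "reply deboosting"
    if not probe.in_typeahead:
        return "search suggestion ban"
    return "no shadowban detected"

print(classify_ban(BanProbe(False, False, True, True)))  # -> "search ban"
```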
Shadowban Prevalence: Key Research Findings
Large-Scale Academic Audit (2.5M Profiles — Le Merrer et al., 2021)
The most comprehensive academic study on Twitter shadowbanning crawled over 2.5 million user profiles in April 2020 and found:

| Population | Total Profiles Analyzed | Shadowbanned Profiles | % Affected |
|------------|-------------------------|-----------------------|------------|
| Random users | 424,489 | 9,967 | 2.34% |
| Bot accounts | 1,179,949 | 23,358 | 1.97% |
| Famous accounts | 908,131 | 6,805 | 0.74% |
| French political deputies | 348,640 | 1,746 | 0.50% |
Key takeaways from this study:
- Random users were affected 4.7x more than elected politicians
- The “random bug” hypothesis was statistically rejected — shadowbans are not uniformly distributed and are extremely unlikely to be accidental
- Banned users had a neighbor contamination rate ~8–16x higher than non-banned users, suggesting shadowbans cluster in interaction communities (the “epidemic topology” hypothesis)
- A Random Forest ML model predicted shadowban status with 80.6% accuracy based on public profile features alone (a sketch of this setup follows the list)
- Top predictive features: media_count, friends_count, statuses_count, and favorite_count
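For illustration, here is a minimal sketch of the kind of classifier the study describes: a Random Forest trained on public profile counters to predict shadowban status. The feature names follow those reported above; the file name, column layout, and hyperparameters are assumptions, not details from the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical CSV: one row per profile, public counters plus a 0/1
# shadowban label obtained from a separate detection probe.
df = pd.read_csv("profiles_labeled.csv")

FEATURES = ["media_count", "friends_count", "statuses_count", "favorite_count"]
X, y = df[FEATURES], df["shadowbanned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
# Feature importances show which public counters drive predictions,
# analogous to the top features reported by the study.
print(dict(zip(FEATURES, clf.feature_importances_.round(3))))
```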
Ban Type Breakdown (Same Dataset)
From 2.5M profiles, the following counts of each ban type were recorded:
- Typeahead (suggestion) bans: 41,071 profiles
- Search bans: 23,219 profiles
- Ghost bans: 3,681 profiles (most severe and rarest)
Multi-Year U.S. Audit (25,000+ Accounts — NUS/UPenn, 2020–2021)
A large-scale audit of over 25,000 U.S.-based accounts drawn from geotagged Twitter data found that 6.2% of the 41,092 accounts still in existence had been shadowbanned at least once during the study period, which comprised six shadowban audit runs between June 2020 and June 2021.
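As a minimal illustration of how an “at least once” figure falls out of repeated audit waves, the sketch below aggregates a hypothetical long-format audit log; all names and values are invented.

```python
import pandas as pd

# Hypothetical audit log: one row per account per audit run.
audits = pd.DataFrame({
    "account_id":   [1, 1, 2, 2, 3, 3],
    "run":          [1, 2, 1, 2, 1, 2],
    "shadowbanned": [False, True, False, False, True, True],
})

# An account counts as shadowbanned if any of its runs flagged it.
ever_banned = audits.groupby("account_id")["shadowbanned"].any()
print(f"{ever_banned.mean():.1%} of accounts shadowbanned at least once")
```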
Additional findings:
- Users exhibiting bot-like behavior, high tweet frequency, and uncivil posts were significantly more likely to be shadowbanned
- Users with more retweets were less likely to receive suggestion bans, while users with more likes were more likely to receive them
- Hashtags triggering bans in 2020 (e.g., #Pride, #BLM) were not banned by 2021, demonstrating temporal instability in enforcement rules
- Verified accounts posted 12% (roughly 26,000) of the misleading tweets in the Community Notes dataset yet were less likely to be shadowbanned, creating a structural loophole for misinformation
Industry-Level Estimate (2.5M Users)
A separate analysis of over 2.5 million users found approximately 2.5% of accounts had been shadowbanned at some point. This aligns closely with the random-user prevalence (2.34%) reported by the French academic study.
X Platform Enforcement Data (H2 2024)
X’s own transparency reports offer a partial view of enforcement actions, though official data does not specifically distinguish “shadowbans” (visibility restrictions) from full account suspensions.
Overall Enforcement (July–December 2024)
- 4+ million account suspensions globally in H2 2024
- 10.1 million posts removed or labeled in H2 2024
Enforcement by Violation Category (H2 2024)

| Violation Type | Account Suspensions | Posts Removed/Labeled |
|----------------|---------------------|-----------------------|
| Abuse, harassment & hateful content | 940,000+ | 1.49 million |
| Child safety | 1.8 million | 2,000+ |
| Suicide/self-harm | 1,588 | 64,732 |
| Personal privacy (doxxing) | N/A | 32,543 |
A striking data point: of the 940,000+ abuse-related suspensions, only 340 (under 0.04%) were flagged automatically by machine learning systems; the overwhelming majority were triggered manually. Independent researchers have noted that this is hard to square with X’s own transparency reports, which state that automated tools are in use.
Engagement & Visibility Impact
Shadowban-Specific Impact
- Shadowbanned accounts typically experience a 70–90% drop in impressions until restrictions are lifted (a detection heuristic based on this figure follows the list)
- From a normal range of 500–2,000 impressions per tweet, shadowbanned accounts often see impressions drop to 30–100 or fewer
- Accounts flagged with “high fake follower ratios” had an 82% correlation with shadowban status
- Light reply deboosting typically lifts in 48–96 hours; search bans can persist 3–14 days; repeated flags can suppress accounts for weeks
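These figures suggest a simple self-diagnostic. The sketch below flags a suspected shadowban when an account’s recent median impressions fall 70% or more below its trailing baseline, mirroring the 70–90% drops reported above; the window sizes and threshold are illustrative assumptions, not platform-documented values.

```python
from statistics import median

def suspected_shadowban(impressions: list[int],
                        baseline_days: int = 28,
                        recent_days: int = 3,
                        drop_threshold: float = 0.70) -> bool:
    """Heuristic check: flag a likely shadowban when recent median
    impressions fall 70%+ below the trailing baseline. All thresholds
    are illustrative, not values published by X."""
    if len(impressions) < baseline_days + recent_days:
        return False  # not enough history to form a baseline
    baseline = median(impressions[-(baseline_days + recent_days):-recent_days])
    recent = median(impressions[-recent_days:])
    return baseline > 0 and recent <= (1 - drop_threshold) * baseline

# Example: a steady ~1,000-impression account dropping to ~60 per tweet.
history = [1000] * 28 + [70, 55, 60]
print(suspected_shadowban(history))  # True
```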
Platform-Wide Engagement Decline (2024)
Separate from shadowban-specific data, X’s overall engagement declined sharply in 2024, complicating isolation of shadowban effects:
- Overall user engagement rate on X dropped by ~40% year-over-year in 2024
- Average post likes fell 16% YoY to 31.46 in 2024
- Reposts fell 23% YoY to an average of 8.47
- Social media mentions fell 61% YoY to an average of 1.56 (implied 2023 baselines are computed below)
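As a back-of-envelope check (a computation of ours, not figures from the source), the 2024 averages and their YoY declines imply the following approximate 2023 baselines:

```python
# Implied 2023 baselines from the 2024 averages and YoY declines above.
metrics = {            # (2024 average, YoY decline)
    "likes":    (31.46, 0.16),
    "reposts":  (8.47, 0.23),
    "mentions": (1.56, 0.61),
}
for name, (value_2024, decline) in metrics.items():
    print(f"{name}: 2024 = {value_2024}, implied 2023 = {value_2024 / (1 - decline):.2f}")
# likes ~37.45, reposts ~11.00, mentions ~4.00
```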
Who Gets Shadowbanned? Key Triggers
Research and practitioner reports consistently identify the following account-level, behavioral, and content triggers; a toy scoring sketch follows these lists:
Account-Level Factors:
- No confirmed email address or profile picture
- New accounts that immediately exhibit aggressive growth behavior
- Unnatural follower/following ratio (e.g., following 10,000 accounts while having only 100 followers)
- Signing up for multiple accounts simultaneously
- High “bot-likeness” score (as assessed by ML classifiers)
- Having fewer than 500 followers
Behavioral Triggers:
- Following 100+ accounts per day or mass follow/unfollow cycling
- Mass liking, retweeting, or replying through automation tools
- Repetitive content — same links, phrases, or images across multiple posts
- Fast tweeting or spamming replies
- Logging in from inconsistent geographic locations (e.g., via VPNs or proxies)
- Repeatedly tweeting or mentioning accounts that do not follow back
Content Signals:
- Posts containing flagged keywords or hashtags (highly variable over time)
- Sharing links from low-credibility domains
- Content flagged as sensitive without proper account settings
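X publishes no weighting for these signals, but a toy scoring function makes the idea concrete. Everything below, including the weights, thresholds, and field names, is an illustrative assumption:

```python
def shadowban_risk_score(account: dict) -> int:
    """Toy risk score combining the signals listed above. Weights and
    thresholds are invented for illustration; X publishes no such formula."""
    score = 0
    score += 2 if not account.get("email_confirmed") else 0
    score += 1 if not account.get("has_profile_picture") else 0
    score += 3 if account.get("follows_per_day", 0) >= 100 else 0
    score += 2 if account.get("followers", 0) < 500 else 0
    # Unnatural follow ratio, e.g. following 10,000 with 100 followers.
    following = account.get("following", 0)
    followers = max(account.get("followers", 1), 1)
    score += 3 if following / followers > 20 else 0
    # Repetitive content: same links/phrases/images across many posts.
    score += 2 if account.get("duplicate_content_share", 0.0) > 0.5 else 0
    return score

print(shadowban_risk_score({
    "email_confirmed": False, "has_profile_picture": True,
    "follows_per_day": 150, "followers": 100, "following": 10000,
}))  # -> 10
```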
Political Bias and Algorithmic Skew
Independent research found significant political dimensions to visibility restriction on X, though these relate more to algorithmic amplification than to shadowbanning specifically.
- A field experiment with 4,965 U.S. users (2023) found that X’s “For You” algorithmic feed caused a 4.7 percentage point increase in prioritizing Republican-leaning issues compared to a chronological feed
- Users on the algorithmic feed showed a 2.5 percentage point increase in right-leaning political content exposure
- X’s “For You” algorithm reduced the visibility of traditional news outlets while amplifying political activists and commentators
- A Yale/UPenn study found accounts sharing pro-Trump or conservative hashtags were suspended at 4.4x the rate of accounts sharing pro-Biden hashtags — attributed not to ideological bias but to a higher rate of misinformation-sharing in that content group
- Sky News (2025) created nine new X accounts and found the platform’s algorithm systematically boosted right-wing and extreme content in “For You” feeds
- A 2024 audit during the U.S. presidential election found X’s algorithm skewed exposure toward a few high-popularity accounts, with right-leaning users experiencing the highest inequality of exposure
Legal Context: EU Digital Services Act (DSA)
X (as a Very Large Online Platform, or VLOP) has been under regulatory scrutiny for shadowbanning under the EU’s Digital Services Act (DSA):
- The European Commission opened a formal investigation of X in December 2023, citing content moderation and transparency failings
- X is required under the DSA to submit Statements of Reasons (SoRs) to a transparency database for moderation actions, but research found it regularly falls short of this obligation, substantially underreporting its actions
- One academic study found X “prides itself on an ‘artisanal’ approach” to moderation while simultaneously failing DSA reporting obligations
- X’s DSA reports show it claims to rely on exclusively manual moderation — contradicting its own public statements about automated ML systems
Key Data Summary

| Metric | Value | Source |
|--------|-------|--------|
| Shadowban rate — random users (2020 audit) | 2.34% | Le Merrer et al. |
| Shadowban rate — famous accounts (2020 audit) | 0.74% | Le Merrer et al. |
| Shadowban rate — elected officials (2020 audit) | 0.50% | Le Merrer et al. |
| Accounts shadowbanned at least once (NUS/UPenn) | 6.2% of active accounts | Jaidka et al. |
| General prevalence estimate | ~2.5% | Multiple studies |
| Impression drop when shadowbanned | 70–90% | Industry data |
| Ghost ban impression drop | ~90% | Social Media Lab |
| Search ban impression drop | ~50% | Social Media Lab |
| Account suspensions on X (H2 2024) | 4+ million | X Transparency |
| Posts removed or labeled (H2 2024) | 10.1 million | X Transparency |
| X overall engagement decline (2024) | ~40% YoY | Metricool/Statista |
| ML accuracy predicting shadowban from profile features | 80.6% | Le Merrer et al. |
Limitations and Caveats
- No official data: X does not publish shadowban-specific metrics; researchers must rely on indirect detection methods that may underestimate the full scope
- Detection tool variability: Different shadowban-checking tools may return different results for the same account due to API limitations and detection methodology differences
- Temporal instability: Shadowban rules evolve; content triggering bans in one period may not trigger them in another
- Platform engagement vs. shadowban conflation: X’s broader 40% engagement decline in 2024 makes it difficult to isolate shadowban-specific reach suppression from platform-wide audience decline
- Underreporting by X: Research confirms X systematically underreports moderation actions to the EU DSA transparency database, making third-party audits the primary source of reliable data