TikTok removes more than half a million videos in Kenya to protect users
As social media continues to shape communication, entertainment, and information sharing, content moderation has become a critical priority for digital platforms.
In its latest Community Guidelines Enforcement Report, TikTok has highlighted how artificial intelligence is playing a major role in detecting and removing harmful content in Kenya, signalling a growing shift towards automated digital safety.
Strengthening online safety through automation
According to the platform’s Q3 2025 report, more than 580,000 videos in Kenya were removed for violating community standards between July and September 2025.
Notably, the report revealed that most of this content was flagged and taken down before users even reported it: 99.7% of the videos were removed proactively, and 94.6% were taken down within 24 hours of being posted.
This speed and efficiency highlight the platform's growing reliance on automated moderation tools to manage the vast volume of content uploaded daily.
Globally, the company removed over 204 million videos during the same period, accounting for roughly 0.7% of all uploaded content.
The report attributes this efficiency largely to technological advancements: "Through our continued investment in AI moderation technologies, a record 91% of this violative content is now removed via automated technologies, ensuring consistency and speed," the company said.
Tackling fake accounts and underage users
Beyond video removals, TikTok has also intensified efforts to address fake accounts and protect younger users.
The report revealed that more than 118 million fake accounts were removed globally, alongside over 22 million accounts suspected to belong to users below the age of 13.
These measures are part of a broader strategy to maintain trust and integrity across the platform.
By combining AI-powered systems with human oversight, TikTok aims to detect suspicious activity early and reduce the spread of harmful or misleading material.
The company emphasised the importance of this approach, noting that by integrating advanced automated moderation technologies with the expertise of thousands of trust and safety professionals, it can take swift and consistent action against content that violates its Community Guidelines.
Promoting digital well-being among users
In addition to enforcement, TikTok is also investing in tools designed to support healthier online habits.
The company recently introduced a new Time and Well-being space, which includes features aimed at helping users develop mindful digital behaviours.
Among the new initiatives are interactive Well-being Missions, which encourage users, particularly teenagers, to engage with technology more responsibly.
TikTok described these initiatives as "short, fun tasks designed to help our community, and teens in particular, use technology with greater purpose and confidence."
With concerns growing globally about the mental health effects of social media, such tools reflect an industry-wide effort to balance user engagement with digital wellness.
Transparency and industry accountability
TikTok’s regular publication of enforcement reports reflects increasing calls for transparency within the technology sector.
By releasing detailed moderation data, the company seeks to demonstrate accountability while offering insights into emerging online risks.
The report states that these disclosures help illustrate the scale and nature of content and account actions, underscoring TikTok’s commitment to full transparency.