TikTok to Automatically Remove Content That Violates Policy


Short-video sharing app TikTok said on Friday it will use more automation to remove videos from its platform that violate its community guidelines.

At present, videos uploaded to the platform pass through technology tools that recognize and flag potential violations, which are then reviewed by a safety team member. If a violation is confirmed, the video is removed and the user is notified, TikTok said.

The ByteDance-owned company added that over the next few weeks it will begin automatically removing some types of content that violate its policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.

This will be in addition to the removals confirmed by the safety team.

The company said this will let its safety team concentrate on highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.

TikTok also said it will send an in-app warning upon a first violation. For repeated violations, the user will be notified, and the account may be permanently removed.

The changes come as social media networks, including Facebook and TikTok, have come under fire for amplifying hate speech and misinformation globally across their platforms.

© Thomson Reuters 2021

