
Social media giant Meta said over 16.2 million content pieces were proactively “actioned” on Facebook across 13 violation categories in India during the month of November. Its photo-sharing platform Instagram proactively took action against over 3.2 million pieces across 12 categories during the same period, according to data shared in a compliance report.

Under the IT rules that came into effect earlier this year, large digital platforms (those with over 5 million users) have to publish monthly compliance reports detailing the complaints received and the action taken on them.

The report also includes details of content removed or disabled through proactive monitoring using automated tools. Facebook had proactively “actioned” over 18.8 million content pieces across 13 categories in October, while Instagram proactively took action against over 3 million pieces across 12 categories during the same period.

In its latest report, Meta said 519 user reports were received by Facebook through its Indian grievance mechanism between November 1 and November 30.

“Of these incoming reports, we provided tools for users to resolve their issues in 461 cases,” the report said.

These include pre-established channels to report content for specific violations, self-remediation flows where users can download their data, avenues to address hacked-account issues, and so on, it added. Between November 1 and November 30, Instagram received 424 reports through the Indian grievance mechanism.

Facebook’s parent company recently changed its name to Meta. Apps under Meta include Facebook, WhatsApp, Instagram, Messenger and Oculus.

As per the latest report, the over 16.2 million content pieces actioned by Facebook during November included content related to spam (11 million), violent and graphic content (2 million), adult nudity and sexual activity (1.5 million), and hate speech (100,100).

Other categories under which content was actioned include bullying and harassment (102,700), suicide and self-injury (370,500), dangerous organisations and individuals: terrorist propaganda (71,700) and dangerous organisations and individuals: organised hate (12,400).

The child endangerment – nudity and physical abuse category saw 163,200 content pieces actioned, child endangerment – sexual exploitation saw 700,300 pieces, and violence and incitement saw 190,500 pieces. “Actioned” content refers to the number of pieces of content (such as posts, photos, videos or comments) on which action has been taken for violating standards.

Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning.

The proactive rate, which indicates the percentage of all actioned content or accounts that Facebook found and flagged using technology before users reported them, ranged from 60.5 to 99.9 percent in most of these cases.

The proactive rate for removal of content related to bullying and harassment was 40.7 percent, as this content is contextual and highly personal by nature. In many instances, people need to report this behaviour to Facebook before it can identify or remove such content.

For Instagram, over 3.2 million pieces of content were actioned across 12 categories during November 2021. These include content related to suicide and self-injury (815,800), violent and graphic content (333,400), adult nudity and sexual activity (466,200), and bullying and harassment (285,900).

Other categories under which content was actioned include hate speech (24,900), dangerous organisations and individuals: terrorist propaganda (8,400), dangerous organisations and individuals: organised hate (1,400), child endangerment – nudity and physical abuse (41,100), and violence and incitement (27,500).

The child endangerment – sexual exploitation category saw 1.2 million pieces of content actioned proactively in November.