Credited from: SCMP
India's government announced a stringent new rule requiring social media companies to remove unlawful content within three hours of notification, significantly reducing the previous 36-hour timeframe. This regulation applies to major platforms such as Meta, YouTube, and X. The changes, set to take effect on February 20, aim to control online content amid rising governmental scrutiny over digital speech.
Critics of the new policy argue it is part of a broader effort to increase oversight of online expression, raising fears of censorship in a nation with over a billion internet users. Experts warn that the compressed timeframe may compel platforms to automate content removal, potentially undermining human rights by prioritizing speed over thorough review, as noted by Channel News Asia and the BBC.
Moreover, the updated rules include provisions for AI-generated content, mandating that such materials be "prominently labelled." This has raised concerns among digital rights advocates, who fear that the pressure for rapid compliance could lead to coercive content moderation practices and reduce the transparency of information available to users, a sentiment echoed by the SCMP and the BBC.
As the new takedown policy prepares to roll out, stakeholders remain skeptical about its implementation. Experts have noted that the shift may necessitate extensive automation, leaving little room for genuine human oversight in content assessment. Some legal professionals characterize the regime as potentially the "most extreme takedown regime in any democracy," sharpening the tension between regulatory compliance and the right to free expression, according to Channel News Asia and the BBC.