
China Introduces Draft Regulations for AI Services with Human-Like Interaction

Published: Sunday, December 28 | Updated: Sunday, December 28

Credited from: Channel News Asia

  • China's new draft rules target AI services mimicking human interaction.
  • Providers must address user addiction and emotional dependence.
  • Regulatory measures include content restrictions for safety and security.
  • Public feedback is invited before finalizing the regulations.

China's cyber regulator has issued draft rules for public comment that aim to tighten oversight of artificial intelligence (AI) services designed to simulate human personalities and engage users emotionally. The proposed guidelines highlight Beijing's efforts to shape the rapid rollout of consumer-facing AI, emphasizing safety and ethical considerations. These rules cover AI products that mimic human traits and interact with users through various media, including text and audio, according to Reuters and Channel News Asia.

Under the proposed rules, AI service providers would be required to warn users against excessive use and to intervene if signs of addiction are detected. Companies would also bear responsibility for safety across the entire product lifecycle, including implementing systems for algorithm review, data security, and personal information protection. This regulatory approach aims to mitigate potential psychological risks associated with AI services, as noted by India Times and Reuters.

Furthermore, the draft rules indicate that service providers should assess users' emotional states and levels of dependence on their services. If users exhibit extreme emotions or addictive behaviors, providers would be required to take proactive measures to reduce potential harm. The regulations also impose content restrictions, forbidding AI from generating material that may endanger national security or promote violence and obscenity, according to Channel News Asia and India Times.
