China's internet regulator, the Cyberspace Administration of China (CAC), is drawing a red line for AI chatbots that simulate human intimacy.

These world-first regulations aim to prevent users—especially young people—from developing psychological dependencies on "virtual partners."

Details 

⏰ Mandatory breaks: Providers must clearly state that users are not communicating with a real person. After two hours of continuous use, a mandatory pop-up must remind users to take a break.

🚨 Human takeover for suicidal thoughts: As soon as a user expresses suicidal intent, a human must immediately take over the conversation and notify a guardian or emergency contact.

🔞 Strict youth protection: Parents must give consent before children can use AI companions. Systems must also be able to estimate a user's age even when it is not explicitly disclosed.

🎰 Banned content: AI chatbots may not generate content related to suicide, self-harm, gambling, violence, or obscenity. "Emotional traps" and false promises are also prohibited.

🔍 Security audits for major players: Services with over 1 million registered users or 100,000 monthly active users must undergo mandatory security reviews.

Good to know 

One study found that 45.8% of Chinese students use AI chatbots, and that users show significantly higher depression levels than non-users. China is responding with the world's strictest regulations to date to rein in an industry of roughly 515 million users.

What does this mean for the AI hype?

Sources: CNBC, Asia Financial, Ars Technica