The Cyberspace Administration of China (CAC) has put forward draft rules for AI-generated digital humans. The proposal focuses on labeling, consent, and stronger protection for minors, targeting the growing use of AI avatars across social media and online platforms.
The draft was released on April 3, 2026, with public comments open until May 6. Regulators want to balance AI growth with controls against misuse. Every AI digital human will need a clear label so users can see when content is AI-generated, leaving no confusion between real accounts and virtual personas.
Consent sits at the center of the rules: a real person’s face, voice, or other biometric data cannot be used without permission, and each type of data requires separate approval. For minors under 14, a guardian must give consent.
The draft also blocks copying real individuals without approval. This applies even when the likeness looks slightly altered. The goal is to limit deepfake misuse and identity abuse.
Rules for minors go further. The draft bans virtual romantic or family-style relationships involving users under 18, and it targets systems that impose high financial costs or harm users’ mental or physical health.
Penalties range from 10,000 to 200,000 yuan (about $1,460 to $29,300). Authorities said the rules follow growing public anxiety about the emotional harm caused by hyper-realistic AI avatars.
A viral case added pressure. A video on Weibo showed an elderly woman conversing with a CGI avatar of her dead son, a system that reproduced his face, speech, and behavior. The video passed 90 million views and triggered debate about ethical limits.
China’s digital human industry continues to grow: reports place its value at around 4.1 billion yuan in 2024, up about 85 percent year on year.
Other regions are also acting on deepfakes and AI content. The European Union plans strict AI Act rules by 2026, with mandatory labeling and large fines. The United Kingdom has laws against non-consensual deepfake images, with prison terms.
The United States and Australia have similar measures focused on consent and platform removal rules. Germany is also studying tougher penalties after recent cases involving manipulated media.