Impact Newswire

China Tightens Rules on Digital Humans and Child Safety

China is moving to impose tighter controls on so-called “digital humans,” introducing sweeping draft regulations that target how artificial intelligence systems interact with users, particularly children.

The proposed rules, issued by the Cyberspace Administration of China, aim to bring structure to a fast-growing sector that blends AI, social interaction and entertainment. At their core is a requirement that all digital human content be clearly labelled, ensuring users can distinguish between real and AI-generated personas.

The most striking provisions focus on children. The draft explicitly bans digital humans from offering “virtual intimate relationships” to users under 18, a move designed to prevent emotional dependency and psychological harm. It also prohibits services that could mislead minors or create addictive usage patterns, reflecting growing concern about how immersive AI systems can shape behaviour.

The rules go further, restricting the use of personal data to create digital avatars without consent and banning attempts to bypass identity verification systems. Content generated by digital humans must also comply with strict political and social guidelines, including prohibitions on material that threatens national security, promotes division or incites discrimination.

Service providers are expected to actively monitor user interactions, filtering out harmful material such as violent, sexually suggestive or discriminatory content. They are also encouraged to intervene when users display signs of distress or harmful behaviour, signalling a broader shift toward embedding duty of care into AI systems.

The regulations are open for public comment until May, but they already offer a clear signal of Beijing’s direction: rapid AI expansion will be matched by equally assertive governance.

Analytically, the move reflects a deeper concern about the psychological and social implications of increasingly human-like AI. Digital humans are not just tools; they simulate relationships, emotions and identity. That makes them uniquely powerful and potentially risky, especially for younger users whose cognitive and emotional development is still maturing.

China has been moving steadily in this direction. Previous proposals have sought to limit screen time, enforce “minor modes” on apps and restrict addictive digital design. The new rules extend that philosophy into the AI era, targeting not just how long children use technology, but how technology engages them.

What distinguishes this latest intervention is its focus on design and interaction. Rather than simply limiting access, regulators are attempting to shape the architecture of AI systems themselves—how they speak, respond and form connections with users. This suggests a recognition that the next wave of digital risk lies not in content alone, but in the relational dynamics created by AI.

There is also a strategic dimension. China is simultaneously pushing aggressive adoption of artificial intelligence across its economy, while ensuring that development aligns with state-defined social and political norms. The result is a dual-track approach: innovation tightly coupled with control.

Globally, this raises important questions. As AI systems become more human-like, other governments may face similar pressures to regulate emotional manipulation, addiction and data use. China’s framework, though highly specific to its political context, could serve as an early model for how states attempt to govern the psychological frontier of AI.
