Meta has introduced a new artificial intelligence system that uses visual and behavioural signals to estimate whether users are underage on its platforms, including Facebook and Instagram. The system is designed to strengthen age enforcement by detecting accounts that may belong to users under 13 or those who have misrepresented their age during sign-up.

According to the company, the tool does not rely on traditional identity documents alone. Instead, it analyses patterns in user activity and content, including visual cues in images and videos that may indicate physical development traits such as height and bone structure. These signals are combined with behavioural indicators, such as posting habits, account activity, and engagement patterns, to produce an overall assessment of likely age.
When an account is flagged as potentially belonging to a minor, Meta may apply additional restrictions or require age verification. This could include limiting access to certain features or placing the account under stricter safety settings while verification is completed.
The company says the system is part of a broader effort to improve compliance with child safety regulations and reduce the number of underage users bypassing platform rules. Meta has increasingly relied on automated systems to manage age-related enforcement as self-reported information has proven unreliable in many cases.
The AI-based approach will be integrated into Meta’s wider “teen account” framework, which already applies tighter privacy defaults, content limits, and communication restrictions for users identified as minors. The new detection layer is intended to improve the accuracy of how those accounts are identified in the first place.
Meta says the system is designed to work at scale across its global user base and will continue to be refined over time as more data is processed and edge cases are identified.
The rollout is expected to begin across Facebook and Instagram, with phased implementation depending on region and regulatory requirements.
The development adds to Meta’s broader push into AI-driven safety systems, where automated tools increasingly handle tasks that previously depended on user declarations or manual reporting.
The approach is likely to prove controversial because it infers age from sensitive physical and behavioural signals rather than relying on explicit user-provided data. Critics argue that estimating characteristics like height or bone structure from images could be inaccurate and may lead to users being wrongly classified as minors or adults. There are also concerns about how much personal information is being inferred from everyday content, and whether users fully understand or consent to that level of analysis.

Emmanuel Abara Benson is a business journalist and editor covering artificial intelligence, global markets, and emerging technology.
He has previously worked with Business Insider Africa and Nairametrics, reporting on finance, startups, and innovation.
His work focuses on AI, digital economy, and global tech trends.



