Meta is starting to rely more on AI to figure out who might be under 13 on its platforms. The company says it will scan photos and videos for visual signals that hint at a user's age. If those signals suggest someone is too young, the account can be flagged. The system looks at cues like height, body shape, and bone structure, not in a precise medical sense but as rough visual indicators. Meta says the goal is to estimate an age range, not to identify a person.
The company also tried to clear up one concern early. This is not facial recognition, at least according to its explanation. The AI is not matching faces or trying to figure out who someone is. It is only looking for patterns that might indicate whether a user is likely a child. That said, images are only one part of the process. Meta is also checking text and activity across accounts. Posts, captions, bios, and comments can all serve as signals. Something as simple as a birthday post or a mention of a school grade can add context. All of this gets combined. The idea is that one signal alone is not enough, but together the signals can point in a clearer direction.
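Meta has not published how it weighs these signals, so the sketch below is purely illustrative: the signal names, weights, and threshold are invented to show the general idea of combining weak signals into a single score, where no individual signal is decisive on its own.

```python
# Hypothetical multi-signal scoring sketch. Meta has not disclosed its
# model; every signal name, weight, and threshold here is invented.

def underage_score(signals):
    """Combine weak signals into one likelihood score in [0, 1].

    `signals` maps signal names to confidences in [0, 1]; a missing
    signal simply contributes nothing.
    """
    # Invented weights, chosen so no single signal can cross the
    # threshold by itself.
    weights = {
        "visual_age_estimate": 0.4,  # image-based age-range estimate
        "text_mentions": 0.3,        # e.g. "starting 6th grade" in a bio
        "birthday_post": 0.2,        # a birthday post implying an age
        "activity_pattern": 0.1,     # behavioral/interaction patterns
    }
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

def should_flag(signals, threshold=0.6):
    """Flag for age verification only when combined evidence is strong."""
    return underage_score(signals) >= threshold
```

Under these invented weights, even a maximally confident visual estimate alone (score 0.4) stays below the threshold, while a visual estimate combined with corroborating text and a birthday post can cross it, which matches the article's point that signals only matter in combination.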
Right now, this system is not everywhere. Meta says it is active in a limited number of countries. A wider rollout is expected, but no exact timeline has been shared yet. If the system flags an account as possibly underage, action follows. In most cases, the account gets deactivated. The user then has to go through an age verification process to prove they meet the platform’s requirements. If they cannot, the account may be removed.
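The enforcement flow described above (flag, deactivate, verify, then restore or remove) can be sketched as a tiny state machine. The state names and the function signature are assumptions for illustration, not Meta's actual implementation.

```python
# Hypothetical sketch of the enforcement flow the article describes:
# flagged -> deactivated -> age verification -> restored or removed.
# State names and the boolean inputs are invented for illustration.

from enum import Enum

class AccountState(Enum):
    ACTIVE = "active"
    DEACTIVATED = "deactivated"
    REMOVED = "removed"

def enforce(flagged_underage: bool, passes_verification: bool) -> AccountState:
    """Return the final account state for a possibly flagged account."""
    if not flagged_underage:
        return AccountState.ACTIVE
    # A flagged account is deactivated pending verification.
    state = AccountState.DEACTIVATED
    # The user must then prove they meet the minimum-age requirement.
    if passes_verification:
        return AccountState.ACTIVE
    # Failing (or skipping) verification can lead to removal.
    return AccountState.REMOVED
```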
Meta says this is part of a larger effort that has been building over time. Keeping users under 13 off Facebook and Instagram has always been a rule, but enforcement has been inconsistent, and AI is now being used to close that gap. The company is also looking beyond posts and pictures to behavior: how accounts interact, what kind of content they share, and how often certain patterns repeat. These signals may not seem important on their own, but together they help shape a profile.
Alongside this, Meta is expanding something it calls “Teen Accounts,” accounts that come with stricter settings by default. The idea is to limit exposure to unwanted contact and reduce harmful interactions. For example, teens can only receive messages from people they already follow or know, comments are filtered more strictly, and accounts are set to private from the start. Meta confirmed the system is rolling out to more countries, including the European Union and Brazil, and Facebook is being added to the approach, starting in the United States before moving to other regions.
Timing matters here. Meta is under pressure when it comes to child safety. Legal cases have been building, and regulators are watching closely. One recent case in New Mexico ended with a major penalty against the company. The ruling also pushed for changes in how the platform protects younger users. Meta did not agree with everything, but the pressure is still there. This is not a one-off situation. Other tech companies are facing similar questions. How do they detect underage users, and how far should they go in doing it?
Meta’s answer seems to be more automation. More AI, more scanning, more pattern detection. That comes with its own concerns. Systems like this are not perfect. Estimating age from visuals and behavior can lead to mistakes. Some users may get flagged even if they meet the rules. Still, the company appears ready to move forward with it. The scale of its platforms makes manual checks difficult, so automation becomes the default option.
For now, Meta is pushing ahead with a system that relies on signals rather than certainty. It does not claim to know exact ages. It works on probability. Whether that approach solves the problem or creates new ones will depend on how it performs once it expands further.