Instagram is testing new ways to verify the age of people using its service, including a face-scanning artificial intelligence tool, having mutual friends verify their age or uploading an ID.
But the tools won’t be used, at least not yet, to block kids from the popular photo and video sharing app. Current testing only involves verifying that a user is 18 years of age or older.
The use of face-scanning AI, especially on teenagers, raised some alarms on Thursday, given the troubled history of Instagram's parent company Meta when it comes to protecting users' privacy. Meta emphasized that the technology used to verify people's age cannot recognize their identity – only their age. Once the age verification is complete, Meta said, both it and Yoti, the AI contractor it partnered with to perform the scans, will delete the video.
Meta, the owner of Facebook and Instagram, said that starting Thursday, anyone who tries to edit their date of birth on Instagram from under 18 to 18 or older will need to verify their age using one of these methods.
Meta continues to face questions about the negative effects of its products, especially Instagram, on some teenagers.
Technically, kids need to be at least 13 years old to use Instagram, similar to other social media platforms. But some get around this by lying about their age, or by having a parent do it for them. Meanwhile, teens aged 13-17 have additional restrictions on their accounts — for example, adults they don't follow can't send them messages — until they turn 18.
Uploading IDs isn't new, but the other two options are. "We are giving people a variety of options to verify their age and seeing what works best," said Erica Finkle, director of data governance and public policy at Meta.
To use the face-scanning option, the user uploads a video selfie. That video is then sent to Yoti, a London-based startup that uses people's facial features to estimate their age. Finkle said Meta is not yet trying to use the technology to identify children under 13 because it does not keep data on that age group — which would be needed to properly train the AI system. But if Yoti estimates that a user is too young for Instagram, they will be asked to prove their age another way or have their account removed, she said.
"It never uniquely recognizes anyone," said Julie Dawson, Yoti's director of policy and regulation. "And the image is instantly deleted as soon as we're done."
Yoti is one of several biometric companies capitalizing on a push in the UK and Europe for stronger age verification technology to prevent children from accessing pornography, dating apps and other internet content aimed at adults – not to mention bottles of alcohol and other age-restricted items in physical stores.
Yoti has worked with several large UK supermarkets on face-scanning cameras at self-checkout counters. It has also begun verifying the ages of users of Yubo, a French youth-oriented video chat app.
While Instagram will likely live up to its promise to delete an applicant's facial images and not attempt to use them to recognize individual faces, the normalization of face scanning raises other societal concerns, said Daragh Murray, a senior lecturer at the University of Essex School of Law.
"It's problematic because there are a lot of known biases in trying to estimate things like age or gender," Murray said. "You're essentially looking at a stereotype, and people differ a great deal."
A 2019 study by a US agency, the National Institute of Standards and Technology, found that facial recognition technology often performs unevenly depending on a person's race, gender or age, with higher error rates for the youngest and oldest people. There is not yet a comparable benchmark for age-estimating facial analysis, but Yoti's own published analysis of its results reveals a similar trend, with slightly higher error rates for women and people with darker skin tones.
Meta's face-scanning move is a departure from what some of its tech competitors are doing. Microsoft said on Tuesday that it would stop providing its customers with facial analysis tools that "purport to infer" emotional states and identity attributes such as age or gender, citing concerns about "stereotyping, discrimination, or unfair denial of services."
Meta itself announced last year that it was shutting down Facebook's facial recognition system and deleting the faceprints of more than 1 billion people after years of scrutiny from courts and regulators. But it signaled at the time that it would not give up on analyzing faces entirely, moving away from the broad tagging of social media photos that helped popularize the commercial use of facial recognition toward "narrower forms of personal authentication."