Key Takeaways
- Gaming and social media hubs are taking steps to identify young users and limit their access to mature content or ability to interact with adults.
- Privacy advocates have raised concerns about age verification tools, while platforms say they’re mindful of such issues.
Earlier this week, Roblox began requiring an age-verification process for users to keep chatting with each other. The move made the gaming hub the latest tech platform to retool its approach to keeping young users safe.
Meta (META) is now shielding teens from PG-13-esque Instagram content, while OpenAI is altering how ChatGPT interacts with minors, the companies said. Grok began restricting image generation to paid subscribers after an uproar over the tool creating images of real people, including minors, in minimal attire.
Companies offering social, gaming and other digital services are reacting to concerns that they provide too little oversight of teens—as well as to scrutiny of their proposed solutions. Age-verification tools, such as ID checks, biometric scans and assessments of users’ behavior, are also alarming some privacy advocates, such as the Electronic Frontier Foundation.
“These restrictive mandates strike at the foundation of the free and open internet,” the group said.
Why This News Matters
How this issue evolves has financial implications for companies, given that they contract vendors for age-verification services and must think about the legal ramifications of their approach.
Roblox (RBLX) on Wednesday said it is taking steps to ensure users can only chat with people around their age. People must be assessed by a “facial age estimation” tool or share a photo ID to message others, the company said. More than half of active users are opting into the process, according to Roblox, though some have complained that the tool misclassified them and barred them from messaging peers. Roblox and Persona, the vendor providing the tool, did not respond to Investopedia’s requests for further comment in time for publication.
Instagram’s roughly year-old teen accounts are popular with parents, the company said. In recent months, it’s begun preventing such users from seeing content that would get a PG-13 movie rating.
“We take a comprehensive approach to ensuring teens have age-appropriate experiences on Meta platforms,” company spokesman Edward Patterson said.
“Privacy and data protection guardrails” were built into Instagram’s approach to young users, the company said. Yoti, a vendor performing age-estimation services, deletes the selfies it analyzes and images of photo IDs within 30 days, Instagram said.
Persona, which performs age-estimation checks for Roblox and OpenAI, also deletes images once the process is complete, those companies said.
X is no longer allowing users to create images with Grok unless they have a subscription. The platform adopted the policy within the past day or two, after its safety team said it removes illegal content and works with law enforcement as appropriate.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” owner Elon Musk said on X.
Further changes may be in store, given how many governments are considering regulations on online platforms’ treatment of youth. New Zealand’s prime minister recently proposed banning those under 16 from social media, following Australia’s ban on use by younger teens.
And some U.S. lawmakers are considering new rules for how platforms handle young users. More than half of states have enacted laws requiring platforms to perform some form of age verification, the Electronic Frontier Foundation said.