
Meta launches new safety features on Instagram and Facebook to protect teens
Meta is introducing new safety features to protect teenagers on its platforms. These features are intended to shield teens from exploitative content in direct messages and to give them more information about the people they interact with online. For example, teenagers will now see details such as when an Instagram account was created, along with additional safety tips to help them spot potential scammers.
The platform is also making it easier for teenagers to block and report suspicious accounts in a single action. Meta stated that in June alone, teen users blocked accounts a million times and reported accounts another million times after seeing a safety notice. The improvements are part of Meta’s broader strategy to address policymakers’ concerns about the company’s responsibility to protect young users from sexual exploitation.
Earlier this year, Meta took action against nearly 135,000 Instagram accounts that inappropriately approached children. These accounts left inappropriate comments on, or requested sexually explicit images from, adult-managed accounts featuring minors.
To better protect young users, Meta automatically places the accounts of teenagers and children under the strictest safety settings for messaging. These settings filter offensive messages and limit contact from unknown accounts. Although the minimum age to use Instagram is 13, adults can create and manage accounts representing younger children, as long as the account bio clearly states that an adult is in control.
Meta’s safety measures come amid increasing scrutiny of social media platforms and their impact on children’s mental health. Several state attorneys general have accused Meta of building addictive features into its apps that contribute to harmful effects on young users. Last week, Meta announced the removal of about 10 million profiles in the first half of 2025, mainly fake profiles impersonating well-known content creators. With this purge, the tech giant aims to significantly reduce the amount of spammy content on its platforms.
Legislation is also being developed to address concerns about children’s online safety. In the US, the Kids Online Safety Act was reintroduced in May after stalling in 2024. The bill would require social media platforms to exercise a duty of care to prevent their products from harming children. In September 2024, Snapchat was sued by the state of New Mexico, which alleged that the app created an environment conducive to ‘sextortion’ targeting children.
Business AM