Meta announced Thursday that it is developing new tools to protect teenage Instagram users from blackmail schemes involving sexual images and other abuse.
The move comes after politicians accused the platform of harming the mental health of young people.
Meta said in a statement that it is testing an artificial intelligence (AI)-based "nudity protection" tool designed to automatically detect and blur nude images sent to minors through the app's messaging system.
"In this way, the recipient does not immediately see unwanted intimate content and can choose whether to see it or not," Capucine Tuffier, who is responsible for child protection at Meta's French unit, told AFP.
The firm says it will also send advisory messages to anyone sending or receiving such images.
Measures are also being developed against schemes in which criminals extort victims by threatening to reveal their intimate images, including steps to prevent potential blackmailers from contacting teenagers.
In January, Meta announced it would implement measures to protect users under the age of 18 on its platforms, after dozens of US states initiated legal action accusing the company of profiting "from the pain of children."
Internal Meta research, leaked by whistleblower and former Facebook engineer Frances Haugen and reported by The Wall Street Journal, showed that the company had long been aware of the dangers its platforms posed to young people's mental health.
On Thursday, Meta said its latest tools build on the company's "longstanding work to help protect young people from unwanted or potentially harmful contact."
"We're testing new features to help protect young people from blackmail involving sexual and abusive images, and to make it harder for potential scammers and criminals to communicate with teens," the company said.