The Australian federal government’s teen social media ban was top of mind on Friday as Meta hosted Australia’s first Instagram Safety Camp. The event, held in Sydney, brought together media, teachers, parents, and social influencers to discuss the safety tools implemented on the Instagram platform.
The featured speaker at the event was Meta’s Global Head of Child Safety, Ravi Sinha. In a private chat with media, he addressed the topic of the social media ban.
“We’re thankful that there’s an ongoing conversation on these things, because I think we’re all interested in the same thing, which is the best way to keep teens safe online,” Sinha said. “What we’ve learned in our experience and in working with experts and talking to teens and parents is really that a more measured approach that graduates teens from the level of supervision that a parent has over them and the strong defaults that we set for them as they get older as teens, is a better approach than sort of an all-out ban, where, I think, what we worry about is the possibility that what you’re gonna end up doing is pushing those teens to a less safe place.”
The protections Sinha is referring to include safety notices, location notices (which help prevent young people from becoming victims of sextortion scams), and nudity protection. Notably, Meta reported in June that unwanted exposure to nudity had been reduced by 40%, and a May report found that people decided against forwarding a blurred image 45% of the time after seeing a forwarding warning.
These settings are turned on by default for teens, and the majority of users leave them switched on, indicating a strong level of user support for the features. So why can’t similar features be implemented for adult users?
Sinha notes that many of these features are already available to adult users: “The nudity protection is a good example. We’ve heard feedback from adults who say ‘I also don’t want to receive images like this, I also want to be reminded if I’m sending something that may be inappropriate or that I may regret’, and so we do have those features available for adults. I have on my phone a reminder that I’ve been online for X number, I think for Instagram, you know, after 15 minutes it tells me, and I think that what we try to do is build these features for our users, and then for teens more specifically, we want to make sure that they’re turned on and, you know, supplementing them with things that might be more teen specific.”
For adults, however, these are opt-in settings rather than defaults.
Sinha countered this concern: “I think what we find is that adults tend to want a more open and kind of communicative experience. They’re less concerned about people they don’t know reaching out to them. They’re more open to seeing different types of content than we would feel is appropriate for, say, a 13-year-old, and so we try to calibrate the way in which we set these up, including what’s on by default, based on the age of the user and also based on the feedback that we’ve received.”

Ravi Sinha and Sarah Harris in conversation
There’s a growing conversation around the idea that young people should be protected by banning the algorithms that serve users content from accounts they do not follow. In the US, the state of New York has legislated the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which requires social platforms and app stores to seek parental consent before children under 18 can use apps with “addictive feeds.” The idea is that a single linear feed, populated only by accounts a user actively chooses to follow, creates a safer environment for users.
“What we found is when we looked at the different ways in which we could populate a feed, populating it with things that a teen is interested in … and also the content that we have on our safest content controls, we actually think that this is a safer, more positive experience for people. We find this to be the case, that there actually are things that go into making a recommendation about content where you have opportunities to make sure that the content you’re recommending is safe and appropriate. So that’s what we’re building for – to reap the benefits of those recommendations as opposed to something that’s strictly linear and chronological,” Sinha said.