A former Meta executive has claimed that the social media firm is "fundamentally misleading" the public about the safety of its photo-sharing app Instagram for teens. According to the Telegraph, Arturo Bejar, who left Meta in late 2021, went public with his concerns in testimony to US senators earlier this month. He stated that Instagram is "categorically not" appropriate for children as young as 13 and claimed that the company had failed to act on his warnings.
With his testimony, Mr Bejar added to the pressure on the tech giant over its effect on teenagers. He told US politicians that his teenage daughter and her friends had for years received unwanted sexual advances on Instagram. According to the Telegraph, he also said that Instagram's overall design does little to encourage teenagers to report things that make them uncomfortable.
Mr Bejar, who has worked in the technology industry since he was 15, joined the company then known as Facebook in 2009 as an engineering director on its Protect and Care team. He left in 2015 to spend more time with his children. By 2019, however, he had grown concerned that his daughter, then 14, was receiving unwanted sexual advances on Instagram, so he returned to Meta as a consultant to work on safety technology.
Mr Bejar's second stint at the company coincided with a series of damaging leaks about Meta. In 2021, data scientist Frances Haugen went public and handed a dossier of information to US senators. Among her claims was research that appeared to show Meta knew Instagram was making teenage girls feel worse about themselves. The company strongly denied the claims.
As Ms Haugen was going public, Mr Bejar said he was pushing privately for change. He claimed he flagged his concerns to Sheryl Sandberg, Facebook's chief operating officer, and Instagram head Adam Mosseri. He reportedly also sent a private email to Meta CEO Mark Zuckerberg with detailed research on how teenagers were experiencing more harm on Instagram than previously thought. However, Mr Bejar said his concerns were brushed off and that Mr Zuckerberg never replied.
He said, "I had first-hand experience of them ignoring what can be described as statistically significant research," which suggested millions of teenagers were experiencing safety issues while using Meta's apps.
Meta discloses figures on incidents of hate speech, bullying and harassment on its social networks as part of regular transparency reports. However, Mr Bejar argued that the numbers are "misleading and misrepresenting" the problem as they disclose just a "fraction" of the true harm. He claimed that the vast majority of negative experiences on Instagram do not break its rules and even when content does break the rules, it may not be reported.
However, speaking to the Telegraph, a Meta spokesperson said it was "absurd to suggest" there was "some sort of conflict" between its study of users' "perception" of Instagram and its transparency reports. "Prevalence metrics and user perception surveys measure two different things. We take actions based on both and work on both continues to this day," the spokesperson said.
But Mr Bejar argued that social media companies should be compelled to collect and publicise better data on how many children receive unwanted sexual advances on their apps. He said that encrypted messages are "really important" for many people, but that he was less sure whether they were appropriate for children.
When asked whether he felt children's safety was a top priority at Meta, Mr Bejar claimed that dozens of researchers had been laid off from its Instagram wellbeing team since he left. "That tells you a lot about priorities," he said.