Social media giants open to introduction of industry regulator

Google, Facebook and Twitter discussed future regulation during a parliamentary committee hearing on social media’s impact on mental health.

Technology giants Google, Facebook and Twitter have said they would not oppose the introduction of a regulator to monitor their platforms.

Giving evidence to the House of Commons science and technology committee inquiry into the impact of social media and screen use on young people’s health, the companies acknowledged that future regulation was likely.

Last month, media regulator Ofcom released a document outlining how its practices could be moulded to regulate social media firms.

Karim Palant, Facebook’s public policy manager in the UK, told the committee Ofcom’s intervention was “thoughtful” in moving the debate forward.

(Dominic Lipinski/PA)

“Our chief executives have talked about the fact that some regulations are going to be inevitable,” he said.

“I think it has to be principle-based so it has to start from ‘what are the harms that we’re trying to address?’.”

He added that because each platform was “vastly different”, any regulator would need to be flexible, but said he could see “a role for Parliament or government in talking to us about these issues”.

Google’s child safety lead for Europe, the Middle East and Africa, Claire Lilley, said input from young people would be crucial to any regulatory process.

“My plea would be that if we do go down the route of regulating, we really need, in this space, to take into account the views of children and young people in a way that hasn’t been taken into account so far,” she said.

“We shouldn’t wait for regulation before we take action, and regulation needs to be evidence-based.”

The three companies defended their work to promote and enhance the safety of younger users, including partnerships with child safety charities and the introduction of wellbeing features such as those that track and limit screen time.

Each company confirmed it was increasing the number of moderators it has looking at content, as well as using more artificial intelligence and machine-learning tools to spot and remove content, in some cases before it appears online.

Asked which other areas of the industry the committee should consider when undertaking the inquiry, the companies highlighted video games.

Twitter’s vice-president of public policy in Europe, Sinead McSweeney, said: “As a mother, gaming would probably be one.

“I don’t think it’s an issue of regulation and child protection, I think we need to understand it better.”

(Dominic Lipinski/PA)

“I can see enormous benefits. I have an only child, so the extent to which he can be in community with his friends and colleagues is of immense benefit, but I think we need more research. It goes back to data and evidence about what the impact is.”

Internet and social media firms have been scrutinised for the impact their platforms have on society, with calls for them to do more to tackle dangerous and malicious content.

On Tuesday, Facebook announced new transparency tools for the UK around political advertising, with organisations wishing to post political adverts on the site now required to verify their identity and location first.