What are the new rules to make tech giants accountable for harmful content?

Digital Secretary Oliver Dowden and Home Secretary Priti Patel will set out a final decision on online harms proposals on Tuesday.

Social media giants face tougher laws on what they can allow on their platforms in a bid to make the internet safer for children and the vulnerable.

Here we look at the details of the Government’s final decision on online harms:

– Why is the Government introducing rules for what is posted online?

The Government believes binding legislation is needed to make sure tech firms keep their platforms safe across a wide range of areas, from child sexual abuse content to terrorist material.

Oliver Dowden and Home Secretary Priti Patel will outline the plans (Jonathan Brady/PA)

This follows a number of incidents in which world leaders believe social media firms failed to remove harmful content quickly enough, such as the Christchurch mosque terrorist attack in March 2019, which was livestreamed online.

– So what are the rules?

The exact nature of the rules will be set out by Digital Secretary Oliver Dowden and Home Secretary Priti Patel on Tuesday.

What we do know is that they will target activity on social media sites, websites, apps and other services which host user-generated content or allow people to talk to others online.

Social networks will need to remove and limit the spread of illegal content such as child sexual abuse material, terrorist material and content encouraging suicide.

Under the rules, tech platforms will be expected to do more to protect children from being exposed to things such as grooming, bullying and pornography.

Ofcom has been confirmed as the regulator for online harms (Yui Mok/PA)

The legislation will enable Ofcom, the nominated regulator overseeing these rules, to require companies to use technology to monitor, identify and remove tightly defined categories of illegal material relating to child sexual exploitation and abuse on private messaging apps and in closed social media groups. However, the Government said this power should only be used as a last resort, where alternative measures are not working.

– How will it work?

Companies will have different responsibilities for different categories of content and activity.

The biggest names, including Facebook, TikTok, Instagram and Twitter, are likely to fall into Category 1, meaning they will need to assess the risk of legal content or activity on their services which carries “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”.

They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions.

TikTok is likely to be assigned as a Category 1 platform due to its size (Peter Byrne/PA)

Firms within this category will also be required to publish transparency reports about the steps they are taking to tackle online harms.

Category 2 will apply to platforms which host dating services or pornography, as well as private messaging apps.

– What about comments on news articles?

Comments on news articles will be exempt from the new rules in order to safeguard freedom of speech.

– What are the consequences if social media firms do not follow these rules?

Tech giants face fines of up to £18 million or 10% of annual global turnover, whichever is higher, if they fail to comply.

Ofcom will have the power to block non-compliant services from being accessed in the UK.

The Government has also threatened to introduce criminal sanctions for senior managers if companies do not take the new rules seriously.

– When are they expected to come into force?

The Government plans to bring the laws forward in an Online Safety Bill next year.