News

YouTube tests recommendation changes to prevent spread of borderline content

The Google-owned platform has already seen results from adjustments implemented in the US earlier this year.

YouTube has started testing changes to the way videos are recommended to users in the UK to prevent borderline content and misinformation from spreading, the platform’s chief has revealed.

Adjustments to the algorithm have already cut views of this sort of content from recommendations by half in the US, where the change was introduced at the beginning of 2019.

The video-sharing website, like many other online tech giants, has grappled with balancing the right to free speech against a backdrop of potentially harmful content, such as questionable miracle cures, flat earth conspiracies, and false information about historic events like 9/11.

YouTube chief executive Susan Wojcicki said the company is working to “reduce spread of content that brushes right up against our policy line” in a bid to allow quality content “more of a chance to shine”.

The tweaked recommendation system is now being trialled in a number of English-language markets, including Ireland and South Africa.

When announcing the US trial at the end of January, YouTube said it believed “limiting the recommendation of these types of videos will mean a better experience for the YouTube community”.

In Ms Wojcicki’s quarterly letter, she told creators about her aim to preserve openness on the platform, but admitted it would not be easy.

“It sometimes means leaving up content that is outside the mainstream, controversial or even offensive,” she said.

“But I believe that hearing a broad range of perspectives ultimately makes us a stronger and more informed society, even if we disagree with some of those views.”

The YouTube boss set out the four pillars of the company’s approach to responsibility – removing violating content, raising authoritative voices, reducing the spread of borderline content, and rewarding trusted creators.

In June, the site began offering users greater insight into why some videos are suggested to them, as well as adding an option to remove a channel suggestion, amid concerns about the types of material its algorithm highlights.