Survey reveals public support for tougher stance on social media abuse
Social media companies are not trusted by the public to deal with the problem of online abuse and hateful content, research suggests.
It also found the majority of people in the UK support more regulation on tech firms.
The study by the anti-abuse campaign group Hope not Hate found that 74 per cent of those asked said they did not trust social media companies alone to decide what is extreme content or disinformation when it appears on their platforms.
It found that online abuse remains a key concern for the public, with 73 per cent of those asked saying they were worried about the amount of such content on social media.
And there is strong public support for tougher regulations compelling tech firms to take action against harmful content, with 71 per cent agreeing they should be held legally responsible for the content on their platforms and 73 per cent saying they should be made to remove such content if it appears.
It comes after the Yorkshire Evening Post launched its "Call It Out" campaign last year in light of abuse its journalists had received on social media. The campaign featured a series of interviews with high-profile politicians, sports stars and city leaders who had also been subjected to online and social media abuse.
The Government's Online Safety Bill includes plans which would force platforms to identify "legal but harmful" content and set out how they plan to police it on their sites, proposals which have raised concerns from some about a possible clampdown on free speech.
But Hope not Hate's research suggests the public supports the move, with 80 per cent of those asked saying that while they believe in free speech, there must be limits to stop the spread of extremist content online.
Joe Mulhall, Hope not Hate's head of research, said: "Allowing people to spew hateful and offensive content online is not a way to protect freedom of speech, but rather risks sowing divisions and amplifying the vile views of a tiny minority. At present, online speech that causes division and harm is often defended on the basis that to remove it would undermine free speech.
"In reality, allowing the amplification of such speech only erodes the quality of public debate, and causes harm to the groups such speech targets. This defence, in theory and in practice, minimises free speech overall. As our polling shows, there is clearly an overwhelming consensus that hateful content, even when legal, is too visible on social media platforms.
"The only way to really make sure that everyone has freedom of speech is to protect anyone who is currently being attacked or marginalised based on characteristics such as race, gender or sexual orientation.
"That's why continuing to include legal but harmful content in the Online Safety Bill is the best way to ensure social media companies apply effective systems and processes to reduce the promotion of hate and abuse, while preserving freedom of expression."