A specialist anti-terror team in the UK has warned that more needs to be done to tackle online extremism, despite more than 300,000 videos, web pages and posts having been removed after being flagged to internet firms.
Figures released today show the national police unit reached the milestone in recent weeks, although the rate of removals prompted by its work has slowed as firms step up their own efforts.
The Counter Terrorism Internet Referral Unit (CTIRU) works with hundreds of organisations to remove content including propaganda and recruitment videos, images of executions and speeches calling for racial or religious violence.
The statistics show that, as of last month, 299,121 pieces of material had been cleared at the instigation of the unit since its launch in 2010. Officers confirmed that the number of removals has since passed the 300,000 mark.
Detective Chief Superintendent Clarke Jarrett, of the Metropolitan Police’s counter-terrorism command, said: “The 300,000 milestone is positive. It’s 300,000 pieces of material not there to radicalise people. That 300,000 isn’t a representation of what’s out there. There’s still plenty of content out there.”
From the start of January to the end of August this year, 43,151 pieces of content were removed at the request of the CTIRU, down by nearly half from the 83,784 recorded in the equivalent period of 2016.
Det Chief Supt Jarrett acknowledged that removals instigated by the CTIRU have slowed, but said: “I think that’s a success story because we’ve now got the industry into a place where they are doing more.”
Officers in the unit trawl the web looking for material as well as investigating referrals from the public. After carrying out assessments, they contact internet providers to request the removal of harmful items.
More than 300 firms have taken down material following requests from the CTIRU. The bulk of the unit’s activity deals with Islamist-related content, but it is referring more far-right material.
The CTIRU was the first unit of its type in the world and UK police are keen for other countries to consider adopting the model.
In recent months, companies have detailed steps they are taking to tackle terrorist content. From January to June, Twitter removed just under 300,000 accounts for terror-related violations. YouTube has introduced “machine learning” to help identify extremist and terror-related material.
Facebook has revealed it is using artificial intelligence to keep terrorist content off the site.
The head of MI5 has said technology companies have an “ethical responsibility” to help confront the unprecedented threat, while Britain and France are exploring plans that could see platforms face fines if their efforts are not up to scratch.