
Facebook’s AI might be helping terrorists by removing evidence

Published May 13, 2019

In recent years, governments have called upon tech firms including Facebook and YouTube to remove anything promoting terrorism. However, new research suggests these efforts may be aiding terrorists rather than subduing them.

A new investigation by The Atlantic reveals that while many tech firms have turned to artificial intelligence (AI) to remove content promoting terrorism, these systems may unintentionally be helping terrorists get away with their crimes by erasing evidence.

Compared with past years, the content-filtering algorithms used by Facebook, YouTube and other platforms have grown more advanced. The systems now automatically remove vast amounts of extremist content quickly, at times before it reaches a single user, Futurism wrote.


YouTube, for instance, pulled 33 million videos off its platform in 2018, while Facebook removed 15 million pieces of 'terrorist propaganda' content, with 99.5% of its takedowns in the third quarter of 2018 performed by machines.

While this is beneficial in many respects, it also means the loss of evidence that prosecutors could use to hold terrorists and other criminals accountable for their crimes, a problem that is likely to grow.

Copyright Business Recorder, 2019
