Technology

Facebook increasing AI usage for content moderation

  • While there are still around 15,000 human reviewers across over 50 time zones, Artificial Intelligence (AI) now primarily helps in proactively removing content which goes against Facebook’s policies.
Published November 18, 2020

Facebook, the world’s largest social media platform, shared in a recent briefing that technology has begun to play a more central role in content moderation on the platform, a shift made in order to better prioritise reported content.


Between April and June this year, more than 95% of the content Facebook removed was identified and removed by its technology proactively, without needing anyone to report it: 99.6% of fake accounts, 99.8% of spam, 99.5% of violent and graphic content, 98.5% of terrorist content, and 99.3% of child nudity and sexual exploitation content.

Furthermore, the company shared that it now prioritizes content for review based on several factors, such as virality, severity, and likelihood of violation. Prioritizing content in this way, regardless of when it was shared on Facebook or whether it was reported by a user or detected by their technology, allows them to get to the highest-severity content first. It also means the reviewers in their Global Operations team spend more time on complex content issues where judgment is required, and less time on lower-severity reports that technology is capable of handling.
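The ranking described above can be pictured as a priority queue over posts awaiting review. The sketch below is purely illustrative: the scoring weights, thresholds, and field names are assumptions, not Facebook's actual (unpublished) ranking model.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float
    post_id: str = field(compare=False)  # compared by priority only

def priority_score(virality: float, severity: float, violation_likelihood: float) -> float:
    # Higher score = reviewed sooner. The weights are illustrative assumptions.
    return 0.2 * virality + 0.5 * severity + 0.3 * violation_likelihood

def build_queue(posts):
    # heapq is a min-heap, so negate the score to pop the highest priority first.
    heap = [ReviewItem(-priority_score(v, s, l), pid) for pid, v, s, l in posts]
    heapq.heapify(heap)
    return heap

queue = build_queue([
    ("post-a", 0.9, 0.20, 0.4),   # viral but low severity
    ("post-b", 0.1, 0.95, 0.9),   # low reach, but high severity
    ("post-c", 0.5, 0.50, 0.5),
])
print(heapq.heappop(queue).post_id)  # "post-b": highest-severity content surfaces first
```

Weighting severity most heavily (as here) reflects the article's point that the queue is ordered by harm, not by when the post was shared or how it was flagged.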

Although technology is playing an increasing role in the way Facebook moderates content, for certain posts the company still uses a combination of technology, reports from the community, and human review to identify and assess content against its Community Standards. This is done to ensure the context of the post is better understood. The technology created for this is called Whole Post Integrity Embeddings, or WPIE.
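The combination of signals described above amounts to a routing decision: automated removal when the model is confident, human review when context is needed. The function below is a hypothetical sketch of such routing; the thresholds and names are illustrative assumptions and not Facebook's actual WPIE pipeline.

```python
def route_post(ml_violation_score: float, user_report_count: int) -> str:
    """Decide how a post is handled based on combined signals.

    ml_violation_score: model-estimated probability the post violates policy (0-1).
    user_report_count:  number of community reports against the post.
    """
    if ml_violation_score >= 0.95:
        return "auto-remove"      # technology alone is confident enough
    if ml_violation_score >= 0.5 or user_report_count > 0:
        return "human-review"     # ambiguous or reported: a reviewer judges context
    return "no-action"

print(route_post(0.97, 0))  # auto-remove
print(route_post(0.60, 2))  # human-review
print(route_post(0.10, 0))  # no-action
```

The point of the design is that user reports can pull borderline content into human review even when the model score alone would not trigger action.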

Lastly, Facebook shared details about a newly developed technology called XLM-R that can understand text in multiple languages. The model is trained in one language and then applied to other languages without needing additional training data or content examples.
