Facebook is always working to improve its detection and enforcement efforts in order to remove content that breaks its rules and keep users safe from abuse, misinformation, scams, and more. Its systems have improved significantly in this respect. As Facebook explains: “Online services have made great strides in leveraging machine-learned models to fight abuse at scale. For example, 99.5% of takedowns on fake Facebook accounts are proactively detected before users report them.” But there are still significant limitations in its processes, mostly due to the finite capacity of human reviewers to assess and pass judgment on flagged content. Machine learning tools can identify a growing number of issues, but human input is still required to confirm many of those identified cases, because computer systems often miss the complex nuances of language. Now, Facebook has a new system to assist in this respect:
“CLARA (Confidence of Labels and Raters) is a system built and deployed at Facebook to estimate the uncertainty in human-generated decisions. […] CLARA is used at Facebook to obtain more accurate decisions overall while reducing operational resource use.” The system essentially augments human decision making by adding a machine learning layer that assesses each individual rater’s capacity to make the right call on content, based on their past accuracy.
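The general idea of weighting reviewers by their track record can be sketched roughly as below. This is purely illustrative and is not Facebook’s CLARA implementation; the function name, labels, neutral weight, and threshold are all assumptions for the sake of the example:

```python
# Illustrative sketch only -- NOT Facebook's actual CLARA system.
# Each rater's vote is weighted by their historical accuracy; if the
# weighted majority is not confident enough, the item is flagged for
# additional review instead of being decided automatically.

def weighted_decision(votes, rater_accuracy, threshold=0.9):
    """votes: dict mapping rater_id -> label (e.g. "violates" or "ok").
    rater_accuracy: dict mapping rater_id -> historical accuracy.
    Returns (label, confidence, needs_more_review)."""
    scores = {}
    for rater, label in votes.items():
        # Raters with no track record get a neutral weight (assumption).
        weight = rater_accuracy.get(rater, 0.5)
        scores[label] = scores.get(label, 0.0) + weight
    total = sum(scores.values())
    label, score = max(scores.items(), key=lambda kv: kv[1])
    confidence = score / total
    return label, confidence, confidence < threshold

# Two historically accurate raters agree -> high confidence, no escalation.
print(weighted_decision({"a": "violates", "b": "violates"},
                        {"a": 0.95, "b": 0.90}))

# The same raters disagree -> low confidence, escalate for more review.
print(weighted_decision({"a": "violates", "b": "ok"},
                        {"a": 0.95, "b": 0.90}))
```

The escalation flag captures the resource-saving angle Facebook describes: confident aggregate decisions can be finalized with fewer reviews, while only ambiguous cases consume additional human attention.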

via socialmediatoday: Facebook Outlines New, Machine Learning Process to Improve the Accuracy of Community Standards Enforcement

Categories: Internet