As calls to reform Section 230 of the Communications Decency Act continue, lawmakers in Congress have begun shifting their approach from sweeping overhauls to narrower measures targeting specific types of content. One proposal, for instance, would amend Section 230 to strip a platform of liability immunity if its algorithms amplify, recommend, or promote hateful or terrorist content.

Last June, Counter Extremism Project (CEP) Senior Advisor Dr. Hany Farid testified before Congress on the issue of content amplification. He explained that the tech industry’s business model rests on increasing engagement among its user base, and that divisive content, whether conspiratorial or terrorist in nature, does just that. “The point is not about truth or falsehood, but about algorithmic amplification. The point is that social media decides every day what is relevant by recommending it to their billions of users. The point is that social media has learned that outrageous, divisive, and conspiratorial content increases engagement … The vast majority of delivered content is actively promoted by content providers based on their algorithms that are designed in large part to maximize engagement and revenue.”

The role algorithmic amplification plays in content consumption is an issue that must be confronted. Last March, Dr. Farid co-authored a report analyzing YouTube’s policies and its efforts to curb its algorithm’s tendency to spread conspiracy theories.
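Farid's description of engagement-driven recommendation can be illustrated with a minimal sketch. Everything below is hypothetical, the item titles, the scores, and the `rank_by_engagement` function alike; it is not any platform's actual algorithm, only an illustration of what ranking solely by predicted engagement does.

```python
# Illustrative sketch of engagement-maximizing ranking. All items and
# scores are hypothetical; this is not any platform's real system.

def rank_by_engagement(items):
    """Return items ordered by predicted engagement, highest first."""
    return sorted(items, key=lambda item: item["predicted_engagement"], reverse=True)

candidates = [
    {"title": "calm explainer", "predicted_engagement": 0.21},
    {"title": "divisive conspiracy clip", "predicted_engagement": 0.87},
    {"title": "neutral news recap", "predicted_engagement": 0.34},
]

feed = rank_by_engagement(candidates)
# An objective that optimizes engagement alone surfaces the most
# provocative item first, regardless of its truthfulness.
print([item["title"] for item in feed])
```

The point of the sketch is that nothing in the objective asks whether content is true or harmful; if divisive material reliably scores highest on engagement, it is what gets recommended.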
After reviewing eight million recommendations over 15 months, researchers determined that the progress YouTube claimed, a 50 percent reduction by June 2019 in the amount of time users spent watching recommended conspiratorial videos, and a 70 percent reduction by December 2019, did not make the “problem of radicalization on YouTube obsolete nor fictional.” The study, A Longitudinal Analysis Of YouTube’s Promotion Of Conspiracy Videos, found that a more complete analysis of YouTube’s algorithmic recommendations showed the proportion of conspiratorial recommendations is “now only 40 percent less common than when YouTube’s measures were first announced.”
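The study's headline figure is a relative comparison: the conspiratorial share of recommendations measured against its level before YouTube's announcements. The arithmetic behind "40 percent less common" can be sketched as follows; the function name and the sample shares are ours, not figures from the report.

```python
# Illustrative arithmetic only: the baseline and current shares below
# are hypothetical numbers, not measurements from the study.

def percent_reduction(baseline_share, current_share):
    """Percent drop in the conspiratorial share relative to the baseline."""
    return 100 * (baseline_share - current_share) / baseline_share

# Hypothetical shares: percent of recommendations rated conspiratorial.
baseline = 10  # share before YouTube's measures were announced
current = 6    # share observed later in the study window

print(percent_reduction(baseline, current))  # 40.0, i.e. "40 percent less common"
```

The same formula explains why a 70 percent drop in one metric (watch time of recommended conspiratorial videos) can coexist with only a 40 percent drop in another (the proportion of conspiratorial recommendations): the two ratios are computed over different quantities.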

via counterextremism: Tech & Terrorism: Tech Companies That Algorithmically Amplify Terrorist Content Should Not Receive Section 230 Immunity