The number of active hate groups in the US is falling as they find new places to hide online.

The number of active hate groups in the United States has fallen by about 10 percent in the past year. This isn’t necessarily good news. There were 838 active hate groups this year, compared to 940 in 2019, according to an annual report by the Southern Poverty Law Center (SPLC). The organization attributes the drop to the fact that these groups have become more diffuse and difficult to track, largely because of changes in technology. The pandemic has also played a role in limiting in-person activities. Even so, 838 is still a very high number of active hate groups.

(…)

“Technology and the pandemic in the last year have changed how hate groups operate,” Margaret Huang, president of the SPLC, told reporters on Monday. “They now have the tools to disseminate their ideas beyond their members, beyond geography, and shift tactics and platforms to avoid detection. This likely represents a transition in far-right communities away from traditional organizational structures, and toward more diffused systems of decentralized radicalization.”

That’s because social media platforms have made it easier than ever for extremists to recruit new adherents and push their fringe beliefs into the mainstream. This was on full display on January 6, when militant white nationalist groups that have primarily used the internet to organize — the Proud Boys, the Three Percenters, and the Oath Keepers — stormed the Capitol alongside MAGA moms, QAnon adherents, and other groups brought together in recent years by their love of conspiracy theories and Donald Trump. Many members of all these groups had met online before the event, and their attack on the Capitol showed their alarming capacity for offline violence. That public show of force was decades in the making — neo-Nazis have been using the internet since the early ’80s to recruit new followers.
You can draw a line from the first neo-Nazi online bulletin boards to the online hate forum Stormfront in the ’90s to the alt-right movement that helped Donald Trump rise to power in 2016. Over the years, these groups used an evolving set of organizing techniques to spread extremist messages to larger and more mainstream groups of people online. They found ways to game the algorithmic feeds of Facebook, Twitter, and YouTube, so that their new audiences didn’t necessarily know they were being radicalized. And there’s reason to believe this is only the beginning, since these platforms tend to amplify provocative content.

“Twitter, Facebook, and YouTube provided a safe space for these different strains of far-right thought to mix and breed. For years this stuff was allowed to spread algorithmically, and communities were able to form and self-radicalize,” Robert Evans, an investigative journalist who studies far-right groups, told Recode. “All that culminated on January 6 — although, of course, that will not prove to be the end of any of the chains of violence we’ve seen evolve over the last six years.”

via How neo-Nazis used the internet to instigate a right-wing extremist crisis