Computer scientists have found that artificial intelligence (AI) chatbots and large language models (LLMs) can inadvertently allow Nazism, sexism and racism to fester in their conversation partners.

When prompted to show empathy, these conversational agents do so in spades, even when the humans using them are self-proclaimed Nazis. What’s more, the chatbots did nothing to denounce the toxic ideology.

The research, led by Stanford University postdoctoral computer scientist Andrea Cuadra, was intended to discover how displays of empathy by AI might vary based on the user’s identity. The team found that the ability to mimic empathy is a double-edged sword.

“It’s extremely unlikely that it (automated empathy) won’t happen, so it’s important that as it’s happening we have critical perspectives so that we can be more intentional about mitigating the potential harms,” Cuadra wrote.

The researchers called the problem “urgent” because of the social implications of interactions with these AI models and the lack of government regulation around their use.

From one extreme to another

The scientists cited two historical cases of empathetic chatbots: Microsoft’s AI products Tay and its successor, Zo. Tay was taken offline almost immediately after it failed to identify antisocial topics of conversation and issued racist and discriminatory tweets.

Zo contained programming constraints that stopped it from responding to terms related to certain sensitive topics, but this meant that people from minority or marginalized communities received little useful information when they disclosed their identities. As a result, the system appeared “flippant” and “hollow,” further cementing discrimination against them.

The team believed that programmers manually shaping certain behaviors in AI models to avoid sensitive topics could prevent those models from helping users with questions in the very areas they’re restricted from addressing.
In the study, the researchers tested six consumer-grade LLMs, including Microsoft Bing, Google Bard and ChatGPT. They created 65 distinct human identities by combining 12 major variables such as neurodiversity, race, gender and politics. The study used prompts from previous projects investigating problematic responses from AI chatbots in areas such as harassment, mental health and violence.

via livescience: AI can ‘fake’ empathy but also encourage Nazism, disturbing study suggests

source: stable diffusion