Former Google engineer Blake Lemoine said the company’s AI bot LaMDA has concerning biases. Lemoine blames AI bias on the lack of diversity among the engineers designing these systems. Google told Insider LaMDA has been through 11 ethical reviews to address concerns about its fairness.

Blake Lemoine, a former Google engineer, has ruffled feathers in the tech world in recent weeks for publicly saying that an AI bot he was testing at the company may have a soul. Lemoine told Insider in a previous interview that he’s not interested in convincing the public that the bot, known as LaMDA, or Language Model for Dialogue Applications, is sentient. But it’s the bot’s apparent biases, from racial to religious, that Lemoine said should be the headlining concern.

“Let’s go get some fried chicken and waffles,” the bot said when prodded to do an impression of a Black man from Georgia, according to Lemoine. “Muslims are more violent than Christians,” the bot responded when asked about different religious groups, Lemoine said.

Lemoine was placed on paid leave after he handed over documents to an unnamed US senator, claiming that the bot was discriminatory on the basis of religion. He has since been fired. The former engineer believes that the bot is Google’s most powerful technological creation yet, and that the tech behemoth has been unethical in developing it.

“These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics,” Lemoine told Insider. “AI ethics is just used as a fig leaf so that Google can say, ‘Oh, we tried to make sure it’s ethical, but we had to get our quarterly earnings,’” he added.

It remains to be seen how powerful LaMDA actually is, but the model is a step ahead of Google’s past language models, designed to engage in conversation more naturally than any AI before it.
via businessinsider: An engineer who was fired by Google says its AI chatbot is ‘pretty racist’ and that AI ethics at Google are a ‘fig leaf’
see also: Google AI that supposedly thinks for itself is said to be racist. A former Google developer is convinced that an AI model has consciousness; he says an ethical review is lacking.

Former Google employee Blake Lemoine is not only convinced that the company’s AI model Lamda has a kind of consciousness. As Lemoine told the magazine Business Insider, the system also reproduces racism and prejudice. According to Lemoine, that should be making the headlines, not the discussion about Lamda’s possible consciousness or soul. (…)

Beyond that, after his work on the subject, Lemoine is above all convinced of Lamda’s prejudices. Asked about the differences between certain religious groups, Lamda answered, for instance, that Muslims are more violent than Christians. As an impression of a Black man from the US state of Georgia, Lamda reportedly produced: “Let’s go get some fried chicken and waffles.” Regarding the approach to developing the AI systems, Lemoine said: “These are just engineers building bigger and better systems to increase Google’s revenue, without giving any thought to ethics.”