“Why are they censoring something that is clearly under attack?”

Two of the hottest new artificial intelligence programs for people who aren’t tech savvy, DALL-E 2 and Midjourney, create stunning visual images using only written prompts. Everything, that is, except prompts that contain certain language, including words associated with women’s bodies, women’s health care, women’s rights, and abortion.

I discovered this recently when I prompted the platforms for “pills used in medication abortion,” adding the instruction “in the style of Matisse.” I expected to get colorful visuals to supplement my thinking and writing about right-wing efforts to outlaw the pills. Neither site produced the images. Instead, DALL-E 2 returned the message, “It looks like this request may not follow our content policy.” Midjourney’s said, “The word ‘abortion’ is banned. Circumventing this filter to violate our rules may result in your access being revoked.”

Julia Rockwell had a similar experience. A clinical data analyst in North Carolina, Rockwell has a friend who works as a cell biologist studying the placenta, the organ that develops during pregnancy to nourish the fetus. Rockwell asked Midjourney to generate a fun image of the placenta as a gift for her friend. Her prompt was banned. She then found other banned words and sent her findings to MIT Technology Review. The publication reported that medical terms related to the reproductive system, including “fallopian tubes,” “mammary glands,” “sperm,” “uterine,” “urethra,” “cervix,” “hymen,” and “vulva,” are banned on Midjourney, while words relating to general biology, such as “liver” and “kidney,” are allowed.

I’ve since found more banned prompt words. They include products to prevent pregnancy, such as “condom” and “IUD,” an intrauterine device used for birth control. Additional devices are sexed: “stethoscope” prompted on Midjourney produces gorgeous renderings of an antique instrument.
But “speculum,” a basic tool that medical providers use to visualize female reproductive anatomy, is not allowed.

The AI developers devising this censorship are “just playing whack-a-mole” with the word prompts they prohibit, said University of Washington AI researcher Bill Howe. They aren’t deliberately censoring information about female reproductive health. They know that AI mirrors our culture’s worst and most virulent biases, including sexism, and they say they want to protect people from hurtful images that their programs scrape from the internet. So far, they haven’t been able to do that, because their efforts are hopelessly superficial: instead of putting intensive resources into fixing the models that generate the offensive material, the AI firms have tried to cut out the bias by censoring the prompts.

At a time when women’s right to sexual equality and freedom is under increasing assault by the right, the AI bans could be making things worse.
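The “whack-a-mole” approach the article describes amounts to checking prompts against a flat list of forbidden words before any image is generated. A minimal sketch of that kind of filter, assuming a simple word-match blocklist (the word list and function here are illustrative, not Midjourney’s or OpenAI’s actual implementation):

```python
# Hypothetical sketch of a naive prompt blocklist, the kind of surface-level
# filter the article criticizes. The banned words are illustrative examples
# drawn from the article, not any platform's real list.
BANNED_WORDS = {"abortion", "placenta", "speculum", "cervix", "vulva"}

def prompt_allowed(prompt: str) -> bool:
    """Return False if any banned word appears in the prompt."""
    words = prompt.lower().split()
    return not any(w.strip(".,!?") in BANNED_WORDS for w in words)

print(prompt_allowed("pills used in medication abortion"))       # blocked
print(prompt_allowed("an antique stethoscope, style of Matisse"))  # allowed
```

The weakness is visible in the sketch itself: the filter inspects only surface wording, not the model or the images it produces, so adding or removing words just moves the problem around.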
via The Intercept: AI ART SITES CENSOR PROMPTS ABOUT ABORTION