Artist using AI to question AI ethics and reveal latent biases in AI systems

There is much at stake in the architecture and contents of the training sets used in AI. They can promote or discriminate, approve or reject, render visible or invisible, judge or enforce.

“You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning. 

Something is wrong with this picture. 

Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?” (Full article by Kate Crawford and Trevor Paglen.)

Trevor Paglen talks to The Art Angle’s Andrew Goldstein. Although the episode was recorded before George Floyd’s murder sparked nationwide demonstrations for racial justice, Paglen’s work is more timely than ever for its probing of surveillance, authoritarianism, and the ways both are simultaneously empowered and cloaked by AI.

Jun 15th, 2020

By Catherine Thomas