BERLIN, GERMANY - AUGUST 03: Passersby walk under a surveillance camera that is part of a facial recognition technology test at Berlin Suedkreuz station on August 3, 2017 in Berlin, Germany. The technology is claimed to be able to track terror suspects and help prevent future attacks. (Photo by Steffi Loos/Getty Images)

How would a Latino be classified by an Artificial Intelligence system?

An art project investigates the questionable ways artificial intelligence categorizes us.

We all know that artificial intelligence (AI) and facial recognition are handy tools for unlocking your iPhone. These systems still feel like a novelty; what most of us mere mortals don't understand is how the policies that govern them are created, or how AI and its algorithms categorize the faces they recognize.

Trevor Paglen and Kate Crawford, two artists who question the boundaries between science and ideology, created ImageNet Roulette, a web tool where users can upload images and be tagged by an AI system, to reveal how this technology categorizes us. The results can be entertaining, or deeply prejudiced: sexist or even racist.

ImageNet Roulette was created to show how machine learning systems classify human beings. Each uploaded image is sent to a neural network that selects from the various categories describing a 'Person' in the data available on ImageNet.

ImageNet is one of the largest image databases in the world. It was created in 2009 by researchers at Princeton and Stanford universities for research on, and the training of, machine learning systems.
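For readers curious about the mechanics, the sketch below shows roughly how an ImageNet-trained classifier turns a photo into a label, using a stock model from the torchvision library. It is a minimal illustration of the general technique, not the system behind ImageNet Roulette: that project drew on ImageNet's 'Person' categories, which the standard 1,000-class model used here does not include, and the input file name is hypothetical.

    # A minimal sketch of ImageNet-style classification with a stock
    # torchvision model. Assumption: this is NOT ImageNet Roulette's
    # actual system, which used ImageNet's 'Person' categories; the
    # standard 1,000-class model below does not include them.
    import torch
    from PIL import Image
    from torchvision.models import resnet50, ResNet50_Weights

    weights = ResNet50_Weights.IMAGENET1K_V2
    model = resnet50(weights=weights)
    model.eval()  # inference mode: no dropout, frozen batch-norm stats

    # The weights object bundles the exact preprocessing the model expects
    # (resize, center-crop, normalize with ImageNet means and std devs).
    preprocess = weights.transforms()

    img = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
    batch = preprocess(img).unsqueeze(0)          # add a batch dimension

    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]

    # Report the single most probable ImageNet category and its confidence.
    top = probs.argmax().item()
    print(f"{weights.meta['categories'][top]}: {probs[top].item():.1%}")

One detail worth noting: the network can only ever answer with one of the categories it was trained on. Whatever labels humans put into the dataset, however loaded, are exactly what comes back out.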

The classifications gathered by Paglen and Crawford's ImageNet Roulette are part of Training Humans, an exhibition hosted at the Prada Foundation in Milan until 2020. The exhibition aims to make visitors reflect on power in technology and asks questions like: who, exactly, has the power to build and benefit from these artificial systems?

But is it dangerous for an AI to use our images to classify us?

Through the artistic project, ImageNet Roulette makes it clear that some classifications are relatively harmless, even fun. For images of people with dark skin, however, the system produced labels such as "mulatto," "orphan," and even "rape suspect." That's when all the fun of AI vanishes, especially when we take into account that a large majority of Latinos in the world have a complexion that mixes white and black.

These categories come from the original ImageNet database, built back in 2009, not from the creators of ImageNet Roulette.

Trevor Paglen and Kate Crawford's project demonstrates how biased artificial intelligence algorithms can be. The data behind Training Humans were collected from several sources: the ImageNet database, the judgments of workers on Amazon's Mechanical Turk platform, and general-use dictionaries.

Unfortunately, the problem is not the algorithms themselves, but the biases of their creators: human beings.
