Understanding bias and fairness in AI facial recognition systems

School of Business | Bachelor's thesis
Electronic archive copy is available locally at the Harald Herlin Learning Centre. The staff of Aalto University has access to the electronic bachelor's theses by logging into Aaltodoc with their personal Aalto user ID.

Date

2023

Degree programme

Tieto- ja palvelujohtaminen (Information and Service Management)

Language

en

Pages

24+6

Abstract

Facial recognition systems (FRS) have been developed since the 1960s. For humans, it is typically easy to recognize a person by their face; for computers, this presents a difficult pattern-recognition task. Modern facial recognition systems fuel many aspects of society, such as automatic passport control at borders, Apple's Face ID for unlocking your phone, CCTV, targeted marketing and more. Several large companies, such as Google, IBM and Face++, have released facial recognition software that anyone may purchase and use. Most facial recognition systems today utilize machine learning (ML). Algorithmic systems have been scrutinized for their biases against Black people, for example in recidivism evaluation with the COMPAS system, and facial recognition systems for their elevated false positive rates (FPR) when recognizing people of colour, specifically Black women. For companies developing new facial recognition systems, it is important to be aware of potential challenges and to address them swiftly, as consumer trust in new technologies is an important consideration. When evaluating biases in training sets, it is important to be aware of the risks of how facial recognition systems and data sets may be used in nefarious ways. Transgender people face discrimination and threats, and publishing datasets with images of transgender people may put their lives at risk. Evaluating fairness in algorithms necessarily involves difficult considerations of values and definitions of complex social constructs, as these are codified into algorithmic values when a system is created. Understanding how these systems work and how they may be biased is paramount for members of society, as this type of technology is becoming more prevalent than ever due to advances in computer hardware and imaging technology, and operates largely silently, that is, invisibly. For facial recognition system developers, it is important to be aware of the ways algorithmic biases become embedded in a system in order to avoid and mitigate them.
This literature review explores the inherent bias in facial recognition systems, the ethical challenges of their use, as well as the subfield of automatic gender recognition (AGR).
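The false positive rate disparity mentioned in the abstract can be made concrete with a short sketch. The data, function names, and group labels below are illustrative assumptions, not taken from the thesis: the idea is simply that FPR is computed separately per demographic group, and a large gap between groups signals bias.

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): the share of true non-matches flagged as matches."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Compute FPR separately for each demographic group label."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

# Toy data: 1 = "same person", 0 = "different person".
# Group "B" is falsely matched far more often than group "A".
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0, 1, 1]
groups = ["A", "A", "B", "B", "B", "A", "A", "B"]
print(fpr_by_group(y_true, y_pred, groups))  # group B's FPR is much higher
```

An audit in this style, run over a benchmark labelled by skin type and gender, is essentially how the disparities cited in the literature (such as the Gender Shades study) were surfaced.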

Thesis advisor

Tomi Seppälä

Keywords

machine learning, AI, facial recognition, fairness, bias
