October 25, 2020

The biases in our algorithm

Social media platforms have repeatedly been criticised for employing algorithms that favour certain kinds of content, or certain groups of people, over others — a phenomenon known as “algorithmic bias”. Vox Recode explains it this way: “these systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used.” The phenomenon is not restricted to social media, as machine learning and Artificial Intelligence (AI) are now part of most technology and digital platforms that people navigate. The bias in this technology is widespread, and in many cases has serious repercussions.

Over the past weekend, various Twitter users posted threads documenting what they claim is the platform’s algorithmic bias. This led to a global discussion on how social media platforms fail to ensure that their technology, and by extension their services, is free from prejudice. The threads highlighted how Twitter’s photo previews prioritised the faces of white people over the faces of people of colour.

The discussion was started by a PhD student who was helping a faculty member figure out why Zoom kept removing his head when he tried to use virtual backgrounds on the video calling app. He soon realised that Zoom’s face-detection algorithm was removing the faces of Black people.

In the process, however, he also realised that this bias was not restricted to one platform: Twitter was doing something similar, prioritising the white student’s face in the tweet’s photo preview despite his attempts to change the orientation of the photos.

This led to a series of threads by people testing the bias with variations in the photos, their orientation, their format, and the subjects pictured. Some also tried it with fictional characters, to test whether the algorithm still preferred lighter-skinned subjects over those with darker skin tones. Unsurprisingly, the previews most often showed the images of white people, leading many to conclude that the bias does in fact exist.

An alternative explanation:

However, some looked for explanations beyond algorithmic bias to check whether something else was at play. Twitter user @IDoTheThinking suggests that this might have more to do with contrasting colours in the photo than with skin colour itself. They propose that because Black people mostly have dark hair, a dark-haired subject wearing dark clothing against a dark background blends together, and the machine learning model overlooks the blending colours in favour of the region of the image with more contrast.

They also test whether solid colours are treated the same way, and find that the results are indeed similar.
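To illustrate the contrast hypothesis, here is a toy sketch (not Twitter’s actual cropping algorithm, which has not been published in this form): a cropper that scans candidate crop windows and picks the one whose pixels vary the most, so that a subject blending into the background loses out to a high-contrast region.

```python
import numpy as np

def contrast_saliency_crop(img, crop_size):
    """Pick the crop window with the highest pixel standard deviation —
    a toy stand-in for contrast-driven saliency cropping."""
    h, w = img.shape
    ch, cw = crop_size
    best_score, best_xy = -1.0, (0, 0)
    for y in range(0, h - ch + 1):
        for x in range(0, w - cw + 1):
            score = img[y:y + ch, x:x + cw].std()
            if score > best_score:
                best_score, best_xy = score, (y, x)
    return best_xy

# Synthetic grayscale "photo": a flat mid-grey background...
img = np.full((20, 40), 0.5)
# ...a low-contrast region on the left (dark subject, dark background)...
img[5:15, 2:12] = 0.4
# ...and a high-contrast checkerboard region on the right.
checker = np.indices((10, 10)).sum(axis=0) % 2 == 0
img[5:15, 28:38] = np.where(checker, 0.1, 0.9)

y, x = contrast_saliency_crop(img, (10, 10))
# the crop lands on the high-contrast checkerboard at (5, 28)
```

On this synthetic image, the low-contrast block is passed over in favour of the checkerboard region, mirroring the blending effect @IDoTheThinking describes — without the cropper ever “knowing” anything about skin colour.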

Responding to one of the threads, Dantley Davis, the Chief Design Officer at Twitter, acknowledges that it’s the company’s fault, and says the next step is fixing the problem.

As old as tech:

The problem, however, is not new, and has in fact been highlighted by people of colour, specifically Black people, in the past. In her 2017 TED talk, MIT graduate Joy Buolamwini shared that facial analysis software she was working with did not detect her face. It was then that she realised the people who developed the software had not trained it on a broad range of skin tones and facial features.

Although technology makes work and the world easier and quicker for everyone, it’s imperative to acknowledge that it is developed and coded by humans, whose biases are prone to being replicated in what they produce. Algorithms and machine learning models are written and trained by people, and are therefore susceptible to the biases of those coders and developers.

These are not isolated incidents and are certainly not the only ones that have been flagged in the past couple of years. These biases have been witnessed in a vast array of technology employing algorithms of various kinds. 

In 2017, Gizmodo reported on footage shared by a Facebook employee in Nigeria showing a Black man trying, and failing, to get soap from an automatic soap dispenser, while his white colleague gets soap instantly. The Black man then holds a white napkin under the dispenser, and this time the soap is dispensed.

Similar incidents have been detected in wearables like fitness trackers and heart-rate monitors, in health care treatments, and in beauty pageants, among many other reported examples. One of many studies concludes that face detection algorithms not only prioritise light skin but also misgender the faces of Black subjects. For instance, in her 2018 research paper Gender Shades, Buolamwini found that 93.6 percent of the faces misgendered by Microsoft’s face detection software were those of darker-skinned subjects.

Algorithmic bias often has serious implications when technology is unable to assist those who look a certain way, have certain facial features, or have more melanin in their skin. Many incidents, studies, and analyses point to the fact that this technology is biased towards lighter skin, resulting in discrimination and unfair practices targeting those who are already marginalised. With racial justice and many other civil liberty movements ongoing across the world, the increased use of face detection and machine learning technology is proving to have a drastic impact on those protesting for their rights on the streets. Wrongful arrests and charges, economic loss, and social stigmatisation are just some of the consequences that those falling victim to this bias may have to deal with.

It’s imperative that, while analyses are conducted to highlight these shortfalls, algorithm and machine learning developers move towards training AI programs in a way that their own biases do not influence the process, or the subsequent deployment of the technology. Transparency in how the AI is trained could be a first step, alongside inclusive testing of the program, to ensure that it does not unfairly ostracise the already vulnerable.

Featured Image by Daryan Shamkhali on Unsplash

Written by

Hija is a Programs Manager at Media Matters for Democracy. She draws on her experience in digital rights in Pakistan to lead MMfD’s digital rights and internet governance advocacy. She tweets at @hijakamran
