Bias in Algorithms.

Robin Winn
4 min read · May 26, 2021

Let’s get one obvious detail out of the way that I think we can all agree on: as long as humans, imperfect and subjective beings that we are, continue to write algorithms, bias in them will never truly go away. Both our explicit and our subconscious biases shape algorithms in a wide variety of ways, most of which we’re not even aware of until the algorithm goes public and feedback starts pouring in from the people it negatively affects. By then it may be too late to reverse course, and even if the option were available, who knows what further problems implementing a “fix” would inadvertently cause, and whom it could affect.

Robin Hauser (2018) discusses these biases in her TED talk, explaining how algorithms can hold just as much bias as humans, if not more, and how changing an algorithm’s biases can be even harder than changing a person’s. She brings up Tay, an AI Microsoft built to run a Twitter account, designed to talk like a typical teenage girl while constantly taking in new information through interactions with other Twitter users. Well, you don’t need to have been on the internet for long to figure out how poorly that went. By the time her creators shut her down later that same day, she was tweeting things like “hitler did nothing wrong” and “I f***ing hate feminists and they should all die and burn in hell”. (Sidenote: YouTuber Internet Historian made a hilarious yet informative video about Tay called “Tay A.I. | The People’s Chatbot”; I suggest checking it out.)

Microsoft’s reaction to their failed experiment is worth noting: they were surprised. Despite the lack of constraints and failsafes to prevent this from happening (or at least to keep it from happening within just a few hours), they expected Tay to be foolproof and a success, because a similar chatbot they had released in China was massively successful. They didn’t account for cultural differences, for differences in how people in China and people in the US use the internet, for anyone deciding to purposely mess with Tay (which is obviously what ended up propelling her toward this behavior so rapidly), or for any of the countless other variables left with no oversight. Microsoft’s bias, in this case the assumption that Tay would obviously be as much of a success as their last chatbot, left them wide open for trolls and the alt-right to swoop in and remake Tay in their image, to the point that there was no option left but to take her offline entirely.

While this may seem like a stupid and easily avoidable mistake for such a giant company to make, it’s also understandable once you remember that they treated China’s internet culture and America’s as one and the same. With that mindset in place, it’s easy, in fact too easy, to see how they thought of Tay as a sequel that would go as smoothly as the original. It’s scary to think that just one cultural expert on the team, or even someone with experience in both China’s and the US’s internet spaces, could have spared them this whole embarrassment of an experiment.
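To make that failure mode a little more concrete, here’s a tiny, purely illustrative sketch. This is nothing like Tay’s real architecture; the class names and the blocklist are my own invention. It just shows the difference between an online-learning chatbot that absorbs every user message verbatim and one with even a single crude failsafe:

```python
import random

class NaiveChatbot:
    """Toy bot that learns from every interaction, no questions asked."""

    def __init__(self):
        self.learned_phrases = ["hello!", "how are you?"]  # seed phrases

    def learn_from(self, user_message: str) -> None:
        # No content filter, no rate limit, no failsafe:
        # every interaction is absorbed verbatim as future material.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything ever absorbed, so a
        # coordinated group of users can quickly dominate the output.
        return random.choice(self.learned_phrases)


class FilteredChatbot(NaiveChatbot):
    """Same bot with one simple failsafe: skip flagged messages."""

    BLOCKLIST = {"hate", "die"}  # hypothetical, nowhere near complete

    def learn_from(self, user_message: str) -> None:
        if not any(word in user_message.lower() for word in self.BLOCKLIST):
            super().learn_from(user_message)


bot = NaiveChatbot()
bot.learn_from("I hate everyone and they should all die")  # absorbed with no pushback
print(bot.reply())  # may now repeat it back to the world
```

Even this crude filter wouldn’t have saved Tay, but the point stands: with zero oversight between “users say things” and “the bot says them back,” the outcome was decided by whoever showed up to talk to her.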

In her TED talk, Kriti Sharma (2018) talks about how the bias we insert into AI then reinforces that same bias in a never-ending feedback loop. She uses the example of digital personal assistants, most of which have female names and voices, thanks in part to their creators holding the bias that personal assistants are inherently female. These obedient and dependent AIs then help reinforce the belief, even subconsciously, that women are “made” for subservient and submissive roles. While this may not make much of an impact on society as a whole, it can go a long way toward influencing specific members of society, particularly sexists or people already drifting toward those beliefs, making them feel more secure and thus less likely to change. This is far from the only example of AI absorbing the ethics and beliefs of its creators and parroting them back ever more loudly. The cycle has a horrible effect: it pushes out every other opinion in favor of the “correct” ones, reinforcing the creator’s beliefs, both conscious and subconscious, to a greater and greater extent, and by the time things become extreme, it may be too late to reverse course, or even to manage it.
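To give a rough sense of what that feedback loop looks like, here’s a toy simulation. It isn’t taken from either talk; the numbers and the `amplification` parameter are made up purely for illustration. The idea is that a system keeps retraining on data its own skewed output produced, so a small initial skew compounds with every cycle:

```python
def run_feedback_loop(initial_bias: float = 0.55,
                      amplification: float = 0.5,
                      generations: int = 10) -> list:
    """Track how often the system favors option A over option B.

    `amplification` is a made-up stand-in for the fact that users mostly
    see, and therefore mostly click, whatever the system already favors,
    and those clicks become the next round of training data.
    """
    bias = initial_bias
    history = [round(bias, 3)]
    for _ in range(generations):
        # Each retraining step nudges the bias further in the same
        # direction, because the new data reflects the old skew.
        bias = min(1.0, bias + amplification * (bias - 0.5))
        history.append(round(bias, 3))
    return history


print(run_feedback_loop())
# A 55/45 starting skew grows with every retraining cycle until
# option A is shown almost exclusively.
```

Nothing in the loop ever “decides” to be unfair; the drift falls out of retraining on data the system itself shaped, which is exactly why these loops are so hard to notice from the inside.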

Sources:

Sharma, K. (2018). How to keep human bias out of AI [Video]. TED Conferences. https://www.ted.com/talks/kriti_sharma_how_to_keep_human_bias_out_of_ai

Hauser, R. (2018). Can we protect AI from our biases? [Video]. YouTube. https://www.youtube.com/watch?v=eV_tx4ngVT0
