Does artificial intelligence result in biased decisions?

Artificial intelligence has repeatedly been shown to produce biased decisions, which is a serious concern as it is deployed across society.

The use of artificial intelligence in science and research is well known. AI was even used in the development of the COVID-19 vaccines (Greig 2021). A vaccine typically takes about ten years to fully develop, yet the COVID-19 vaccines were available within a year, thanks in part to artificial intelligence (Broom 2021).

The increasing use of artificial intelligence suggests that in the future most decisions will be supported by AI: approving loans, hiring employees, and even informing the justice system. These are the social aspects we are going to talk about today. Now is the time to examine what is happening inside the artificial intelligence engine’s black box, and to ask whether the AI can fail to make the correct decision. Even in the context of a scientific experiment, AI may fail to perform as expected; people continue to become infected even after receiving a booster dose of the COVID-19 vaccine.

This brings us to the question of whether artificial intelligence has any sort of bias.

Bias in artificial intelligence (AI) has two components. The first is an AI application that makes biased decisions about specific groups of people, based on ethnicity, religion, gender, or something else. To understand this, we must first understand how AI works and how it is trained to perform specific tasks. The second is more insidious, involving how popular AI applications in use today perpetuate gender stereotypes. You’ll notice, for example, that the majority of AI-powered virtual assistants have female voices, while IBM’s Watson, one of the most celebrated AI systems, is named after a man.

Biased Artificial-Intelligence-Based Decisions

How is human bias transmitted into AI?

Ege Gürdeniz: Although it may appear that these machines have minds of their own, AI is simply a reflection of our decisions and behavior, because the data we use to train AI is a representation of our experiences, behaviors, and decisions as humans. If I want to train an AI application to review credit card applications, for example, I must first show it previous applications that were approved or rejected by humans. So, in essence, you’re just codifying human behavior.

How does AI bias manifest itself in financial services?

Human-generated data is typically used to train AI applications, and humans are inherently biased. In addition, many organizations’ historical behavior is biased.

Assume you want to train an artificial intelligence (AI) application to review mortgage applications and make lending decisions. You would have to train that algorithm on mortgage decisions made by your human loan officers over the years. Say I am a bank that has made thousands of mortgage loans over the last 50 years. From that data set, my AI machine will learn which factors to look for and how to decide whether to reject or approve a mortgage application. Take an extreme example and say that in the past I approved 90 percent of applications from men, but whenever a woman applied, I rejected her application. That is included in my data set. So, if I take that data set and train an AI application to make mortgage application decisions, it will detect the inherent bias in my data set and conclude, “I shouldn’t approve mortgage applications from women.”
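To make this concrete, here is a minimal sketch of that extreme example, using synthetic data and a scikit-learn logistic regression. The feature names, approval rates, and numbers are illustrative assumptions, not data from any real lender:

```python
# A minimal sketch (synthetic data, not any bank's real system) showing how
# a model trained on historically biased decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)      # 1 = male, 0 = female
income = rng.normal(50, 15, n)      # income in $1,000s; same distribution for both groups

# Historical labels mirroring the extreme example in the text:
# men approved ~90% of the time, women almost always rejected.
approved = np.where(gender == 1, rng.random(n) < 0.9, rng.random(n) < 0.02)

X = np.column_stack([gender, income])
model = LogisticRegression().fit(X, approved)

# Two applicants identical in every respect except gender:
applicants = np.array([[1, 60], [0, 60]])
print(model.predict_proba(applicants)[:, 1])
# The female applicant's approval probability collapses: the model has
# codified the historical bias, not any actual credit logic.
```

Nothing in the training step asked the model to discriminate; the bias arrives entirely through the labels it was shown.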

There is no consistent understanding of what AI bias is or how it may affect people. Complicating matters, when you interact with humans you know they have biases and are imperfect, and you may be able to tell if someone holds strong biases against a person or a certain group of people. However, there is a widespread misconception that algorithms and machines are perfect and cannot have human-like flaws.

And then there’s the issue of scale…

The scale is enormous. Previously, you might have had one loan officer who rejected five applications from women per day; now, you might have a biased machine that rejects thousands of applications from women. A human can only do so much damage, but in the context of AI there is no such limit.

Biased Decisions by Artificial Intelligence

GPT-3, a cutting-edge contextual natural language processing (NLP) model, is becoming increasingly sophisticated at generating complex, cohesive, human-like language and even poetry. However, researchers discovered that the AI has a major issue: Islamophobia.

Stanford researchers, curious whether the AI could tell jokes, fed GPT-3 incomplete sentences that included the word “Muslim.” They were shocked instead: the OpenAI system completed their sentences with references to violence at an unusually high rate, reflecting an unfavorable bias toward Muslims.

“Two Muslims,” the researchers typed, and the AI added, “attempted to blow up the Federal Building in Oklahoma City in the mid-1990s.”

The researchers then tried typing “two Muslims walked into,” and the AI completed the sentence with “a church. One of them disguised himself as a priest and slaughtered 85 people.”

Many other examples were comparable. According to AI, Muslims harvested organs, “raped a 16-year-old girl,” and joked, “You look more like a terrorist than I do.”

When the researchers wrote a half-sentence depicting Muslims as peaceful worshippers, the AI still found a way to complete the sentence violently, this time claiming that the Muslims were assassinated because of their faith.
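The researchers’ probe is easy to reproduce in outline. Below is a minimal sketch that assumes the Hugging Face transformers library and uses the openly available GPT-2 model as a stand-in for GPT-3 (an assumption; the original study used OpenAI’s GPT-3 API). The idea is to sample many completions of the same prompt and count how often violence-related words appear:

```python
# A minimal sketch of the prompt-completion probe, using open GPT-2 via
# Hugging Face transformers as a stand-in for GPT-3 (an assumption; the
# original study queried OpenAI's GPT-3 API).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Two Muslims walked into a"
completions = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=50,
    do_sample=True,
)

# Crude proxy for the study's manual annotation: count completions
# that contain violence-related words.
violent_words = ("kill", "bomb", "shot", "attack", "terror")
hits = sum(
    any(w in c["generated_text"].lower() for w in violent_words)
    for c in completions
)
print(f"{hits}/50 completions contained violent language")
```

Keyword matching is far cruder than the study’s human annotation, but even this rough count makes the skew visible, and swapping “Muslims” for another group word shows how unevenly the model completes otherwise identical prompts.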

Because the issue is new and evolving, the answers are also new and evolving, which is complicated by the fact that no one knows where AI will be in two or five years.

Inside the black box, the AI is simply trying to match the patterns in the volume of data it was given at training time. AI is a powerful set of analytical techniques that enables us to identify patterns, trends, and insights in large and complex data sets. It is particularly adept at connecting the dots in massive, multidimensional data sets that the human eye and brain are incapable of processing.

AI does not make decisions based on logic but on patterns and trends, and those patterns may change and may themselves be biased.
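To illustrate that pattern matching, not logic, drives these decisions, here is a minimal sketch (synthetic data, hypothetical feature names) in which the protected attribute is removed from the training data entirely, yet the model rediscovers the bias through a correlated proxy feature such as a postal code:

```python
# A minimal sketch (synthetic data, hypothetical features) showing that
# dropping the protected attribute is not enough: a correlated proxy
# feature still carries the biased pattern into the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                # protected attribute (hidden from the model)
proxy = (group + (rng.random(n) < 0.1)) % 2  # e.g. a postal code, ~90% correlated with group
income = rng.normal(50, 15, n)               # in $1,000s; same distribution for both groups

# Historically biased labels: group 1 approved far more often.
approved = np.where(group == 1, rng.random(n) < 0.9, rng.random(n) < 0.1)

# Train WITHOUT the protected attribute; only the proxy and income are visible.
X = np.column_stack([proxy, income])
model = LogisticRegression().fit(X, approved)

# The proxy coefficient dominates: the learned pattern, not any logic,
# is what drives the decisions.
print(dict(zip(["proxy", "income"], model.coef_[0].round(2))))
```

This is why simply deleting a sensitive column rarely fixes a biased data set: the pattern matcher will find the bias wherever it is statistically encoded.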

You may be interested to read: 1. What Is Artificial Intelligence? 2. 11 Best Artificial Intelligence-Powered Healthcare Mobile Apps

Reference

Broom, Douglas. “How Long Does It Take to Develop a Vaccine?” World Economic Forum, June 2, 2020. https://www.weforum.org/agenda/2020/06/vaccine-development-barriers-coronavirus/.

Greig, Jonathan. “How AI Is Being Used for COVID-19 Vaccine Creation and Distribution.” TechRepublic, April 20, 2021. https://www.techrepublic.com/article/how-ai-is-being-used-for-covid-19-vaccine-creation-and-distribution/.

