> How bias in AI affects society now and in the future

Bias in AI Cover

Artificial Intelligence (AI): some say it will destroy the world, others say it will make it better. Will it be the 'Terminator' or a blessing? Bias in AI can greatly shape the course of the future because of its many repercussions, so we must stay vigilant. Many have debated whether AI will turn against us, but we still do not know. In this paper, I discuss the current uses of AI and how it will impact society. I hope, for the sake of humanity, that this is not the first step towards our extinction.

Before I discuss bias in AI, let me define what I mean by AI. Encyclopaedia Britannica defines Artificial Intelligence as 'the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings'.

Language Models

AI is most commonly used in language models, so that is where bias in AI is most commonly visible. Language models are AI programs which are fed information to 'learn' and carry out a specific task. Lutkevich (2020) explains that Natural Language Processing (NLP) is used as a tool to train the language model, because NLP allows the model to analyse its data more quickly. Most language models use NLP applications, especially those that generate text as an output; these are the most common because they are easier for users to interact with. By analysing the uses of NLP, we can identify where bias in AI will predominantly take place and therefore hope to understand how we can stop it.

Toews (2022) gives an example of an NLP application: 'A system to be used to allow patients to state & share their symptoms to receive automated clinical guidance'. Having a system like this in place would allow the NHS to maximise the efficiency of its consultations. However, the AI could be biased depending on your race: you might receive a different diagnosis based on the colour of your skin, as was shown with blood oxygen monitors during Covid. Valbuena (2022) states that 'a known design flaw of the pulse oximeter is that patients with darker skin (compared with lighter skin) are more likely to experience occult hypoxemia'. Occult hypoxemia is a blood oxygen deficiency that the device fails to detect, and it is more prevalent in people with darker skin. The reason the monitors gave incorrect readings was that the product had been calibrated mainly on white males.

These AI algorithms can inadvertently be biased because the majority of their training information comes from the Internet and can therefore be incorrect, false, or reinforce past and current stereotypes and prejudices. A wide range of people could be impacted because language models are already being used by large global organizations. For example, many organizations use language models to help with hiring and application processes. These can either help identify inequalities, for example unequal pay, or perpetuate current inequalities by reflecting the bias in their training data.

ChatGPT and Bias

During my research, I used a language model called ChatGPT. The example below was discovered by Steven T. Piantadosi: he used the prompt 'Write a python program to check if someone would be a good scientist, based on a JSON description of their race and gender.' From its training data, the AI concluded that any white male would make a good scientist, because that is the most common demographic among scientists in the data it was trained on. This is another example showing that AI can be inadvertently biased.
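To illustrate, below is a sketch of the kind of biased function that prompt produced. This is not ChatGPT's verbatim output (which varies between runs) but a paraphrase of the reported behaviour: the model simply hard-codes the demographic most common among scientists in its training data.

```python
import json

def is_good_scientist(person_json: str) -> bool:
    """Illustrative sketch of the biased rule the model reportedly produced:
    it equates 'good scientist' with the demographic it saw most often."""
    person = json.loads(person_json)
    # The biased logic: white + male => "good scientist"
    return person.get("race") == "white" and person.get("gender") == "male"

print(is_good_scientist('{"race": "white", "gender": "male"}'))    # True
print(is_good_scientist('{"race": "black", "gender": "female"}'))  # False
```

Nothing about a person's race or gender says anything about their scientific ability; the code simply mirrors the skew in the training data.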

"If AI systems are biased, they may perpetuate and amplify existing societal biases and inequalities, leading to negative consequences for marginalized groups. For example, biased AI algorithms used in the criminal justice system could lead to unequal treatment of different racial or ethnic groups, leading to unfair incarceration rates and further exacerbating issues of mass incarceration. Bias in AI systems used in the hiring process could also lead to discrimination against certain groups and limit opportunities for underrepresented communities. Additionally, biased AI systems can impact how individuals are perceived and treated in society, leading to harmful stereotypes and prejudices being reinforced. It is important to address and mitigate bias in AI to ensure that it is not exacerbating existing societal issues."

The paragraph above was written entirely by ChatGPT. This shows how advanced AI has become at seeming to recognize bias and sound realistic. It is also interesting that the AI has 'learnt' about bias in AI affecting society when, ironically, it exhibits that bias itself. ChatGPT now has millions of users and is used for all sorts of purposes. This could greatly impact society in a positive way if it is used correctly, as a way for people to become more familiar with AI and its potential.

Uses of AI in Society

In the last few years, researchers have been exploring the possibility of using AI to aid policing efforts and benefit society. AI has been used in sentencing, parole decisions and predictive policing. The Netherlands is a prime example of AI bias affecting society in a negative way. Geiger (2020) reports that the Dutch are now using predictive policing to aid their efforts to stop crime. Nelson (2021) gives the example of Damien Sardjoe, who was 14 when the Amsterdam police put him on the city's Top 600 criminals list. Following this, Sardjoe's brother was placed on another AI-produced list, the Top 400 children at risk of criminal behaviour, before he had even committed a crime. As a result, the family was deeply impacted: their house was often raided, without any proof or evidence, whenever a crime had been committed in the area.

Not only do these systems require the correct regulations, checks and safeguards, but they also still require human oversight. Some police forces are now using such algorithms to help predict where crimes will take place. Because of this, EU MEPs have adopted a common position on the AI Act, which will result in regulation of predictive policing.

Researchers have debated whether AI transparency or AI traceability is more important, but a common misconception is that the two are the same. AI transparency means developing the AI so that we can understand how it works. AI traceability, on the other hand, means that the AI should be able to retrace its journey backwards, recording where it found its information and how it reached its conclusion. In my opinion, AI traceability is more important, because the AI can then help to mitigate bias by explaining its logic and working backwards to identify where it went wrong.

AI Creativity

ChatGPT has raised many controversial ethical questions. The chatbot has many uses, including writing code, stories and essays. Companies such as Stack Overflow (which runs a famous public forum for programming questions) have said that they will not accept ChatGPT-generated code, because the AI is still learning and much of ChatGPT's code is incorrect. I personally think this is right; people should not be allowed to exploit the chatbot for their own personal benefit by claiming its output as their own.

Dall-E, another famous AI model, which generates images, has begun to include its own watermark in its artworks. This is to stop people from claiming the AI-generated artwork as their own. Many people think this is right; however, the AI model is there for us to use. Some have claimed that the prompt they put in to generate the artwork is their creative output, and that the artwork should therefore be theirs too, because each time the AI generates the artwork it is different, so each instance is unique. This has upset many traditional artists, whose artwork could be rendered worthless as AI-generated artwork becomes better, quicker and more detailed.

Ethics of AI Bias

Bias in AI has raised many ethical issues regarding how it will impact our lives. One real-world example is AI being used to sort through job applications and narrow down a shortlist. You could potentially be rejected for a job, even though you have the same qualifications as someone else, purely because your ethnic group is not prominent in that workplace. This is because the data used to 'feed' the AI will not contain people from your ethnic minority. The AI would extrapolate wrongly and conclude that people from your particular minority are not suitable for the job.
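A minimal sketch, using entirely made-up data and hypothetical group names, shows how this kind of wrong extrapolation can arise: a naive shortlisting model that scores applicants by how often their group appears in past hiring records will give a score of zero to any group absent from those records, regardless of qualifications.

```python
from collections import Counter

# Hypothetical historical hiring records: "group_c" never appears.
past_hires = ["group_a", "group_a", "group_a", "group_b"]
hire_counts = Counter(past_hires)

def shortlist_score(group: str) -> float:
    """Score an applicant purely by how often their group was hired before.
    Groups missing from the training data automatically score 0."""
    return hire_counts[group] / len(past_hires)

print(shortlist_score("group_a"))  # 0.75
print(shortlist_score("group_c"))  # 0.0 -- never hired, so never shortlisted
```

Real hiring models are far more complex than this frequency count, but the underlying failure mode is the same: a group that is underrepresented in the training data is systematically scored down.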

Conclusion and the Future

In this paper, I have discussed how bias in AI can affect society, from healthcare to policing, as well as its impact on the creative arts industry.

How do we then move forward? I suggest we need a framework to harness the power of AI; to bring it in line with our ethics as humanity.

The EU has attempted to put in place a framework for regulating AI. It proposes splitting all AI systems into four categories based on their impact: unacceptable risk, high risk, limited risk, and minimal risk. For an AI system to be made available in the EU, it has to undergo testing before it can be registered in the database. Many companies are now adapting their approach to AI to comply with these laws.

As we progress, we need to weigh the pros and cons of language models, because if we continue to use them as we do now, this could become a problem. AI systems use data trawled from the Internet as training data, and that may have negative repercussions. They ingest social media posts, which might be false or toxic, and treat them as fact. This not only gives the AI incorrect information but also means it will generate output using these so-called 'facts'. The process then loops: more language models pick up this AI-generated text, presume it to be fact, and spread the incorrect information further. To prevent us from getting stuck in a loop of AI-generated text, we need to put a framework in place that stops people from using this to their own unfair and unjust advantage.

As this issue becomes more pressing, it is reassuring to see the world come together to discuss it. Recently, the three Abrahamic religions came together to discuss and sign an agreement on key principles of AI ethics.

Does traceability or transparency help more with ethics? We will be able to interrogate the AI about its reasoning, but will that allow us to truly understand its thought process? Is it right to question the AI's decisions? Soon we will need to answer these questions for humanity and decide whether or not we let AI rule.