The Dark Side of the ‘ChatGPT’ Chatbot

ChatGPT has been a topic of conversation for the last few months. The artificial intelligence (AI) chatbot, launched by OpenAI in November 2022, is able to generate human-like text. ChatGPT has been heralded as the next big disruptor to the world as we know it, and could one day dethrone Google as the most-used search engine. ChatGPT currently has a free version that can be used without a download. The chatbot has a plethora of possibilities: it can write poetry, song lyrics, and computer code, and has even passed an MBA course exam. Users can ask the chatbot questions, and its responses are generated based on the vast amount of online data that ChatGPT was trained on. Although the possibilities and potential of ChatGPT seem endless, there is a dark side that also deserves examination.

In order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour. The laborers spoke anonymously to TIME for an investigation about their experiences. They were in charge of filtering harmful text and images in order to train the model to recognize harmful content. One worker described reading and labeling the text for OpenAI as “torture” because of its traumatic nature. An often-overlooked component of the creation of generative AI is its reliance on the exploited labor of people in underdeveloped countries.

Scientists like Joy Buolamwini and Timnit Gebru have been sounding the alarm about the dark sides of AI for a while now. Safiya Umoja Noble has written extensively about the bias within our algorithms, and social media platforms like TikTok have been called out for the bias baked into their systems. ChatGPT is not immune to these biases. In December 2022, Twitter user steven t. piantadosi outlined the instances of bias they were able to detect on the platform. Equipped with this knowledge, OpenAI has instituted guardrails designed to address the biased responses generated by the chatbot, although some users have figured out ways to get around them.

There are a few things that can be done to reduce the bias within our AI systems. One approach is “pre-processing the data,” which helps maintain accuracy while removing skew from the training set. Another option is introducing “fairness constraints” that limit a system’s ability to “predict the sensitive attribute.” An ideal AI system would be a fusion of human decision-making and technology. When we utilize AI systems, it’s important to be mindful of the ways these systems can intensify bias, and to consider the processes that can be used to mitigate it.
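To make these two mitigation ideas a bit more concrete, here is a minimal sketch in Python. It is illustrative only: the synthetic dataset, the reweighting scheme, and the fairness checks are assumptions chosen for demonstration, not a description of how OpenAI or any production system actually debiases its models.

```python
# Minimal sketch of the "pre-processing" idea: reweight training examples
# so that each (group, label) combination contributes equally to training.
# All data here is synthetic; the skew is injected on purpose to simulate
# a biased dataset. This is a toy illustration, not a production pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: X = features, a = a sensitive attribute (group 0 or 1),
# y = label. The label is deliberately correlated with group membership.
n = 1000
a = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + a[:, None] * 0.5
y = (X[:, 0] + 0.8 * a + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Pre-processing step: give every (group, label) cell an equal total weight,
# so no group/outcome combination dominates what the model learns.
weights = np.ones(n)
for g in (0, 1):
    for lbl in (0, 1):
        mask = (a == g) & (y == lbl)
        if mask.any():
            weights[mask] = n / (4 * mask.sum())  # 4 cells, equal share each

model = LogisticRegression().fit(X, y, sample_weight=weights)

# Rough fairness check: the gap in positive-prediction rates between groups
# (the "demographic parity gap"). Smaller is fairer under this metric.
preds = model.predict(X)
gap = abs(preds[a == 1].mean() - preds[a == 0].mean())
print(f"Demographic parity gap after reweighting: {gap:.3f}")

# "Fairness constraint" intuition check: if the model's scores still let us
# predict the sensitive attribute accurately, bias is leaking through.
scores = model.predict_proba(X)[:, 1].reshape(-1, 1)
leak = LogisticRegression().fit(scores, a).score(scores, a)
print(f"Accuracy of predicting the sensitive attribute from scores: {leak:.3f}")
```

Reweighting like this is only one lever. In-processing methods, which train the model under an explicit fairness constraint, and post-processing of model outputs attack the same problem at different stages of the pipeline.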

Promoting a culture of ethics around AI could be an effective strategy for addressing bias within the AI systems used in the workplace. This could include updating performance evaluation processes to intentionally encourage more ethical AI practices, as well as greater transparency and more discussion of the pitfalls of AI systems. It is no surprise that an AI chatbot like ChatGPT can generate biased responses; AI technology only mirrors what it has been programmed with and trained on. We may be far from the point where we can truly rely on the responses generated by any AI system. While the possibilities of AI systems are vast, their outputs must be taken with a grain of salt and an understanding of their limitations.

Article: The Dark Side Of ChatGPT
