ChatGPT Causes Cheating Concerns In Higher Ed

AI is taking over! Higher education cracks down on AI software such as ChatGPT to prevent bias and plagiarism. (Drawing courtesy of Greta Hahn).

ChatGPT, an AI program capable of writing in a human-like style, brings uncertainty to higher education, specifically with its ability to replicate educated responses.

The AI has been used in recent months by university students to write academic papers without so much as typing a paragraph. Schools have been made aware of this and are taking action based on each school’s plagiarism policies and honor codes. Although having an AI write papers may seem convenient, multiple issues have come to light showing that the software is not without drawbacks.

ChatGPT was released on Nov. 30, 2022 by the start-up OpenAI, a San Francisco-based company with close ties to Microsoft. Schools across the country are combating the use of AI by blocking the software from school networks, such as the institutions’ WiFi networks and school-issued computers.

New York City school districts were among the first to ban ChatGPT, doing so in February, as information generated by the AI and submitted as student work is recognized as plagiarism.

“The department cited concerns from local school teachers about student success. Oakland Unified in California and Seattle public schools have moved to block ChatGPT for now, in part because it creates human-like responses that can be difficult to detect,” reports USA Today.

Facilitator Kierston Greene, Associate Professor and Department Chair of Teaching & Learning, held a presentation online and at the Faculty Development Center on Feb. 15. She pointed out a multitude of social justice and marginalization issues associated with ChatGPT and AI software in general that could develop into significantly more pressing issues as time and technology advance.

The first problem Greene poses is that “natural language is inherently biased.” When asked a series of questions about natural language, the AI listed examples of natural languages, putting English first in the list. This suggests an inherent bias coded into the AI software.

“As an AI language model, I strive to be impartial and unbiased towards all languages and cultures, and I do not prioritize one over the other,” ChatGPT said. The AI answers questions diplomatically, leaving little room to question its bias. Although its answer seems fairly clear-cut in its handling of bias, the key word is “strive.” Because the software only “strives” to maintain a low level of bias, its acceptance of potential bias is glossed over through strategic diction.

Another problem that was highlighted within Greene’s presentation is that “[ChatGPT] is not able to process complex ideas that challenge its existence.”

“The concern that AI systems may replicate structural oppression is a valid one. AI systems can perpetuate existing biases and discrimination if they are trained on biased data, or if the design of the AI system itself is biased,” ChatGPT responds, when asked about its internalized biases. “This is especially concerning in the context of social justice, as AI systems can amplify existing inequalities and lead to further marginalization of already disadvantaged communities.”

“AI is biased and can also acknowledge it,” says Greene. ChatGPT acknowledges the potential of bias coming from software systems in general. “AI can be biased if the data used to train it is biased or if the algorithms used to develop it incorporate biased assumptions or biases of the developers,” it says. 

It is nearly impossible for real human beings to be free of all inherent biases. Because of this, the developers of AI can, whether consciously or subconsciously, incorporate their own biases, which then translate into AI responses. This can be especially dangerous when it comes to potential censorship of the media through software such as ChatGPT.

“Ensuring diversity and inclusion in AI development is an important step in reducing the risk of bias, but it is not a guarantee that bias will be completely eliminated,” explains ChatGPT. Though the AI assures that there will be new developments to mitigate bias within the software, they are not guaranteed, which raises concerns about what kind of information users are receiving.

In response to a question about the murder of Tyre Nichols, which occurred Jan. 10, 2023, the AI says, “As an AI language model, I don’t have access to information beyond my knowledge cutoff date of September 2021, and I’m not able to browse the internet. Therefore, any information I provide beyond that date would be incorrect.” 

Since the software is not capable of accessing information past September 2021, any information it gives about events after that date is significantly less reliable. “It’s working off of a finite database even though it feels infinite. It’s not actually browsing the internet in real time, which is something I didn’t know,” says Greene.

SUNY New Paltz is offering courses via Brightspace to faculty and staff to help combat the AI takeover. For more information on courses on this subject, refer to the New Paltz Faculty Development Center official website. 

The next seminar in the series on ChatGPT and AI, “Pedagogical Challenges and Approaches: Math and Science,” will be facilitated by David Hobby, associate professor and Chair of Mathematics, on March 8. For more information on this upcoming seminar, refer to the Spring 2023 OpenAI campus discussions directory.

About Gianna Riso (15 Articles)
Gianna Riso is a second-year journalism major, with a minor in communications. This is her first year with The Oracle. She enjoys photography, poetry and listening to music, specifically classic rock. Outside of school, she works at a bagel shop and enjoys hanging out with her friends.