ChatGPT has the world talking, drawing public scrutiny since its launch on November 30, 2022, by the artificial intelligence lab OpenAI. ChatGPT reached 100 million users within two months, surpassing the growth rates of popular applications like TikTok, Instagram, Netflix, and Facebook. In a press release issued in January 2023, Microsoft confirmed that it had invested billions of dollars in OpenAI, the maker of ChatGPT, an indicator of the chatbot's rapid success.
When I asked ChatGPT to describe itself, it explained that it is a GPT (Generative Pre-trained Transformer) model trained on an extensive library of text data up to 2021. It uses machine learning algorithms and statistical techniques to analyze large amounts of text data and generate natural and intelligent responses to user queries.
ChatGPT is a versatile virtual assistant that helps users with a broad range of tasks, from answering questions to engaging in natural, human-like small talk. Unlike many other chatbots, ChatGPT stands out for its advanced capability to generate contextually relevant, grammatically correct, and semantically meaningful responses. Beyond conversation, it can plan schedules, provide guidance on personal finance, offer emotional support, recommend travel destinations, write code, and generate articles.
ChatGPT’s claimed purpose is to “benefit humanity” by providing a more natural and efficient way to interact with artificial intelligence. In a recent interview with CNBC, Jeff Wong, global chief innovation officer at professional services firm EY, shared his enthusiasm: “What’s exciting is that the responses are more and more human-like, so what you’re seeing is things that we did not think computers could do before.”
Still, ChatGPT has limitations in accuracy and relevancy. Its most significant drawback is that it produces content that often appears reasonable and factually sound even when it is wrong, verbose, or unclear.
Stack Overflow, a question-and-answer website for programmers, has issued a temporary ban on the use of ChatGPT. The website explained its decision: “Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to our site and our users asking or looking for correct answers.”
Another concern regarding ChatGPT is its bias and limited knowledge, both inherited from the training data that serves as the primary source of its responses. As organizations increasingly rely on AI-powered decision mechanisms for hiring, lending, and medical diagnoses, the perpetuation of discriminatory outcomes could undermine social impact initiatives in equality and diversity. A recent New York Post article, which likened the technology to the “Wild West,” also identified a “fundamental flaw” in ChatGPT: a left-leaning political bias.
Paul Eliopoulos, Director of the School of Information Technology at Washington University of Science and Technology, shared his insights about ChatGPT in academia: “[It] is a revolutionary use of existing technology with secondary effects yet to be fully understood. Our goal should be to build on its strengths while we mitigate risk exposure from the proverbial unintended consequences. We should be exploring collateral tools and policy changes to protect intellectual property. WUST is now studying the use case, and we remain committed to ensuring academic integrity with a zero-tolerance policy.”

In response to criticisms regarding plagiarism, OpenAI released a free tool, an AI text classifier, to distinguish between AI-generated and human-written text. The tool predicts the likelihood that a piece of text was generated by AI, rating it on a five-point scale that ranges from “very unlikely” to “likely.” Its effectiveness in detecting a text’s source is not guaranteed: OpenAI acknowledges the classifier’s limitations, which include unreliable predictions and vulnerability to manipulation, such as users editing or paraphrasing AI-generated text to evade detection.

Many educators, meanwhile, are interested in using ChatGPT rather than banishing it from the classroom, finding that incorporating it in creative and engaging ways is a more productive alternative to avoiding it. Michael Cobb, Professor of Cyber Security at Washington University of Science and Technology, suggests that problem-based learning can be one such innovative approach. Cobb adds that he is “not going to hide from the technology, but if ChatGPT encourages academia to revolutionize the way to assess and measure students’ learning, that is a win!”

Busra Nur Arapoglu
Master of Science in Cyber Security