
Welcome to the Future: OpenAI is Making Plenty of Noise

Written by Parker Wadding

In November 2022, OpenAI unexpectedly released ChatGPT, a chatbot built on the company’s GPT-3.5 language model and its newest advancement in artificial intelligence (AI). Within its first two months of public availability, ChatGPT gained more than 30 million users and was drawing roughly 5 million visits a day, making it one of the fastest-growing software products ever, according to The New York Times. ChatGPT is a conversational AI that can answer a wide variety of questions and prompts, appealing to those looking to save time in their search for credible information. ChatGPT can debug code, write love songs, or counsel a person through a mental health crisis; however, this revolutionary technology has raised red flags for some. ChatGPT learns patterns from a wide range of data drawn from across the internet, and its unregulated nature makes it easier for plagiarism and misinformation to be presented as original ideas and concrete facts.

Educators have become increasingly alarmed after reports surfaced that students are plugging their essay prompts into ChatGPT and submitting the AI-generated work as their own. The new tool is forcing schools to adapt to the 21st century. Educators such as Professor Chris Girman of Point Park University are building lessons that teach students how to use ChatGPT as a critical thinking aid rather than an instrument of plagiarism. Girman asks his students questions like, “If the (ChatGPT) essays are ‘beautiful,’ what makes them so? What do the students dislike? Are they happy with the paper that the tool generated on their behalf?”

However, not all teachers see ChatGPT as a tool that belongs in the classroom. Duquesne English Professor Greg Barnhisel told PublicSource that “(the college essay) may die, as other historical artifacts have.” He reasons that the technology has made plagiarism rules too difficult to enforce, and he suggests that project-based assignments are the future of education.

Regardless of whether Girman’s or Barnhisel’s philosophy wins out, ChatGPT cannot be relied upon without further investigation. Even OpenAI’s chief executive, Sam Altman, said on Twitter, “ChatGPT is incredibly limited… it’s a mistake to be relying on it for anything important right now.” ChatGPT is more than limited: it has been known to give incorrect and biased information. One of the most concerning details about ChatGPT is the material it draws on. While some of that material comes from peer-reviewed scholarly articles, other data is taken directly from blogs or unvetted websites such as Wikipedia.

An article on Nature.com explains that ChatGPT cannot be left unchecked, especially in fields where it has received little training data. The article described an experiment by computational biologists who fed three of their research papers into ChatGPT and asked it to ‘improve’ them. The scientists found the rewritten papers easier to read, and the tool “even spotted a mistake in a reference to an equation.” However, the article warns users to be cautious: ChatGPT’s sole objective is to keep the conversation going, and it only regurgitates what it has learned through its training data, which left the updated papers riddled with errors and misleading statements. Another problem the article reported is ChatGPT’s lack of credible sources. When asked to write an academic research paper, ChatGPT and AI tools like it are known to fabricate citations, making the papers they produce extraordinarily unreliable.

Another drawback of ChatGPT is that it can be coaxed into spreading hate speech. Although safeguards are in place to block harmful rhetoric, some users have found ways around them. The New York Times tested this and came up with some very disturbing results: by asking ChatGPT to write in the voice of a specific person on subjects like vaccines or school shootings, its reporters got the chatbot to produce hateful and false information.

To decrease the amount of hate speech and other toxic content its platform could generate, OpenAI partnered with Sama, a training-data company, in November 2021. A Time magazine article explains that companies like Facebook and OpenAI use data-training firms like Sama to sift through unsavory or illegal images and descriptions from the internet in order to teach AI systems how to respond to sensitive material.

Although reducing the amount of hateful content accessible on the internet sounds advantageous, the methods used are not. Sama outsources the labor to workers in Kenya, who weed through tens of thousands of texts and images sent by OpenAI. Worker exploitation occurs regularly in countries such as Kenya, Uganda, and India due to lax labor laws. The Time article reported that Kenyan workers employed by Sama earned an hourly wage of between $1.32 and $2. For many Sama employees, a steady minimum-wage job was not worth the mental toll of inspecting such content every day.

The Time article reported that much of the content “described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.” Sama’s workers had a contractual right to mental health counseling, yet anonymous Sama employees claimed that access to “wellness” counseling was severely restricted. Due to the number of employees it was losing, Sama severed ties with OpenAI in February 2022, eight months earlier than its contract established. Since then, Sama has announced that it is ending all work involving sensitive content.

An article in The New York Times explores OpenAI’s roots. In its early days, OpenAI looked very different than it does today. In 2015, Sam Altman and Elon Musk, with backing from Peter Thiel and Reid Hoffman, started the company as a nonprofit research lab, asserting that it was “a mission-driven organization that wants to ensure that advanced AI will be safe and aligned with human values.” Skeptics claim the company has since embraced a more competitive spirit: in 2019, Microsoft invested $1 billion in OpenAI, and the lab moved its commercial work into a new for-profit subsidiary controlled by the nonprofit.

Altman himself seems partially to blame for the more cutthroat atmosphere at OpenAI. The New York Times interviewed many current and former OpenAI employees anonymously, at their request, since they were not permitted to speak publicly about their time at the company. The article reported on the surprising way the company embraced its new competitive spirit. Employees said they had been working on GPT-4, which was scheduled for release in early 2023. In November 2022, however, as they were putting the finishing touches on GPT-4, they were told they had two weeks to ship a chatbot and get ahead of any competitors that might launch similar products first. So OpenAI employees frantically dusted off an older model, GPT-3.5, revamped it slightly, and put it on the market as ChatGPT.

Clearly, OpenAI is making a lot of noise. Forbes reported that OpenAI has received a significant amount of backing from companies like Microsoft, which has just confirmed a new multibillion-dollar investment. On February 1, 2023, OpenAI released ChatGPT Plus, a $20-per-month subscription tier for ChatGPT. OpenAI described the offering as “a pilot subscription plan for ChatGPT, a conversational AI that can chat with you, answer follow-up questions, and challenge incorrect assumptions.”

Amid the rising fears about plagiarism and misinformation, Princeton Alumni Weekly reported on GPTZero, an application created by Princeton University student Edward Tian that scans a piece of text and estimates whether it was written by a human or generated by AI. This development is game-changing for educators concerned about students using ChatGPT to complete their assignments.
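Tian has described GPTZero as flagging machine-written text using two statistical signals: “perplexity,” roughly how predictable the text is to a language model, and “burstiness,” how much that predictability varies from sentence to sentence. The Python sketch below illustrates only the perplexity idea in a deliberately simplified form; the toy unigram model, its tiny training corpus, and the function names are invented for this example and stand in for the large neural language model a real detector would use.

    # Toy illustration of perplexity-based AI-text detection (the rough idea
    # behind tools like GPTZero). A real detector uses a large neural language
    # model; this unigram model is a stand-in so the example is self-contained.
    import math
    from collections import Counter

    def train_unigram(corpus: str):
        """Return a word-probability function with Laplace smoothing."""
        words = corpus.lower().split()
        counts = Counter(words)
        total, vocab = len(words), len(counts) + 1
        # Smoothing gives unseen words a small nonzero probability.
        return lambda w: (counts.get(w.lower(), 0) + 1) / (total + vocab)

    def perplexity(text: str, prob) -> float:
        """Average inverse predictability of the text under the model."""
        words = text.lower().split()
        if not words:
            return float("inf")
        log_sum = sum(math.log(prob(w)) for w in words)
        return math.exp(-log_sum / len(words))

    # Hypothetical reference corpus standing in for the model's training data.
    model = train_unigram("the cat sat on the mat the dog sat on the rug")

    # Predictable text scores a LOW perplexity (more "AI-like" to a detector);
    # surprising text scores HIGH (more "human-like").
    print(perplexity("the cat sat on the rug", model))            # low
    print(perplexity("quantum marmalade debates gravity", model)) # high

A production detector would also weigh burstiness, since human writing tends to alternate between predictable and surprising passages, while AI output is often uniformly predictable.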

Although the advancements OpenAI is making with ChatGPT are astonishing, it is imperative that the problematic aspects of the new technology be addressed. The rising threat of plagiarism and misinformation must be combated through fact-checking and plagiarism-detection programs. ChatGPT can be extremely useful if used in the right way: it should be treated as a jumping-off point for research, not relied upon entirely for the collection of information. The development of AI is not stopping anytime soon, so demanding that the quality of the technology continue to improve is paramount.

Image from canva.com
