Using GPT-3 as a Writing Assistance Tool

There’s no question that artificial intelligence (AI) and machine learning are becoming more prevalent as technology advances. Think of self-driving cars, AI-generated artwork, and the possibilities that come with their introduction. But alongside these promising prospects, questions about how such systems will develop, and what risks AI writing in particular carries, are just as important to consider. After all, if this is the future, how much of the human element will be reduced or eliminated once the technology matures?


GPT-3 Overview

The Generative Pre-trained Transformer 3 (GPT-3) is a language model created by the AI research organization OpenAI that uses deep learning to generate text at a human-like level. Released in 2020, GPT-3 succeeds GPT-2 (Figure 1), an earlier iteration pretrained on raw text alone; it is more than 100 times larger and was, at the time of its release, the largest language model yet constructed. OpenAI offers GPT-3 as an application programming interface (API): the user supplies a text prompt, and the model attempts to continue it based on the prompt’s content. Regardless of the type of content you want to write, the AI behind the scenes needs to be “trained” to recognize what is required.

GPT-3 supports two broad types of writing applications. The first is a form-based method that generates text according to the desired content’s form; however, it is still prone to errors that you must fix manually (Figure 2). The alternative is the “blank canvas” approach, a predictive text generator that continues a given prompt with whatever could reasonably follow it. The speed of this approach comes with a caveat: the generated content may not contain the information you hoped it would.

Figure 1. Text generated using DeepAI’s GPT-2 through a sample prompt.
Figure 2. Advertising copy generated using neuroflash’s GPT-3 AI Writer. (Note the presence of errors in spelling and punctuation.)
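To make the prompt-and-continuation workflow concrete, the sketch below shows how a single GPT-3 completion request might look through OpenAI’s Python client. It assumes the older completion-style API; the model name, prompt text, and parameter values are illustrative assumptions, not a recommendation.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

prompt = (
    "Write an opening paragraph for a blog post about how AI writing tools "
    "can assist, but not replace, human editors."
)

# Ask GPT-3 to continue the prompt. Model name and parameters are assumptions.
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=150,    # rough upper bound on the length of the continuation
    temperature=0.7,   # higher values give more varied text
)

# The API returns one or more candidate continuations of the prompt.
print(response.choices[0].text.strip())
```

The returned text is exactly the kind of continuation described above: plausible-sounding, but only as reliable as the prompt and the model allow.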


Benefits and Shortcomings

While GPT-3 offers an interesting preview of future AI-based text generation, using it to generate writing raises questions about integrity. In research, original papers carry expectations of academic integrity: that studies submitted to journals were conceived and written by the authors, based on experiments they themselves conducted. Despite GPT-3’s promise, it is difficult to endorse it as a writing tool when the generated text can contain errors that are easily overlooked. Problems that can appear in AI-generated papers include nonsensical wording, incorrect interpretations of data, and flawed hypotheses or methodologies. The output can also reproduce plagiarized text, an issue that would call any author’s integrity as a researcher into question. These errors arise because the model “predicts” what could plausibly appear in a paper without understanding the specific words it uses. It can put words into a natural-sounding order, but because it does not grasp how words, or strings of words, combine to create meaning, it can produce statements that are meaningless or incorrect. Given this, it is worth asking whether it would still be preferable to have a human do the writing, especially for publications dedicated to disseminating scrutinized, scientifically backed information.

Still, its benefits should not be overlooked. An AI that can generate text plausibly following a prompt can do more than write prose: it can write code, solve math problems, and correct grammatical errors when prompted or trained for specific tasks. It is a versatile tool that can handle various writing-based tasks, and AI has also been adopted for work beyond writing. The American Association for Cancer Research (AACR), for example, uses the AI software product Proofig to screen studies its journals have provisionally accepted for plagiarized images. With further advances in technology, this screening could expand to automated text-based checks, or even to both kinds of screening performed comprehensively. Getting started with GPT-3 is also simple enough: think of typing into Google’s search bar and seeing predictions or suggestions based on a word or phrase. Now imagine that expanded to paragraphs of text, and you have a sense of what to expect from a writing tool like GPT-3.
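As a rough illustration of pointing the same interface at a specific task, the sketch below asks the model to correct a sentence’s grammar by describing the task in the prompt itself. It reuses the assumed completion-style client from the earlier sketch; the instruction wording and parameters are illustrative assumptions.

```python
import openai

def correct_grammar(sentence: str) -> str:
    """Ask GPT-3 to rewrite a sentence with its grammar corrected (illustrative)."""
    prompt = (
        "Correct the grammar in the following sentence.\n\n"
        f"Original: {sentence}\n"
        "Corrected:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=60,
        temperature=0.0,  # low temperature favors a single, conservative rewrite
    )
    return response.choices[0].text.strip()

print(correct_grammar("The results of the experiment was not what we expected."))
```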


Utilities Beyond Writing

Moving forward, adopting GPT-3, or a similar AI writing assistance program, would change the skills expected of its users. As Floridi and Chiriatti (2020) observe, the skill required when using GPT-3 will not be “copy and paste” but “prompt and collate.” Because the tool generates the content, users would need to shift toward shaping their prompts and collating the results intelligently. Some would say that widespread use of such a tool would render the average human’s intelligence irrelevant. In its current state, however, it still needs the human element to improve, and how it improves, and how human intelligence is affected, will depend on how it is used. Some would argue that having an AI write in your place has its benefits, and it certainly does; but one should also consider the consequences for human writing. The tool may make mistakes, yet its output has been judged to be “better than many people” (Elkins and Chun, 2020). As mentioned earlier, integrity would be called into question, as would the content itself if it contained factual errors. Note that in descriptions of the GPT-3 tool, some would see “generate” as the operative word.
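One way to picture the “prompt and collate” workflow is to request several candidate continuations in a single call and leave the comparison and selection to a human editor, as in the sketch below. The use of the n parameter and the manual review step are assumptions about how such a workflow might be arranged, not a prescribed method.

```python
import openai

prompt = (
    "In two sentences, explain why AI-generated text still benefits from human review:"
)

# Request three independent continuations of the same prompt.
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=80,
    temperature=0.8,  # some variety across candidates
    n=3,              # number of candidates to generate
)

# Print each candidate so a human editor can compare, select, and revise:
# the "collate" half of the workflow stays with the person, not the model.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Candidate {i} ---")
    print(choice.text.strip())
```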

At the same time, the applications of GPT-3 as a writing tool could extend far beyond writing content. Generated text could feed advertising or analytics for companies that rely on them, and the same approach could automate repetitive tasks. Such automation could lighten workloads and free people for tasks that require more human involvement, while a sufficiently trained AI keeps errors to a minimum.

It is understandable to believe that GPT-3’s limitations as a writing tool could cause more harm than good, but it is a tool nonetheless. Used with proper knowledge of its strengths and limitations, and in the right situations, it can be an advantage. Think of GPT-3 as a tool in a toolbox: once you know what needs to be done, you can pick the right tool for the job. Because this kind of tool could significantly shape technology and how people interact with it, awareness and vigilance should be at the forefront. Though this look at the advantages and disadvantages of GPT-3 writing tools could be read as a Luddite response to a changing technological landscape, it is meant as an examination of what is here now and what may be coming. The technology is still developing, and humanity may as well follow along and see where it goes.

At Journal Lab, we make the most of AI in our systematic processes and integrate it with our expert human editing. Reach us at [email protected] today to learn more!
