UT Austin Flags
Feb 27, 2023


Writing Flag Committee’s Guidance on ChatGPT and Classroom AI

The Faculty Writing Committee has been reviewing ChatGPT in light of other writing-related AI tools such as grammar checkers, plagiarism detection services, and automated citation systems. Its guidance for classroom use of AI, including ChatGPT, is grounded in essential writing pedagogy and will be published to faculty later this month. Some highlights from the Writing Committee’s review of ChatGPT:

· There is room for AI tools in the writing and educational experience (as noted by professional organizations like the Association for Writing Across the Curriculum), but decisions about these tools should be made locally, by informed stakeholders.

· More scholarly research is needed on tools like ChatGPT, which are often introduced with much fanfare but little formal study of their impact on teaching and learning.

· The known biases of AI are an important factor when weighing how and when to use it in classrooms. Bias in AI built on large language models (like ChatGPT) is a well-known, serious problem that shows no signs of abating, but bias is likewise evident in the structures and applied uses of grammar checkers, assessment systems, and plagiarism detection tools.

· The Writing Flag’s feedback and revision requirement offers a useful map for navigating questions about ChatGPT and other AI tools. Centering a process of drafting, feedback, and revision, as all Writing Flag classes should do, protects against the possibility that AI-generated text will be submitted as a student’s own work. In addition, emphasizing learning objectives focused on developing skills and practices as writers can reduce the incentives to use AI-generated work.

· Instructors should avoid overreacting to fears about ChatGPT. ChatGPT’s output often lacks strong internal cohesion, deploys vocabulary without nuance, and contextualizes sources poorly. These qualities are not uncommon in the writing of novices; in fact, they are developmental markers that indicate where our students’ skills need improvement. If we penalize students for “sounding like ChatGPT,” or accuse students of academic dishonesty merely because their prose sounds awkward and artificial, we will be punishing students for perfectly normal writing activity.

· Machines cannot take responsibility for what they write, but academic writers are obliged to do so. Scholarly journals are already setting standards for the use of AI in published work — for example, that ChatGPT cannot be listed as an “author,” and that any use of the tool in the writing process must be fully disclosed. A similar approach should be adopted in the classroom: the use of AI by students or instructors should always be fully disclosed, including the tool used and the stages of the process in which it was used.

· AI typically insinuates itself into writing courses when instructors have too many students. Instructors may turn to aggressively marketed products like “plagiarism detection services” when they lack the time or energy to read students’ writing carefully. Students resort to chatbots and paper mills when they suspect their work won’t be carefully read by overburdened instructors. Programs should ensure that writing-intensive classes are appropriately staffed to allow the individualized feedback and revision processes that promote learning and discourage the misuse of AI.

The full guidance statement from the Writing Committee will be available on the Center for the Skills & Experience Flags website later this month.

Please note that the Center for Teaching and Learning has created a basic informational page for faculty on ChatGPT, including some specific approaches to the tool that may be incorporated into classrooms.
