Yale Center for Teaching and Learning

AI Guidance

The AI chatbot ChatGPT has been much in the news since OpenAI released it in November 2022, built on the GPT-3.5 model. Because the bot can produce more coherent text than any previous system, there has been much speculation about what impact this tool might have on teaching and learning.

In consultation with instructors and technology experts at Yale and beyond, the Poorvu Center offers the following suggestions about how to adapt your current teaching in light of quickly developing AI (artificial intelligence) software capabilities. This guidance will likely change as these technologies, and how we address them, continue to evolve.

Find advice for teaching fellows and recommended reading below.

Instructors: Share your thoughts and ideas with us! Send questions or examples of how you are integrating these tools in your lessons or adjusting in light of them. Email us at askpoorvucenter@yale.edu.

Watch “Artificial Intelligence and Teaching: A Community Conversation.” The Poorvu Center hosted a panel of Yale faculty and students discussing the implications of AI writing software for teaching and learning. This event took place on Tuesday, February 14, 2023, and featured the following panelists:

  • Nisheeth Vishnoi, A. Bartlett Giamatti Professor of Computer Science
  • Justin Farrell, Professor of Sociology in the Yale School of the Environment
  • Laura Wexler, Charles H. Farnam Professor of American Studies and Women’s, Gender, and Sexuality Studies
  • Brian Scassellati, Professor of Computer Science, Cognitive Science, and Mechanical Engineering
  • Noreen Khawaja, Associate Professor of Religious Studies
  • Isiuwa Omoigui ‘23, a Yale College senior majoring in Political Science and a Lead Writing Partner at the Poorvu Center

Guidance for Instructors

(1) Instructors should be direct and transparent about what tools students are permitted to use, and about the reasons for any restrictions. Yale College already requires that instructors publish policies about academic integrity, and this practice is common in many graduate and professional school courses. If you expect students to avoid the use of AI chatbots when producing their work, add this to your policy. Find example statements below.

As for explaining why: understanding the learning goals behind an assignment helps students commit to it. The only reason to assign written work is to help students learn, whether by deepening their understanding of the material or by developing the skill of writing itself. From our long experience reviewing the portfolios of Yale College students, we see clear evidence that they progress significantly as writers over the course of four years.

Research shows that people learn more and retain information longer when they write about it in their own words. If, instead, students task an AI with generating text, they won’t learn as much. This is why we ask them to write their own papers, homework assignments, problem sets, and coding assignments. This impact on learning applies across all disciplines: STEM problem sets that require explanations also depend on students generating language to learn more deeply. And ChatGPT can generate code as well as natural language.

(2) Controlling the use of AI writing through surveillance or detection technology is probably not feasible. Although there have been reports of an app that can detect ChatGPT usage, and although Turnitin promises that its software will learn to detect student use of AI to generate papers, the sophistication of AI software is likely to outpace technological detection. Rather than relying on software to catch and punish users, we suggest more fruitful ways to adjust teaching in light of this new technology.


Research by the Poorvu Center shows that a minuscule number of Yale College students plagiarize in their course papers. While acknowledging that new technologies can change the landscape, we know that most students want to do their own work because they care about learning.

Tools like ChatGPT raise broader questions. We agree with this approach from Inside Higher Ed: “Go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? … In the past, near-term prohibitions on … calculators, … spellcheck, [and] search engines … have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it.” While the Poorvu Center doesn’t have all the answers, we can offer some general guidance and work with you to put this advice into practice.

(3) Changes in assignment design and structure can substantially reduce students’ likelihood of cheating, and can also enhance their learning. Based on research about when students plagiarize (whether from published sources, commercial services, or each other), we know that students are less likely to cheat when they:

  • Are pursuing questions they feel connected to
  • Understand how the assignment will support their longer-term learning goals
  • Have produced preliminary work before the deadline
  • Have discussed their preliminary work with others

Many of the above characteristics can be integrated into authentic assessments: assignments that (1) have a sense of realism to them, (2) are cognitively challenging, and (3) require authentic evaluation (see this handout for more). Examples can be found at the bottom of this website. Authentic assessments also make it harder to collaborate with AI tools, and they align with practices that prioritize student learning, including:

  • Using alternative ways for students to represent their knowledge beyond text (e.g., drawing images, making slides, or facilitating a discussion). Consider adopting collaboration tools like VoiceThread.
  • Asking students to use resources that are not accessible to ChatGPT, including resources behind a paywall and many of the amazing resources in Yale’s collections.
  • Incorporating the most up-to-date resources and information in your field, so that students are answering questions that have not yet been answered, or have only begun to be answered.
  • Engaging with ChatGPT as a tool that exists in the world, and having students critically examine what it is able to produce, as in these examples (along with a growing list of teaching experiments planned by Yale instructors).

Addressing ChatGPT on your Syllabus

The Poorvu Center recommends including on your syllabus an academic integrity statement that clarifies your course policies on academic honesty. The simplest way to state a policy on the use of ChatGPT and other AI composition software is to address it in your academic integrity statement.

A policy prohibiting the use of ChatGPT for assignments in your course might read: “Collaboration with ChatGPT or other AI composition software is not permitted in this course.”

If you’d rather consider students’ use of ChatGPT on a case-by-case basis, your policy might read: “Please obtain permission from me before collaborating with peers or AI chatbots (like ChatGPT) on assignments for this course.”

Guidance for Teaching Fellows

As a teaching fellow, you can raise the issue of ChatGPT with your lead instructor to open up a broader conversation about the goals of your course and how ChatGPT might affect those goals. For example, it is helpful to understand the intended purpose of both low-stakes and high-stakes assessments, as well as any possible redesign of the work students do outside of class.

Regardless of whether you make changes to assessments, you play an important part in making the goals clear to your students. Whether you hope that their work will help them explore new ideas, familiarize themselves with important concepts, or make progress toward a bigger project, articulating those learning outcomes and the purpose of the activities that support them will help students understand why using AI would be counterproductive to their own development and progress.

These conversations with students can happen in conventional sections, but they are also worthwhile to have in office hours and review sessions. Additionally, as you give feedback to students, think about ways that you can point them back to the larger goals of the course: by showing them how they can grow through the assessment process, you indicate its value. 

Background: What ChatGPT Can and Cannot Do (Yet)

Can: ChatGPT produces good summaries of knowledge, like those you might find in the section of an academic argument that reviews previous research. It can produce texts that compare views or express judgment, for instance, texts that support a choice between alternative theories or approaches. It’s common for introductory courses to ask students to defend a choice between two or three positions, so it would be relatively easy for students to begin or supplement their work with ChatGPT.

Cannot (yet): So far, ChatGPT texts don’t cite sources. You can make “cite sources” part of the prompt, but the AI mostly just makes them up.

Less concretely, the products of ChatGPT often strike an informed reader as superficial or even perversely incorrect. Granted, this is sometimes true of texts produced by humans. But if one goal of academic writing is to extend or disrupt what is currently known, ChatGPT texts frequently fall short of this standard or produce claims that an informed reader finds to be patently false.

Recommended Reading

New articles on this fast-developing topic appear every day. Below is a selection of thoughtful pieces that address various elements of this tool, covering practical advice for teachers, the risks of proliferating biases, and experiments challenging the bot to write medieval poetry.

Condensed list of faculty advice from Inside Higher Ed, Jan. 12, 2023

Advice and a sample class activity from Times Higher Education, Nancy Gleason, Dec. 9, 2022

Hidden biases and societal risks from People of Color in Tech, Christian Ilube, Dec. 13, 2022

Useful insights and advice from the U. of Michigan CRLT, Jan. 9, 2023

Creative writing challenges that show AI is a toy, not a tool from The Atlantic, Ian Bogost, Dec. 7, 2022

Experiences using ChatGPT in a Yale class on media and democracy from TIME, Joanne Lipman, Jan. 10, 2023

Advice from Jenny Frederick, Poorvu Center Executive Director, in the Hartford Courant, Jan. 30, 2023