Yale Center for Teaching and Learning

AI Guidance for Teachers

Since OpenAI released ChatGPT in November 2022, the availability of generative AI tools that produce text, images, or code has expanded to include Microsoft’s BingAI, Google’s Bard, and OpenAI’s more powerful GPT-4. The power of these tools to answer complex questions and generate coherent text continues to improve, and they have recently been integrated into widely used software like Google Docs and Microsoft Office. This has led to questions about how students are using these tools and how faculty can respond in ways that best support their students’ learning. 

 

In consultation with instructors and technology experts at Yale and beyond, the Poorvu Center offers guidance on how generative AI works and suggestions for adapting your current teaching. We also offer advice for teaching fellows and a list of recommended reading. Additional resources include examples from Yale instructors of incorporating AI into teaching and resources from Poorvu Center events on teaching in the age of AI.

 

Instructors: Share your thoughts and ideas with us! Send questions or examples of how you are integrating these tools into your lessons or adjusting your teaching in light of them. Email us at askpoorvucenter@yale.edu.


Understanding Generative AI 

Generative AI tools respond to user-submitted text with relevant content expressed in natural language. Many of these tools remember all text that a user has entered in an ongoing conversation, and they are capable of revising their responses when prompted. 
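
For the technically curious, a chatbot’s “memory” is typically implemented by resending the entire conversation with each new request, so the model can condition its next reply on everything said so far. Below is a minimal sketch using OpenAI’s Python client; the model name and prompts are placeholders for illustration, not a definitive implementation.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Each request carries the full conversation so far; the model retains no
# memory between calls beyond what we resend here.
history = [{"role": "user", "content": "Summarize the causes of World War I."}]

reply = client.chat.completions.create(model="gpt-4", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up revision request works because the earlier turns travel with it.
history.append({"role": "user", "content": "Now revise that summary for a tenth grader."})
reply = client.chat.completions.create(model="gpt-4", messages=history)
print(reply.choices[0].message.content)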

 

Generative AI tools are built on programs called large language models (LLMs). (The technology that underlies image- and code-generation tools is closely related.) LLMs are created by feeding a program large bodies of text (e.g., the entire internet) and training it to predict the most likely sequence of words in response to a prompt. Put differently, generative AI produces text by calculating the probability that each next word will be relevant to the prompt a user has submitted. Because they draw on a vast body of human writing, their responses tend to reflect consensus understandings, including any biases and inaccuracies that inform those positions. 
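
To make the prediction idea concrete, here is a toy sketch of next-word sampling in Python. The vocabulary and probabilities are invented purely for illustration; a real LLM learns distributions over tens of thousands of tokens from its training text.

import random

# Invented probabilities for the word following the prompt
# "The capital of France is"; not drawn from any real model.
next_word_probs = {
    "Paris": 0.92,
    "located": 0.05,
    "a": 0.03,
}

def sample_next_word(probs):
    """Pick the next word in proportion to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(sample_next_word(next_word_probs))  # most often prints "Paris"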

 

Because generative AI tools predict words based on probabilities, they are not capable of original thought. They are best at responding to prompts that require less creativity, like summarizing information, solving problems whose answers are already known, or supporting one position among alternative theories or approaches. But AI tools can generate new ideas by combining existing knowledge in new ways, and users can submit follow-up prompts that ask the bot to incorporate additional ideas into an existing response or to revise it for greater complexity. 

 

The best way to learn about AI tools is to try them yourself. Signing up for an account at ChatGPT, Google Bard, or BingAI (which requires the Microsoft Edge browser) will allow you to explore how these programs work and help you better understand how your students might be using them. You can often learn the most about these programs by simply letting curiosity be your guide. But here are a few prompts you might use to get started: 

  • Ask the program to write a response to one of the assignments from your class. 

  • Prompt the tool for help with a task you’re working on, like writing an email or choosing the next step toward completing a project. 

  • Choose an assignment one of your students has submitted to your class and prompt the AI tool to produce a response that is as close as possible to the student’s, entering follow-up prompts as necessary to close the gap. (Describe the assignment rather than pasting in the student’s work; see the cautions below.) 

  • Ask the tool to teach you about a subject and then quiz you at the end. 

A few words of caution

While it can be valuable to try out these tools so that you understand how they work, they do raise issues of privacy and intellectual property.

  • Material that you submit may become part of the program’s training data; using the software also contributes to its development.

  • The systems have different policies and methods to let you remove your material, so be sure to look for those instructions and to review the Terms of Service. 

If you ask students to use these tools—and over time, many of us may encourage some of this collaboration—the risks become even more complicated.

 

  • Even software platforms that don’t use your content to train the product DO collect your information and often install cookies to track your other activity. While you may choose to make this bargain to try out AI tools yourself, it can be more problematic to require students to sign up for tools that will track and use their private data. 

  • Be sure not to submit your students’ work directly into any of these tools, even though doing so could theoretically help you understand a bot’s capabilities more fully: it violates students’ intellectual property and could constitute a FERPA violation.

This article from the New York Times offers guidance on how to phrase prompts submitted to AI tools, and this Beebom article lists dozens of sample prompts that show the range of generative AI capabilities.


Guidance for Instructors 

(1) Instructors should be direct and transparent about what tools students are permitted to use, and about the reasons for any restrictions. Yale College already requires that instructors publish policies about academic integrity, and this practice is common in many graduate and professional school courses. To ensure that your students’ use of AI tools aligns with your learning goals, consider updating your academic integrity policy to clarify how students are permitted to engage with AI tools and how they should signal any use of them. Find example statements in the next section. 

 

As for explaining why: understanding the learning goals behind assignments helps students commit to them. The only reason to assign written work is to help students learn, whether by deepening their understanding of the material or by developing the skill of writing itself. From our long experience reviewing the portfolios of Yale College students, we see clear evidence that over the course of four years they progress significantly as writers. 

 

Research shows that people learn more and retain information longer when they write about it in their own words. If students instead task an AI with generating the text, they won’t learn as much. This is why we ask them to write their own papers, homework assignments, problem sets, and coding assignments. This impact on learning applies across all disciplines: STEM problem sets that require explanations also depend on students generating language to learn more deeply, and AI tools can produce code as readily as natural language.  

 

(2) Controlling the use of AI writing through surveillance or detection technology is probably not feasible. Turnitin has acknowledged a reliability problem with its detection software, and Yale has not enabled this feature in Canvas. We believe there are more fruitful ways to engage writing processes and expectations than to rely on detection tools that will probably be outpaced by further AI development. 


 

Research by the Poorvu Center shows that a minuscule number of Yale College students plagiarize in their course papers. While acknowledging that new technologies can change the landscape, we know that most students want to do their own work because they care about learning. 

 

Tools like ChatGPT raise broader questions. We agree with this approach from Inside Higher Ed: “Go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? … In the past, near-term prohibitions on … calculators, … spellcheck, [and] search engines … have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it.” While the Poorvu Center doesn’t have all the answers, we can offer some general guidance and work with you to put this advice into practice. 

 

(3) Changes in assignment design and structure can substantially reduce students’ likelihood of cheating—and can also enhance their learning. Based on research about when students plagiarize (whether from published sources, commercial services, or each other), we know that students are less likely to cheat when they: 

  • Are pursuing questions they feel connected to 

  • Understand how the assignment will support their longer-term learning goals 

  • Have produced preliminary work before the deadline 

  • Have discussed their preliminary work with others 

  • Make their writing process visible in the completed assignment 

Many of the above characteristics can be integrated into authentic assessments, which are assignments that (1) have a sense of realism, (2) are cognitively challenging, and (3) require authentic evaluation (see this handout for more). Examples can be found at the bottom of this page. Authentic assessments also align with practices that prioritize student learning and make it harder to complete work by collaborating with AI tools, including: 

  • Asking students to engage primary and secondary sources, which can include:
    • Assignments where students place their ideas in conversation with the ideas of other scholars—either readings you give them, or sources they find themselves
    • Asking students to use resources that are not accessible to ChatGPT, including any resources behind a paywall or many of the amazing resources in Yale’s collections.
    • Incorporating the most up-to-date resources and information in your field, so that students are answering questions that have not yet been answered or have only begun to be answered.
    • Engaging with ChatGPT as a tool that exists in the world and having students critically engage with what it can produce as in these examples from Yale instructors.
  • Using alternative ways for students to represent their knowledge beyond text (e.g., draw images, make slides, facilitate a discussion). Consider adopting collaboration tools like VoiceThread.


Addressing Generative AI on your Syllabus

The Poorvu Center recommends including on your syllabus an academic integrity statement that clarifies your course policies on academic honesty. The simplest way to set a policy on the use of ChatGPT and other AI composition software is to address it in that statement. 

 

A policy that encourages transparency in how students use AI might read: Before collaborating with an AI chatbot on your work for this course, please request permission by sending me a note that describes (a) how you intend to use the tool and (b) how using it will enhance your learning. Any use of AI to complete an assignment must be acknowledged in a citation that includes the prompt you submitted to the bot, the date of access, and the URL of the program.  

 

If you think students’ learning is best supported by avoiding AI altogether, your course policy might read: Collaboration with ChatGPT or other AI composition software is not permitted in this course. 


Guidance for Teaching Fellows 

As a teaching fellow, you can raise the issue of ChatGPT with your lead instructor to open a broader conversation about the goals of your course and how ChatGPT might affect those goals. For example, it is helpful to understand the intended purpose of both low-stakes and high-stakes assessments, as well as possible redesigns of the work students do outside of class. 

 

Regardless of whether you make changes to assessments, you play an important part in making the goals clear to your students. Whether you hope that their work will help them explore new ideas, familiarize themselves with important concepts, or make progress toward a bigger project, articulating those learning outcomes and the purpose of the activities that support them will help students understand why using AI would be counterproductive to their own development and progress. 

 

These conversations with students can happen in conventional sections, but they are also worthwhile to have in office hours and review sessions. Additionally, as you give feedback to students, think about ways that you can point them back to the larger goals of the course: by showing them how they can grow through the assessment process, you indicate its value. 


Poorvu Center Sessions

  • Navigating AI Literacy on January 11, 2024 analyzed what competencies are required of faculty and students to meaningfully engage with AI systems, assess their impact on society, and make informed decisions regarding their use. The session included essential skills for fostering deeper learning, and we heard from two students who are on the cutting edge of generative AI learning and research.
  • Teaching in the Age of Generative AI on October 13, 2023 provided an introduction to the concept of large language models, shared comparative examples of student and AI-generated work, and created space for faculty to try out one of these tools with their own assignments and discuss the ramifications with other faculty.

  • Teaching with AI: Conversation and Workshop Parts I & II on August 24, 2023 was a hybrid session in which faculty participated in person and on Zoom. Part I was an introduction to Generative AI, while Part II considered the generative possibilities of AI in one’s research.

  • Teaching with AI Writers: Conversation and Workshop on May 25, 2023 provided a space for faculty to hear the various changes colleagues have made to their teaching in response to new artificial intelligence writers, as well as a hands-on opportunity to consider their own assignment design approach.

  • Spring Teaching Forum: Teaching in the Age of ChatGPT on May 1, 2023 hosted Derek Bruff, an educator, author, higher ed consultant, and former director of the Vanderbilt University Center for Teaching, who briefly surveyed the landscape of AI text generation tools and then dove into the teaching choices they present.

  • Artificial Intelligence and Teaching: A Community Conversation on February 14, 2023 featured a panel of Yale faculty and students discussing the implications of AI-writing software for teaching and learning.

 

Incorporating AI in Teaching: Examples from Yale Instructors

Yale instructors are considering how to redesign elements of their courses to address ChatGPT and other AI platforms. Examples across the disciplines demonstrate some of the novel ways instructors are engaging their students as this technology evolves.


Recommended Reading

New articles on this fast-developing topic are appearing every day. Below is a selection of thoughtful pieces that address various elements of these tools.