The information on this page is for teachers. Please visit our webpage for AI Advice for Students.
For questions about research support, please visit Yale Center for Research Computing.
To learn more about AI at Yale University, visit the AI at Yale webpage.
Overview
Since OpenAI released ChatGPT in November 2022, the availability of generative AI tools that produce text, images, or code has expanded to include tools such as Microsoft Copilot, Google Gemini, Anthropic Claude, and OpenAI’s more powerful GPT-4o. The power of these tools to answer complex questions and generate coherent text continues to improve, and they have recently been integrated into widely used software like Microsoft 365 and Zoom. This has led to questions about how students are using these tools and how faculty can respond to best support their students’ learning.
In consultation with instructors and technology experts at Yale and beyond, the Poorvu Center offers guidance on exploring how Generative AI works and suggestions about how to adapt your current teaching. We also offer advice for teaching fellows and a list of recommended reading. Additional resources include examples for incorporating AI into teaching from Yale instructors and resources from Poorvu Center events on teaching in the age of AI.
Instructors: Share your thoughts and ideas with us! Send questions or examples of how you are integrating these tools in your lessons or adjusting in light of them. Email us at askpoorvucenter@yale.edu.
AI at Yale
Please visit Yale’s official AI page to learn more about the AI tools Yale provides, news and insights on AI, and updated guidance and resources.
Understanding Generative AI
Generative AI tools operate on programs called large language models (LLMs). LLMs are trained on large bodies of text (e.g., much of the internet) to predict the most relevant sequence of words in response to a prompt. Put differently, generative AI produces text by calculating the probability that each next word will be relevant to the prompt a user has submitted. Because they draw on vast bodies of human writing, their responses tend to reflect consensus understandings, including any biases and inaccuracies that inform those positions.
Because generative AI tools predict words based on probabilities, they often produce oversimplified or generic outputs. They are best at responding to prompts that summarize information, solve problems whose answers are already known, or support one position among alternative theories or approaches. But AI tools can generate new ideas by combining existing knowledge in new ways, and users can submit follow-up prompts that ask the bot to incorporate additional ideas into an existing response or to revise it for greater complexity.
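To make the idea of probability-based word prediction concrete, here is a minimal, illustrative sketch in Python. It builds a toy bigram model (a drastic simplification used only for illustration; real LLMs use neural networks with billions of parameters) that generates text by repeatedly choosing the statistically most likely next word:

```python
# Toy illustration of probability-based text generation: the model "knows"
# nothing; it only picks the statistically most likely next word.
from collections import Counter, defaultdict

# A tiny training corpus (real LLMs train on vastly larger bodies of text).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the highest-probability next word after `word`."""
    return next_word_counts[word].most_common(1)[0][0]

# Generate text by repeatedly choosing the most probable continuation.
word = "the"
output = [word]
for _ in range(5):
    word = most_likely_next(word)
    output.append(word)

print(" ".join(output))  # prints: "the cat sat on the cat"
```

Even this toy model shows why outputs gravitate toward the most common patterns in the training text; the same tendency, at vastly greater scale, is what makes LLM responses reflect consensus views.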
The best way to learn about AI tools is to try them yourself. You can try out Yale’s Clarity platform, which provides students, faculty, and staff with secure access to robust AI tools such as GPT-4o and Microsoft Copilot. Clarity will allow you to explore how these programs work and help you better understand how your students might be using them. You can often learn the most about these programs by simply letting curiosity be your guide, but here are a few prompts you might use to get started (a programmatic alternative follows the list):
- Ask the program to write a response to one of the assignments from your class.
- Prompt the tool for help with a task you’re working on, like writing an email or choosing the next step toward completing a project.
- Ask the tool to teach you about a subject and then quiz you at the end.
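For instructors comfortable with a bit of scripting, the same exploration can be done programmatically. The sketch below uses OpenAI’s public Python SDK; note the assumptions: it requires your own OpenAI API key (a paid service separate from Clarity, which is a web interface), and the model name may change over time.

```python
# Minimal sketch of querying a chat model programmatically with OpenAI's
# Python SDK (pip install openai). Assumes you have set the OPENAI_API_KEY
# environment variable; Yale's Clarity platform is not accessed this way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption and may change
    messages=[
        {"role": "user",
         "content": "Write a response to this assignment: ..."},
    ],
)
print(response.choices[0].message.content)
```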
A few words of caution
While it can be valuable to try out these tools so that you understand how they work, they do raise issues of privacy and intellectual property.
- Material that you submit may become part of the program’s training data, so using the software also contributes to its development. Yale’s Clarity platform encrypts chat data, and your data is not used to train any of the AI models.
- The systems have different policies and methods for letting you remove your material, so be sure to look for those instructions and to review the Terms of Service.
If you ask students to use these tools—and over time, many of us may encourage some of this collaboration—the risks become even more complicated.
- Even software platforms that don’t use your content to train the product DO collect your information and often install cookies to track your other activity. While you may choose to make this bargain to try out AI tools yourself, it is more problematic to require students to sign up for tools that will track and use their private data, unless they are using Yale-approved tools such as Clarity.
- Do not submit your students’ work directly into any of these tools (even though doing so could theoretically help you understand a bot’s capabilities more fully), as this violates their intellectual property and could be a FERPA violation. Yale’s Clarity platform does state that you can include high-risk data in the chatbot, but advises you to consult Yale’s data classification guidelines before proceeding.
This article from the New York Times offers guidance on how to phrase prompts submitted to AI tools, and this Substack post from Ethan Mollick offers further recommendations for crafting AI prompts.
Guidance for Instructors
(1) Instructors should be direct and transparent about what tools students are permitted to use, and about the reasons for any restrictions. Yale College already requires that instructors publish policies about academic integrity, and this practice is common in many graduate and professional school courses. To ensure that your students’ use of AI tools aligns with your learning goals, consider updating your academic integrity policy to clarify how students are permitted to engage with AI tools and how they should signal that they have used them. You can also craft a policy dedicated to AI in the classroom. Find example statements in the next section.
As for explaining why, understanding the learning goals behind assignments helps students commit to them. The only reason to assign written work is to help students learn—either to deepen their understanding of the material or to develop the skill of writing in itself. From our long experience in reviewing the portfolios of Yale College students, we see clear evidence that over the course of four years, they progress significantly as writers.
Research shows that people learn more and retain information longer when they write about it in their own words. If students instead task an AI with generating text, they won’t learn as much. This is why we ask them to write their own papers, homework assignments, problem sets, and coding assignments. The impact on learning applies across disciplines: STEM problem sets that require explanations also depend on students generating language to learn more deeply, and AI tools can generate code as readily as natural language.
(2) Controlling the use of AI writing through surveillance or detection technology is not feasible. Turnitin has acknowledged reliability problems with its AI-detection software, and Yale has not enabled this feature in Canvas. We believe there are more fruitful ways to engage writing processes and expectations than to rely on detectors whose predictions will likely be outpaced by further AI development.
Research by the Poorvu Center shows that a minuscule number of Yale College students plagiarize in their course papers. While acknowledging that new technologies can change the landscape, we know that most students want to do their own work because they care about learning.
Generative AI tools raise broader questions. We agree with this approach from Inside Higher Ed: “Go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? … In the past, near-term prohibitions on … calculators, … spellcheck, [and] search engines … have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it.” While the Poorvu Center doesn’t have all the answers, we can offer some general guidance and work with you to put this advice into practice.
(3) Changes in assignment design and structure can substantially reduce students’ likelihood of cheating—and can also enhance their learning. Based on research about when students plagiarize (whether from published sources, commercial services, or each other), we know that students are less likely to cheat when they:
- Are pursuing questions they feel connected to
- Understand how the assignment will support their longer-term learning goals
- Have produced preliminary work before the deadline
- Have discussed their preliminary work with others
- Make their writing process visible in the completed assignment
Many of the above characteristics can be integrated into authentic assessments: assignments that (1) have a sense of realism, (2) are cognitively challenging, and (3) require authentic evaluation (see this handout for more). Examples can be found at the bottom of this page. Authentic assessments also align with practices that prioritize student learning and make it harder to outsource work to AI tools. These practices include:
- Asking students to engage primary and secondary sources, which can include:
- Assignments where students place their ideas in conversation with the ideas of other scholars—either readings you give them, or sources they find themselves
- Asking students to use resources that are not accessible to generative AI tools, including any resources behind a paywall or many of the amazing resources in Yale’s collections.
- Incorporating the most up-to-date resources and information from your field, so that students are answering questions that have not yet been answered or have only begun to be answered.
- Engaging with Clarity or ChatGPT as a tool that exists in the world and having students critically engage with what it can produce as in these examples from Yale instructors.
- Using alternative ways for students to represent their knowledge beyond text (e.g., draw images, facilitate a discussion). Consider adopting collaboration tools like VoiceThread.
Addressing Generative AI on your Syllabus
The Poorvu Center recommends including on your syllabus an academic integrity statement or an AI policy statement that clarifies your course policies on academic honesty and the use of AI. The simplest way to state a policy on AI tools is to address it directly in your syllabus.
A policy that encourages transparency in how students use AI might read: Before collaborating with an AI chatbot on your work for this course, please request permission by sending me a note that describes (a) how you intend to use the tool and (b) how using it will enhance your learning. Any use of AI to complete an assignment must be acknowledged in a citation that includes the prompt you submitted to the bot, the date of access, and the URL of the program.
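For illustration only, a student’s acknowledgment under such a policy might read: “Portions of this draft were revised with ChatGPT (prompt: ‘Suggest a clearer topic sentence for this paragraph,’ accessed September 15, 2024, https://chat.openai.com).” The details here are hypothetical; what matters is that the prompt, date of access, and tool are all disclosed.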
If you think students’ learning is best supported by avoiding AI altogether, your course policy might read: Collaboration with ChatGPT or other AI composition software is not permitted in this course.
For additional resources, we offer a recent slide deck that walks through how to think about crafting an AI policy for your course, as well as examples of AI policy statements from various Yale instructors.
Guidance for Teaching Fellows
As a teaching fellow, raising the issue of generative AI with your lead instructor can open a broader conversation about the goals of your course and how AI tools might affect those goals. For example, it is helpful to understand the intended purpose of both low-stakes and high-stakes assessments, as well as possible redesigns of the work students do outside of class.
Regardless of whether you make changes to assessments, you play an important part in making the goals clear to your students. Whether you hope that their work will help them explore new ideas, familiarize themselves with important concepts, or make progress toward a bigger project, articulating those learning outcomes and the purpose of the activities that support them will help students understand why using AI would be counterproductive to their own development and progress.
These conversations with students can happen in conventional sections, but they are also worthwhile to have in office hours and review sessions. Additionally, as you give feedback to students, think about ways that you can point them back to the larger goals of the course: by showing them how they can grow through the assessment process, you indicate its value.
Seed grants for reviewing curricula in the context of AI
Addressing the need to prepare Yale’s graduates to lead and thrive in a future infused with AI, the Task Force acknowledged Yale’s “opportunity to serve as a model by adapting its curricula.” Deans and faculty are already implementing innovative changes to coursework and offering answers to questions about what it means to teach and train in this new age.
The Poorvu Center for Teaching and Learning will pilot curriculum review grants this academic year, assisting schools and departments as they examine their programs and disciplines in the context of AI. More information about these seed grants will be available soon; please revisit this webpage for updates.
Poorvu Center Sessions
- Crafting an AI Policy on August 22, 2024 provided an overview of generative AI along with some recommended practices on how to craft a meaningful AI policy for a course. We also had two faculty panelists who discussed their views on AI in the classroom.
- Navigating AI Literacy on January 11, 2024 analyzed what competencies are required of faculty and students to meaningfully engage with AI systems, assess their impact on society, and make informed decisions regarding their use. The session included essential skills for fostering deeper learning, and we heard from two students who are on the cutting edge of generative AI learning and research.
- Teaching in the Age of Generative AI on October 13, 2023 provided an introduction to the concept of large language models, shared comparative examples of student and AI-generated work, and created space for faculty to try out one of these tools with their own assignments and discuss the ramifications with other faculty.
- Teaching with AI: Conversation and Workshop Parts I & II on August 24, 2023 was a hybrid session for faculty to participate in person and on Zoom. Part I was an introduction to generative AI, while Part II considered the generative possibilities of AI in one’s research.
- Teaching with AI Writers: Conversation and Workshop on May 25, 2023 provided a space for faculty to hear about the various changes colleagues have made to their teaching in response to new artificial intelligence writers, as well as a hands-on opportunity to consider their own assignment design approach.
- Spring Teaching Forum: Teaching in the Age of ChatGPT on May 1, 2023 hosted Derek Bruff, an educator, author, higher ed consultant, and former director of the Vanderbilt University Center for Teaching, to briefly survey the landscape of AI text generation tools and then dive into the teaching choices they present.
- Artificial Intelligence and Teaching: A Community Conversation on February 14, 2023 featured a panel of Yale faculty and students discussing the implications of AI-writing software for teaching and learning.
Incorporating AI in Teaching: Examples from Yale Instructors
Yale instructors are considering how to redesign elements of their courses to address ChatGPT and other AI platforms. Examples across the disciplines demonstrate some of the novel ways instructors are engaging their students as this technology evolves.
Recommended Reading
New articles on this fast-developing topic are appearing every day. Below is a selection of thoughtful pieces that address various elements of these tools.
- Ethan Mollick’s recent blog post, “Post Apocalyptic Education” (August 2024).
- A new Yale School of Management course prepares students for a workplace transformed by AI (January 2024).
- Harvard’s metaLAB “AI Pedagogy Project” includes AI literacy, a guided interaction with a chatbot, and examples of reimagined assessments in the higher education classroom (January 2024).
- This essay from the Fordham Institute helps instructors ethically embrace AI in education with practical guiding principles and examples (November 2023).
- The Association of College and University Educators (ACUE) released this practical, tips-oriented guide to teaching with AI (Fall 2023).
- The Chronicle of Higher Education outlines how to teach with AI tools in ways that meet faculty concerns about ethics and equity (November 2023).
- Jenny Frederick, Poorvu Center Executive Director, discusses the center’s approach to AI guidance with MIT Technology Review (September 2023).
- Mollick and Mollick at UPenn created the five-part video series “Practical AI for Instructors and Students” as an introduction to AI in higher education (July 2023). They also published seven approaches to using AI in the classroom (June 2023).
- The Department of Education released insights and recommendations on AI and the Future of Teaching and Learning (May 2023). A brief summary of the report is available on EdSurge.
- OpenAI, the creator of ChatGPT, shares information about safety and how it trains its tool, as well as its guidelines for educators.
- The Center for Teaching at Vanderbilt University and the University of South Carolina’s Center for Teaching Excellence offer comprehensive AI guidance to instructors, from revised learning goals and assessment to teaching strategies.