
Guiding Principles For Teaching and Learning with AI

At Yale, we see education as a process of learning how to think, to create, and to share knowledge responsibly. Generative AI challenges and amplifies those goals in new ways. Widely available AI tools produce competent documents across a range of formats—writing, visualizations, computer code, even STEM problem sets. But if students rely on AI assistance before building their own skills, the result can be a loss of the struggle that deepens learning. At the same time, AI tools can assist with familiar tasks, allowing experienced thinkers to work more effectively. 

Much of the Poorvu Center’s guidance focuses on scaffolding assignments to preserve learning spaces. But students should also learn how knowledge is made and shared in their disciplines and in settings beyond the university, and how these methods may change in light of AI. We’ve gathered the principles below from our conversations with instructors, students, and staff, and we hope they can support conversation and reflective decision-making in what is still a new area of teaching and learning.

Foster Ownership of Work and Learning
Generative AI tools should be used in ways that allow students and faculty to take deeper ownership of their work. Decisions around using AI should consider questions like: Are you the author or maker of the work? To what extent, and in what ways? If it’s writing, is it in your voice, and does that voice reflect your unique point of view? When designing their courses, faculty should reflect on whether students’ use of AI supports genuine learning—deep engagement, critical thinking, and skill development—or whether it bypasses those processes in ways that ultimately undermine their growth.

Attend to the Impact of AI on Learning
Understanding the impact of AI on learning requires thoughtful attention. Within each course, instructors at Yale have full authority to determine whether and how students may use AI when completing assignments. But for the greatest long-term impact, guidelines on AI use must be thoughtfully designed to support authentic learning. They should be developed as part of a deep and ongoing conversation—informed by disciplinary expertise—about how learning is affected by different ways of writing, of doing research, and of solving problems. And the conversation must welcome students’ voices explaining how and why they might choose to use AI to complete an assignment. Creating a shared understanding around the costs and benefits of different uses of AI will benefit both students and instructors. 

Engage with How AI Is Changing Disciplines
The professional methods in nearly every subject studied at Yale are changing because of AI. Engaging with these shifts—whether to preserve established learning practices or to experiment with new ones—requires clarity and reflection. Instructors are encouraged to discuss with students how AI is influencing their discipline and its methods. In some courses, it will make sense to restrict the use of AI while students develop skills and become oriented to a field. Other courses may incorporate AI practices more fully so that students can better understand how using AI shifts habits of analysis, expression, or problem-solving. Wherever these limits are set, they should be discussed transparently as part of bringing students into the field of study.

Protect Student Work and Privacy

Instructors must safeguard student data, intellectual property, and autonomy when integrating AI tools into coursework. When a course requires the use of AI, Yale-approved platforms should be used, because they offer stronger privacy protections.

Use AI Responsibly and Ethically

Students are responsible for using AI in ways that uphold the ethical standards of their courses and disciplines. These standards are set by instructors to support authentic learning and are detailed in the course syllabus or in the instructions for each course assignment. 

Develop a Practice of Transparency

Faculty and students should cite when and how they use AI. When instructors thoughtfully acknowledge how AI has supported their work—in course design, writing, or research—they help students develop a more mature understanding of academic tools, agency, and intellectual ownership.

Discuss AI’s Broader Impacts

AI tools emerge from systems that affect labor, sustainability, and equity. Instructors are encouraged to acknowledge these broader implications openly with students, exploring together when and how AI use aligns or conflicts with disciplinary and ethical commitments.

Additional Resources

We’re here to help!

Reach out to the Poorvu Center team if you have any questions or to learn more about our programs.

Contact Us