
Generative AI and Learning at Yale: Some Advice for Students

The information on this page is for students. Our AI Guidance for Teachers is available on a separate page.

As a hub for faculty and student collaboration, the Poorvu Center has engaged in many conversations about the impact of generative AI. Because so many people use GenAI as part of their work and learning, we think students should learn enough about AI to become responsible practitioners or informed critics. While each course at Yale sets its own policies for using AI, we present here some general tips to prioritize your learning while you experiment with these tools. If you are a Yale student with suggestions or questions about AI and learning, please send them to us at the Poorvu Center.

Learning About AI

  • Experiment with AI tools to learn and to solve problems, keeping in mind course restrictions.
  • We’ve heard from faculty and students that they begin to feel oriented to what AI can do only after roughly 10 hours of exploration, so push yourself beyond your first impressions of these tools.
  • Try tasks in familiar subjects to better understand AI’s strengths and limitations.
  • Explore multiple chatbots and their customizations for different tasks.
  • For further learning, we recommend Ethan Mollick’s Substack newsletter or his book Co-Intelligence. He is a professor at Wharton whom many people in higher education turn to for guidance about AI.

Ethical Concerns

As a teaching and learning center, we are trying neither to push you toward AI nor to discourage you from exploring it. But we should mention several pressing questions raised by the rise of this technology. Each bullet point includes a link for further reading.

Learning With AI

As a Search Tool

AI chatbots can give broadly useful overviews of many subjects, in part because Wikipedia is one of their most important sources. Even everyday Google searches now begin with AI-generated answers rather than links to direct sources. But these tools have some important limitations as sources of reliable information.

  • AI chatbots are trained on limited datasets that exclude most scholarly publications.
  • They lack true understanding and cannot distinguish reliable sources from unreliable ones.
  • Be aware of their tendency to invent information (“hallucinate”).

For Summarization

  • Summaries can help you prioritize reading and improve comprehension. That’s why scientific papers begin with abstracts.
  • But writing your own notes after reading enhances long-term understanding.
  • Retrieval practice also teaches you more than reviewing notes or summaries does. Consider using AI to quiz you after you read rather than relying solely on AI summaries in the first place.

For Writing and Problem-Solving

Using AI in Your Courses

  • AI policies vary by instructor, course, and assignment. These policies are designed to support you as you learn fundamental and higher-order skills within each discipline.
  • Ask your instructor if policies aren’t clear.
  • Yale College regulations require that you cite the source of any material you submit as part of your coursework, including language, images, and ideas that come from AI.

Using AI Outside of Courses

  • Check with your supervisor about AI use in any Yale employment. Federal regulations govern the use of many kinds of data.
  • Be aware of AI policies for graduate study or fellowship applications.
  • Although research shows that AI detectors are unreliable, some organizations (including potential employers) will still use them to review submitted materials.