The information on this page is for students. Please click here to access our AI Guidance for Teachers.
As a hub for faculty and student collaboration, the Poorvu Center has hosted many conversations about the impact of generative AI since 2023. The University has made substantial investments to encourage exploration of AI and has developed central resources and advice, including Clarity—a portal that provides free access to powerful AI tools with secure data protection. As a teaching and learning center, we specifically want to offer suggestions about how to prioritize your learning while you experiment with AI tools. Ultimately, we think students should learn enough about AI to become responsible practitioners or informed critics. Please note that each course at Yale sets its own policies for using AI. If you are a Yale student with suggestions or questions about AI and learning, please send them to us at the Poorvu Center.
Learning About AI
- Experiment with AI tools to learn and to solve problems, keeping in mind course policies.
- We’ve heard from faculty and students that they develop a clearer sense of AI’s capabilities after about 5 hours of exploring, so push yourself beyond first impressions of what these tools can do.
- Try tasks in familiar subjects to better understand AI’s strengths and limitations.
- Explore multiple models and their customizations to understand their utility for different tasks.
- For further learning, we recommend Ethan Mollick’s Substack or his book Co-Intelligence.
Ethical Concerns
As a teaching and learning center, we are trying neither to push you toward AI nor to discourage you from exploring it. But we should mention several pressing questions raised by the rise of this technology. Each bullet point includes a link for further reading.
- Many chatbot responses reflect cultural, gender, and racial biases. To learn more about these limitations, we suggest reading Joy Buolamwini’s book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Dr. Buolamwini is a member of the Media Lab at MIT.
- Large language models use vast amounts of text without explicit permission and without compensating the authors.
- Developing and maintaining AI models consumes enormous amounts of energy.
- Some sectors of the industry have been criticized for exploitative labor practices.
Learning With AI
As a Search Tool
AI chatbots can give broadly useful overviews of many subjects. Even everyday Google searches now begin with AI-generated answers rather than links to direct sources. But these tools have some important limitations as sources of reliable information.
- AI chatbots have limited training data, which excludes many scholarly publications and current academic journals.
- They lack true understanding and cannot reliably distinguish credible sources from unreliable ones.
- Be aware of their tendency to invent information (“hallucinate”).
For Summarization
- Summaries can help prioritize reading and improve comprehension. That’s why scientific papers begin with abstracts.
- But writing your own notes after reading enhances long-term understanding.
- Retrieval practice also teaches more than reviewing notes or summaries does. Consider using AI to quiz you after you read rather than relying solely on AI summaries in the first place.
For Studying
- Interactive Quizzing. Instead of relying solely on AI-generated summaries, use LLMs to create practice questions based on your study material. This approach leverages retrieval practice, which is more effective for long-term retention than passive review.
- Supplementary not Primary Source. Use LLMs as a complement to, not a replacement for, primary study methods and materials. Cross-reference important information with reliable sources.
- Active Engagement. While LLMs can summarize content, writing your own notes after reading enhances long-term understanding. Use LLMs to stimulate critical thinking by discussing or debating points rather than accepting summaries passively.
For Writing and Problem-Solving
- Compared with using AI tools to learn content, using them to write papers or complete homework assignments is more likely to be restricted by course policies. Although some people use chatbots to outline their notes, rephrase awkward or redundant sentences, make their writing more concise, or check their grammar, these are also tasks that—depending on the course and the assignment—your instructors will expect you to complete on your own or with help from the Writing Center.
- Doing work yourself is crucial for skill development. Research on feedback, for instance, shows that being asked questions improves your writing faster than having your work edited for you.
- Letting AI solve problems and generate text may hinder your learning process.
- Most course policies will be based on this principle: that you need a deep knowledge base before you can effectively collaborate with more advanced tools.
Using AI in Your Courses
- AI policies vary by instructor, course, and assignment. These policies are designed to support you as you learn foundational and more advanced skills within each discipline.
- Ask your instructor if policies aren’t clear.
- Yale College regulations require that you cite the source for any material you submit as part of your course work, including language, images, and ideas that you source from AI.
Using AI Outside of Courses
- Check with supervisors about AI use in Yale employment. Federal regulations govern the use of many kinds of data.
- Be aware of AI policies for graduate study or fellowship applications.
- Although research shows that AI detectors are unreliable, some organizations (including potential employers) will still use them to review submitted materials.