The Poorvu Center believes that students should learn enough about AI to become responsible practitioners and/or informed critics.
Students must be prepared to lead in an AI-infused future. Yale needs to evolve its curricula to prepare graduates for a society and workforce where AI skills are becoming essential…
Report of the Yale Task Force on Artificial Intelligence (2024)
We want to offer suggestions on how to prioritize your learning while you experiment with AI tools.

Learn About AI
There are numerous resources available at Yale and beyond that you can use to learn more about how, when, and why to use AI effectively. However, we have found that some of the best strategies for learning about AI come from hands-on practice. Consider the following strategies:
- Experiment with AI Tools: after a few hours of exploring, you will develop a clearer sense of AI’s capabilities.
- Explore Multiple Models: to understand their customizations and unique utility for different tasks.
- Try Tasks in familiar subjects: to better understand AI’s strengths and limitations.
- Ask Instructors in different disciplines: about their views on how AI can help with academic tasks.
For Further Learning
We recommend Ethan Mollick’s substack, One Useful Thing, or his book Co-Intelligence.

Suggestions or Questions
If you are a Yale student with suggestions or questions about AI and learning, please let us know!
Yale’s Investment in AI
The University has made a substantial $150 million investment to encourage exploration of AI by both faculty and students.
Learning with AI
Research has long shown that producing your own writing leads to deeper learning and longer retention. If you use an AI chatbot to write for you (explanations, summaries, topic ideas, or even initial outlines), you will learn less and perform more poorly on subsequent exams and attempts to use that knowledge. With that caveat in mind, there are ways that people use generative AI to extend their learning.
AI chatbots can give broadly useful overviews of many subjects. Even everyday Google searches now begin with AI-generated answers rather than links to direct sources. Several AI tools are equipped with web search capability, which makes it easy to review some of the sources they’re drawing from. But these tools have some important limitations as sources of reliable information.
- AI models are trained on limited datasets that exclude many scholarly publications and current academic journals.
- They lack true understanding and can’t distinguish reliable sources.
- Be aware of their tendency to fabricate information (“hallucinate”).
- Reading Summaries. AI-generated summaries can help you prioritize your reading and improve comprehension; that is why scientific papers begin with abstracts. But writing your own notes after reading enhances long-term understanding.
- Interactive Quizzing. Instead of relying solely on AI-generated summaries, use LLMs to create practice questions based on your study material. This approach leverages retrieval practice, which is more effective for long-term retention than passively reviewing notes or summaries.
- Supplementary, not Primary Source. Use LLMs as a complement to, not a replacement for, primary study methods and materials. Cross-reference important information with reliable sources.
- Active Engagement. While LLMs can summarize content, use them to stimulate critical thinking by discussing or debating points rather than accepting summaries passively.
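As a concrete illustration of the quizzing strategy, here is a minimal sketch of a prompt you might paste into a chatbot (such as Yale's Clarity platform). The wording and the helper function are our own suggestion, not an official template; the key idea is asking the chatbot to withhold answers so you must retrieve the material from memory.

```python
def quiz_prompt(notes: str, n_questions: int = 5) -> str:
    """Build a retrieval-practice prompt from your own reading notes.

    The chatbot is asked to quiz you one question at a time and to wait
    for your answer, so you practice recalling the material rather than
    rereading it.
    """
    return (
        f"Here are my reading notes:\n\n{notes}\n\n"
        f"Ask me {n_questions} short-answer questions about this material, "
        "one at a time. Wait for my answer before giving feedback, and do "
        "not reveal the correct answer until I have attempted a response."
    )

# Example: turn a note into a quiz request.
prompt = quiz_prompt(
    "Photosynthesis converts light energy into chemical energy.", 3
)
print(prompt)
```

Because you supply your own notes, the chatbot's questions stay anchored to material you have actually read, which also reduces the risk of fabricated content.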
Most course policies will be based on the principle that you need a deep knowledge base before you can effectively collaborate with more advanced tools.
Before you give your work over to AI, ask yourself: am I giving up an opportunity to learn something here?
- Course policies are more likely to restrict using AI tools to write papers or complete homework assignments than using them to learn content. Although some people use chatbots to outline their notes, rephrase awkward or redundant sentences, make their writing more concise, or check their grammar, these are also tasks that, depending on the course and the assignment, your instructors will expect you to complete on your own or with help from the Writing Center.
- Doing work yourself is crucial for skill development. Research on feedback, for instance, shows that being asked questions improves your writing faster than having your work edited for you.
- Letting AI solve problems and generate text may hinder your learning process.
Many people would say that generative AI programs are even better at coding than at writing. But research shows that relying on these systems prevents you from developing the deeper understanding of programming you will need to invent new solutions or to code at higher levels.
- AI policies vary by instructor, course, and assignment. These policies are designed to support you as you learn fundamental and higher skills within each discipline.
- Ask your instructor if policies aren’t clear.
- Yale College regulations require that you cite the source for any material you submit as part of your course work, including language, images, and ideas that you source from AI.
- Check with supervisors about AI use in Yale employment. Federal regulations govern the use of many kinds of data.
- Be aware of AI policies for graduate study or fellowship applications.
- Although research shows that AI detectors are unreliable, some organizations (including potential employers) will still use them to review submitted materials.
Yale’s AI Tools
Yale provides access to a range of secure and robust AI tools that can be used to develop, teach, and learn.
Clarity Platform
Clarity is a secure chatbot designed for Yale University that provides access to multiple models.
Ethical Concerns
As a teaching and learning center, we aim neither to push you toward AI nor to discourage you from exploring it. But we should mention several pressing questions raised by the rise of this technology.
- Many chatbot responses reflect cultural, gender, and racial biases. To learn more about these limitations, we suggest reading Joy Buolamwini’s book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Dr. Buolamwini is a member of the Media Lab at MIT.
- Large language models use vast amounts of text without explicit permission and without compensating the authors.
- Developing and maintaining AI models consumes enormous amounts of energy.
- Some sectors of the industry have been criticized for exploitative labor practices.