Within a week of the release of the ChatGPT AI chatbot, Yale’s Poorvu Center for Teaching and Learning began receiving questions from faculty about the implications for teaching. By the time the spring semester began, the questions — and ideas — were pouring in.
In response, the Poorvu Center created AI Guidance for faculty and convened a panel of Yale faculty for a discussion on February 14, 2023.
The hourlong discussion flew by, and each speaker, drawing on their own area of expertise, made the case that AI cannot match the ability of human cognition to create, integrate, and express new ideas. Moderator Alfred Guy described feeling a dual pull as a teacher: to consider taking “actions in relationship with this new technology” and “to understand the historical moment.”
Watch the full discussion, Artificial Intelligence and Teaching: A Community Conversation, and find a summary of the talk below.
Reflecting on Excellence in Teaching
As faculty who are dedicated to the art of teaching, the panelists responded to this technology with renewed inspiration to reflect upon their roles in supporting Yale scholars to develop as learners and thinkers.
Computer Science professor Nisheeth Vishnoi opened the discussion with the observation that ChatGPT makes evident “the gap between learning and evaluation. As a university, our goal is to ensure that our students learn. But how we check whether they have learned is through evaluations. … In the short term, I would imagine that this gap will increase. But I think it does give us a very nice opportunity to think about how we can reduce the misalignment.”
Brian Scassellati, also a computer scientist, added “It’s actually a very good challenge [for educators] because it’s forcing us to look for creative and original thought — and not just what these [AI] systems do, which is just a rewording and a regurgitation of the sources that they have been fed…. We can train our students to be more robust thinkers.”
Sociology professor Justin Farrell took a broader view of ethics and higher education: “All fields…rely on a creed, upon rules and practices — educational virtues that are necessary for truth-seeking and they favor disagreement, … continual openness to new evidence.”
Farrell continued, “All fields… can instill values of a liberal education that I think will combat a culture whereby cheating feels necessary, or even justified, because only education can nurture a culture of truth seeking, of scientific reasoning, and ethical engagement needed to inoculate individuals from falling prey to the misuse of AI. So, as AI continues to improve, …teachers need to bring a renewed seriousness to education itself, and Poorvu is a big help in this.”
Heralding the End (Yet Again)
“Don’t panic” was the advice from Professor Scassellati. “We’ve been here before.”
Scassellati is a computer scientist and an expert in how we understand social behavior. He builds robots that help people by providing social and cognitive support, including teaching social skills to children with autism, and helping adults cope with anxiety and depression.
Scassellati looked back to 1966 and Joseph Weizenbaum’s invention of ELIZA, a very simple program that interacted with people by mimicking their words back to them. It was so convincing that the Association for Psychotherapy begged the inventor to shut ELIZA down for fear that it would eliminate their jobs. He added that, while ChatGPT is an incredible piece of technology, the hype and the anxiety are nothing new.
Professor Laura Wexler added, “It does seem to me also that we’ve been here before: cell phones and computers, Wikipedia, Google Search, spell check. … There have been a lot of changes that have been said to herald the end of things as we know them, and that we had to put up defensive barriers in our classrooms against them.”
With that, Wexler looked back even further, nearly 24 centuries, to Plato’s Phaedrus, in which Socrates considers the invention of another technology: the written word.
Socrates was against writing because it interfered with remembering — an internal process — in favor of externally “reminding.” Wexler quoted Socrates: “You provide your students with the appearance of wisdom, not with its reality. They will imagine that they have come to know much, while for the most part they will know nothing.”
Wexler pointed out, “Socrates’ solution does not lie in a clever assignment or … a way of keeping the collaborative AI out of the space of the classroom. It actually relies on person-to-person presence between people learning together — teachers and students — so that the writing is merely a reminder of the embodied, real … knowledge that’s being communicated in the classroom.”
Experiential Learning Cannot be Replicated
Before and during the panel, many in the audience asked for concrete suggestions for incorporating ChatGPT creatively in the classroom, and for maintaining engagement with learning.
School of the Environment professor Justin Farrell shared an addition to his syllabus: students must investigate their own research questions using ChatGPT and then critique its answers. Because his students are conducting on-the-ground research, they can interrogate whether ChatGPT’s answers are accurate, ethical, misleading, or incomplete.
“I’m pushing students to consider how the production of knowledge from tools like ChatGPT might parallel similar developments where algorithms can profit from disinformation,” Farrell said, “…even pre-digital campaigns, like smoking or climate denial.”
Farrell added that he prioritizes experiential learning. “I try to get students out of the classroom as much as possible into the field, using project-based teaching that offers tools for discovery, tools for cognitive engagement, and also social belonging.” Farrell brings his class to his own research sites at Yellowstone and pairs them with local NGOs, where “they cultivate these virtues of social responsibility, of personal investment in their projects, and responsibility to their on-the-ground partners.”
“These types of experiences are difficult to replicate, even for AI.”
Self-Expression, Authentic and Enhanced
Prof. Vishnoi pointed out that AI can help “level the playing field” for a student who speaks English as a second language, and for whom an email to their professor takes extra effort. Starting from an outline, AI can produce a more polished message and preserve that student’s time and energy.
Similarly, panelists wondered about impacts on admissions applications, especially at the graduate level, where personal essays help faculty get to know a potential student. The discussion turned to reexamining what is being evaluated in that process, and to holistic review, which draws on many other measures of achievement and potential.
Prof. Khawaja observed that, to apply to a PhD program in the Humanities, “I think it’s very unlikely that we would have that kind of an issue with someone submitting the kind of extended research that you would need.”
Prof. Vishnoi suggested that evaluators start asking themselves what they respond to in application essays. “At the end of the day, we, as your evaluators, will also have to correct our internal biases” and put more value on essays that stand out from the norm.
Ethical Questions
Vishnoi added, “On the other hand, given that these are generative models learning from existing data, that data doesn’t come from a vacuum. It comes from the society that we live in. It will, for sure, propagate the power, the bias that already exists, and I think things have to be done to check these kinds of biases.”
Prof. Scassellati added an ethical consideration: that, according to some publications, ChatGPT “can never be listed as an author” because “with authorship comes a responsibility.”
A Student Perspective
The student participant, Isiuwa Omoigui ’23, echoed the faculty: “Ultimately, we do what we do here as students — not just to gain credentials, or to get a grade on the transcript — but really to learn how to think, to read, and to solve problems after we leave Yale.”
She added, “With that in mind, the critical work is to figure out how to responsibly engage with the technology as a tool rather than a crutch. Because tech is really an increasingly important part of … our everyday lives, especially as a 21st-century student, I feel I need to know how to engage with this kind of technology responsibly and understand its limits and its biases.”
Tech Support
Professor of Religion Noreen Khawaja brought a philosophical perspective, calling this a moment of “ontological confusion” because the possibilities of the tool are as yet unknown.
Prof. Scassellati shared his excitement that, with ChatGPT, his computer science students have the rare opportunity to test the boundaries of a system first-hand, and “differentiate the hype about a technology from the reality” of it.
Prof. Khawaja suggested that for faculty, it can be difficult to “pause” in the midst of rapid developments, and that they could probably gain from a “computer camp and also a history of tech camp.” Prof. Scassellati delivered the good news that the Computer Science department already offers classes of this kind for faculty. And for students, the department is developing an AI course for non-majors, with no programming required.
Thank you to the panelists:
- Nisheeth Vishnoi, A. Bartlett Giamatti Professor of Computer Science
- Justin Farrell, Professor of Sociology in the Yale School of the Environment
- Laura Wexler, Charles H. Farnam Professor of American Studies and Women’s, Gender, and Sexuality Studies
- Brian Scassellati, Professor of Computer Science, Cognitive Science, and Mechanical Engineering
- Noreen Khawaja, Associate Professor of Religious Studies
- Isiuwa Omoigui ’23, a Yale College senior majoring in Political Science and a Lead Writing Partner at the Poorvu Center
- Moderated by: Alfred Guy, the Poorvu Center’s Director of Undergraduate Writing and Tutoring and Yale College Assistant Dean of Academic Affairs