In “ChatGPT sparks a pivot in academia,” Hartford Courant reporter Ed Stannard interviewed Jenny Frederick, among other university faculty.
Below are excerpts featuring Jenny Frederick, Executive Director of the Poorvu Center.
[On the topic of AI tools generating offensive text:] “It’s drawing from an internet that reflects humanity and humanity has bias and stereotypes.” However, “if you try to prompt it to say something that’s blatantly biased or racist … it’s trained to give you some language that says it won’t do that. But you can get a little more sophisticated in your prompt and you can actually get it to do that.”
[On teaching tactics:]
“I think the main sentiment going on right now at our campus is people are paying attention. And there’s a handful of people on the leading edge who are already thinking about how to integrate this into their courses and help students learn how to use it and think critically about it.”
But Frederick said that, in addition to potential problems with academic integrity, she worries about ChatGPT using students’ inquiries as training material. “When you ask your students to interact with this tool, they’re actually contributing to the training and the improvement of it,” she said. “And that raises some issues that lead us to consider, probably students should be able to opt out if they don’t want to be part of that.”
Frederick said there are two messages faculty should receive from their teaching centers: “No. 1 would be to be explicitly clear about expectations for your students,” she said. “And if this adds a new piece to your policy about what students may or may not do, how they may or may not get help when they’re preparing assignments, then be really clear about that and have conversations with your students.”
The second is designing assignments that discourage indiscriminate use of the chatbot: “Having students connect to things that mean something to them, requiring outlines and drafts and having a lot of … constructive feedback, peer review, breaking things down into smaller steps. Those are things that I would have recommended three years ago and they still hold.”
A way to use ChatGPT in the classroom might be to say, “give this prompt to ChatGPT and bring the product to class and then we’re going to do things with it. So the product becomes the starting point,” Frederick said.
An example might be to have students find a source on the topic that ChatGPT did not include and see what different angles they bring to it. “I think the tool is changing and improving so rapidly that the critiques of this month may be very different from the critiques of two or three months from now,” she said. “So, again, there’s a lot of experimenting going on.”
Frederick said using ChatGPT could violate academic integrity policies if students were presenting AI-generated work as their own. She said teachers should tell their students, “Don’t use a robot to help you write your paper.”
“I think we need to maybe enhance our safeguards a little bit, and part of that is policy and part of that is good old-fashioned conversation about what it is we want our students to learn and getting them to think reflectively about the ways in which the things that they’re practicing are advancing their learning,” Frederick said.