Yielding Results

Linda Clarke

Can AI teach storytelling? A Yale team explores using large language models to guide STEM graduate students in scientific writing and critical thinking.

L-R: Lauren Gonzalez, Ryan Wepler, and Marynel Vázquez were awarded a seed grant to explore how AI can improve scientific storytelling. Photo by Robert DeSanto

Is it possible to develop artificial intelligence (AI) into a writing coach for STEM graduate students? Could its large-language-model (LLM) capabilities encourage critical thinking and storytelling in scientific writing?

A seed is planted. Ideas germinate.

The School of Engineering and Applied Science (SEAS) and the Office of the Provost thought so. In June 2025 they awarded an AI seed grant to principal investigator Marynel Vázquez, assistant professor of computer science, and two Yale Poorvu Center for Teaching and Learning staff members: Ryan Wepler, director of the Graduate Writing Lab, and Lauren Gonzalez, assistant director of scientific communication. The team's vision, shifting the perception of AI from a tool for faster writing to one that fosters thoughtful, analytic engagement, earned the competitive funds to produce preliminary results, strengthen the team's research portfolio, and attract support beyond Yale within 18 months.

“I believe scientists can communicate the value of their work more effectively by developing storytelling skills,” said Vázquez, “leading to greater public engagement and impactful research programs.”

Ideas sprout and take shape.

The project’s central idea of using AI in an unconventional way — not focusing on grammar corrections or rewriting, but on building an advisor that could emulate the reflective, pedagogical style of human writing consultants — crystallized through collaboration. The team decided to customize an AI model, hoping to create a persona that would give students thoughtful feedback on their writing, encourage critical thinking, and foster ownership of their work rather than reliance on AI for quick solutions or automated writing.

We’ve been thinking a lot about an AI model that avoids the kind of feedback on writing that LLMs typically give.

Ryan Wepler, Director, Graduate Writing Lab

Project grows and develops.

Once the team decided to customize an AI model to help researchers convey their ideas in a more persuasive and compelling way, the project began to blossom.

As the research unfolded, it focused on three areas: understanding how to steer an AI model for improving storytelling skills; collecting examples (data) of drafts and suggested text edits; and creating a computer interface through which AI can collaborate with writers to assist them in analyzing their craft.

The team organized writing retreats where Poorvu Center Graduate Writing Lab fellows or academic advisors guided students on their scientific writing. A web form collected samples of the students’ pieces along with the feedback they received.

“The writing workshops gave us a chance to reflect on the human-advisor process and how it might be emulated by AI,” said Wepler. “I enjoyed helping to shape the feedback philosophy, emphasizing choice and reflection as opposed to rewriting.”

Gonzalez participated in these discussions, making sure that scientific expertise and writing pedagogy (method and practice) were incorporated into the process that would train whatever LLM they chose to customize. She noted that the role of storytelling in scientific writing is to “help readers follow the logic of compelling story arcs, become emotionally invested in the content, and be motivated to continue reading.”

While the writing workshops were taking place, undergraduate and graduate students in Vázquez’s lab interviewed the STEM students about their feedback preferences, trying to figure out how to create an AI persona that could boost storytelling skills. From these interviews, they designed the computer interface through which AI could collaborate with writers.

Vázquez has been overseeing the work. “We’ve learned that different LLMs give different feedback and that the STEM writers prefer advice that is constructive, direct, and focused,” said Vázquez. “They did not enjoy responses that were highly confident or impersonal in tone because it felt like the LLM was trying to sound smart rather than guiding them.”

Planning for the harvest.

While the research continues, the team is getting closer to unveiling its AI-based approach to supporting writers and looks forward to reaping what it has sown. That entails showing how the AI model can be personalized more easily than through prompt engineering alone. User tests are in the works to better understand how people interact with AI when it is designed to help students reflect on their writing. The team suspects that when scientists use AI to think about their writing, instead of having it simply edit text, they will hone their storytelling skills, changing the typical relationship most people have with AI-assisted writing.

“What’s surprising,” said Gonzalez, “considering that AI cannot truly understand the content of a written piece, is how useful the feedback has been even with a preliminary version of an LLM that we experimented with early on.”

Many hands (and minds) make light work:

Vázquez and her team would like to acknowledge the assistance of Yale undergraduate and graduate students in their research. The Graduate Writing Lab fellows who participated in LLM team discussions and/or facilitated writing retreats are Shoham Benmelech, Sara Gelles-Watnick, Susanna Maisto, Kevin Pataroque, and Sydney Schuster. Additionally, the team has worked with undergraduate students, graduate students, and a postdoc to run user studies and to design and implement the AI-based system: Nannette-Rose Tarver, Nathan Tsoi, Khosilmurod (Murad) Abdukholikov, Andrew Perez, Mai Blaustein, Kate Candon, and Houston Claure.