ADHD Meets AI: Designing My Learning Workflow | Tina Zeng, Davenport ‘28

In my first semester of college, I was diagnosed with ADHD. The summer before starting college, my friends had speculated that I might have ADHD, which made me curious – but the real impetus came after a conversation with a Poorvu Center staff member who’d recently been diagnosed themselves. We shared so many similarities in how our minds operated, even down to our MBTI personality types (likely a coincidence). We’d light up with new ideas constantly – only to watch many of them fizzle out before we could finish. From that conversation, I realized I’d rather know sooner rather than later whether my brain might be wired differently. I sought an evaluation through Yale Health’s Mental Health & Counseling at the beginning of the semester. After my diagnosis, I connected with Student Accessibility Services (SAS), where I received accommodations that empowered my learning. Since then, I’ve become fascinated by how my ADHD manifests, especially in academic settings.

Discovering Assistive Technology

During my first year, I didn’t seek out assistive technologies, even though they could have made a big difference. I remember receiving emails about the training required to access them but never followed through. Towards the end of second semester, however, I attended the AI at Yale Symposium, where I chatted with Jordan Colbert, the Associate Director for Assistive Technology at SAS. Through the demos at his table, he convinced me to explore the tools SAS offers. As someone always looking for ways to optimize how I learn, I was intrigued.

In my third semester, I finally reached out to SAS’s Assistive Technology team because I was taking six classes (unsurprisingly, two focused directly on AI and three more tangentially related to it) and had just started my role as a Student AI Liaison. Each class comes with heavy weekly readings and written responses, which means my head is always buried in readings.

What I Learned from Talking with Jordan and the STARS Team

In September, I sat down with Jordan again to understand the bigger picture of accessibility and AI at Yale. He described how modern AI has transformed tools that once felt clunky – robotic text-to-speech, unreliable note-taking, inaccurate captions – into far more accurate and usable supports. AI has also helped SAS scale the sheer volume of work behind the scenes: a document-remediation process that once handled around 500 files per semester now processes more than 14,000, thanks to automated workflows for the Assistive Technology team.

Even so, he emphasized that human judgment still matters. AI makes mistakes, and some situations still require people, like dictation during exams or navigating injury-related needs. That’s why SAS focuses on helping students understand both the power and the limits of these tools.

Jordan’s explanation helped me understand the scale, the intentionality, and the human decision-making that still shapes every accommodation at Yale. With that context, I was ready to explore which tools could meaningfully support my learning.

At my virtual orientation with the Student-workers for Technology and Accessible Resource Services (STARS) Team, I was introduced to two tools: Genio Notes and Speechify. 

[Illustration of a female student in a grey Yale sweater surrounded by assistive technology interface tabs]

Image generated by OpenAI’s DALL-E, then edited by Tina Zeng using ibisPaint and Canva

My Experience with Note-Taking Using AI

I’ve experimented with voice-memoing my seminars using my phone’s default recording app. So far this semester, I’ve recorded over 80 voice memos. When I need a summary, I copy the transcript into ChatGPT, which works surprisingly well – especially for meetings where action items come up.

SAS recommended Genio Notes, but I’ve been slow to incorporate it into my workflow, and it has features I wish I’d taken more advantage of. Genio Notes is built for diarization – identifying who spoke when – which would make it far better than my current system at tracking multi-speaker discussions. Right now, I rely on a raw transcript pasted into ChatGPT, which can’t always distinguish speakers.

I only tried Genio Notes a few times early on. The small notification sound it plays when recording starts threw me off, and I haven’t built the habit of requesting presentation slides in advance – something Genio Notes is optimized for, since it lets you annotate and take notes while the slides are on screen.

I know I need to give it another chance. As I keep refining my study workflow, I’m realizing that tools built specifically for structured note-taking might help me capture discussions more accurately – especially as the rest of my learning tools continue to evolve.

How Speechify Supercharged My Ability to Read and Process Information

If Genio Notes is a tool I should use more, Speechify is one I can’t stop using. I absolutely love it – so much that I’ve shared it with several friends and even offered them my login.

In just four weeks (including October Break), I’ve uploaded about 50 readings. I use Speechify at least five days a week, and it’s completely transformed my experience with processing information.

Before Speechify, I could easily spend 1–2 hours on a single reading. Now, I can finish that same text in under 20 minutes, and my comprehension is not compromised.

Learning to Listen to My Mind: Why Speechify Works So Well for Me

When Jordan described how AI has transformed text-to-speech into something far smoother and more responsive, it clicked for me why Speechify felt like such an intuitive extension of my learning style.

I’ve figured out that the way I process information best is by listening and reading simultaneously. I’ve always been a visual-auditory learner – ever since I started listening to daily news podcasts in high school – so AI-powered text-to-speech tools like Speechify felt like a natural fit. Speechify lets me drop in anything I need to read, whether it’s a PDF, a webpage link, or pasted plain text. It pairs natural, expressive AI voices with real-time text highlighting and an AI-enhanced Auto Skip Mode that skips footnotes, tables, or parenthetical citations (with adjustable settings), helping me stay focused on the content.

Speechify highlights each sentence and applies a darker highlight to the word being read, keeping me anchored. The visual-auditory stimuli and subtle motion create just enough stimulation to hold my attention and maintain a steady reading rhythm. Before using Speechify, my mind would often drift unless I was completely immersed in the material. Now, that immersion happens almost instantly because of the layered sensory cues. Even if I zone out, I notice it immediately and can replay the section – unlike with silent reading, where I might realize paragraphs or pages later that I’ve lost comprehension. I can jump back to where I left off or highlight specific text to re-read aloud.

I usually listen at 2.5–3.5x speed (the max is 4x), depending on the complexity of the reading. I’ve always preferred faster playback – a habit I picked up from years of watching YouTube at 2x – but Speechify’s ability to fine-tune speed in 0.05x increments lets me find the perfect pace for each text. After adopting Speechify, I’ve realized that when I rely on my internal monologue alone, I read slower than I’d like to. When I listen and read simultaneously, though – almost like following closed captions – the words seem to flow more effortlessly, and my comprehension feels smoother and more continuous.

The AI summary feature is especially helpful too. Even though I always strive to engage with every reading fully – by listening or reading – the summaries are invaluable for jogging my memory, especially when I read ahead or have multiple readings in a week. On a more granular level, the “Ask AI” function allows me to query specific concepts from the reading (though I believe that grappling with difficult texts is a muscle worth maintaining).

My Workflow in Beta Mode: What I Could Improve

Currently, I still copy text into Google Docs to annotate – a pre-Speechify habit that’s hard to break. Speechify actually has built-in highlighting and note-taking tools that I haven’t explored yet, and I plan to experiment with those next.

The only technical hiccup I’ve encountered was a PDF with embedded fonts whose text neither my cursor’s highlighting nor Speechify could process properly. Otherwise, it’s been smooth sailing.

There are still features I haven’t tried, like Quiz Mode and Podcast Mode (similar to NotebookLM), which I’m curious to test out as I continue refining my study workflow.

If you’re curious what voice I listen to, I’ve stuck with the voice of Cliff, Speechify’s founder – and at this point, it feels oddly comforting to have him narrating my readings every week. My friend, meanwhile, uses a British-accented voice for their British history readings, which is pretty cool!

Closing Thoughts

Learning to work with my ADHD has been less about fixing my focus and more about learning its language – noticing its patterns, its impulses, its unique tempo. Tools like Speechify – and, hopefully soon, Genio Notes – have helped turn friction into flow: what used to feel like a struggle – keeping up with readings, sustaining attention – has become far more manageable, even enjoyable.

Beyond accessibility, I’ve gained agency: by listening to my brain, I’ve learned to customize how I learn.

If you’re curious about accommodations or think assistive technology could support your learning, check out Student Accessibility Services (SAS). You can also explore their full suite of assistive technology tools.

About the Author

Tina Zeng is a sophomore studying Global Affairs + Computing, Culture & Society. For the past few years, she has thought about AI every day. Outside of working as a Student AI Liaison for the Poorvu Center, she interns with DataHaven, the community data organization, as a Dwight Hall Urban Fellow, and she works as a Communication Coordinator for Yale’s Digital Ethics Center. As a Dahl Scholar at Yale’s Institution for Social and Policy Studies, she researches NYC Community Boards using AI-driven computational and qualitative research methods. With her passions for public interest technology, democratic innovation, and social entrepreneurship, Tina hopes to leverage technologies like AI for social impact. She’d love to chat with you – reach out at tina.zeng@yale.edu!