
BetterMind Alumni | Batch July 2025

Maansi Murali Prasad
Pre-Dental Student at Texas A&M University
Starting With a Real Friction
The idea began with a simple observation: doctors spend a surprising amount of time writing notes instead of talking to patients.
Clinical visits are fast, information-heavy, and often stressful. Important details can be missed. Documentation takes time. And patients sometimes leave unsure about what just happened.
Instead of trying to “fix healthcare,” the goal was more practical: improve how conversations during clinical visits are captured, understood, and documented.
“I kept thinking about the gap between conversation and documentation,” the student explains. “What if notes could write themselves while doctors focus on patients?”
That question shaped everything that followed.

Understanding the Flow of a Clinical Visit
Early on, the challenge wasn’t building AI models; it was understanding real conversations.
Doctor–patient interactions are messy. People interrupt each other. Symptoms are described vaguely. Medical terms mix with everyday language. Background noise exists.
The system had to handle:
- Natural conversation flow
- Multiple speakers
- Medical terminology
- Incomplete or unclear information
The solution started with recording conversations and using AI transcription to convert speech into structured text.
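One way to make that raw transcript workable is to represent it as speaker-tagged turns and smooth over interruptions before any analysis. The sketch below shows the idea; the `Turn` class and `merge_consecutive` helper are illustrative names, not the project’s actual code, and the speaker labels are assumed to come from whatever diarization the transcription service provides.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One utterance in the transcript, tagged with its speaker."""
    speaker: str  # e.g. "doctor" or "patient" (labels assumed from diarization)
    text: str

def merge_consecutive(turns: list[Turn]) -> list[Turn]:
    """Merge adjacent turns from the same speaker into one utterance.

    Interruptions and fragmented speech often split one thought across
    several short segments; merging them yields cleaner input for
    downstream summarization.
    """
    merged: list[Turn] = []
    for t in turns:
        if merged and merged[-1].speaker == t.speaker:
            merged[-1] = Turn(t.speaker, merged[-1].text + " " + t.text)
        else:
            merged.append(Turn(t.speaker, t.text))
    return merged
```

With this structure in place, later steps can reason about who said what instead of working from an undifferentiated wall of text.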
Turning Conversations Into Meaningful Information
Once transcription worked reliably, the next step was transforming raw text into useful insights.
A large language model (LLM) was used to analyze the conversation and generate structured outputs:
- Visit summary — what happened during the consultation
- Key symptoms identified
- Relevant medical history
- Possible diagnosis discussed
- Second-opinion suggestion if uncertainty existed
The challenge wasn’t just summarizing; it was identifying what actually mattered medically.
This required designing prompts carefully and testing how different outputs changed depending on context, phrasing, and missing information.
“What mattered wasn’t just accuracy,” the student notes. “It was whether the output actually helped a doctor make decisions.”
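The prompt-design loop described above can be sketched as a template plus a strict parser: ask the model for a fixed set of fields, tell it to return null rather than guess when information is missing, and reject replies that omit fields. This is a minimal sketch under assumptions; the field names and instructions are hypothetical, not the project’s actual schema, and the call to the LLM itself is left out.

```python
import json

# Hypothetical prompt template; the exact wording and JSON schema are
# assumptions for illustration, not the project's real prompt.
PROMPT_TEMPLATE = (
    "You are a clinical documentation assistant.\n"
    "From the transcript below, return JSON with exactly these keys: "
    "summary, symptoms, history, possible_diagnosis, second_opinion.\n"
    "If information is missing or unclear, use null rather than guessing.\n\n"
    "Transcript:\n{transcript}"
)

REQUIRED_FIELDS = {"summary", "symptoms", "history",
                   "possible_diagnosis", "second_opinion"}

def build_prompt(transcript: str) -> str:
    """Fill the template with a visit transcript."""
    return PROMPT_TEMPLATE.format(transcript=transcript)

def parse_output(raw: str) -> dict:
    """Validate the model's JSON reply against the expected schema.

    Raising on missing fields makes silent omissions visible, which is
    one way to test how outputs change with context and phrasing.
    """
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {sorted(missing)}")
    return data
```

Keeping the parser strict turns prompt iteration into something measurable: a change that causes the model to drop or hallucinate a field fails fast instead of producing a quietly incomplete note.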

Thinking Beyond the Model
As the project evolved, the focus shifted from AI performance to usability.
The system needed to work end-to-end:
- Recording audio smoothly
- Processing data quickly
- Presenting results clearly
- Making insights actionable
This led to building a simple dashboard where both doctors and patients could view:
- Visit summaries
- Highlighted symptoms
- Diagnostic insights
- Documentation records
The goal was clarity — turning complex conversations into structured understanding.
“I started thinking less about models and more about experience,” the student says. “If the output isn’t usable, the technology doesn’t matter.”
Thinking End-to-End
By the end, the project wasn’t just an AI tool; it was a workflow improvement system.
It demonstrated how AI could:
- Reduce documentation burden
- Improve communication clarity
- Help patients better understand their visits
- Support medical decision-making with structured insights
Ready to Apply?
Jump in and start your journey!
We encourage students to fill out the application themselves; it gives us a clearer sense of their interests and intent. Please take a moment to read through the questions and answer them with care. Each application is reviewed thoughtfully, so genuine, well-considered responses really do make a difference.
