
How a 9th-Grade Student Built an AI Medical Misinformation Detector: Akash Case Study

  • Writer: BetterMind Labs
  • Jan 4
  • 4 min read

Introduction: How a 9th-Grade Student Built an AI Medical Misinformation Detector


Artificial intelligence is everywhere in healthcare discussions, yet one of the most dangerous problems rarely appears in high school projects: medical misinformation. While many students build chatbots or image classifiers, few confront the reality that incorrect health information spreads faster than verified medical guidance and causes real harm.

Admissions officers are aware of this gap. They also know that tackling healthcare problems requires more than surface-level coding. It demands responsibility, data discipline, and ethical reasoning. This is why AI projects grounded in healthcare misinformation stand out immediately. They show that a student is not just technically curious, but intellectually serious.

Grades, AP science courses, and coding camps remain baseline expectations. What distinguishes applicants today is whether they can apply AI to a high-stakes domain and explain their choices with clarity. An AI medical misinformation detector does exactly that.

Why healthcare misinformation is a serious AI problem

Healthcare misinformation is not a hypothetical issue. According to a 2023 study published by the World Health Organization, false or misleading medical claims online contribute directly to delayed treatment, vaccine hesitancy, and misuse of medication. The problem worsened during and after COVID-19, but it extends far beyond pandemics.

From an AI perspective, this problem is hard for three reasons:

  1. Language ambiguity

    Medical claims often sound plausible even when incorrect.

  2. Context sensitivity

    Advice that is correct in one scenario can be harmful in another.

  3. Ethical stakes

    A false negative or false positive has consequences beyond accuracy metrics, as the short example below illustrates.

Students who recognize these constraints and still attempt a solution demonstrate a level of thinking admissions committees respect.
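
To make the third point concrete: a detector that misses most misleading claims can still report a high accuracy score when misleading claims are rare. The short Python sketch below uses hypothetical labels (not data from Akash's project) to show why recall on the misinformation class matters more than raw accuracy.

```python
# Hypothetical labels: 1 = misleading claim, 0 = accurate claim.
# These are illustrative values, not data from the actual project.
from sklearn.metrics import confusion_matrix, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 3 misleading claims out of 10
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # the model catches only one of them

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Accuracy: {(tp + tn) / len(y_true):.0%}")                          # 80% -- looks respectable
print(f"Recall on misleading claims: {recall_score(y_true, y_pred):.0%}")  # 33%
print(f"Missed misleading claims (false negatives): {fn}")                 # 2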

Case study: Akash Kumar Soumya’s AI Medical Misinformation Detector



Akash Kumar Soumya began with a concern many people share but rarely address technically: How can people trust the health information they see online? Like many students, he initially viewed AI as a powerful but distant tool. The challenge was understanding how to apply it responsibly.


The problem he chose to solve

Medical misinformation spreads through:

  • Social media posts

  • Blog articles

  • Forums and comment sections

  • Misleading headlines

Akash focused on building an AI-powered medical misinformation detector that analyzes health-related text and flags potentially misleading or false claims. The goal was not to issue medical advice, but to assist users in identifying risky information before acting on it.

Technical approach and tools

The project used a natural language processing pipeline with:

  • Text preprocessing and tokenization

  • Labeled datasets distinguishing verified medical information from misleading claims

  • Machine learning classification models trained to identify misinformation patterns

  • Confidence scoring to indicate uncertainty rather than binary judgment

This approach reflects real-world healthcare AI standards, where uncertainty is explicitly communicated rather than hidden.
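
The exact datasets and models behind the project are not published here, but a minimal sketch of this kind of pipeline, built with scikit-learn and a handful of made-up example claims, looks roughly like this:

```python
# Minimal sketch of a text-classification pipeline with confidence scoring.
# The claims, labels, and model choice below are illustrative assumptions,
# not the actual data or architecture from Akash's project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Vitamin C megadoses cure the flu overnight",               # misleading
    "Washing hands reduces the spread of respiratory illness",  # consistent with guidance
    "This herbal mix replaces insulin for diabetes",            # misleading
    "Vaccines are tested for safety before approval",           # consistent with guidance
]
labels = [1, 0, 1, 0]  # 1 = likely misleading, 0 = verified guidance

# Preprocessing and tokenization (TF-IDF) feeding a simple classifier
model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(claims, labels)

# Confidence scoring: report an estimated risk, not a binary verdict
new_claim = ["Drinking bleach clears viral infections"]
risk = model.predict_proba(new_claim)[0][1]
print(f"Estimated misinformation risk: {risk:.2f}")
```

A real system would need far more data, careful source labeling, and calibration of those probabilities, but the structure, preprocess, classify, report uncertainty, is the same one described above.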

Why admissions officers value projects like this

Selective colleges look for indicators that a student can handle complexity responsibly. Healthcare AI is one of the clearest signals.

From an admissions perspective, this project demonstrates:

  • Interdisciplinary thinking

    Combining AI, ethics, and healthcare knowledge.

  • Risk awareness

    Understanding that incorrect outputs can cause harm.

  • Independent judgment

    Making decisions without relying on step-by-step tutorials.

  • Narrative clarity

    The student can explain why the problem matters, not just how the code works.

This aligns closely with what top universities seek in future researchers, engineers, and policy-aware technologists.

Comparing typical AI projects vs healthcare-grounded projects

Typical High School AI Project   | AI Medical Misinformation Detector
---------------------------------|------------------------------------
Follows a known tutorial         | Defines a real-world problem
Clean, pre-made datasets         | Messy, credibility-sensitive data
Accuracy-focused                 | Ethics- and impact-aware
Limited explanation needed       | Requires careful reasoning
Low admissions signal            | High admissions signal

This comparison explains why healthcare AI projects carry disproportionate weight in applications.

What an ideal AI + healthcare learning environment looks like



Strong outcomes rarely come from isolated learning. Effective programs share common traits:

  • Mentorship from experienced practitioners

    Healthcare AI has domain-specific pitfalls that self-study rarely reveals.

  • Structured project timelines

    Students move from idea to implementation with accountability.

  • Feedback-driven iteration

    Models improve through critique, not guesswork.

  • Outcome-oriented thinking

    Projects result in explainable systems, not just code files.

These conditions mirror undergraduate research environments, which is exactly why they prepare students well.

FAQ

1. Is healthcare AI too advanced for high school students?

No, if approached responsibly. Students are not expected to solve medical diagnosis, but they can build assistive tools under clear constraints.

2. Do colleges worry about students “overreaching” in healthcare?

Only when students ignore ethical limits. Projects that acknowledge limitations are viewed positively.

3. Does an AI healthcare project help beyond computer science admissions?

Yes. Such projects appeal to public health, biology, data science, and interdisciplinary programs.

4. Is mentorship really necessary for projects like this?

In healthcare AI, mentorship reduces mistakes and accelerates learning. Most meaningful projects are guided, not improvised.

Closing perspective: why this project matters

As someone who has reviewed student AI work across domains, I find that healthcare projects reveal depth faster than almost anything else. They force students to slow down, think critically, and justify decisions.

Akash Kumar Soumya’s AI Medical Misinformation Detector shows what happens when curiosity meets structure. The result is not just a project, but a signal of maturity.

Programs like BetterMind Labs exist to support this level of thinking through guided mentorship, real engineering workflows, and ethical grounding. For families evaluating serious AI pathways, exploring structured programs at bettermindlabs.org is a logical next step, especially when the goal is depth, not noise.

