
From Rehab Robotics to GraphRAG: Why Context is King

Before I was designing Generative AI architectures, I was building explainable machine learning applications for healthcare.

Specifically, I worked on systems to help occupational therapists guide young people with cerebral palsy (CP) through home-based therapy. The challenge wasn't just "detecting a movement." It was distinguishing a therapeutic gesture from the "noisy" neurological commands often present in CP, such as spasticity or muscle synergies.

To make that work, we couldn't just throw raw data at a black box. We had to build strict calibration procedures to personalize the system to each individual's physiology, and we had to select interpretable features—like movement variability—that gave therapists actual clinical insight rather than a binary "pass/fail."
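To make the idea of an interpretable feature concrete, here is a minimal sketch of one way a "movement variability" score could be computed. This is illustrative only—the function names, the (x, y) trajectory representation, and the choice of path length as the underlying measure are my assumptions, not the system described above.

```python
import statistics

def path_length(trial):
    """Total distance travelled in one repetition.

    `trial` is a list of (x, y) positions sampled over time
    (e.g. wrist coordinates from a motion sensor).
    """
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(trial, trial[1:])
    )

def movement_variability(trials):
    """Coefficient of variation of per-repetition path length.

    A low value means repetitions are consistent; a high value can
    flag the inconsistent movements associated with spasticity.
    Unlike a pass/fail label, the number itself is clinically legible.
    """
    lengths = [path_length(t) for t in trials]
    return statistics.pstdev(lengths) / statistics.mean(lengths)

# Three repetitions of a reach: two similar, one much longer.
trials = [
    [(0, 0), (1, 1), (2, 2)],
    [(0, 0), (1, 1), (2, 2)],
    [(0, 0), (2, 0), (2, 2)],
]
print(movement_variability(trials))
```

The point of a feature like this is that a therapist can reason about *why* the score is high, which a black-box classification score does not allow.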

I carried this obsession with context and calibration into my recent work with Large Language Models.

Giving Feedback at Scale: Our Journey into Fine-Tuning GPT for Education

We have all felt that moment of frustration: you put hours of thought into a complex response, only to receive a generic "Good job" or a cold numerical score. As a researcher at Acuity Insights, I have spent years looking at how we can make educational assessment feel more human, even when we are dealing with thousands of students.

My team and I recently presented our findings on this challenge at The 40th ACM/SIGAPP Symposium on Applied Computing (SAC '25) in Catania, Italy. We wanted to know: Can we actually teach an AI to give feedback that feels personal, supportive, and—most importantly—useful?
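To sketch what "teaching" this looks like in practice: supervised fine-tuning of a chat model is typically driven by example conversations, supplied as one JSON object per line (JSONL). The record below is a hypothetical illustration of that format—the system prompt, the student response, and the feedback text are invented for this post, not drawn from our training data.

```python
import json

# One hypothetical training record: a student response paired with
# the specific, supportive feedback we want the model to emit.
record = {
    "messages": [
        {"role": "system",
         "content": ("You are a supportive assessor. Give specific, "
                     "actionable feedback, not just a score.")},
        {"role": "user",
         "content": ("Student response: 'I would tell the patient the "
                     "truth immediately, regardless of their state.'")},
        {"role": "assistant",
         "content": ("You clearly value honesty, which is a strength. "
                     "Consider, though, how timing and delivery affect "
                     "the patient: could you share the truth while also "
                     "checking what support they need first?")},
    ]
}

# Fine-tuning pipelines generally expect one record per line.
line = json.dumps(record)
print(json.loads(line)["messages"][-1]["role"])
```

The design choice worth noting is that the "label" here is not a grade but a full feedback paragraph, so the model learns the register and structure of useful feedback rather than a scoring function.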

DOI: 10.1145/3672608.3707735