Statement of Objectives
At age nine, I tried to build an operating system in Microsoft PowerPoint when I discovered hyperlinks; I was captivated by Windows 8's UI aesthetic and wanted to create my own OS. At twelve, when IMAX theatres didn't exist in Chennai, I taught myself to modify a Minecraft plugin and built a virtual IMAX theatre to stream movies inside the game. At thirteen, fascinated by the idea of real superhuman abilities (not gadgets, but abilities core to a hero's identity), I spent an entire summer building full-body cardboard exoskeletons with salvaged electronics. The fourth attempt actually worked.
Looking back, these weren't the projects of a prodigy trying to change the world; they were the work of a kid who got obsessed with curious ideas and refused to let "we can't afford that," "I don't know how," or "we've never done it that way before" stop him from trying. What I learned wasn't really about UIs, operating systems, or electronics: it was that resourcefulness itself is a form of invention, and that you can build your way toward seemingly impossible things.
[more of my early projects] - cardboard exoskeletons, VFX stop-motion films, spacecraft for a NASA contest
Wannabe Tony Stark!
Spacecraft for NASA SS Contest!
My naive teenage fascination with superhuman abilities stuck with me long enough to mature into a more serious question: what does it actually mean to augment human capability? As an existentialist, I've come to define augmentation not as merely adding gadgets to the body, but as enhancing an individual's ability to pursue and flourish in whatever purpose they choose. The best such systems interface the way unconscious biological processes serve conscious purpose, rather than demanding direct manipulation. True augmentation should exceed baseline performance and enable capabilities the unaugmented user could never have.
This philosophy drove my first-authored undergraduate research on AR wearables: HUX (Heads-Up eXperience), an always-on AI system designed to go beyond heads-up displays. Working with Dr. Gowdham Prabhakar at IIT Kanpur's HIVE lab in 2024, I built a real-time system capable of tracking selective visual attention, detecting overlooked environmental changes, and creating multimodal memories. The goal wasn't to overlay information on reality, but to build a digital model of the user intimate enough to co-pilot them across diverse human-computer-environment interactions. HUX, along with other co-authorships, resulted in two CHI submissions and a publication at NIME (a CHI spinoff) during my time at IIT Kanpur.
Testing the backend system with an eye-tracker and a display before interfacing with an AR / VR device.
Pipeline for live multimodal queries pointed via eye tracking.
Pipeline for storing multimodal episodic memory.
I converted my room into what I called an "Intelligence of Things" system, a space with actuated "limbs" (IoT devices) and spatially aware perception (camera) that could interpret human-centered, empathetic commands like "turn on the lamp nearest to me" or "switch off the farthest light". We conducted a consented HCI study with 15 volunteers from my apartment building, mostly elderly residents with a mean age of 45.8.
The elderly participants, many of whom struggled to walk to their switches or to navigate English device names and complex interfaces, felt genuinely empowered. They could speak in their native languages and use spatial reasoning instead of memorizing arbitrary product names. NASA-TLX scores showed a statistically significant reduction in cognitive load, and 14 of 15 users preferred the system over baseline smart home controls.
One participant told me in Tamil, "This is not just a cool gadget; it is the first time I've felt like a room understands me, not the other way around. So natural, yet like magic." Words like these justified every hour I'd spent alone in the lab pushing through the fear of failure. The research also led to an Indian patent application on assisted living and taught me that the most powerful augmentation often feels invisible: it simply lets people be more themselves.
Intelligence of Things enabled room environment for user testing.
As a visiting student in Dr. Stefanie Mueller's HCIE group at MIT, I experienced what cutting-edge HCI and AI research looks like at its best. Contributing to work at the intersection of generative AI and 3D systems while working remotely, going to sleep at 3 a.m. to align with EST, I learned rigorous methodology and eventually became a co-author on a UIST submission. More importantly, I connected with Cayden Pierce, founder of Mentra and a former Fluid Interfaces member, who saw potential in HUX and provided funding and state-of-the-art hardware support.
The project eventually won a grant and a brief contract that took me to Y Combinator in San Francisco, my first time leaving India. What began as a nine-year-old's obsession with makeshift operating systems had somehow led to standing in Silicon Valley, discussing assistive technology with world-class founders and mentors.
This led to building ChatGlasses, a live-captioning call system for smart glasses. I released it as a deployed product under an open-source MIT license, alongside two other products, expecting maybe a few downloads from smart glasses enthusiasts. Instead, I received a heartwarming email from a hard-of-hearing user in the Netherlands, telling me how it was helping and asking for features that would help them stay more connected to their family. That email, from someone I'd never met in a country I'd never visited, about a problem I hadn't explicitly designed for, taught me something crucial about building technology: design for one person's needs as deeply as possible, and if you do it right, you've designed for thousands.
[more about HUX derivative real world smart glasses products]
Identity concealed because this is a private email.
Discussions about ChatGlasses from a public Discord server.
Returning to India, I was encouraged by entrepreneur and mentor Paras Chopra to explore broader problems that wearables could address. Through a connection to JIPMER, a leading government hospital, I met with healthcare stakeholders, sitting in on consultations, watching workflows, and iterating on prototypes with real clinicians.
The result was DocDoc, a system that automatically creates patient records from live data collected by doctors' wearables, then integrates them into existing EMR systems after AI-driven analysis. The manual record-keeping process I observed was error-prone, time-consuming, and deeply frustrating for doctors who wanted to focus on patients, not paperwork. I conducted hands-on demos with clinicians, refined the prototype based on their feedback, and learned that healthcare technology fails when it's built in labs without everyday hospital context.
This work taught me that the problems with the most impact aren't always the most technically sophisticated; they're often the ones that require understanding human systems.
JIPMER, a leading Indian government hospital in Pondicherry, India.
Automated Patient Record from Transcriptions, EMR integration.
Healthcare professional wearing smart glasses (camera, microphone, and speakers).
*Consent sought for photography.
One of the first prototypes of DocDoc.
Healthcare professional wearing smart glasses interacting with a patient while logging data to the backend system.
*Consent sought for photography and transcription.
Meeting with Mr. Paras Chopra
Why the Media Lab? I saved money, bought a plane ticket, and visited the Media Lab despite my fear of failure and the lack of any guarantee. It was the best decision I've made. I spent forty minutes with Professor Pattie Maes discussing my glasses work, my origin story, a music composition from 2022 based on a dream, and my ideas. She called my ideas "creative and great," and I was struck by how encouragingly she listened, genuinely curious about a visitor from India who'd built cardboard exoskeletons as a thirteen-year-old.
I've never felt less like an outsider. The density of creative, inventive people who not only understood what I was trying to do but pushed me to think bigger confirmed my gut feeling as a fourteen-year-old: this is where I actually belong.
I want to contribute research at the intersection of wearable AI, cognitive augmentation, and everyday life, drawing on insights from psychology and neuroscience.
Building on HUX and my other open-source work, I envision systems that don't just display information and answer queries but understand a user's context deeply enough to act as cognitive partners. I'm particularly excited about Professor Maes and Pat's work on Future You and the possibility of building "artificial consciences": LLM-powered systems that simulate the counsel of the mentors we aspire to become, delivered through smart glasses in specific real-world situations. [future directions]
I'm also eager to extend the Intelligence of Things research by integrating it with camera-enabled smart glasses, creating more human-centered systems that reduce cognitive load through fully agentic control based on user data such as live transcriptions and visual positioning.
At the Media Lab, invention isn't just a means to an end; it's both the means and the end. That's the environment where a kid's cardboard exoskeleton can eventually become research that helps people flourish in their chosen purposes.
[more about future directions and previous work at MIT Media Lab]
Meeting with the most honorable Dr. Pattie Maes!