I work in the AI for Health team at Google DeepMind. The team develops multi-modal techniques spanning radiology, pathology, genomics, and transcriptomics. Our goal is to help clinicians make better decisions for their patients, and to enable scientists to discover promising medicines.
Prior to this, I led the Neural Networks Vision group at Vicarious, where we developed visual recognition algorithms for kitting, packaging, and bin-picking robotic applications.
I completed my Ph.D. in computer vision under Jim Rehg at the School of Interactive Computing, Georgia Tech, in 2018. My thesis explored recognition problems in videos, where motion and sparse labeling can be used to build a lifelong object-learning system. Before Georgia Tech, I received my Master's degree in CG, Vision & Imaging from UCL in late 2010. There, I worked with Gabriel Brostow (aka Gabe) on detecting regions of occlusion in consecutive video frames.
I also did a brief stint at The University of Warwick with Nasir Rajpoot, developing registration and dimensionality reduction techniques for cancerous tissue examined under the Toponome Imaging System. Previously, I was stationed at LUMS SSE, where I had the honor of working with Sohaib Khan. During my three-year stay, I collaborated with biologists at LUMS SSE and MRC NIMR on developing tracking techniques for fluorescence microscopy, with some interludes in systems research with Umar Saif.