More about my current role soon.
Previously, I completed my PhD at MIT, where I worked on robust, reliable, and uncertainty-aware natural language systems using tools from probabilistic modeling and Bayesian inference. My academic research was supported by the NSF GRFP and an MIT Presidential Fellowship, and it has received several outstanding paper awards, including at COLM 2025 (top 0.3%) and EMNLP 2023 (top 0.6%). More papers can be found on my publications page.
During my PhD, I was also a member of the GenLM research consortium and worked as a student researcher in Apple’s special projects group. In parallel, I contributed to several large-scale open-source initiatives, including the development and evaluation of code models with the StarCoder project by Hugging Face and ServiceNow, the first implementation of CFG-guided text generation for the Outlines library by dottxt-ai, and contributions to AI for mathematics with Project Numina. See my news page for more details on these projects.
Prior to starting my PhD, I studied computational neuroscience at the University of Michigan and worked for several years on machine learning applications to neurobiology, work that led to publications in Nature and PNAS. I have also organized several interdisciplinary workshops, including NLRSE @ ACL 2024 and NHLS @ the U.S. National Science Foundation.