Ben Lipkin — CV
PhD Student Researcher @ MIT BCS

About Me

Hi, my name is Ben Lipkin, and I am a PhD student and researcher in the Department of Brain & Cognitive Sciences (BCS) at the Massachusetts Institute of Technology (MIT), primarily advised by Evelina Fedorenko and Roger Levy. I am grateful for funding support from the National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP), the Computationally-Enabled Integrative Neuroscience Training Program (CEIN), and an MIT Presidential Fellowship. Check out this site to learn more about my published work, personal interests, and how to get in touch with me.

Background
I completed my undergraduate degree at the University of Michigan in Ann Arbor, where I studied Computational Neuroscience and enjoyed supplemental coursework at the Center for the Study of Complex Systems. From 2018 to 2020, I worked with David Brang and Shawn Hervey-Jumper on machine learning (ML) applications to neurosurgery and neuro-oncology. After completing my thesis in early 2020, I graduated with high honors and relocated to Cambridge, MA. I then spent 2020-2022 working as Technical Research Staff with Evelina Fedorenko on a range of projects investigating how human brains and artificial neural networks (ANNs) represent language and other abstract, hierarchically structured systems, from computer programming to recursive social reasoning. Through this work, I developed an interest in how humans apply linguistic knowledge in tandem with domain-general mechanisms for abstract structure to solve tasks, and in how we can model these skills in artificial systems.

Current Focus
These days, my doctoral research broadly straddles the intersection of cognitive science and natural language processing (NLP), with a particular focus on modular neurosymbolic programming. Within this research program, I am working on two threads:

1) CogSci: developing a cognitive neuroscience account of how the brain's language network works together with other regions specialized for mathematics, logic, and programming to solve verbal tasks with underlying algebraic, logical, or algorithmic structure. Through this line of work, I also contribute to methods for statistical modeling of high-dimensional neuroimaging data, and to applications in math education and pedagogy.

2) NLP/AI: building neurosymbolic systems that can reason over natural language, by jointly leveraging large language models (LLMs) with tools from symbolic AI and programming languages, including formal grammars, probabilistic programs, and satisfiability solvers. Through this line of work, I also contribute to research in semantic parsing, inference algorithms for string generation tasks, and natural language interfaces for expert systems.

Miscellaneous
I'm passionate about Open Source and Open Science. As part of this, I allocate a portion of my time to community contributions, previously including the BigCode StarCoder project by Hugging Face and ServiceNow Research, guided text generation in Outlines by .txt, as well as a number of self-led development projects, such as ProbSem, a library for probabilistic LLM evaluations. These days, I am particularly excited about the AI4Code and AI4Math spaces. In my own work, I strive to produce research and engineering artifacts that are transparent, easy to use, and highly reproducible. To that end, all code I release comes packaged with automatic dependency management and prebuilt dataflow pipelines that run all experiments and regenerate any published tables, figures, etc. from raw inputs. Feel free to reach out if you're interested in learning how DevOps practices can improve FAIR science.

News

  • Sept 2024: Co-organizer of SFI workshop on "Assessing Representation in Minds and Artificial Systems" in Santa Fe, NM.
  • Aug 2024: Co-organizer of ACL workshop on "Natural Language Reasoning and Structured Explanations" in Bangkok, Thailand.
  • May 2024: "Elements of World Knowledge (EWOK): A cognition-inspired framework for evaluating basic world knowledge in language models" released as preprint alongside the companion open-source software library.
  • May 2024: Student organizer of the NSF workshop on "New Horizons in Language Science: Large Language Models, Language Structure, and the Cognitive and Neural Basis of Language" in Arlington, VA.
  • Apr 2024: "Modeling uncertainty in semantic parsing" presented at the New England NLP Meeting in Providence, RI.
  • Apr 2024: Awarded NSF GRFP Fellowship.
  • Apr 2024: "Log probability scores provide a closer match to human plausibility judgments than prompt-based evaluations" presented at the SouthNLP symposium in Atlanta, GA.
  • Dec 2023: "LINC: a neurosymbolic approach for logical reasoning by combining language models with first-order logic provers" wins outstanding paper award at EMNLP conference in Singapore.
  • Aug 2023: Attended the SFI summer school on intelligence and representation at the Isaac Newton Institute in Cambridge, UK.
  • July 2023: "Evaluating statistical language models as pragmatic reasoners" presented at CogSci in Sydney, Australia, and ACL NLRSE workshop in Toronto, Canada.
  • May 2023: "StarCoder: may the source be with you!" published in TMLR.
  • Dec 2022: "This is your brain. This is your brain on code." promotes our work in MIT News and Communications of the ACM.
  • Nov 2022: "Convergent Representations of Computer Programs in Human and Artificial Neural Networks" presented at NeurIPS conference in New Orleans, LA.
  • Sept 2022: Awarded MIT Presidential Fellowship.
  • Sept 2022: Started PhD at MIT BCS.

© Ben Lipkin
