Hello!

My goal is to make ML systems safe and interpretable using formal methods, control theory, and optimization. I am broadly interested in:

  • Making LLMs better at logical reasoning
  • Training models that are interpretable by design
  • Developing simple yet useful explainability techniques
  • Establishing formal guarantees for ML systems
  • Linear algebra and large-scale matrix computations