Jacob L. Block

I am a Ph.D. student in the Electrical and Computer Engineering Department at the University of Texas at Austin, co-advised by Drs. Aryan Mokhtari and Sanjay Shakkottai. My current research focuses on efficient model fine-tuning and unlearning.

I received my B.S.E. in Electrical Engineering from the University of Michigan in 2023, where I was fortunate to work with Dr. Jeffrey Fessler.

Email: jblock at utexas dot edu

Resume  /  Google Scholar  /  Twitter  /  Github


Experience

  • Hudson River Trading (2026)
  • Google Research, Learning Theory Team (2025)
  • MathWorks, Audio Toolbox Team (2022)
  • Lawrence Livermore National Laboratory, Beam Physics Group (2021)

Research

Machine Unlearning under Overparameterization
JLB, Aryan Mokhtari, and Sanjay Shakkottai
NeurIPS, 2025
PDF   ArXiv   Code

We study machine unlearning in the overparameterized regime, where many different models can interpolate the training data. We propose a new unlearning definition based on the minimum-complexity interpolator over the retained data, and introduce MinNorm-OG, an algorithm enforcing this principle using only model gradients at the original solution. We provide theoretical guarantees for several model classes and demonstrate that MinNorm-OG outperforms existing baselines in practice.
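To make the unlearning target concrete, here is a minimal numpy sketch of the minimum-norm interpolator over retained data in the overparameterized linear case. This is only an illustration of the definition, not the MinNorm-OG algorithm itself; the dimensions and variable names are hypothetical.

```python
import numpy as np

# In an overparameterized linear model (d > n), many weight vectors w
# satisfy X @ w = y. The minimum-norm interpolator of the *retained*
# data is a natural unlearning target: it fits the kept examples while
# carrying no excess complexity from the forgotten ones.
# (Illustrative sketch; dimensions and names are hypothetical.)

rng = np.random.default_rng(0)
n, d = 20, 100                         # n samples, d parameters (d > n)
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

forget = np.arange(5)                  # indices slated for unlearning
retain = np.setdiff1d(np.arange(n), forget)
Xr, yr = X[retain], y[retain]

# Minimum-norm solution over the retained data: w = Xr^+ @ yr.
w_unlearned = np.linalg.pinv(Xr) @ yr

assert np.allclose(Xr @ w_unlearned, yr)  # still interpolates retained data
```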

Provable Meta-Learning with Low-Rank Adaptations
JLB, Sundararajan Srinivasan, Liam Collins, Aryan Mokhtari, and Sanjay Shakkottai
NeurIPS, 2025
PDF   ArXiv   Code

Foundation models learn rich representations but require further training to adapt to downstream tasks. We introduce a generic PEFT-based meta-learning framework that prepares models to adapt efficiently to unseen tasks, and we show that for linear models with LoRA, standard retraining is provably suboptimal while our method achieves strict performance guarantees. Experiments on synthetic, vision, and language tasks confirm significant gains over conventional retraining.
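For readers unfamiliar with LoRA, the sketch below shows the low-rank adaptation structure the paper builds on, applied to a single linear map. It is a generic illustration under assumed shapes and names, not the paper's meta-learning method or code.

```python
import numpy as np

# LoRA-style update on one linear layer: the frozen weight W0 is
# adapted per task by a rank-r correction B @ A, so only 2*r*d
# parameters are trained instead of d*d.
# (Illustrative sketch; dimensions and names are hypothetical.)

rng = np.random.default_rng(0)
d, r = 64, 4
W0 = rng.normal(size=(d, d))        # frozen pretrained weights
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection (init 0)

def forward(x):
    # Adapted layer: W0 @ x + B @ (A @ x).
    return W0 @ x + B @ (A @ x)

x = rng.normal(size=d)
# With B initialized to zero, adaptation starts at the pretrained map.
assert np.allclose(forward(x), W0 @ x)
```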