I’m a Ram and Vijay Shriram Postdoctoral Fellow at Stanford University, affiliated with Stanford Data Science. I work with Emmanuel Candès in the Department of Statistics.
I obtained my PhD in Electrical Engineering and Computer Sciences at UC Berkeley in 2023, where I was advised by Moritz Hardt and Michael Jordan. I spent the summer of 2020 interning at Apple AI Research, hosted by Vitaly Feldman. My PhD research was generously supported by an Apple PhD Fellowship in AI/ML. Before starting my PhD, I completed my BEng in Electrical and Computer Engineering at the University of Novi Sad in Serbia, where I was advised by Dragana Bajovic. During undergrad I spent a summer at Caltech, working with Babak Hassibi.
My research establishes foundations for ensuring that data-driven technologies have a positive impact. Topics of interest include performative prediction, a framework that formalizes the impact predictive algorithms can have on society and guides the design of algorithms for finding desirable equilibria in such settings. Much of my recent work focuses on prediction-powered inference and active statistical inference, developing statistically valid methods for an increasingly common setting in which a small amount of “gold-standard” data is supplemented by a large amount of AI predictions. My work also contributes to drawing reliable conclusions in the presence of selection bias, such as the bias that arises from cherry-picking only those effects and models that seem most promising based on the data.
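To give a flavor of the prediction-powered setting in its simplest form, the sketch below builds a confidence interval for a mean: the model's predictions on the large unlabeled set supply the main estimate, and the small gold-standard set is used only to estimate and subtract the predictions' bias. This is a minimal illustration with made-up names and toy data, not the full methodology or the API of the released packages; see the papers linked below for the general treatment.

```python
import numpy as np

def ppi_mean_ci(y_labeled, yhat_labeled, yhat_unlabeled, z=1.96):
    """Approximate 95% confidence interval for E[Y] in the prediction-powered setting.

    y_labeled:      gold-standard labels on the small labeled set (length n)
    yhat_labeled:   model predictions on that same labeled set (length n)
    yhat_unlabeled: model predictions on the large unlabeled set (length N)
    """
    n, N = len(y_labeled), len(yhat_unlabeled)
    # Point estimate: mean prediction on the unlabeled pool,
    # debiased by the average residual measured on the labeled data.
    rectifier = y_labeled - yhat_labeled
    theta_pp = yhat_unlabeled.mean() + rectifier.mean()
    # Width combines prediction variability (over N) and residual variability (over n).
    se = np.sqrt(yhat_unlabeled.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    return theta_pp - z * se, theta_pp + z * se

# Toy usage: the predictor is systematically biased upward by 0.3,
# yet the interval still targets the true mean of 2.0.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)                      # small gold-standard sample
yhat = y + 0.3 + rng.normal(0, 0.2, size=200)           # predictions on the labeled data
y_unseen = rng.normal(2.0, 1.0, size=20_000)            # unlabeled outcomes (never observed)
yhat_big = y_unseen + 0.3 + rng.normal(0, 0.2, size=20_000)
print(ppi_mean_ci(y, yhat, yhat_big))
```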
email: “firstname”.“lastname”@stanford.edu
office: 116 Sequoia Hall
I hold weekly office hours as part of a broader initiative by the Learning Theory Alliance. You can book a slot here.
(* denotes equal contribution, α-β denotes alphabetical ordering)
Can Unconfident LLM Annotations Be Used for Confident Conclusions?
K. Gligorić*, T. Zrnic*, C. Lee*, E. J. Candès, D. Jurafsky
Preprint arxiv
A Note on the Prediction-Powered Bootstrap
T. Zrnic
Note arxiv package
Active Statistical Inference
T. Zrnic, E. J. Candès
International Conference on Machine Learning (ICML) 2024 (oral) ICML arxiv code
Plug-in Performative Optimization
L. Lin, T. Zrnic
International Conference on Machine Learning (ICML) 2024 ICML arxiv
Locally Simultaneous Inference
T. Zrnic, W. Fithian
Annals of Statistics (AoS) 2024 AoS arxiv code talk
Cross-Prediction-Powered Inference
T. Zrnic, E. J. Candès
Proceedings of the National Academy of Sciences (PNAS) 2024 PNAS arxiv code package
PPI++: Efficient Prediction-Powered Inference
(α-β) A. N. Angelopoulos, J. C. Duchi, T. Zrnic
Preprint arxiv code package
Prediction-Powered Inference
(α-β) A. N. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, T. Zrnic
Science 2023 Science arxiv package
Post-Selection Inference via Algorithmic Stability
T. Zrnic, M. I. Jordan
Annals of Statistics (AoS) 2023 AoS arxiv talk
Algorithmic Collective Action in Machine Learning
(α-β) M. Hardt, E. Mazumdar, C. Mendler-Dünner, T. Zrnic
International Conference on Machine Learning (ICML) 2023 ICML arxiv talk
Valid Inference After Causal Discovery
P. Gradu*, T. Zrnic*, Y. Wang, M. I. Jordan
Journal of the American Statistical Association (JASA) 2024+ arxiv
A Note on Zeroth-Order Optimization on the Simplex
T. Zrnic, E. Mazumdar
Note arxiv
Regret Minimization with Performative Feedback
M. Jagadeesan, T. Zrnic, C. Mendler-Dünner
International Conference on Machine Learning (ICML) 2022 ICML arxiv
Symposium on Foundations of Responsible Computing (FORC) 2022 (non-archival)
Private Prediction Sets
A. N. Angelopoulos*, S. Bates*, T. Zrnic*, M. I. Jordan
Harvard Data Science Review (HDSR) 2022 HDSR arxiv code
Who Leads and Who Follows in Strategic Classification?
T. Zrnic*, E. Mazumdar*, S. S. Sastry, M. I. Jordan
Conference on Neural Information Processing Systems (NeurIPS) 2021 NeurIPS arxiv
Individual Privacy Accounting via a Rényi Filter
(α-β) V. Feldman, T. Zrnic
Conference on Neural Information Processing Systems (NeurIPS) 2021 NeurIPS arxiv short talk long talk
Symposium on Foundations of Responsible Computing (FORC) 2021 (non-archival)
Outside the Echo Chamber: Optimizing the Performative Risk
J. Miller*, J. C. Perdomo*, T. Zrnic*
International Conference on Machine Learning (ICML) 2021 ICML arxiv blog post
Symposium on Foundations of Responsible Computing (FORC) 2021 (non-archival)
Asynchronous Online Testing of Multiple Hypotheses
T. Zrnic, A. Ramdas, M. I. Jordan
Journal of Machine Learning Research (JMLR) 2021 JMLR arxiv blog post code online FDR package
Stochastic Optimization for Performative Prediction
C. Mendler-Dünner*, J. C. Perdomo*, T. Zrnic*, M. Hardt
Conference on Neural Information Processing Systems (NeurIPS) 2020 NeurIPS arxiv blog post code
Performative Prediction
J. C. Perdomo*, T. Zrnic*, C. Mendler-Dünner, M. Hardt
International Conference on Machine Learning (ICML) 2020 ICML arxiv blog post talk code
The Power of Batching in Multiple Hypothesis Testing
T. Zrnic, D. L. Jiang, A. Ramdas, M. I. Jordan
International Conference on Artificial Intelligence and Statistics (AISTATS) 2020 AISTATS arxiv talk code
Natural Analysts in Adaptive Data Analysis
T. Zrnic, M. Hardt
International Conference on Machine Learning (ICML) 2019 ICML arxiv talk
SAFFRON: an Adaptive Algorithm for Online Control of the False Discovery Rate
A. Ramdas, T. Zrnic, M. J. Wainwright, M. I. Jordan
International Conference on Machine Learning (ICML) 2018 ICML arxiv code
Tensor-Based Crowdsourced Clustering via Triangle Queries
R. K. Vinayak, T. Zrnic, B. Hassibi
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2017 IEEE
Improving Location of Recording Classification Using Electric Network Frequency (ENF) Analysis
Z. Saric, A. Zunic, T. Zrnic, M. Knezevic, D. Despotovic, T. Delic
IEEE International Symposium on Intelligent Systems and Informatics (SISY) 2016 IEEE
Prediction and Statistical Inference in Feedback Loops
T. Zrnic
PhD Dissertation, UC Berkeley EECS, 2023