Conformal prediction under feedback covariate shift for biomolecular design

Tuesday, October 18th, 4-5pm ET | Clara Wong-Fannjiang — PhD Candidate, UC Berkeley

Many applications of machine learning involve an iterative protocol in which data are collected, a model is trained, and the model's outputs are then used to choose what data to consider next. For example, one data-driven approach to designing proteins is to train a regression model to predict the fitness of protein sequences, then use it to propose new sequences believed to exhibit greater fitness than any observed in the training data. Since evaluating designed sequences in the wet lab is typically costly, it is important to quantify the uncertainty in their predicted fitness. This is challenging because of a characteristic type of distribution shift between the training and test data in the design setting: the two are statistically dependent, as the test data are chosen based on the training data. Consequently, the model's error on the designed sequences has an unknown and possibly complex relationship with its error on the training data. We introduce a method to quantify predictive uncertainty in such settings by constructing confidence sets for predictions that account for this dependence. The confidence sets have finite-sample coverage guarantees that hold for any prediction algorithm, even when a trained model chooses the test-time distribution, as in the design setting. As a motivating use case, we demonstrate on several real data sets how our method quantifies uncertainty in the predicted fitness of designed proteins, and how it can be used to select design algorithms that achieve acceptable trade-offs between high predicted fitness and low predictive uncertainty.
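For a concrete feel of the machinery, below is a minimal sketch of weighted split conformal prediction under standard (non-feedback) covariate shift, in the style of Tibshirani et al. (2019), which the talk's method generalizes. This is an illustration under simplifying assumptions, not the paper's exact algorithm: the names `model` and `weight_fn`, and the absolute-residual score, are hypothetical choices, and the paper itself uses full conformal prediction with weights that account for the data-dependent test distribution.

```python
import numpy as np

def weighted_conformal_interval(model, X_cal, y_cal, x_test, weight_fn, alpha=0.1):
    """Split conformal interval under (standard) covariate shift.

    `model` is any fitted regressor with a .predict method; `weight_fn(x)`
    should return the likelihood ratio dP_test/dP_train at input x. Both
    names are illustrative assumptions, not from the paper.
    """
    # Nonconformity scores on the held-out calibration set: absolute residuals.
    scores = np.abs(y_cal - model.predict(X_cal))

    # Likelihood-ratio weights for the calibration points and the test point,
    # normalized into a probability distribution over all n+1 points.
    w = np.array([weight_fn(x) for x in X_cal])
    p = np.append(w, weight_fn(x_test))
    p /= p.sum()

    # Weighted (1 - alpha)-quantile of the scores, with the test point's
    # probability mass placed at +infinity (the conservative convention).
    order = np.argsort(scores)
    cum = np.cumsum(p[:-1][order])
    idx = np.searchsorted(cum, 1.0 - alpha)
    q = np.inf if idx >= len(scores) else scores[order][idx]

    # Symmetric interval around the point prediction for x_test (a 1-D array).
    pred = model.predict(x_test.reshape(1, -1))[0]
    return pred - q, pred + q
```

With `weight_fn` returning 1 everywhere, this reduces to ordinary split conformal prediction for exchangeable data. The setting of the talk is harder: because the test distribution itself depends on the training data, valid weights cannot be computed from a fixed likelihood ratio as above, which is why the paper works with the full conformal variant rather than this split-sample shortcut.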

Paper: https://www.pnas.org/doi/10.1073/pnas.2204569119

Recording Link: https://www.youtube.com/watch?v=AOyDjBSQjhk