NeurIPS’22: Paper by Dr. Margo Seltzer gets coveted oral presentation spot at conference
Part 2 in a series about some of the department’s accepted papers at NeurIPS 2022 (the conference runs Nov. 28 - Dec. 9)
Dr. Margo Seltzer, a professor in UBC Computer Science, and her co-authors from Duke University have two papers accepted at the upcoming NeurIPS conference, with one of the papers earning a prestigious oral presentation slot. Only a select two per cent of the accepted papers at this premier conference in AI and Machine Learning (ML) receive an oral presentation.
The paper:
Exploring the Whole Rashomon Set of Sparse Decision Trees
Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin
Seltzer and Rudin (Duke) have a longstanding and prolific collaboration. “She and I started collaborating about seven years ago, and we've had a wonderfully productive collaboration," Margo said. "I feel like I know many of her students quite well, even though we’ve never met in person (thank you COVID). In fact, one just asked me to be on her committee. We’ve also had UBC Systopia undergraduates, graduate students, and postdocs participate in some of this joint work.”
Improving Interpretable Machine Learning, one decision at a time
Interpretable Machine Learning is an area of machine learning in which a person can look at a model and understand exactly what it's doing when it makes predictions or classifications. This is the arena in which Seltzer’s UBC team and Rudin’s Duke team work together.
She explained with an example, “If I'm building you an ML model to help you decide whether you should bring an umbrella, I might ask, ‘Is it currently raining?’ If it is, I’d say bring the umbrella. If it's not, I might then ask if it’s cloudy. You may answer yes. So I might say, ‘Bring an umbrella.’ If you answer no, it’s not cloudy, then I might say, ‘Is it November in Vancouver?’ If you say yes, I’d say, ‘Bring an umbrella.’ But if you answered no, it’s not November, I’d then say ‘Don’t bring an umbrella.’"
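The umbrella example maps directly onto a tiny decision tree. The sketch below (in Python, with function and variable names of our own choosing rather than anything from the paper) simply encodes that sequence of questions, which is exactly what makes such a model easy to inspect:

    # Illustrative decision tree for the umbrella example above.
    # The feature names are hypothetical, chosen only to mirror the
    # sequence of questions in Margo's description.
    def should_bring_umbrella(is_raining: bool, is_cloudy: bool,
                              is_november_in_vancouver: bool) -> bool:
        if is_raining:                      # "Is it currently raining?"
            return True                     # yes -> bring the umbrella
        if is_cloudy:                       # "Is it cloudy?"
            return True                     # yes -> bring the umbrella
        return is_november_in_vancouver     # "Is it November in Vancouver?"

    # Example: not raining, not cloudy, but it is November in Vancouver.
    print(should_bring_umbrella(False, False, True))  # True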
Margo said that by looking at the sequence of questions and answers, a person can readily see the reasoning behind the model's final suggestion.
Currently, however, most machine learning methods produce only a single model from a set of training data. “But we've known for a long time there are a lot of models that perform well,” Margo said. The team posited that if users could examine all of the good models, they could then select a model based on their specific needs. For example, perhaps one model uses a sensitive feature such as race; the user may prefer another model built from features that are not sensitive. This work gives users exactly that ability.
“The problem until now,” Margo said, “is that there hasn’t been a way to produce all good models. Our work fully enumerates the Rashomon set, which is the set of all equally good models, for decision trees.”
The Rashomon set is the set of all almost-optimal models. Rashomon sets can be large in size and complicated in structure, particularly for highly nonlinear function classes that allow complex interaction terms, such as decision trees.
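One common way to make that definition concrete: fix an objective (for example, classification error plus a penalty on tree size) and a tolerance, and keep every tree whose objective is within that tolerance of the best achievable value. The snippet below is only a naive filter over a hypothetical explicit list of candidates; the paper's contribution is enumerating this set for sparse decision trees far more cleverly than scoring every possible tree one by one.

    # Naive sketch of a Rashomon set: keep every model whose objective is
    # within a tolerance epsilon of the best achievable objective.
    # `candidate_trees` and `objective` are placeholders, not the paper's API.
    def rashomon_set(candidate_trees, objective, epsilon=0.05):
        best = min(objective(t) for t in candidate_trees)
        return [t for t in candidate_trees if objective(t) <= best + epsilon]

    # Toy usage: "trees" are just labels here, scored by a made-up objective.
    scores = {"tree_a": 0.10, "tree_b": 0.12, "tree_c": 0.30}
    print(rashomon_set(list(scores), scores.get))  # ['tree_a', 'tree_b']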
A game changer
Margo explained how this research is potentially game-changing. She said it empowers users to pick the model they like. They can analyze the whole collection of models, identify which features of the samples are most important across the whole collection of models, and select the models that meet whatever criteria they want – for example, those that avoid sensitive features, match domain expert intuition, or are easiest to remember.
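For instance, once the whole collection of good models has been enumerated, that kind of selection can be as simple as filtering. The sketch below uses an entirely hypothetical representation (a dictionary per tree with made-up feature names); it illustrates only the selection step Margo describes, not how the set itself is produced.

    # Hypothetical post-hoc selection over an already-enumerated Rashomon set.
    # Each entry stands in for one equally good decision tree, described by
    # the features it uses and its depth; the representation is ours.
    good_models = [
        {"name": "tree_a", "features": {"race", "income"}, "depth": 3},
        {"name": "tree_b", "features": {"rainfall", "month"}, "depth": 2},
        {"name": "tree_c", "features": {"rainfall", "cloud_cover", "month"}, "depth": 4},
    ]

    # Keep only models that avoid a sensitive feature, then prefer the
    # shallowest (easiest to remember) among what remains.
    acceptable = [m for m in good_models if "race" not in m["features"]]
    chosen = min(acceptable, key=lambda m: m["depth"])
    print(chosen["name"])  # tree_b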
ML models are typically evaluated by the fraction of the time the model’s prediction or classification is accurate. “But, imagine that you have a skewed data set in which the vast majority of the samples predict True and only a small number predict False,” Margo said. “The simplest model that achieves high accuracy is one that always predicts True, but that is not a particularly useful model. In cases like that, we typically try to optimize for ‘balanced accuracy,’ that is, achieving the same accuracy on both True and False samples.”
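A quick made-up illustration of the difference: on a data set with 95 True samples and 5 False samples, the trivial always-True model scores 95 per cent accuracy but only 50 per cent balanced accuracy.

    # Hypothetical skewed data set: 95 True samples, 5 False samples.
    y_true = [True] * 95 + [False] * 5
    y_pred = [True] * 100          # the trivial "always predict True" model

    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    # Balanced accuracy: average the per-class accuracies.
    acc_true  = sum(p for t, p in zip(y_true, y_pred) if t) / 95
    acc_false = sum(not p for t, p in zip(y_true, y_pred) if not t) / 5
    balanced_accuracy = (acc_true + acc_false) / 2

    print(accuracy)            # 0.95
    print(balanced_accuracy)   # 0.5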
“Another thing we've shown is that even if you create the Rashomon set for all models based on accuracy, all the good models for other objective functions such as balanced accuracy are included in that set too. So you can perform the same kind of manual selection for different decision trees with these objectives.”
The second paper by Margo and her Duke co-authors accepted at the NeurIPS conference is:
FasterRisk: Fast and Accurate Interpretable Risk Scores
Jiachang Liu, Chudi Zhong, Boxuan Li, Margo Seltzer, Cynthia Rudin
Learn more about Margo and her research group Systopia.
In total, the department has 13 accepted papers by 9 professors at the NeurIPS conference. Read more about the accepted papers and their authors.