Thorsten Joachims

Cornell University

 

Batch Learning from Bandit Feedback

When: Thursday, September 13, 2018 @ 11am

Where: 1043 ERF

In this talk, I will explore methods for batch learning from logged bandit feedback (BLBF). Following the inductive principle of Counterfactual Risk Minimization for BLBF, the talk presents an approach to training linear models and deep networks from propensity-scored bandit feedback. The talk also touches on the use of observational partial-information feedback in the context of learning-to-rank.

Abstract:  Every time a system places an ad, presents a search ranking, or makes a recommendation, we can think about this as an intervention for which we can observe the user’s response (e.g., click, dwell time, purchase). Such logged intervention data is one of the most plentiful types of data available, as it can be recorded from a variety of systems (e.g., search engines, recommender systems, ad placement) at little cost. However, this data provides only partial-information feedback — aka “bandit feedback” — limited to the particular intervention chosen by the system. We don’t get to see how the user would have responded if we had chosen a different intervention. This makes learning from logged bandit feedback substantially different from conventional supervised learning, where “correct” predictions together with a loss function provide full-information feedback. It is also different from online learning in the bandit setting, since the algorithm does not assume interactive control of the interventions.
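To make the setting concrete, the following minimal Python sketch (not taken from the talk) illustrates the inverse-propensity-scoring (IPS) estimator that counterfactual risk minimization builds on: estimating a new policy's expected loss from data logged by a different policy. The data layout, variable names, and clipping threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged bandit data: for each context, the logging policy chose
# one of 3 actions with known propensity, and we observed a loss only for that action.
n, n_actions = 1000, 3
contexts = rng.normal(size=(n, 2))            # logged contexts (not used by this simple estimator)
actions = rng.integers(0, n_actions, size=n)  # action actually taken by the logging policy
propensities = np.full(n, 1.0 / n_actions)    # P(logged action | context) under the logging policy
losses = rng.uniform(0, 1, size=n)            # observed loss for the chosen action only

def ips_risk(new_policy_prob, clip=10.0):
    """IPS estimate of a new policy's expected loss from logged bandit feedback.

    new_policy_prob: array of shape (n,) with the new policy's probability of
    the *logged* action in each context. Clipping the importance weights is a
    common variance-control heuristic; the threshold here is arbitrary.
    """
    weights = new_policy_prob / propensities
    return np.mean(losses * np.clip(weights, 0.0, clip))

# Example: a uniform-random "new" policy assigns the same action probabilities
# as the logging policy, so its IPS risk estimate equals the average observed loss.
uniform_prob = np.full(n, 1.0 / n_actions)
print(ips_risk(uniform_prob))
```

Counterfactual risk minimization then searches over a policy class (e.g., linear models or deep networks) for parameters that minimize such a counterfactual risk estimate, typically with an added variance-based regularizer.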

Biography:  Thorsten Joachims is a Professor in the Department of Computer Science and in the Department of Information Science at Cornell University. His research interests center on a synthesis of theory and system building in machine learning, with applications in information access, language technology, and recommendation. His past research focused on counterfactual and causal inference, support vector machines, text classification, structured output prediction, convex optimization, learning to rank, learning with preferences, and learning from implicit feedback. In 2001, he finished his dissertation advised by Prof. Katharina Morik at the University of Dortmund. He is an ACM Fellow, AAAI Fellow, and Humboldt Fellow.
