Agenda (Pacific Time)
8:55 – 9:00: Welcome, Agenda and Opening Remarks (Shaili Jain, Sofus Macskassy)
9:00 – 9:45: Franziska Bell, VP Data & Analytics, BP
Invited Talk: How data science powers energy production
Abstract: In this talk I will discuss data science applications that have produced step-function changes in the energy industry. The energy industry is filled with rich data-driven problems and opportunities to change the world through data science, and advanced data science techniques have driven transformative changes to the operations and evolution of the industry.
Bio: Dr. Franziska Bell is the Vice President, Data & Analytics at bp. She heads the data & analytics discipline, which comprises data science, AI, data engineering, data management, and data analytics. Before joining bp, Fran was an executive at the Toyota Research Institute, where she focused on two areas: (i) novel battery and fuel cell materials using AI and computational chemistry for a low-emission future and (ii) human-centered AI. Previously, Fran was the head of data science platforms at Uber, where she founded and built several digital platform teams with the mission of transforming anyone in the company into a data scientist at the push of a button. Before Uber, Fran was a postdoc at Caltech, where she developed a novel, highly accurate approximate quantum molecular dynamics theory to calculate chemical reactions for large, complex systems, such as enzymes. Fran earned her Ph.D. in theoretical chemistry from UC Berkeley, focusing on developing highly accurate yet computationally efficient approaches which helped unravel the mechanism of non-silicon-based solar cells and the properties of organic conductors.
9:45 – 10:45: Data Collection and Learning (Session Chair: Shaili Jain)
- Challenges in Data Production for AI with Human-in-the-Loop, Dmitry Ustalov, Toloka (Yandex) Abstract:
Today, successful Artificial Intelligence applications rely on three pillars: machine learning algorithms, hardware for running them, and data for training and evaluating models. Although algorithms and hardware have already become commodities, obtaining up-to-date and high-quality data at scale is still challenging, but possible by building hybrid human-computer pipelines called human-in-the-loop. This talk will show how to make a significant business impact using human-in-the-loop pipelines that combine machine learning with crowdsourcing. We will share the experience of one of the world’s largest search engines, Yandex. After a brief introduction to human-in-the-loop, we will describe two insightful case studies with a significant business impact at Yandex. First, we will show how to use human-in-the-loop with subjective human opinions to gather training data for learning-to-rank models in the online setting, crucial for recommendation, e-commerce, and search applications. Second, we will show how human-in-the-loop combined with spatial crowdsourcing enables keeping information on brick-and-mortar businesses up-to-date and transforming it into structured data, essential for socially impactful applications like online maps and directories. Then, we will present the practical challenges of deploying human-in-the-loop pipelines, focusing on common issues with task design and quality control. We will demonstrate end-to-end task design techniques that are a better fit for open-ended and subjective questions than the widely-used classification tasks. We will present our recent advances in this field, including the use of large-scale language models (like T5 and BART) for sequence aggregation. Also, we will show new evaluation datasets for textual and subjective annotation. We will discuss the problem of reliable quality control in crowdsourcing by describing the relevant computational methods for aggregation, quality estimation, and model selection.
Finally, we will demonstrate Crowd-Kit, an open-source library that offers battle-tested and platform-agnostic implementations of all the above-described methods in Python. Overall, we will share our experience in running impactful human-in-the-loop pipelines in production while overcoming the common practical challenges using available and reliable open-source technologies, datasets, and tools.
Bio:
Dr. Dmitry Ustalov is the Head of Research at Toloka, https://toloka.ai/, a global data labeling platform that grew out of an in-house service for all Machine Learning-based products of the European tech giant Yandex. He is responsible for enabling state-of-the-art methods for quality control in Toloka and spreading the innovations made by the Toloka Research team. His research interests focus on Computational Semantics, Crowdsourcing, and Evaluation; he studies how words, meanings, and relationships between them can be efficiently gathered, computationally expressed, and carefully assessed so that humans can understand each other better. Dmitry received bachelor’s and master’s degrees from the Ural Federal University, Russia, a Ph.D. in Computer Science from the South Ural State University, Russia, and post-doctoral training from the University of Mannheim, Germany. Dr. Ustalov’s research is published in leading international scientific venues, such as NeurIPS, COLI, ACL, EACL, and EMNLP. He serves as a reviewer for NeurIPS, COLI, SWJ, ACL, EMNLP, COLING, ISWC, LREC, and other publications. Dmitry has been co-organizing the Crowd Science Workshop at NeurIPS and VLDB since 2020, the TextGraphs workshop at ACL since 2018, and the Russian Semantic Evaluation initiative since 2015. He co-authored and presented crowdsourcing and human-in-the-loop tutorials at top-tier scientific conferences on Artificial Intelligence, including NAACL-HLT ’21, WWW ’21, SIGMOD/PODS ’20, and WSDM ’20. Dr. Ustalov has more than eight years of teaching experience. He taught Distributed Systems in 2013–2017 at Ural Federal University and Text Analytics in 2018 at the University of Mannheim.
Since 2019, Dmitry has been teaching a crowdsourcing class at three premier advanced vocational training programs: the Yandex School of Data Analysis in Moscow, Y-DATA in Israel, and the Computer Science Center in Saint Petersburg. Also, his course on Graphs, Computation, and Language has recently been accepted at the prestigious ESSLLI 2022 summer school in Galway, Ireland.
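As a concrete illustration of the aggregation step in such pipelines, here is a minimal majority-vote aggregator in plain Python. Crowd-Kit ships battle-tested implementations of this and far more sophisticated methods (e.g., Dawid-Skene), so this sketch only shows the shape of the problem, not the library's API.

```python
from collections import Counter, defaultdict

def majority_vote(annotations):
    """Aggregate crowdsourced labels: each (task, worker, label) vote is
    tallied per task, and the most common label wins (ties broken
    alphabetically for determinism)."""
    by_task = defaultdict(list)
    for task, _worker, label in annotations:
        by_task[task].append(label)
    result = {}
    for task, labels in by_task.items():
        counts = Counter(labels)
        # Sort by descending count, then label, so ties resolve stably.
        result[task] = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[0][0]
    return result
```

Methods like Dawid-Skene improve on this by also estimating per-worker reliability, which matters when worker quality varies widely.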
- AI & Public Data for Humanitarian and Emergency Response, Alex Jaimes (Dataminr) Abstract:
When an emergency event, or an incident relevant for peacekeeping or humanitarian needs, first occurs, getting the right information as quickly as possible is critical to saving lives. When an event is ongoing, information on what is happening can be critical for making decisions to keep people safe and take control of the particular situation unfolding. In both cases, first responders, peacekeepers, and others have to quickly make decisions that include what resources to deploy and where. Fortunately, in most emergencies, people use social media to publicly share information. At the same time, sensor data is increasingly becoming available. But a platform to detect emergency situations and deliver the right information has to deal with ingesting thousands of noisy data points per second: sifting through and identifying relevant information, from different sources, in different formats, with varying levels of detail, in real time, so that relevant individuals and teams can be alerted at the right level and at the right time. In this talk I will describe the technical challenges in processing vast amounts of heterogeneous, noisy data in real time from the web and other sources, highlighting the importance of interdisciplinary research and a human-centered approach to address problems in humanitarian and emergency response. I will give specific examples and discuss relevant future research directions in Machine Learning, NLP, Information Retrieval, Computer Vision, and other fields, highlighting the role of knowledge combined with neural and other approaches. This talk will present an overview and draw from some of our publications at CVPR, AAAI, EMNLP, and others.
Bio:
Alex is Chief Scientist & SVP of AI at Dataminr. A leader in AI, as an engineering executive and scientist he has built and led AI teams at large companies such as Yahoo and at several startups, where he has led efforts to build AI products used by millions of people across multiple B2C and B2B industries (real-time event detection/emergency response, healthcare, self-driving cars, media, telecom, etc.). He has 15+ years of international experience in research (Columbia U., KAIST) and product impact at scale (Yahoo, Telefónica, IBM, Fuji Xerox, Siemens, AT&T Bell Labs, DigitalOcean, and IDIAP-EPFL) in the USA, Japan, Chile, Switzerland, Spain, and South Korea. He has been a professor (KAIST, South Korea), and has 100+ patents and publications (h-index 40) in top-tier conferences and journals on diverse topics in AI. His work has received 6K+ citations and he has been featured widely in the press (MIT Tech Review, CNBC, Vice, TechCrunch, Yahoo! Finance, etc.). He has given 100+ invited talks at top academic and industry conferences (UN AI for Good Global Summit, ICML & NeurIPS workshops, KDD, O’Reilly AI, Strata, Velocity, the Deep Learning Summit (Re-Work), Tech Open Air, the Future of Technology Summit, CogX, Stanford, Cornell, & Columbia Universities, etc.). He is a mentor at Endeavor (which leads the high-impact entrepreneurship movement around the world) and Techstars; he is a member of the advisory board of Digital Divide Data (a not-for-profit that creates sustainable tech opportunities for underserved youth, their families, and their communities in Asia and Africa), and was an early voice in Human-Centered AI (Computing). He is one of ten experts in the Colombian Government’s Artificial Intelligence Expert Mission, which will evaluate and produce concrete recommendations in the short, medium, and long term to implement an AI policy.
Colombia’s AI Expert Mission is one of the first of its kind in the region, and one of the first to focus on developing measures for education and employment policies for the fourth industrial revolution. Alex is an active member of the research community, publishing at and serving on the program committees of several top-tier conferences. He holds a Ph.D. and an M.S. from Columbia University.
- Scalable Attribute Extraction at Instacart, Shih-Ting Lin (Instacart) Abstract:
Structured attribute information extracted from natural text inputs has been extensively exploited in e-commerce to help improve the customer experience. For example, attributes can be extracted from product catalog data such as product names and product descriptions; similarly, attributes can also be extracted from user queries to the search engine. Having this attribute information available can greatly boost the relevance of many different functionalities such as search, recommendation, and ads. However, with the huge space of product categories and extensive details in the product information, extracting attribute information from text with high accuracy and high efficiency is an extremely challenging problem. In this talk, we will present the scalable machine learning-based attribute extraction pipeline we have built at Instacart for our online grocery business. We start our presentation with the unique challenges at Instacart in building our meta-catalog (a catalog on top of catalogs from different retailers), and how we work with a diverse set of attribute naming conventions from multiple sources. We will then talk about how we bootstrapped our attribute extraction work from scratch following a human-in-the-loop based solution, and trained our practical machine learning-based attribute extraction solution. We then present our achievement in unifying attribute extraction on both user search queries and product textual information, and how we tackle the problem of mitigating the vocabulary gap between user search queries and product textual information in the catalog. Finally, we present applications of our work in the real production environment and our learnings.
Bio:
Shih-Ting Lin is a machine learning engineer on the Instacart general machine learning team led by Min Xie. His recent work mainly focuses on building scalable machine learning solutions for information extraction on text data, including product catalog data and search queries, to help improve e-commerce applications. Prior to Instacart, he received his Master's degree from the computer science department at UT Austin. During his Master's studies, his research focused on learning NLP models, in areas such as question answering and temporal event modeling, that can generalize across domains.
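To make the attribute-extraction task concrete, a pattern-based baseline below pulls two illustrative attributes (a size and an "organic" flag) out of product titles. The attribute names and patterns are hypothetical; the production pipeline described in the talk uses learned, human-in-the-loop models rather than hand-written rules.

```python
import re

# Hypothetical, simplified attribute extractor. Real systems at scale use
# learned sequence-tagging models; this baseline only illustrates the task
# of turning free-form product text into structured attributes.
SIZE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s?(oz|lb|g|kg|ml|l)\b", re.I)
ORGANIC_PATTERN = re.compile(r"\borganic\b", re.I)

def extract_attributes(title):
    """Return a dict of structured attributes found in a product title."""
    attrs = {}
    m = SIZE_PATTERN.search(title)
    if m:
        attrs["size"] = {"value": float(m.group(1)), "unit": m.group(2).lower()}
    if ORGANIC_PATTERN.search(title):
        attrs["organic"] = True
    return attrs
```

The same extractor shape can, in principle, be run over search queries, which is one way to see the query/catalog vocabulary gap the talk discusses.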
10:45 – 11:00: 15-min BREAK
11:00 – 11:45: Barr Moses, Co-founder and CEO, Monte Carlo.
Invited Talk: The Rise of Data Observability: Architecting the Future of Data Trust
Abstract: As companies become increasingly data driven, the technologies underlying these rich insights have grown more and more nuanced and complex. While our ability to collect, store, aggregate, and visualize this data has largely kept up with the needs of modern data teams (think: domain-oriented data meshes, cloud warehouses, data visualization tools, and data modeling solutions), the mechanics behind data quality and integrity have lagged. To keep pace with data’s clock speed of innovation, data engineers need to invest not only in the latest modeling and analytics tools, but also in technologies that can increase data accuracy and prevent broken pipelines. The solution? Data observability, the next frontier of data engineering. I’ll discuss why data observability matters to building a better data quality strategy, and the tactics best-in-class organizations use to address it, including org structure, culture, and technology.
Bio: Barr Moses is CEO & Co-Founder of Monte Carlo, a data reliability company and creator of the industry’s first Data Observability Platform, backed by Accel, GGV, Redpoint, ICONIQ Growth, Salesforce Ventures, and other top Silicon Valley investors. Previously, she was VP Customer Operations at customer success company Gainsight, where she helped scale the company 10x in revenue and, among other functions, built the data/analytics team. Prior to that, she was a management consultant at Bain & Company and a research assistant at the Statistics Department at Stanford University. She also served in the Israeli Air Force as a commander of an intelligence data analyst unit. Barr graduated from Stanford with a B.Sc. in Mathematical and Computational Science.
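As a toy illustration of one observability signal (freshness), the sketch below flags tables that have not refreshed within an expected interval. The function and thresholds are hypothetical; observability platforms such as the one described above learn such thresholds from historical pipeline metadata rather than taking them as fixed inputs.

```python
def freshness_alerts(last_updated, now, expected_interval):
    """Flag tables whose most recent update is older than the expected
    refresh interval. Timestamps and intervals are plain numbers
    (e.g., epoch seconds) to keep the sketch dependency-free."""
    return sorted(
        table
        for table, updated_at in last_updated.items()
        if now - updated_at > expected_interval[table]
    )
```

Volume and schema checks follow the same pattern: compare an observed signal against an expected baseline and alert on deviation.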
11:45 – 12:25: Knowledge Representation (Session Chair: Sofus Macskassy)
- Mining Frequent Patterns on the Tax Knowledge Graph, Lalla Mouatadid (Intuit)
The Tax Knowledge Graph is a large-scale knowledge graph that captures the complicated U.S. and Canadian income tax compliance logic (both calculations and rules). It has helped transform Intuit’s flagship TurboTax product into a smart and personalized experience while accelerating and automating the tax preparation process for millions of customers. In this work, we describe how to mine frequent calculation patterns on the Tax Knowledge Graph in order to ease maintenance of the graph (the tax code changes yearly), reduce storage size, and increase the development and runtime speed of our tax engine.
Bio:
Lalla Mouatadid is a research scientist at Intuit Futures, the research and innovation group at Intuit. She leads advanced research on graph algorithms, currently focusing on mining and knowledge graphs. Lalla received her Ph.D. in 2018 in Theoretical Computer Science from the University of Toronto with a focus on Graph Theory and Algorithms. She has many publications in top-tier journals and conferences and has been invited to speak at top universities for her work on graph algorithms. http://www.cs.toronto.edu/~lalla/
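A minimal sketch of the frequent-pattern idea, under the simplifying assumption that each calculation node is reduced to the set of operations it uses: count co-occurring operation pairs and keep those above a support threshold. The real system mines richer graph-structured patterns; this only illustrates the support-counting core shared by frequent-pattern miners.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """Count how often each pair of operations co-occurs across
    calculation nodes; keep pairs meeting the support threshold."""
    counts = Counter()
    for ops in transactions:
        # Deduplicate and sort so ("add", "mul") and ("mul", "add") match.
        for pair in combinations(sorted(set(ops)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}
```

Patterns found this way can be factored out and shared, which is how mining supports smaller storage and easier yearly updates.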
- Graph Neural Networks for the Global Economy with Microsoft DeepGraph, Jaewon Yang, Alex Samylkin, Baoxu Shi (LinkedIn, Microsoft)
Graph Neural Networks (GNNs) are AI models that learn embeddings for the nodes in a graph and use the embeddings to perform prediction tasks. In this talk, we present how we developed GNNs for the LinkedIn economic graph. The LinkedIn economic graph is a digital representation of the global economy with 1B nodes and 200B edges, consisting of social graphs of members’ connections, activity graphs between members and other economic entities, and knowledge graphs about the attributes of members, companies, and job postings. By applying GNNs to this graph, we can utilize the full potential of the economic graph in many search and recommendation products across LinkedIn. The biggest challenge was to scale up GNNs to the massive scale of billions of nodes and edges. To address this challenge, we developed Microsoft DeepGraph, an open source library for large-scale GNN development. DeepGraph allows for training GNNs on large graphs by serving the graph in a distributed fashion with graph engine servers. In this talk, we will highlight the strengths of DeepGraph, such as supporting both PyTorch and TensorFlow, and integration with Azure ML and Azure Kubernetes Service. We will share lessons and findings from developing GNNs for various applications around the LinkedIn economic graph. We will explain how we combine graphs of a different nature (social, activity, and knowledge graphs) into one gigantic heterogeneous graph, and what algorithms we employed for the heterogeneous graph. We will present a few case studies, such as how we identify job postings with vague titles and replace them with more specific titles using GNNs.
Bio:
Jaewon Yang is a Senior Staff Software Engineer at LinkedIn, where he leads projects on building machine learning models for standardizing member profiles and job postings to build the LinkedIn Knowledge Graph. His research interests include information extraction, knowledge graph mining, and conversational AI. Prior to joining LinkedIn in 2014, he obtained a Ph.D. degree from the Stanford Infolab and a Master's in Statistics at Stanford University. He received the SIGKDD dissertation award, the ICDM KAIS journal best paper award, and the ICDM best paper award. Presentation link: WWW 2020 Tutorial
Baoxu Shi is a Staff Machine Learning and Relevance Engineer at LinkedIn, who mainly works on Knowledge Graph Representation Learning and Knowledge Graph Construction. Prior to LinkedIn, he obtained his Ph.D. degree from the University of Notre Dame with a focus on Knowledge Graph Completion and Knowledge Graph Mining. He regularly serves as a program committee member for conferences including AAAI, ACL, EMNLP, ICWSM, NAACL, and SDM. Presentation link: WWW 2020 Tutorial
Alex Samylkin is a Principal Software Engineer on the Microsoft Ads Marketplace team, where he builds training and evaluation infrastructure to productionize large-scale deep learning models. Prior to joining Microsoft, he worked at Facebook and Uber, focusing on developer experience. He received a PhD in Applied Mathematics from the Keldysh Institute of Applied Mathematics in Moscow, Russia.
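The core of a GNN layer is neighbor aggregation. A dependency-free sketch of one mean-aggregation step over a small graph is shown below; a distributed graph engine of the kind DeepGraph provides serves exactly this neighborhood lookup at billion-node scale, and the learned weight matrices and nonlinearities of a real GNN layer are omitted here.

```python
def message_passing_step(embeddings, adjacency):
    """One GNN-style aggregation step: each node's new embedding is the
    mean of its own embedding and its neighbors' embeddings.
    embeddings: {node: [float, ...]}, adjacency: {node: [neighbor, ...]}."""
    updated = {}
    for node, vec in embeddings.items():
        neighborhood = [vec] + [embeddings[n] for n in adjacency.get(node, [])]
        dim = len(vec)
        updated[node] = [
            sum(v[i] for v in neighborhood) / len(neighborhood)
            for i in range(dim)
        ]
    return updated
```

Stacking such steps lets information flow across multi-hop neighborhoods, which is what makes the heterogeneous economic graph useful for downstream ranking models.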
12:30 – 1:30pm: LUNCH
1:30 – 2:15: Vijay K Narayanan, Chief AI Officer, ServiceNow
Invited Talk: Successes and opportunities in Enterprise AI
Abstract: The advances in AI over the last decade have led to significantly better outcomes and improved experiences in consumer applications. Meanwhile, successful applications of AI in enterprises have been modest during this period. In this talk, I will provide an overview of the scope of Enterprise AI, how it is similar to and different from AI for consumer applications, a few successful applications, open problems and challenges, and the enormous opportunity to create better outcomes and transform experiences for employees and customers of enterprises.
Bio: Vijay Narayanan founded and leads the Advanced Technology Group (ATG), a customer-focused innovation group of researchers, applied scientists, and engineers at ServiceNow building smart user experiences using AI and related advanced technologies. Previously, he led the sciences and engineering group at Pinterest for all organic products, and earlier led teams building the Cloud AI platform and solutions at Microsoft. Even earlier, he worked on machine learning platforms, products, and solutions at Yahoo Labs and at FICO. He has deep and wide experience leveraging statistical analysis, machine learning, and scalable systems to drive innovative breakthroughs in new and existing product lines and services across domains.
2:15 – 3:15: User Preferences and Behavior (Session Chair: Lei Tang)
- Experiments with predictive long term guardrail metrics, Sri Sri Perangur (Lyft)
Most tech companies, such as Google, Amazon, and Netflix, run thousands of experiments (also known as A/B tests) a year [Reference]. The aim is to measure the impact new features have on core KPIs before deciding to launch them to production. Traditional A/B testing metrics will usually measure the impact of a feature on core KPIs in the short term. However, for many lines of business (such as loyalty and memberships), this is not enough, as we want to understand the impact of features in the mid/long term. This reality can force companies to run experiments for 6+ months, or to use a correlated leading metric (such as user activity or engagement level) with estimated impact in the long term. Neither situation is ideal: the first slows down the rate of innovation, while the second does not account for the multiple factors that shape future results. At Lyft, this reality is shared, and it becomes a challenge for innovation as we need to know the long-term impact before we decide to ship new features. As a solution, we designed forecasted metrics for retention and revenue at the user level that can be used to measure the long-term impact of experiments. In this talk we will discuss challenges and learnings from applying this approach in practice.
Bio:
Sri Sri Perangur is a Senior Data Scientist at Lyft; prior to Lyft she worked at Netflix and Spotify, among other companies. Over her 10+ years in Data Science, she has specialized in Experimentation (A/B testing, quasi-experiments, heterogeneous treatment effects, etc.) and Growth Data Science (multi-touch attribution modelling, customer lifetime value modelling, etc.), across the USA and the UK. You can find out more about her here: Bio Details
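The idea of a forecasted guardrail metric can be sketched as scoring each experiment user with a predictive long-term-value model and comparing group means. Here `predict_ltv` is a hypothetical stand-in for the retention/revenue forecasters described in the abstract, and a real analysis would also need a variance estimate for the resulting delta.

```python
def forecasted_guardrail_lift(control, treatment, predict_ltv):
    """Compare experiment groups on a forecasted long-term metric instead
    of a short-term KPI: score each user with a predictive model, then
    take the difference in group means (treatment minus control)."""
    def mean_score(users):
        return sum(predict_ltv(u) for u in users) / len(users)

    return mean_score(treatment) - mean_score(control)
```

Example usage with a toy forecaster: `forecasted_guardrail_lift(control, treatment, lambda u: 2.0 * u["trips"])` returns the predicted long-term lift from the treatment.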
- Near real time AI personalization for notifications at LinkedIn, Ajith Muralidharan (LinkedIn)
Notifications at LinkedIn are crucial for our members to stay informed about their network, discover professionally relevant content, conversations, and courses, and identify potential career opportunities. For the Notifications AI team, our mission is to use AI to notify the right members, about the right content, at the right time and frequency, through the right channel (push, in-app, or email) to maximize member value. In this talk we will give an overview of the AI systems and models behind these decisions. We will present the candidate generation systems as well as the final relevance layer, built on top of the Air Traffic Controller (ATC), to enable volume optimization, notification channel (badge, push, or email) selection, and state-aware, message-spacing-based delivery time optimization. We describe how we formulated a multi-objective optimization problem, considering multiple objectives that capture member and business impact on the entire ecosystem. This problem considers three types of utilities: whether a member visits, their engagement on the notifications, and their overall engagement on LinkedIn. We will explain the final decision function, derived from the multi-objective optimization formulation, and show that it can be applied in a streaming fashion. The final decision function is tuned online through a hyperparameter tuning solution developed at LinkedIn, which allows us to fine-tune tradeoffs in the multi-objective optimization approach. We will conclude with a discussion of some of the wins this has enabled, managing most of the notifications sent to our 774 million+ members.
Bio:
Ajith Muralidharan is a Sr. Staff AI Engineer at LinkedIn. He is a tech lead in Growth AI at LinkedIn, applying AI/ML to enable member retention and deliver notifications (offsite communications) at LinkedIn. He has architected a scalable ecosystem which enables the AI behind most notifications delivered from LinkedIn, ensuring that LinkedIn delivers value in a timely fashion to our members. He has also worked on feed and content relevance at LinkedIn, in addition to working on foundational technologies like reinforcement learning for recommendation products. Prior to joining LinkedIn, he worked at Sensys Networks, a global leader in deploying wireless sensors for measuring traffic, where he developed traffic simulation, measurement, prediction, and control systems. He obtained his Ph.D. in Control Systems from UC Berkeley. LinkedIn profile: https://www.linkedin.com/in/ajithmuralidharan/
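A linear decision function of the kind derived from such a multi-objective formulation can be sketched as follows. The utility names, weights, and threshold here are illustrative, not LinkedIn's actual tuned values; in practice the weights and threshold are what the online hyperparameter tuning adjusts.

```python
def should_send(p_visit, p_engage, p_session, weights, threshold):
    """Streaming-friendly decision: send the notification if the weighted
    sum of predicted utilities (visit, notification engagement, overall
    engagement) clears a tuned threshold."""
    score = (
        weights["visit"] * p_visit
        + weights["engage"] * p_engage
        + weights["session"] * p_session
    )
    return score >= threshold
```

Because the decision depends only on the current item's predicted utilities and fixed parameters, it can be evaluated per notification as events stream in.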
- Studying Long-Term User Behaviour Using Dynamic Time Warping for Customer Retention, Harsha Gwalani (Twitter)
Dynamic Time Warping (DTW) is an asynchronous alignment algorithm used to measure similarities between temporal sequences. DTW is advantageous for comparing sequences when the shape of the pattern is more important than the speed of events. It has been widely used for automated speech recognition, pattern detection in stock pricing data, studying energy consumption patterns for appliances, etc. In this presentation, we discuss the use of dynamic time warping for understanding long-term usage patterns for users on Twitter. Time series data are more useful than aggregated and/or snapshot user features for understanding repeat user behavior and building user narratives. We utilize the time series of different user metrics as temporal signals and cluster them to identify specific usage and engagement patterns for churning users (users who become inactive). This approach led to more accurate opportunity sizing for the different user personas, which in turn helped prioritize interventions for customer retention. We will discuss the implementation of this approach for large-scale data, interpreting the time series clusters in a human-understandable manner, and the challenges associated with multi-dimensional time series data.
Bio:
Harsha Gwalani is a Data Scientist at Twitter, currently working with the Notifications team. She has been leading the efforts to leverage notifications for customer retention and churn prevention at Twitter. Previously, she received a Ph.D. in Computer Science from the University of North Texas in 2019. Her research interests include spatial/temporal clustering algorithms, location-allocation, constrained optimization, and disease outbreak modeling.
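For reference, the classic DTW dynamic program underlying this kind of analysis looks like the following: an exact O(nm) recurrence with absolute-difference cost. Production use on long series typically adds a warping-window constraint, which this sketch omits.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences.
    d[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each step may advance either sequence or both (the warping moves)."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Note how a sequence and a time-stretched copy of it get distance zero, which is exactly the "shape over speed" property that makes DTW suitable for comparing user activity curves.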
3:15 – 3:30: 15-min BREAK
3:30 – 4:15: Haixun Wang, VP Engineering and Distinguished Scientist, Instacart
Invited Talk: Rethink e-Commerce Search
Abstract: The quality of the search experience on an e-commerce site plays a critical role in customer conversion and the growth of the e-commerce business. In this talk, I will discuss the current status and challenges of product search. In particular, I will highlight the significant amount of effort it takes to create a high-quality product search engine using classical information retrieval methods. Then, I will discuss how recent advances in NLP and deep learning, especially the advent of large pre-trained language models, may change the status quo. While embedding-based retrieval has the potential to improve classical information retrieval methods, creating a machine learning-based, end-to-end system for general-purpose web search is still extremely difficult. Nevertheless, I will argue that product search for e-commerce may prove to be an area where deep learning can create the first disruption to classical information retrieval systems.
Bio: Haixun Wang is an IEEE Fellow, Editor-in-Chief of the IEEE Data Engineering Bulletin, and VP of Engineering and Distinguished Scientist at Instacart. Before Instacart, he was a VP of Engineering and Distinguished Scientist at WeWork, a Director of Natural Language Processing at Amazon, and he led the NLP team working on Query and Document Understanding at Facebook. From 2013 to 2015, he was with Google Research working on natural language processing. From 2009 to 2013, he led research in semantic search, graph data processing systems, and distributed query processing at Microsoft Research Asia. He was a research staff member at the IBM T. J. Watson Research Center from 2000 to 2009. He received his Ph.D. in Computer Science from the University of California, Los Angeles in 2000. He has published more than 150 research papers in refereed international journals and conference proceedings. He served as PC Chair of conferences such as SIGKDD'21, and he is on the editorial boards of journals such as IEEE Transactions on Knowledge and Data Engineering (TKDE) and the Journal of Computer Science and Technology (JCST). He won the best paper award at ICDE 2015, the 10-year best paper award at ICDM 2013, and the best paper award at ER 2009.
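Embedding-based retrieval in its simplest form is a similarity scan over product vectors, as sketched below. At e-commerce scale this exact scan is replaced by an approximate nearest-neighbor index, and the vectors come from the kind of pre-trained language models discussed in the talk; the toy two-dimensional vectors here are purely illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def embedding_retrieve(query_vec, product_vecs, k):
    """Rank products by cosine similarity to the query embedding and
    return the top k product ids (exact scan, not an ANN index)."""
    ranked = sorted(
        product_vecs,
        key=lambda p: cosine(query_vec, product_vecs[p]),
        reverse=True,
    )
    return ranked[:k]
```

Contrast this with classical keyword retrieval: an embedding scan can match "soda" to a product titled "sparkling cola" with no shared terms, which is the potential upside the talk weighs against the difficulty of end-to-end learned systems.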
4:15 – 5:15: Optimization and Learning Systems (Session Chair: Lei Tang)
- The Incentives Platform at Lyft, Alex Wood-Doughty, Cam Bruggeman (Lyft)
The Incentives Platform team at Lyft has developed a platform for applying new methodologies at the intersection of causal inference, machine learning, and reinforcement learning to problems at scale. We utilize heterogeneous treatment effect algorithms to predict how different users (riders, drivers) will respond to a specific treatment (coupon, incentive, message, etc.). We can then apply various optimization algorithms to choose which users get which treatment, while using bandit methodologies to balance the explore/exploit trade-off. This platform dramatically increases the degree to which we can customize the user experience and hit business goals while reducing the operational load of doing so. The platform lets us understand how our users differ, optimally target users based on individual treatment effect predictions, and evaluate the results of these predictions. The platform is built in a flexible way that allows us to plug-and-play different algorithms, which lets us compare performance and develop improvements. We have integrated Off-Policy Evaluation into the platform, allowing us to make unbiased (backtesting) evaluations of causal effects without needing to run an A/B test. While the scale of our data and the complexity of these algorithms require substantial engineering infrastructure, we have built the platform in a modular way that allows for separation between the science and engineering code. This makes it easy for data scientists to iterate on these models without worrying (as much) about infrastructure or distributed systems.
Bio:
Alex Wood-Doughty is a Staff Data Scientist at Lyft on the Incentives Platform team. His work focuses on heterogeneous treatment effect models and scaling this methodology to production systems. He has a Ph.D. in Economics from the University of California, Santa Barbara.
Cameron Bruggeman is a Data Science Manager at Lyft on the Incentives Platform team. He previously led Lyft’s ETA prediction and bike operations teams, and has done extensive work on marketplace experimentation. He has a Ph.D. in Mathematics from Columbia University.
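A stripped-down stand-in for the heterogeneous-treatment-effect step: estimate a per-segment uplift as the difference in mean outcomes between treated and control users in logged data. The platform described above learns such effects at the individual level with ML models; this difference-in-means sketch only shows the quantity being estimated.

```python
from collections import defaultdict

def estimate_uplift(history):
    """Per-segment treatment-effect estimates from logged
    (segment, treated, outcome) records: mean outcome under treatment
    minus mean outcome under control, for each segment."""
    # sums[segment] = [treated_sum, treated_n, control_sum, control_n]
    sums = defaultdict(lambda: [0.0, 0, 0.0, 0])
    for segment, treated, outcome in history:
        s = sums[segment]
        if treated:
            s[0] += outcome
            s[1] += 1
        else:
            s[2] += outcome
            s[3] += 1
    return {seg: s[0] / s[1] - s[2] / s[3] for seg, s in sums.items()}
```

Given such estimates, the targeting step reduces to assigning each user the treatment with the highest predicted effect, subject to budget constraints handled by the optimization layer.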
- Exploration in Recommender Systems, Minmin Chen (Google)
In an era of increasing choices, recommender systems are becoming indispensable in helping users navigate the millions or billions of pieces of content on recommendation platforms. Most recommender systems are powered by ML models trained on large amounts of user-item interaction data. Such a setup, however, induces a strong feedback loop that creates a rich-get-richer phenomenon in which head content gets more and more exposure while tail and fresh content goes undiscovered. At the same time, it pigeonholes users into content they are already familiar with. We believe exploration is key to breaking away from the feedback loop and optimizing long-term user experience on recommendation platforms. The exploration-exploitation tradeoff, the foundation of bandit and RL research, has been studied extensively in those fields. While effective exploration is believed to positively influence the user experience on the platform, the exact value of exploration in recommender systems has not been well established. In this talk, we examine the roles of exploration in recommender systems in three facets: 1) system exploration to surface fresh/tail recommendations based on users’ known interests; 2) user exploration to identify unknown user interests or introduce users to new interests; and 3) online exploration to utilize real-time user feedback to reduce extrapolation errors in performing system and user exploration. We discuss the challenges in measurement and optimization for the different types of exploration, and propose initial solutions. We showcase how each aspect of exploration contributes to the long-term user experience through offline and live experiments on industrial recommendation platforms. We hope this talk can inspire more follow-up work in understanding and improving exploration in recommender systems.
Bio:
Minmin Chen is a Research Scientist at Google. She received her PhD from Washington University in St. Louis. Her main research interests are reinforcement learning and bandit algorithms for applications in recommender systems. She publishes at ML and RecSys conferences such as NeurIPS, ICML, ICLR, KDD, WSDM and RecSys, and regularly serves as an area chair or senior program committee member for NeurIPS, ICML, ICLR, AISTATS and AAAI. Email: email@example.com Google Scholar: https://scholar.google.com/citations?user=kR7DersAAAAJ&hl=en
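The exploration-exploitation tradeoff the abstract refers to can be illustrated with a minimal epsilon-greedy bandit sketch. All item names, click rates, and the reward simulation below are hypothetical illustrations, not details from the talk:

```python
import random

def epsilon_greedy_recommend(estimates, epsilon=0.1, rng=random):
    """With probability epsilon, explore a random item; otherwise exploit
    the item with the highest estimated reward (the user's known interests)."""
    if rng.random() < epsilon:
        return rng.choice(list(estimates))       # exploration
    return max(estimates, key=estimates.get)     # exploitation

def update(estimates, counts, item, reward):
    """Incremental-mean update of an item's estimated reward."""
    counts[item] += 1
    estimates[item] += (reward - estimates[item]) / counts[item]

# Toy setup: the "tail" item actually has the best click rate, but every
# item starts with an estimate of zero, so pure exploitation would lock
# onto whichever item happens to get early clicks.
true_ctr = {"head_item": 0.30, "tail_item": 0.55, "fresh_item": 0.40}
estimates = {item: 0.0 for item in true_ctr}
counts = {item: 0 for item in true_ctr}

random.seed(0)
for _ in range(5000):
    item = epsilon_greedy_recommend(estimates, epsilon=0.1)
    reward = 1.0 if random.random() < true_ctr[item] else 0.0
    update(estimates, counts, item, reward)
```

With enough exploration, the tail item's estimate converges toward its true rate and it overtakes the head item, which is the feedback-loop-breaking effect the abstract describes.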
- A Practical Guide to Robust Multimodal Machine Learning and Its Application in Education, Zitao Liu (TAL Education Group)
Recently we have seen a rapid rise in the amount of education data available through the digitization of education. This data usually comes as a mixture of images, videos, speech, text, etc. It is crucial to consider data from different modalities to build successful applications in AI in education (AIED). This talk targets AI researchers and practitioners who are interested in applying state-of-the-art multimodal machine learning techniques to tackle some of the hard-core AIED tasks, such as automatic short answer grading, student assessment, class quality assurance, and knowledge tracing. In this talk, I will share some recent developments in successfully applying multimodal learning approaches in AIED, with a focus on classroom multimodal data. Beyond introducing recent advances in computer vision, speech, and natural language processing in education respectively, I will discuss how to combine data from different modalities and build AI-driven educational applications on top of these data. Participants will learn about recent trends and emerging challenges in this topic, representative tools and learning resources for obtaining ready-to-use models, and how related models and techniques benefit real-world AIED applications.
Bio:
Zitao Liu is the Head of Engineering, Xueersi 1 on 1 at TAL Education Group (NYSE:TAL), one of the leading education and technology enterprises in China. His research is in the area of machine learning, with contributions in artificial intelligence in education, multimodal knowledge representation, and user modeling. He has published in highly ranked conference proceedings such as NeurIPS, AAAI, WWW, and AIED, serves on the executive committee of the International AI in Education Society, and serves as an organizer or program committee member for top-tier AI conferences and workshops. He won 1st place in the NeurIPS 2020 Education Challenge (Task 3), 1st place in the UbiComp 2020 time series classification challenge, 1st place in the CCL 2020 humor computation competition, and 2nd place in the EMNLP 2020 ClariQ challenge. He is an ACM/CCF Distinguished Speaker and a recipient of the Beijing Nova Program 2020. Before joining TAL, Zitao was a senior research scientist at Pinterest and received his Ph.D. in Computer Science from the University of Pittsburgh.
5:15 – 5:30: Closing Remarks (Lei Tang, Shaili Jain)