WSDM Cup 2023 features a USD20,000 prize pool distributed across five tasks. The top-3 teams of each task will also receive one full conference registration to attend and present their solutions in Singapore.
Unbiased Learning to Rank and Pre-training for Web Search
Task Sponsors: Baidu and Michigan State University
Prize Pool: USD7,000 across two tasks, plus sponsored conference registration for representatives of the top-3 teams of each task
Tasks abstract: Learning to rank (LTR), which aims to measure documents' relevance with respect to queries, is a popular research topic in information retrieval with wide practical use in web search engines, e-commerce, and streaming services. With the rise of deep learning, the heavy burden of data annotation has driven academia and industry to study learning to rank from implicit user feedback and pre-training language models (PLMs) with self-supervised learning. However, directly optimizing a model on click data yields unsatisfactory performance because of the strong biases in implicit user feedback, such as position bias, trust bias, and click necessity bias. Unbiased learning to rank (ULTR) has therefore been proposed to debias user feedback with counterfactual learning algorithms. Yet real-world user feedback can be more complex than synthetic feedback generated under specific user-behavior assumptions, such as a position-dependent click model, and ULTR algorithms that perform well on synthetic datasets may not perform consistently well in real-world scenarios. Furthermore, it is nontrivial to directly apply recent advances in PLMs to web-scale search engine systems, since explicitly capturing the comprehensive relevance between queries and documents is crucial to the ranking task. Existing pre-training objectives, whether sequence-based tasks (e.g., masked token prediction) or sentence pair-based tasks (e.g., permuted language modeling), learn contextual representations from intra- and inter-sentence coherence, which cannot be straightforwardly adapted to model query-document relevance. Therefore, in this competition, we focus on unbiased learning to rank and pre-training for web search on a real, long-tail user feedback dataset from Baidu Search (Baidu, the biggest Chinese search engine serving hundreds of millions of monthly active users, has both the ambition and the responsibility to promote technical development in the community).
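To make the counterfactual debiasing idea concrete, here is a minimal Python sketch (not part of the official task) of inverse propensity weighting, one standard ULTR technique: clicks are reweighted by the inverse of an assumed position-dependent examination propensity, so that documents shown at low ranks are not unfairly penalized for receiving few clicks. All values below (the clicks, scores, and the 1/rank propensity curve) are hypothetical placeholders.

```python
import numpy as np

# Toy example: one query with 5 displayed documents (all values hypothetical).
# clicks[k] = 1 if the document shown at position k was clicked.
clicks = np.array([1, 0, 1, 0, 0], dtype=float)

# Examination propensities under a position-based click model:
# P(examined | position k); a simple 1/(k+1) curve is assumed here.
propensities = 1.0 / (np.arange(5) + 1.0)

# The ranker's predicted relevance scores for the 5 documents.
scores = np.array([2.1, 0.3, 1.7, 0.9, -0.2])

def ipw_pointwise_loss(scores, clicks, propensities):
    """Inverse-propensity-weighted logistic loss: each click is reweighted
    by 1/propensity, correcting for the fact that low-ranked documents
    are examined (and hence clicked) less often."""
    probs = 1.0 / (1.0 + np.exp(-scores))   # sigmoid relevance probability
    weights = clicks / propensities         # debiased positive labels
    loss = -(weights * np.log(probs) + (1.0 - clicks) * np.log(1.0 - probs))
    return loss.mean()

print(ipw_pointwise_loss(scores, clicks, propensities))
```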
Multilingual Information Retrieval Across a Continuum of Languages
Task Sponsors: University of Waterloo and Huawei Noah's Ark Lab
Prize Pool: USD2,000, USD1,000, and USD500 for the top-3 teams of each task, plus conference registrations
Task abstract: MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) focuses on the evaluation of monolingual retrieval systems across a continuum of 18 diverse languages. These languages span diverse typologies, origins, language families, and levels of external resource coverage, including both high- and low-resource languages, i.e., languages whose textual web presence and dataset availability are high or low, respectively. Prior work and datasets have focused on retrieval in English, overshadowing progress on monolingual retrieval in other languages and on multilingual retrieval across languages. The goal of our competition is to better evaluate the progress of retrieval systems and to help identify their limitations across diverse language settings. MIRACL has two tracks: Known-Languages Retrieval and Surprise-Languages Retrieval. We require participants to submit a ranked list of the most relevant passages for both tracks. Each participant will be evaluated on retrieval performance measured by NDCG@10, averaged over the 16 known languages (Known-Languages Track) or the 2 surprise languages (Surprise-Languages Track). To foster the training of multilingual systems, we provide the MIRACL dataset: a large, balanced monolingual retrieval dataset containing human-annotated data for 18 languages. In total, MIRACL provides 71k queries and 726k annotation pairs. We provide baselines built on different model architectures, such as BM25 and mDPR. We hope the competition encourages participants to collaborate and develop systems that are robust, efficient, and able to retrieve information quickly and effectively across a multitude of languages.
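For reference, NDCG@10, the metric used in both tracks, can be computed as in the minimal sketch below; the relevance labels and per-language scores are made-up placeholders, and this is not the official evaluation script.

```python
import numpy as np

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    return float(np.sum(rel / discounts))

def ndcg_at_k(ranked_relevances, k=10):
    """NDCG@k: DCG of the submitted ranking divided by the ideal DCG."""
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical binary relevance labels of the top-10 passages for one query.
print(ndcg_at_k([1, 0, 1, 1, 0, 0, 1, 0, 0, 0]))

# A track score would then average per-query NDCG@10 within each language
# and average the per-language means (placeholder numbers below).
per_language = {"lang_a": 0.41, "lang_b": 0.37}
print(sum(per_language.values()) / len(per_language))
```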
Visual Question Answering Challenge
Task Sponsor: Toloka
Prize Pool: USD6,000 plus sponsored conference registration for representatives of the top-3 teams
Task abstract: We challenge you with a visual question answering task! Given an image and a textual question, draw a bounding box around the object that correctly answers the question. For example, a bathroom photo might come with the question “Where do I wash my hands?”, with the sink selected as the answer. For every image-question pair the answer is present in the image, and there is only one correct answer per image. The winner is the team that provides correct answers to the largest number of questions, as quantified by the average intersection over union (IoU) score on the hidden test set.
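For intuition about the metric, intersection over union for two axis-aligned bounding boxes can be computed as in the short sketch below; the (x1, y1, x2, y2) coordinate convention and the example boxes are illustrative assumptions, not the official scoring code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical prediction vs. ground truth for one image-question pair.
pred = (40, 60, 200, 180)    # predicted sink bounding box
truth = (50, 70, 210, 190)   # annotated ground-truth box
print(round(iou(pred, truth), 3))  # ~0.753
```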