The first Learning and Mining with Noisy Labels Challenge is part of IJCAI-ECAI 2022. The dataset is CIFAR-N, a recently collected set of human annotations for CIFAR. The goal of the challenge is to prepare the community for the shift from synthetic label noise benchmarks to tasks with real human-annotated data. Unlike the WebVision challenge, CIFAR-N is small in size and consists of easily accessible, easy-to-use, and verifiable datasets, which makes training and evaluation on CIFAR-N very resource-friendly.
This year, we organize two tasks to evaluate the knowledge and representations learned on CIFAR-N:
(1) Image Classification Task.
(2) Label Noise Detection Task.
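For reference, below is a minimal sketch of how one might load the CIFAR-10N labels for either task. It assumes the file layout of the public CIFAR-N release (a CIFAR-10_human.pt file with keys such as 'clean_label', 'aggre_label', 'worse_label', and 'random_label1'); these key names and file paths are assumptions taken from that release, not official challenge instructions, so please check the data page for the exact format.

    # Minimal sketch for loading CIFAR-10N labels (assumptions noted above).
    import numpy as np
    import torch
    from torchvision.datasets import CIFAR10

    # Standard CIFAR-10 training images; their ordering matches the CIFAR-N labels.
    train_set = CIFAR10(root='./data', train=True, download=True)

    # Human-annotated label sets released with CIFAR-N (assumed file name and keys).
    noise_file = torch.load('./data/CIFAR-10_human.pt')
    clean_labels = np.array(noise_file['clean_label'])
    noisy_labels = np.array(noise_file['worse_label'])  # e.g. the "Worst" label set

    # Task (1): train a classifier on (train_set.data, noisy_labels) and
    # evaluate accuracy on the clean CIFAR-10 test set.

    # Task (2): predict, for each training example, whether its given label is
    # corrupted; the ground-truth corruption indicator is:
    is_corrupted = (noisy_labels != clean_labels).astype(int)
    print(f'Noise rate of this label set: {is_corrupted.mean():.2%}')

The same pattern applies to the other CIFAR-10N label sets (e.g. 'aggre_label' or the three 'random_label*' keys) by swapping the key used for noisy_labels.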
Participants can form groups and take part in one or both tasks. Please make sure that you have thoroughly read the task descriptions before joining a task. Each group must submit a report describing their method and discussing the results. The report follows the formatting guidelines of the IJCAI-ECAI 2022 main track. Note that the report should be 2-8 pages and describe your method and technical details. We also require each group to include a GitHub link to their code in the report. The code should clearly describe the model selection procedure for each task; this is crucial because we will examine the correctness of each method. Registration and report submission are done via this link. The timeline is scheduled as follows.
Click the above red button to complete a preliminary survey regarding your preferences for participation.
Click the above red button to complete a preliminary survey regarding your preferences for serving as a PC member.
If you have further questions about the challenge, please contact us via lmnl.challenge@gmail.com.