REsponsible & Accountable Learning (REAL)
@ University of California, Santa Cruz

Basic Content

Download

On the Download page, we provide the research community with:
A download link to access the noisy labels in CIFAR-N;
Starter code (in PyTorch) for training on CIFAR-N with the cross-entropy (CE) loss (a minimal sketch follows below).
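
Below is a minimal sketch of such a starter script in PyTorch: it loads the human-annotated noisy labels, swaps them in for the clean CIFAR-10 training targets, and trains a classifier with the plain CE loss. The file name "CIFAR-10_human.pt" and the dictionary key "aggre_label" are assumptions about the released label format; please check the actual download for the exact names.

# Minimal sketch: train a classifier on CIFAR-10 images paired with CIFAR-10N
# noisy labels using the standard cross-entropy (CE) loss.
# The file name and dictionary key below are assumptions about the released format.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the human-annotated noisy labels (assumed file name and key).
noise_file = torch.load("CIFAR-10_human.pt")
noisy_labels = noise_file["aggre_label"]  # e.g., the Aggregate label set

transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_set.targets = list(noisy_labels)  # replace the clean labels with noisy ones
loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                     shuffle=True, num_workers=2)

model = torchvision.models.resnet18(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

model.train()
for epoch in range(100):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Any noise-robust learning method can be evaluated on CIFAR-N by replacing the plain CE loss in this loop with the method of interest.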

Observations

On the Observations page, we share our major observations on CIFAR-N, a set of real-world, human-annotated noisy labels.

Leaderboard

We welcome researchers to share the performance of their methods on CIFAR-N and contribute to a growing leaderboard.

Contributors

Researchers who contribute to CIFAR-N or the leaderboard.

Noise levels of CIFAR-N

Label Set             Noise Rate
CIFAR-10N Aggregate    9.03%
CIFAR-10N Random 1    17.23%
CIFAR-10N Random 2    18.12%
CIFAR-10N Random 3    17.64%
CIFAR-10N Worst       40.21%
CIFAR-100N Coarse     25.60%
CIFAR-100N Fine       40.20%
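
The noise rate of each label set is the fraction of training images whose human annotation disagrees with the original (clean) CIFAR label. A small sketch of this computation, again assuming the released labels are stored in a .pt file with per-image arrays under keys such as "clean_label" and "aggre_label" (both names are assumptions):

# Sketch: compute the noise rate of a CIFAR-10N label set as the fraction of
# human annotations that differ from the original clean CIFAR-10 labels.
# File name and dictionary keys are assumptions about the released format.
import numpy as np
import torch

labels = torch.load("CIFAR-10_human.pt")    # assumed file name
clean = np.asarray(labels["clean_label"])   # original CIFAR-10 labels (assumed key)
noisy = np.asarray(labels["aggre_label"])   # e.g., the Aggregate label set (assumed key)

noise_rate = np.mean(clean != noisy)        # fraction of disagreeing labels
print(f"Noise rate: {noise_rate:.2%}")      # roughly 9% for the Aggregate set per the table above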

Reference

If you use this dataset or our reproduced results, please cite:

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations.

Jiaheng Wei*, Zhaowei Zhu*, Hao Cheng, Tongliang Liu, Gang Niu, and Yang Liu. (*: equal contributions)

The BibTeX entry is:
@inproceedings{wei2022learning,
  title={Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations},
  author={Jiaheng Wei and Zhaowei Zhu and Hao Cheng and Tongliang Liu and Gang Niu and Yang Liu},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=TBWA6PLJZQm}
}

Contact

Please contact us at {yangliu, jiahengwei, zwzhu, haocheng}@ucsc.edu if you have any questions or concerns regarding this dataset.