| Rank | Method | Test accuracy (%) | Reference |
|-----:|--------|-------------------|-----------|
| 1 | ProMix | 96.34±0.23 | ProMix: Combating Label Noise via Maximizing Clean Sample Utility. (Code) |
| 2 | PLS | 93.78±0.30 | PLS: Robustness to Label Noise with Two-Stage Detection. (Code) |
| 3 | ILL | 93.55±0.14 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations. (Code) |
| 4 | SOP | 93.24±0.21 | Robust Training under Label Noise by Over-Parameterization. (Code) |
| 5 | Protosemi | 92.97±0.18 | Rethinking Noisy Label Learning in Real-World Annotation Scenarios from the Noise-Type Perspective. (Code) |
| 6 | PES (semi) | 92.68±0.22 | Understanding and Improving Early Stopping for Learning with Noisy Labels. (Code) |
| 7 | DivideMix | 92.56±0.42 | DivideMix: Learning with Noisy Labels as Semi-Supervised Learning. (Code) |
| 8 | CORES* | 91.66±0.09 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 9 | ELR+ | 91.09±1.60 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 10 | BKD | 87.41±0.28 | Blind Knowledge Distillation for Robust Image Classification. (Code) |
| 11 | CAL | 85.36±0.16 | A Second-Order Approach to Learning with Instance-Dependent Label Noise. (Code) |
| 12 | Co-teaching | 83.83±0.13 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels. (Code) |
| 13 | CORES | 83.60±0.53 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 14 | ELR | 83.58±1.13 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 15 | JoCoR | 83.37±0.30 | Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization. (Code) |
| 16 | Co-teaching+ | 83.26±0.17 | How Does Disagreement Help Generalization against Label Corruption? (Code) |
| 17 | Negative-LS | 82.99±0.36 | Understanding Generalized Label Smoothing when Learning with Noisy Labels. |
| 18 | Positive-LS | 82.76±0.53 | Does Label Smoothing Mitigate Label Noise? |
| 19 | F-div | 82.53±0.52 | When Optimizing f-Divergence is Robust with Label Noise? (Code) |
| 20 | Peer Loss | 82.53±0.52 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates. (Code) |
| 21 | NVRM | 81.19±0.05 | Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting. (Code) |
| 22 | GCE | 80.66±0.35 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. (Code) |
| 23 | VolMinNet | 80.53±0.20 | Provably End-to-End Label-Noise Learning without Anchor Points. (Code) |
| 24 | T-Revision | 80.48±1.20 | Are Anchor Points Really Indispensable in Label-Noise Learning? (Code) |
| 25 | Forward-T | 79.79±0.46 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |
| 26 | CE | 77.69±1.55 | |
| 27 | Backward-T | 77.61±1.05 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |