| # | Method | Accuracy (%) | Paper |
|---|--------|--------------|-------|
| 1 | SOP | 93.24±0.21 | Robust Training under Label Noise by Overparameterization. (Code) |
| 2 | PES (semi) | 92.68±0.22 | Understanding and Improving Early Stopping for Learning with Noisy Labels. (Code) |
| 3 | DivideMix | 92.56±0.42 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning. (Code) |
| 4 | CORES* | 91.66±0.09 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 5 | ELR+ | 91.09±1.60 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 6 | CAL | 85.36±0.16 | A Second-Order Approach to Learning with Instance-Dependent Label Noise. (Code) |
| 7 | Co-teaching | 83.83±0.13 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels. (Code) |
| 8 | CORES | 83.60±0.53 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 9 | ELR | 83.58±1.13 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 10 | JoCoR | 83.37±0.30 | Combating Noisy Labels by Agreement: A Joint Training Method with Co-regularization. (Code) |
| 11 | Co-teaching+ | 83.26±0.17 | How does Disagreement Help Generalization against Label Corruption? (Code) |
| 12 | Negative-LS | 82.99±0.36 | Understanding Generalized Label Smoothing when Learning with Noisy Labels. |
| 13 | Positive-LS | 82.76±0.53 | Does Label Smoothing Mitigate Label Noise? |
| 14 | F-div | 82.53±0.52 | When Optimizing f-Divergence is Robust with Label Noise? (Code) |
| 15 | Peer Loss | 82.53±0.52 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates. (Code) |
| 16 | NVRM | 81.19±0.05 | Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting. (Code) |
| 17 | GCE | 80.66±0.35 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. (Code) |
| 18 | VolMinNet | 80.53±0.20 | Provably End-to-end Label-noise Learning without Anchor Points. (Code) |
| 19 | T-Revision | 80.48±1.20 | Are Anchor Points Really Indispensable in Label-Noise Learning? (Code) |
| 20 | Forward-T | 79.79±0.46 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |
| 21 | CE | 77.69±1.55 | Standard cross-entropy baseline. |
| 22 | Backward-T | 77.61±1.05 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |
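As a point of reference for the robust-loss entries above, GCE (row 17) replaces cross entropy with the loss L_q(p, y) = (1 − p_y^q) / q, which interpolates between cross entropy (as q → 0) and MAE (at q = 1). A minimal NumPy sketch follows; the function name and array layout are illustrative, not taken from the paper's released code:

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized Cross Entropy: L_q(p, y) = (1 - p_y^q) / q.

    Interpolates between cross entropy (q -> 0) and MAE (q = 1),
    trading convergence speed for robustness to noisy labels.
    probs: (N, C) predicted class probabilities; labels: (N,) int class ids.
    """
    # Probability assigned to each (possibly noisy) observed label.
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))
```

The paper recommends q = 0.7 as a default that balances the noise robustness of MAE against the easier optimization of cross entropy.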