| Rank | Method | Accuracy (%, mean±std) | Paper (Code) |
|---|---|---|---|
| 1 | SOP | 93.24±0.21 | Robust Training under Label Noise by Over-parameterization. (Code) |
| 2 | PES (semi) | 92.68±0.22 | Understanding and Improving Early Stopping for Learning with Noisy Labels. (Code) |
| 3 | DivideMix | 92.56±0.42 | DivideMix: Learning with Noisy Labels as Semi-supervised Learning. (Code) |
| 4 | CORES* | 91.66±0.09 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 5 | ELR+ | 91.09±1.60 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 6 | CAL | 85.36±0.16 | A Second-Order Approach to Learning with Instance-Dependent Label Noise. (Code) |
| 7 | Co-Teaching | 83.83±0.13 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels. (Code) |
| 8 | CORES | 83.60±0.53 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. (Code) |
| 9 | ELR | 83.58±1.13 | Early-Learning Regularization Prevents Memorization of Noisy Labels. (Code) |
| 10 | JoCoR | 83.37±0.30 | Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization. (Code) |
| 11 | Co-Teaching+ | 83.26±0.17 | How does Disagreement Help Generalization against Label Corruption? (Code) |
| 12 | Negative-LS | 82.99±0.36 | Understanding Generalized Label Smoothing when Learning with Noisy Labels. |
| 13 | Positive-LS | 82.76±0.53 | Does Label Smoothing Mitigate Label Noise? |
| 14 | F-div | 82.53±0.52 | When Optimizing f-Divergence is Robust with Label Noise? (Code) |
| 15 | Peer Loss | 82.53±0.52 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates. (Code) |
| 16 | NVRM | 81.19±0.05 | Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting. (Code) |
| 17 | GCE | 80.66±0.35 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. (Code) |
| 18 | VolMinNet | 80.53±0.20 | Provably End-to-End Label-Noise Learning without Anchor Points. (Code) |
| 19 | T-Revision | 80.48±1.20 | Are Anchor Points Really Indispensable in Label-Noise Learning? (Code) |
| 20 | Forward-T | 79.79±0.46 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |
| 21 | CE | 77.69±1.55 | Standard cross-entropy baseline. |
| 22 | Backward-T | 77.61±1.05 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. (Code) |
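Several of the entries above (GCE, Forward/Backward-T, the label-smoothing variants, Peer Loss) are loss-level modifications of standard cross-entropy rather than full training pipelines. As a rough illustration, here are minimal PyTorch sketches of two of them, following the formulas in the respective papers; the function names, the `q = 0.7` default, and the probability clamp are our own illustrative choices, not the authors' reference code.

```python
import torch
import torch.nn.functional as F


def gce_loss(logits: torch.Tensor, targets: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """Generalized Cross Entropy (GCE), row 17 above.

    L_q(p, y) = (1 - p_y^q) / q, where p_y is the softmax probability
    assigned to the (possibly noisy) label y. As q -> 0 this recovers
    plain cross-entropy; q = 1 gives MAE, which is more noise-robust.
    """
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()


def forward_corrected_ce(logits: torch.Tensor, targets: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Forward loss correction (Forward-T, row 20 above).

    T[i, j] ~= P(noisy label = j | clean label = i). Pushing the model's
    estimated clean-class posterior through T lets cross-entropy against
    the noisy labels act as a corrected estimate of the clean risk.
    """
    probs = F.softmax(logits, dim=1)   # estimated clean posterior, shape (B, C)
    noisy_probs = probs @ T            # P(noisy label | x), shape (B, C)
    return F.nll_loss(torch.log(noisy_probs.clamp_min(1e-12)), targets)


# Usage sketch: logits = model(x); loss = gce_loss(logits, noisy_labels)
```

Both sketches drop into a standard training loop in place of `F.cross_entropy`; Forward-T additionally requires an estimate of the noise transition matrix `T`, which the loss-correction paper obtains from the model's confident predictions.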