| Rank | Method | Accuracy (%) | Paper (Code) |
|---|---|---|---|
| 1 | ProMix | 96.34±0.23 | ProMix: Combating Label Noise via Maximizing Clean Sample Utility (Code) |
| 2 | PLS | 93.78±0.30 | PLS: Robustness to Label Noise with Two-Stage Detection (Code) |
| 3 | ILL | 93.55±0.14 | Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations (Code) |
| 4 | SOP | 93.24±0.21 | Robust Training under Label Noise by Over-parameterization (Code) |
| 5 | Proto-semi | 92.97±0.18 | Rethinking Noisy Label Learning in Real-world Annotation Scenarios from the Noise-type Perspective (Code) |
| 6 | PES (semi) | 92.68±0.22 | Understanding and Improving Early Stopping for Learning with Noisy Labels (Code) |
| 7 | DivideMix | 92.56±0.42 | DivideMix: Learning with Noisy Labels as Semi-Supervised Learning (Code) |
| 8 | CORES* | 91.66±0.09 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach (Code) |
| 9 | ELR+ | 91.09±1.60 | Early-Learning Regularization Prevents Memorization of Noisy Labels (Code) |
| 10 | BKD | 87.41±0.28 | Blind Knowledge Distillation for Robust Image Classification (Code) |
| 11 | CAL | 85.36±0.16 | A Second-Order Approach to Learning with Instance-Dependent Label Noise (Code) |
| 12 | Co-Teaching | 83.83±0.13 | Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels (Code) |
| 13 | CORES | 83.60±0.53 | Learning with Instance-Dependent Label Noise: A Sample Sieve Approach (Code) |
| 14 | ELR | 83.58±1.13 | Early-Learning Regularization Prevents Memorization of Noisy Labels (Code) |
| 15 | JoCoR | 83.37±0.30 | Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization (Code) |
| 16 | Co-Teaching+ | 83.26±0.17 | How Does Disagreement Help Generalization against Label Corruption? (Code) |
| 17 | Negative-LS | 82.99±0.36 | Understanding Generalized Label Smoothing when Learning with Noisy Labels |
| 18 | Positive-LS | 82.76±0.53 | Does Label Smoothing Mitigate Label Noise? |
| 19 | F-div | 82.53±0.52 | When Optimizing f-Divergence is Robust with Label Noise? (Code) |
| 20 | Peer Loss | 82.53±0.52 | Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates (Code) |
| 21 | NVRM | 81.19±0.05 | Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting (Code) |
| 22 | GCE | 80.66±0.35 | Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels (Code) |
| 23 | VolMinNet | 80.53±0.20 | Provably End-to-End Label-Noise Learning without Anchor Points (Code) |
| 24 | T-Revision | 80.48±1.20 | Are Anchor Points Really Indispensable in Label-Noise Learning? (Code) |
| 25 | Forward-T | 79.79±0.46 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach (Code) |
| 26 | CE | 77.69±1.55 | |
| 27 | Backward-T | 77.61±1.05 | Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach (Code) |