1/7: Although DL methods are highly promising for cognitive decoding, their widespread use is generally hindered by:
1) their lack of interpretability,
2) difficulties in applying them to small datasets, and
3) difficulties in ensuring their reproducibility and robustness.
— Armin Thomas (@rmin_thomas) August 17, 2021
3/7: We argue that the explanations of backward decomposition approaches, such as layer-wise relevance propagation and DeepLIFT, are best-suited to faithfully reveal the features of the neuroimaging data that underlie the cognitive decoding decisions of DL models.
— Armin Thomas (@rmin_thomas) August 17, 2021
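Both methods named in this tweet have off-the-shelf implementations. As a minimal sketch, assuming a PyTorch decoding model, DeepLIFT attributions can be computed with the Captum library; the toy architecture, input sizes, and baseline choice below are illustrative placeholders, not the authors' actual pipeline:

    # Minimal backward-decomposition sketch using Captum's DeepLift.
    # Model shape and inputs are illustrative assumptions only.
    import torch
    import torch.nn as nn
    from captum.attr import DeepLift

    # Toy decoder: maps a flattened volume of 1,000 voxels to 5 cognitive states.
    model = nn.Sequential(
        nn.Linear(1000, 128),
        nn.ReLU(),
        nn.Linear(128, 5),
    )
    model.eval()

    inputs = torch.randn(8, 1000)          # batch of 8 "brain volumes"
    baselines = torch.zeros_like(inputs)   # reference input (e.g., zero activity)

    # DeepLift decomposes each decoding decision back onto the input voxels;
    # attributions has the same shape as inputs: one relevance value per voxel.
    dl = DeepLift(model)
    attributions = dl.attribute(inputs, baselines=baselines, target=0)
    print(attributions.shape)  # torch.Size([8, 1000])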
5/7: We argue that weakly and unsupervised learning techniques, such as data programming or contrastive learning, are well-suited to pre-train DL models at scale across vast and diverse public neuroimaging data: pic.twitter.com/x9KeQHYBCg
— Armin Thomas (@rmin_thomas) August 17, 2021
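To make the contrastive pre-training idea concrete, here is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch; the batch size, embedding dimension, and temperature are assumptions for illustration, not details from the thread:

    # Contrastive loss over two augmented views of the same batch of scans.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        """SimCLR-style NT-Xent loss for embeddings z1, z2 of shape (N, D)."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
        sim = z @ z.t() / temperature                        # pairwise similarities
        n = z1.size(0)
        # Mask self-similarity so it never counts as a positive or negative.
        sim.fill_diagonal_(float("-inf"))
        # The positive for sample i is its other view at index (i + n) mod 2n.
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # Usage: encode two differently augmented views of the same scans, then
    # pull matching views together while pushing other scans apart.
    z1 = torch.randn(32, 128)  # embeddings of view 1
    z2 = torch.randn(32, 128)  # embeddings of view 2
    loss = nt_xent_loss(z1, z2)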
7/7: To improve the reproducibility of DL model performance and to make DL models more robust to the diversity of real-world data, we make specific recommendations, for example, to randomize as many aspects of the training pipeline as possible when evaluating model performance.
— Armin Thomas (@rmin_thomas) August 17, 2021
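A minimal sketch of that recommendation, assuming a hypothetical train_and_evaluate stand-in for one's own pipeline: re-run the full pipeline under several seeds that govern initialization, shuffling, and splitting, and report the spread of scores rather than a single run:

    # Evaluate performance across randomized pipeline runs, not one run.
    import random
    import statistics
    import numpy as np
    import torch

    def train_and_evaluate(seed: int) -> float:
        # Seed every source of randomness the pipeline touches.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        # ... split data, initialize the model, train, and score it here ...
        return float(torch.rand(1))  # placeholder for the real test accuracy

    scores = [train_and_evaluate(seed) for seed in range(10)]
    print(f"accuracy: {statistics.mean(scores):.3f} "
          f"+/- {statistics.stdev(scores):.3f} over {len(scores)} seeds")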