Result 1: self-supervised learning suffices to make this algorithm learn brain-like representations (i.e., activity in most brain areas significantly correlates with its activations in response to the same speech input). pic.twitter.com/MMWKoJgW8W
— Jean-Rémi King (@JeanRemiKing) June 6, 2022
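The correlation analysis behind this result is, in broad strokes, an encoding model: fit a linear map from the network's activations to the brain recordings, then score each brain area by the correlation between predicted and observed responses on held-out data. Below is a minimal sketch of that pipeline; the array shapes, the ridge regularization, and the synthetic data are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of an encoding-model "brain score", assuming activations
# and brain responses have already been extracted and aligned in time.
# Synthetic data stands in for real recordings; this is not the paper's
# exact pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed shapes:
# X: (n_samples, n_model_units)  -- e.g. wav2vec 2.0 hidden states
# Y: (n_samples, n_brain_areas)  -- e.g. one response per brain parcel
n_samples, n_units, n_areas = 2000, 768, 68
X = rng.standard_normal((n_samples, n_units))
W = rng.standard_normal((n_units, n_areas))
Y = 0.1 * (X @ W) + rng.standard_normal((n_samples, n_areas))  # toy brain data

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# One ridge regression mapping model activations to all brain areas at once.
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

def pearson_per_column(a, b):
    # Pearson correlation computed independently for each brain area.
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

scores = pearson_per_column(Y_te, Y_pred)
print(f"mean brain score over {n_areas} areas: {scores.mean():.3f}")
```

In practice the significance claim in the tweet would come from testing these per-area correlations against a null (e.g. permutations) across subjects; the sketch above only shows the scoring step.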
Result 3: With an additional 386 subjects, we show that wav2vec 2.0 learns both the speech-specific and the language-specific representations of the prefrontal and temporal cortices, respectively. pic.twitter.com/329u5xyqkP
— Jean-Rémi King (@JeanRemiKing) June 6, 2022
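Separating speech-specific from language-specific contributions typically comes down to a model comparison: run the same encoding pipeline with activations from a speech model and from a text language model, then contrast their brain scores region by region. The toy sketch below shows only that comparison logic; the region names and score values are made-up placeholders, not the paper's measurements.

```python
# Toy per-region model comparison. All numbers are hypothetical stand-ins
# (e.g. outputs of the ridge pipeline sketched earlier), not results
# from the paper.
speech_scores = {"temporal": 0.12, "prefrontal": 0.05}  # hypothetical
text_scores = {"temporal": 0.07, "prefrontal": 0.10}    # hypothetical

for region in speech_scores:
    delta = speech_scores[region] - text_scores[region]
    better = "speech model" if delta > 0 else "text model"
    print(f"{region}: speech-minus-text score = {delta:+.2f} ({better} wins)")
```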
By J Millet*, @c_caucheteux*, @PierreOrhan, Y Boubenec, @agramfort, E Dunbar, @chrplr and myself at @MetaAI, @ENS_ULM, @Inria & @Neurospin
— Jean-Rémi King (@JeanRemiKing) June 6, 2022
🙏Thanks @samnastase, @HassonUri, John Hale, @nilearn, @pyvista and the open-source and open-science communities for making this possible!
Interested in this line of research?
— Jean-Rémi King (@JeanRemiKing) June 6, 2022
Check out our latest paper on the convergence between deepnets and the brain: https://t.co/oDmMoKlWzx