Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation

Jean Marc Fellous, Guillermo Sapiro, Andrew Rossi, Helen Mayberg, Michele Ferrante

Research output: Contribution to journal › Article › peer-review

97 Scopus citations

Abstract

The use of Artificial Intelligence (AI) and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets and can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to produce accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches. We discuss the potential value of XAI to the field of neurostimulation, for both basic scientific inquiry and therapeutic purposes, as well as outstanding questions and obstacles to the success of the XAI approach.
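To make the contrast between prediction and explanation concrete, the sketch below illustrates one common model-agnostic XAI technique, permutation feature importance: the accuracy drop observed when each input feature is shuffled indicates how much the model relies on that feature. The classifier, data, and function names here are purely illustrative assumptions, not from the article.

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# XAI technique. All names and data are hypothetical examples.
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-label association
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "stimulation response" classifier that only uses feature 0:
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
imps = permutation_importance(predict, X, y)
# Shuffling feature 0 hurts accuracy; shuffling feature 1 does not.
```

The importance scores reveal which inputs the black-box predictor actually depends on, a first step toward the mechanistic input-output understanding the abstract calls for, though attribution alone does not establish causal mechanism.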

Original language: English
Article number: 1346
Journal: Frontiers in Neuroscience
Volume: 13
DOIs
State: Published - 13 Dec 2019

Keywords

  • behavioral paradigms
  • closed-loop neurostimulation
  • computational psychiatry
  • data-driven discoveries of brain circuit theories
  • explainable AI
  • machine learning
  • neuro-behavioral decision systems
