University of Sussex
fninf-12-00068.pdf (4.48 MB)

Code generation in computational neuroscience: a review of tools and techniques

journal contribution
posted on 2023-06-09, 15:25 authored by Inga Blundell, Romain Brette, Thomas A Cleland, Thomas G Close, Daniel Coca, Andrew P Davison, Sandra Diaz-Pier, Carlos Fernandez Musoles, Padraig Gleeson, Dan F M Goodman, Michael Hines, Michael W Hopkins, Pramod Kumbhar, David R Lester, Boris Marin, Abigail Morrison, Eric Müller, Thomas Nowotny, Alexander Peyser, Dimitri Plotnikov, Paul Richmond, Andrew Rowley, Bernhard Rumpe, Marcel Stimberg, Alan B Stokes, Adam Tomkins, Guido Trensch, Marmaduke Woodman, Jochen Martin Eppler
Advances in experimental techniques and computational power, allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail, have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire model. Independently of model complexity, all modeling approaches crucially depend on an accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the underlying mathematical equations, but simulating them requires that they be translated into code. This can cause problems: errors may be introduced when the translation is carried out by hand, and code written by neuroscientists may not be computationally efficient. Furthermore, the translated code might target different hardware platforms or operating system variants, or be written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which restricts flexibility. The second is to allow model definitions in a high-level interpreted language, which may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, differing considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
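
As a rough illustration of the code generation approach described in the abstract (this sketch is not taken from the article, and every name in it, such as NeuronModel and emit_c_update, is invented for the example), the following Python snippet turns a minimal high-level description of a leaky integrate-and-fire neuron into a C update function by simple string templating:

    # Toy sketch only: it illustrates the general idea of deriving low-level
    # code from a declarative, equation-style model description. Real pipelines
    # parse equations symbolically, choose integration schemes and optimise the
    # result; here we only do naive forward-Euler string substitution.
    from dataclasses import dataclass, field


    @dataclass
    class NeuronModel:
        """Hypothetical high-level description: one state variable, its
        derivative as an expression string, a spike threshold and a reset."""
        state: str                 # name of the state variable, e.g. "V"
        derivative: str            # right-hand side of d<state>/dt
        threshold: float           # spike threshold (mV)
        reset: float               # post-spike reset value (mV)
        parameters: dict = field(default_factory=dict)


    def emit_c_update(model: NeuronModel, dt: float) -> str:
        """Emit a C function advancing the model by one forward-Euler step."""
        params = "\n".join(f"static const double {k} = {v};"
                           for k, v in model.parameters.items())
        s = model.state
        return (
            f"{params}\n"
            f"static const double dt = {dt};\n\n"
            f"/* Advance one neuron by a single time step; returns 1 on a spike. */\n"
            f"int update_neuron(double *state) {{\n"
            f"    double {s} = *state;\n"
            f"    {s} += dt * ({model.derivative});\n"
            f"    int spiked = 0;\n"
            f"    if ({s} >= {model.threshold}) {{ {s} = {model.reset}; spiked = 1; }}\n"
            f"    *state = {s};\n"
            f"    return spiked;\n"
            f"}}\n"
        )


    if __name__ == "__main__":
        # Leaky integrate-and-fire with illustrative (not paper-derived) parameters.
        lif = NeuronModel(
            state="V",
            derivative="(-(V - E_L) + R_m * I_e) / tau_m",
            threshold=-50.0,
            reset=-65.0,
            parameters={"E_L": -65.0, "tau_m": 10.0, "R_m": 10.0, "I_e": 2.0},
        )
        print(emit_c_update(lif, dt=0.1))

The pipelines reviewed in the article go much further, targeting different hardware back ends and simulators, but the principle is the same: the executable code is generated automatically from a single declarative description rather than written by hand for each platform.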

Funding

Green brain; G0924; EPSRC-ENGINEERING & PHYSICAL SCIENCES RESEARCH COUNCIL; EP/J019690/1

Brains on Board: Neuromorphic Control of Flying Robots; G1980; EPSRC-ENGINEERING & PHYSICAL SCIENCES RESEARCH COUNCIL; EP/P006094/1

History

Publication status

  • Published

File Version

  • Published version

Journal

Frontiers in Neuroinformatics

ISSN

1662-5196

Publisher

Frontiers Media

Issue

68

Volume

12

Page range

1-35

Department affiliated with

  • Informatics Publications

Research groups affiliated with

  • Centre for Computational Neuroscience and Robotics Publications

Full text available

  • Yes

Peer reviewed?

  • Yes

Legacy Posted Date

2018-10-10

First Open Access (FOA) Date

2018-11-05

First Compliant Deposit (FCD) Date

2018-10-09
