Detailed brain modeling presents significant challenges to high-performance computing (HPC), posing computational problems that can benefit from modern hardware-acceleration technologies. We explore the capacity of GPUs to simulate large-scale neuronal networks based on the Adaptive Exponential (AdEx) neuron model, which is widely used in the neuroscience community. Our GPU-powered simulator acts as a benchmark for evaluating the strengths and limitations of modern GPUs and for exploring their scaling properties when simulating large neural networks. This work presents an optimized GPU implementation that outperforms a reference multicore implementation by 50x, while a dual-GPU configuration delivers a speedup of 90x for networks of 20,000 fully interconnected AdEx neurons.
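For readers unfamiliar with the model, the AdEx neuron couples a membrane-potential equation with an exponential spike term to an adaptation current, plus a reset rule on spiking. The sketch below is a minimal single-neuron forward-Euler integration of the standard AdEx equations (Brette and Gerstner's formulation); the parameter values are common illustrative defaults, not the configuration used in the paper, and the function name `adex_step` is our own.

```python
import math

def adex_step(V, w, I, dt=0.1,
              C=281.0, g_L=30.0, E_L=-70.6, V_T=-50.4, Delta_T=2.0,
              a=4.0, tau_w=144.0, b=80.5, V_reset=-70.6, V_peak=20.0):
    """Advance membrane potential V (mV) and adaptation current w (pA)
    by one Euler step of dt (ms) under input current I (pA)."""
    # Membrane equation: leak + exponential spike-initiation term - adaptation + input
    dV = (-g_L * (V - E_L)
          + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
          - w + I) / C
    # Adaptation dynamics: driven by subthreshold voltage, decays with tau_w
    dw = (a * (V - E_L) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    spiked = V >= V_peak
    if spiked:
        # Spike: reset the membrane potential, increment adaptation by b
        V, w = V_reset, w + b
    return V, w, spiked

# Drive the neuron with a constant current until its first spike.
V, w = -70.6, 0.0
for step in range(5000):
    V, w, spiked = adex_step(V, w, I=800.0)
    if spiked:
        break
```

In a network simulation each neuron runs this update every time step, with `I` assembled from synaptic inputs; that per-neuron independence within a step is what makes the model map well onto GPU threads.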
Title of host publication: Proceedings - 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering, BIBE 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 5
Publication status: Published - Oct 2019
Event: 19th International Conference on Bioinformatics and Bioengineering, BIBE 2019 - Athens, Greece
Duration: 28 Oct 2019 → 30 Oct 2019
Series: Proceedings - 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering, BIBE 2019
Conference: 19th International Conference on Bioinformatics and Bioengineering, BIBE 2019
Period: 28/10/19 → 30/10/19
Bibliographical note: Funding Information:
This research is supported by the European Commission H2020 project EXA2PRO under FETHPC-02-2017 Transition to Exascale Computing (Grant agreement ID: 801015). The work was also supported by computational time granted by the Greek Research and Technology Network (GRNET) at the National HPC facility Advanced Research Information System (ARIS).
Publisher Copyright: © 2019 IEEE.