Framework for modeling spiking neural networks on high-performance graphics processors.
Detailed Information
  • Author: Moorkanikara Nageswaran, Jayram
  • Degree: Doctor
  • Year: 2010
  • Advisors: Dutt, Nikil (advisor); Krichmar, Jeffrey L. (advisor); Nicolau, Alex (committee member); Veidenbaum, Alex (committee member)
  • Institution: University of California
  • Department: Computer Science - Ph.D.
  • ISBN: 9781124209227
  • CBH: 3422219
  • Country: USA
  • Language: English
  • File Size: 6390941
  • Pages: 163
Abstract
Spiking neural network (SNN) models are emerging as a plausible paradigm for characterizing neural dynamics in the cerebral cortex. Traditionally, these SNN models were simulated on large-scale clusters, supercomputers, or dedicated VLSI architectures. Alternatively, Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for the simulation of SNNs. This thesis proposes a systematic framework for modeling and simulating biologically realistic, large-scale spiking neural networks on high-performance graphics processors. The first part of the framework consists of a high-level specification for quickly building arbitrary, large-scale spiking neural networks for different applications. The specification includes features to capture the properties of biologically realistic neurons and synaptic plasticity, different types of connection topologies between neuronal groups, and techniques to probe and capture the network state.

The high-level SNN specification is converted to a sparse adjacency matrix representation and mapped onto the GPU. We further present a collection of new techniques for parallelism extraction, mapping of irregular communication, and compact adjacency matrix representation for effective simulation of SNNs on GPUs. These optimizations enable real-time simulation of GPU-accelerated SNN models with 100K neurons and 10 million synaptic connections.

Another challenging problem faced by computational neuroscientists is tuning and selecting parameter values that keep the network in a stable firing regime. This problem is exacerbated by the simulation of increasingly complicated network models that exhibit non-linear dynamics. The last part of the framework proposes an evolutionary approach to automate parameter tuning in spiking neural networks. The evolutionary approach generates a population of SNNs with different parameters for simulation. At the end of each simulation, a user-specified fitness condition is evaluated to determine the effectiveness of the different members of the population. Using CPU-based evolutionary tuning, SNN models can be tuned at least 10x faster than with a full parameter sweep for networks of 1000 neurons with 5 parameters. Further performance improvements are achieved by GPU-accelerated simulation and fitness evaluation of the entire SNN population; the GPU-based evolutionary tuning technique is shown to be 6x to 20x faster than CPU-based evolutionary parameter tuning for networks of different sizes.

We applied the entire framework to the simulation of different spike-based computation applications. In one application, we interfaced a 128x128-pixel spike-based neuromorphic sensor to a spiking neural network running on a GPU for real-time convolution-based feature extraction. The work described in this thesis should be useful both to computational neuroscientists, who can use it for large-scale SNN simulation, and to computer scientists, who can gain insight into the high-performance computing challenges of simulating brain-inspired models.
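The abstract mentions converting the high-level network specification into a compact sparse adjacency matrix before mapping it onto the GPU. As a rough illustration of that idea only (not the thesis's actual data layout or API; all names and sizes below are assumptions), the sketch stores each neuron's outgoing synapses in flat, CSR-style arrays, the kind of contiguous layout a GPU kernel can traverse with coalesced accesses when a neuron fires:

```python
# Minimal sketch of a CSR-style synapse layout (illustrative only, not the
# thesis's implementation): per presynaptic neuron, a contiguous slice of
# postsynaptic indices and weights inside two flat arrays.
import numpy as np

rng = np.random.default_rng(0)

num_neurons = 1000          # toy network size (the thesis scales to ~100K)
fan_out = 100               # synapses per presynaptic neuron (assumption)

# Random postsynaptic targets and weights for every presynaptic neuron.
post_ids = rng.integers(0, num_neurons, size=(num_neurons, fan_out))
weights = rng.uniform(0.0, 0.5, size=(num_neurons, fan_out))

# CSR-style arrays: row_ptr[i]..row_ptr[i+1] indexes the synapses of
# presynaptic neuron i inside the flat col_idx / val arrays.
row_ptr = np.arange(0, (num_neurons + 1) * fan_out, fan_out)
col_idx = post_ids.reshape(-1)
val = weights.reshape(-1)

def propagate(fired, current):
    """Accumulate synaptic current from every neuron that fired this step."""
    for pre in np.flatnonzero(fired):
        lo, hi = row_ptr[pre], row_ptr[pre + 1]
        np.add.at(current, col_idx[lo:hi], val[lo:hi])
    return current

fired = rng.random(num_neurons) < 0.01        # ~1% of neurons spike
current = propagate(fired, np.zeros(num_neurons))
print("total injected current:", current.sum())
```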
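The evolutionary parameter-tuning part of the framework can likewise be sketched in miniature. In the sketch below, simulate() is a hypothetical stand-in for an SNN simulation, and the fitness criterion (closeness to an assumed target mean firing rate) is just one example of the user-specified conditions the abstract refers to:

```python
# Minimal sketch of evolutionary parameter tuning (illustrative only):
# keep a population of parameter vectors, simulate each candidate,
# score it with a fitness function, and breed mutated copies of the best.
import numpy as np

rng = np.random.default_rng(1)

NUM_PARAMS = 5              # e.g. weight scales, plasticity constants (assumption)
POP_SIZE = 20
TARGET_RATE = 10.0          # desired mean firing rate in Hz (assumption)

def simulate(params):
    """Hypothetical stand-in for an SNN simulation: returns a mean firing
    rate that depends nonlinearly on the parameter vector."""
    return 40.0 / (1.0 + np.exp(-params.sum()))

def fitness(params):
    """User-specified fitness: negative distance to the target rate."""
    return -abs(simulate(params) - TARGET_RATE)

population = rng.normal(0.0, 1.0, size=(POP_SIZE, NUM_PARAMS))
for generation in range(50):
    scores = np.array([fitness(p) for p in population])
    elite = population[np.argsort(scores)[-POP_SIZE // 4:]]   # keep top 25%
    # Offspring: mutated copies of randomly chosen elite parents.
    parents = elite[rng.integers(0, len(elite), size=POP_SIZE)]
    population = parents + rng.normal(0.0, 0.1, size=parents.shape)

best = max(population, key=fitness)
print("best rate:", round(float(simulate(best)), 2), "Hz")
```

In the thesis, it is this simulate-and-score loop that benefits from GPU acceleration, since the abstract reports simulating and evaluating the entire SNN population on the GPU.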
