• neural architecture;
  • distributed population coding;
  • binocular energy model;
  • disparity estimation;
  • data representation;
  • parallel computational schemes


The spread of graphics processing unit (GPU) computing has made it possible to achieve high computational performance in the simulation of complex biological systems. In this work, we develop a highly efficient GPU-accelerated neural library that can be employed in real-world contexts. The library provides the neural functionalities that underlie a wide range of bio-inspired models; in particular, we show its efficacy in implementing a cortical-like architecture for visual feature coding and estimation. To fully exploit the intrinsic parallelism of such neural architectures and to manage the huge amount of data that characterizes the internal representation of distributed neural models, we devise an effective algorithmic solution and an efficient data structure. In particular, we exploit both data parallelism and task parallelism, with the aim of taking full advantage of the computational capabilities of modern graphics cards. Moreover, we assess the performance of two different development frameworks, both of which supply a wide range of GPU-accelerated basic signal processing functions. A systematic analysis comparing different algorithmic solutions identifies the best data structure and parallel computational scheme for computing features from a distributed population of neural units. Copyright © 2013 John Wiley & Sons, Ltd.
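To make the notion of computing a feature from a distributed population of neural units concrete, the following is a minimal NumPy sketch of population coding and decoding, assuming Gaussian tuning curves and a center-of-mass readout. All function names, the tuning width, and the disparity grid are illustrative assumptions; this is not the paper's GPU implementation.

```python
import numpy as np

def population_response(stimulus, preferred, sigma=0.5):
    """Hypothetical Gaussian tuning: each unit's activity depends on the
    distance between the stimulus value (e.g., a binocular disparity)
    and the unit's preferred value."""
    return np.exp(-0.5 * ((stimulus - preferred) / sigma) ** 2)

def decode(activity, preferred):
    """Center-of-mass readout: the feature estimate is the
    activity-weighted average of the units' preferred values."""
    return np.sum(activity * preferred) / np.sum(activity)

# A small population of 17 units with preferred disparities on [-2, 2].
preferred = np.linspace(-2.0, 2.0, 17)
activity = population_response(0.8, preferred)   # responses to stimulus 0.8
estimate = decode(activity, preferred)           # close to 0.8
```

Because each unit's response is independent of the others, the encoding stage maps naturally onto GPU data parallelism (one thread per unit, or per unit-and-pixel in an image-based model), while independent feature maps can be processed concurrently as separate tasks.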