Deep neural networks (DNNs) have recently achieved extraordinary results in domains such as computer vision and speech recognition. An essential ingredient of this success has been the introduction of high performance computing (HPC) techniques in the critical step of training the neural network. This paper describes the implementation and analysis of a network-agnostic and convergence-invariant coarse-grain parallelization of the DNN training algorithm. The coarse-grain parallelization is achieved by exploiting batch-level parallelism. This strategy does not depend on specialized or optimized support libraries, so the optimization is immediately available for accelerating DNN training. The proposal is compatible with multi-GPU execution without altering the convergence rate of the algorithm. The parallelization has been implemented in Caffe, a state-of-the-art DNN framework. The paper describes the code transformations required for the parallelization and identifies the factors that limit its performance. We show competitive results for two state-of-the-art computer vision datasets, MNIST and CIFAR-10. In particular, on a 16-core Xeon E5-2667v2 at 3.30 GHz we observe speedups of 8× over sequential execution, comparable to the performance of the GPU-optimized Caffe version on an NVIDIA K40 GPU.
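The abstract includes no code, but the batch-level strategy it describes can be sketched. The following is a minimal, hypothetical C++/OpenMP illustration, not the authors' actual Caffe modification: a toy linear model stands in for a real network, and the names (Sample, forward_backward, sgd_step) are invented for this sketch. Each thread processes a slice of the mini-batch into a private gradient buffer; a sequential reduction then applies the same update the sequential algorithm would, which is why the convergence rate is unaffected.

// Minimal sketch of batch-level parallelism for one SGD step on a toy
// linear model (hypothetical; not the paper's Caffe code). The idea:
// split the mini-batch across threads, let each thread accumulate
// gradients into a private buffer, then reduce and update the weights.
#include <omp.h>
#include <cstddef>
#include <vector>

struct Sample { std::vector<float> x; float y; };

// Per-sample backward pass for squared loss on a linear model:
// grad += (w.x - y) * x. Stands in for a full forward/backward pass.
static void forward_backward(const std::vector<float>& w,
                             const Sample& s, std::vector<float>& grad) {
  float pred = 0.f;
  for (std::size_t d = 0; d < w.size(); ++d) pred += w[d] * s.x[d];
  const float err = pred - s.y;
  for (std::size_t d = 0; d < w.size(); ++d) grad[d] += err * s.x[d];
}

// One SGD step: the mini-batch is the unit of coarse-grain parallelism.
void sgd_step(std::vector<float>& w,
              const std::vector<Sample>& batch, float lr) {
  const int nt = omp_get_max_threads();
  std::vector<std::vector<float>> local(nt, std::vector<float>(w.size(), 0.f));

  // Each thread processes a disjoint slice of the batch; private gradient
  // buffers avoid write conflicts during the backward pass.
  #pragma omp parallel for schedule(static)
  for (int i = 0; i < static_cast<int>(batch.size()); ++i)
    forward_backward(w, batch[i], local[omp_get_thread_num()]);

  // Reduce the per-thread gradients and apply the averaged update. The
  // summed gradient equals the sequential one, so only the work is
  // distributed; the algorithm itself is unchanged.
  for (std::size_t d = 0; d < w.size(); ++d) {
    float g = 0.f;
    for (int t = 0; t < nt; ++t) g += local[t][d];
    w[d] -= lr * g / static_cast<float>(batch.size());
  }
}

In this sketch the private buffers avoid atomic updates during the backward pass, at the cost of one gradient copy per thread; the final reduction is the main sequential portion and one plausible limit on the achievable speedup.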
Mon 14 Mar (times in Greenwich Mean Time: Belfast)
10:00 - 10:25 | Talk | Coarse Grain Parallelization of Deep Neural Networks | Main conference
10:25 - 10:50 | Talk | High Performance Model Based Image Reconstruction | Main conference | Xiao Wang (Purdue University, USA), Amit Sabne (School of Electrical and Computer Engineering, Purdue University), Sherman Kisner (High Performance Imaging LLC), Anand Raghunathan (School of Electrical and Computer Engineering, Purdue University), Charles Bouman (School of Electrical and Computer Engineering, Purdue University), Samuel Midkiff (School of Electrical and Computer Engineering, Purdue University)
10:50 - 11:15 | Talk | Exploiting Accelerators for Efficient High Dimensional Similarity Search | Main conference