Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU - Network and Parallel Computing
Conference Papers, Year: 2014

Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU

Yuan Gao
Rui Wang
Ning An
Yanjiang Wei
Depei Qian

Abstract

As a powerful data analysis method, machine learning has suffered for many years from the bottleneck of limited computing capability. With the advent of numerous parallel computing platforms, the modern GPU has become a promising carrier for machine learning tasks. In this paper, we propose an efficient GPU execution framework to speed up the forward propagation stage of convolutional neural networks. By extending the convolution unrolling method to fit this batch mode, we obtain a significant increase in throughput with very little overhead.
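As a rough illustration of the batch-unrolling idea summarized above (this is a hypothetical NumPy sketch, not the authors' GPU implementation), the snippet below shows how unrolling an entire batch of images into one patch matrix lets the forward convolution of the whole batch be computed as a single large matrix multiplication. The names im2col_batch and conv_forward_batch are placeholders invented for this sketch.

```python
import numpy as np

def im2col_batch(x, kh, kw):
    """Unroll a whole batch of images into patch-column form so that the
    batched forward convolution reduces to one matrix multiply (GEMM).

    x: (N, C, H, W) input batch; returns the patch matrix and output shape.
    """
    n, c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((n, out_h, out_w, c, kh, kw), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Each output position collects its kh x kw receptive field
            # for every image in the batch at once.
            cols[:, i, j] = x[:, :, i:i + kh, j:j + kw]
    return cols.reshape(n * out_h * out_w, c * kh * kw), (n, out_h, out_w)

def conv_forward_batch(x, weights):
    """Forward convolution of the whole batch via a single GEMM.

    weights: (K, C, kh, kw) filters; returns (N, K, out_h, out_w).
    """
    k, c, kh, kw = weights.shape
    cols, (n, out_h, out_w) = im2col_batch(x, kh, kw)
    w_mat = weights.reshape(k, c * kh * kw)   # flatten each filter to a row
    out = cols @ w_mat.T                      # one large matrix multiplication
    return out.reshape(n, out_h, out_w, k).transpose(0, 3, 1, 2)

# Example: a batch of 8 RGB 32x32 images and 16 filters of size 3x3.
x = np.random.rand(8, 3, 32, 32).astype(np.float32)
w = np.random.rand(16, 3, 3, 3).astype(np.float32)
y = conv_forward_batch(x, w)
print(y.shape)  # (8, 16, 30, 30)
```

On a GPU, this unrolled layout maps onto one large batched GEMM rather than many small per-image ones, which is the kind of mapping the paper's batch scheduling targets to raise throughput.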
Main file: 978-3-662-44917-2_43_Chapter.pdf (1.15 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01403124, version 1 (25-11-2016)

Licence

Attribution

Cite

Yuan Gao, Rui Wang, Ning An, Yanjiang Wei, Depei Qian. Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU. 11th IFIP International Conference on Network and Parallel Computing (NPC), Sep 2014, Ilan, Taiwan. pp.522-525, ⟨10.1007/978-3-662-44917-2_43⟩. ⟨hal-01403124⟩