Web: https://towardsdatascience.com/how-to-reduce-the-training-time-of-your-neural-network-from-hours-to-minutes-fe7533a3eec5?source=rss----7f60cf5620c9---4

May 6, 2022, 5:07 a.m. | Bhaskar Agarwal

Towards Data Science - Medium (towardsdatascience.com)

Part 2 of the series on AI with HPC: parallelising a CNN with Horovod and GPUs to obtain a 75x-150x speed-up.
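The core idea behind the Horovod approach is data-parallel training: every GPU holds a full copy of the model, works on its own slice of each batch, and gradients are averaged across workers with ring-allreduce. As a rough illustration, here is a minimal sketch assuming TensorFlow/Keras; the toy CNN, dummy data and hyperparameters are placeholders, not the model from the article.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialise Horovod and pin each worker process to one GPU.
hvd.init()
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# A toy CNN standing in for the real model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers and wrap the optimizer
# so that gradients are averaged across GPUs on every step.
opt = tf.keras.optimizers.Adam(learning_rate=0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt, metrics=['accuracy'])

# Broadcast the initial weights from rank 0 so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# Dummy data standing in for each worker's shard of the real dataset.
x_train = np.random.rand(1024, 28, 28, 1).astype('float32')
y_train = np.random.randint(0, 10, size=(1024,))

# Only rank 0 prints the progress bar to keep the logs readable.
model.fit(x_train, y_train, batch_size=128, epochs=3,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

A script like this would typically be launched with `horovodrun -np 4 python train.py`, so that four copies of the process, one per GPU, train co-operatively.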


In part 1 of the series, we looked at how to get a ~1500x speed-up in I/O operations with a few lines of Python using the multiprocessing module. In this article, we will look at parallelising a deep learning code and reducing the training time from roughly 13 hours to …
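As a reminder of the part 1 pattern, the I/O speed-up came from fanning independent file reads out over a pool of worker processes with Python's standard-library multiprocessing module. The sketch below illustrates the idea; the image directory, file format and worker count are illustrative, not taken from the article.

```python
from multiprocessing import Pool
from pathlib import Path

import numpy as np
from PIL import Image


def load_image(path):
    """Read one image from disk and return it as a NumPy array."""
    with Image.open(path) as img:
        return np.asarray(img)


if __name__ == "__main__":
    # Hypothetical directory of training images; replace with your own data.
    paths = sorted(Path("data/train").glob("*.png"))

    # Fan the file reads out over 8 worker processes; the actual speed-up
    # depends on the core count and the storage throughput.
    with Pool(processes=8) as pool:
        images = pool.map(load_image, paths)
```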

Tags: deep-dives, deep learning, gpu, network, neural, neural network, neural networks, parallel-computing, reduce, time, training
