Neural networks’ learning process acceleration
Abstract
This study evaluates the training process of a parallel system in the form of an artificial neural network built using a genetic algorithm. The methods used to achieve this goal are computer simulation of a neural network on multi-core CPUs and a genetic algorithm for finding the weights of the artificial neural network. The performance of the sequential and parallel training processes of the artificial neural network is compared.
Problems in Programming 2020; 2-3: 313-321
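The abstract describes the approach only at a high level. The sketch below is a minimal illustration (not the paper's implementation) of the general idea: a genetic algorithm searches the weights of a small feed-forward network, and fitness evaluation can optionally be spread across CPU cores with Python's multiprocessing, so a sequential and a parallel run can be timed and compared. The XOR task, the 2-2-1 network, and all GA parameters are assumptions made for the example.

```python
# Illustrative sketch: GA weight search for a tiny feed-forward network,
# with optional parallel fitness evaluation on multi-core CPUs.
import numpy as np
from multiprocessing import Pool

# Toy task: XOR with a 2-2-1 network -> 4 input weights + 2 hidden biases
# + 2 output weights + 1 output bias = 9 weights in a flat vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])
N_WEIGHTS = 9

def forward(weights, x):
    """Run one input vector through the 2-2-1 network encoded in a flat weight vector."""
    w1 = weights[0:4].reshape(2, 2)   # input -> hidden weights
    b1 = weights[4:6]                 # hidden biases
    w2 = weights[6:8]                 # hidden -> output weights
    b2 = weights[8]                   # output bias
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

def fitness(weights):
    """Negative mean squared error over the training set (higher is better)."""
    preds = np.array([forward(weights, x) for x in X])
    return -np.mean((preds - Y) ** 2)

def evolve(generations=200, pop_size=60, elite=10, sigma=0.3, workers=None, seed=0):
    """Simple elitist GA; if workers is set, fitness is evaluated in parallel."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop_size, N_WEIGHTS))
    pool = Pool(workers) if workers else None
    try:
        for _ in range(generations):
            scores = (pool.map(fitness, population) if pool
                      else [fitness(ind) for ind in population])
            order = np.argsort(scores)[::-1]             # best individuals first
            parents = population[order[:elite]]
            children = []
            while len(children) < pop_size - elite:
                a = parents[rng.integers(elite)]
                b = parents[rng.integers(elite)]
                mask = rng.random(N_WEIGHTS) < 0.5        # uniform crossover
                child = np.where(mask, a, b) + rng.normal(0.0, sigma, N_WEIGHTS)  # mutation
                children.append(child)
            population = np.vstack([parents, children])
        best = max(population, key=fitness)
        return best, fitness(best)
    finally:
        if pool:
            pool.close()
            pool.join()

if __name__ == "__main__":
    best, score = evolve(workers=4)   # set workers=None for the sequential run
    print("best fitness:", score)
```

Timing evolve(workers=None) against evolve(workers=4) with the same seed gives a direct sequential-versus-parallel comparison of the kind the abstract refers to.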
References
Radko P. Chapter 2. Introduction [Electronic resource]. Portal "Neural networks".
Kononyuk A. Neural Networks and Genetic Algorithms. Kyiv: Korniychuk, 2008.
Radko P. Chapter 3. Fundamentals of Artificial Neural Networks [Electronic resource]. Portal "Neural networks".
Classification of neural networks [Electronic resource]. Artificial Intelligence Portal project.
Hawkins D.M. The problem of overfitting. Journal of Chemical Information and Computer Sciences. 2004. Vol. 44, Issue 1.
Qin T. et al. A CMAC learning algorithm based on RLS. Neural Processing Letters. 2004. Vol. 19, Issue 1.
Crick F. The recent excitement about neural networks. Nature. 1989. Vol. 337. https://doi.org/10.1038/337129a0
Edwards C. Growing Pains for Deep Learning. Communications of the ACM. 2015. Vol. 58, Issue 7.
Panchenko T. Genetic Algorithms. Astrakhan: Astrakhan University Publishing House, 2007.
Andrews G. Fundamentals of Multithreaded, Parallel, and Distributed Programming: translated from English. Moscow: Williams Publishing House, 2003.
Bogachev K. Fundamentals of Parallel Programming. Moscow: BINOM. Knowledge Laboratory, 2003.
DOI: https://doi.org/10.15407/pp2020.02-03.313