Stochastic One-Step Training for Feedforward Artificial Neural Networks

abstract

  • This paper studies the use and application of a fast (non-iterative, single-step) method for training Feedforward Neural Networks, in which the weights of the hidden layer are assigned randomly and the weights of the output layer are fitted through a linear regression adjustment. The method addresses two problems of traditional training: training time and finding an optimal structure. Whereas traditional iterative training methods require long periods to train a single structure, the proposed method trains a structure in a single, non-iterative step. By scanning the number of neurons in the hidden layer, many structures can therefore be trained in a short time, making it possible to obtain an optimal topology. A quality-control criterion for the predictions, based on the coefficient of determination, is proposed that guarantees short training times and an optimal number of hidden neurons for characterizing a specific problem. The feasibility of the proposed method is tested by comparing its performance against the built-in functions of the artificial neural networks toolbox in Matlab®, with the proposed method proving superior in both approximation quality and training time. A rigorous study and analysis are performed for the regression of simulated data on two different surfaces with a specific noise level and different neural network topologies. The resulting processing time of the proposed training is at least 150 times shorter than that of the iterative training used by Matlab, yielding well-founded learning rules. A novel amputated-matrix scheme is proposed that breaks the paradigm of how multiple-output systems are trained and improves the quality of predictions with no detriment to training times. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
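
  The core idea described in the abstract can be sketched in a few lines: draw the hidden-layer weights at random, solve the output-layer weights by least squares, and scan hidden-layer sizes until the coefficient of determination passes a quality threshold. The sketch below is illustrative only and makes assumptions not stated in the abstract (a tanh hidden activation, a least-squares solver, the noisy test surface, and the 0.98 threshold); it is not the authors' implementation.

```python
import numpy as np

def one_step_train(X, Y, n_hidden, rng):
    """Non-iterative training of a single-hidden-layer feedforward net:
    random hidden weights, linear-regression output weights.
    (Sketch of the abstract's idea; activation and solver are assumptions.)"""
    n_in = X.shape[1]
    W = rng.standard_normal((n_in, n_hidden))     # random hidden weights
    b = rng.standard_normal(n_hidden)             # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # one-step linear fit
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def r2(Y, Y_hat):
    """Coefficient of determination used as the quality criterion."""
    ss_res = np.sum((Y - Y_hat) ** 2)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Scan hidden-layer sizes and keep the smallest topology whose R^2
# exceeds an assumed quality threshold, mimicking the topology search.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))
Y = (np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1]))[:, None]
Y += 0.05 * rng.standard_normal(Y.shape)          # noisy surface data

best = None
for n_hidden in range(5, 101, 5):
    W, b, beta = one_step_train(X, Y, n_hidden, rng)
    score = r2(Y, predict(X, W, b, beta))
    if score >= 0.98:                             # assumed threshold
        best = (n_hidden, score)
        break
print(best)
```

  Because each candidate topology is fitted in one linear solve rather than by iterative gradient descent, the whole scan costs roughly one least-squares problem per hidden-layer size, which is what makes the reported speedups over iterative training plausible.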

publication date

  • 2020-01-01