UNDERSTANDING AND STUDY OF WEIGHT INITIALIZATION IN ARTIFICIAL NEURAL NETWORKS WITH THE BACKPROPAGATION ALGORITHM

Farhana Kausar, Dr. Aishwarya P., Dr. Gopal Krishna Shyam

Abstract

Several important choices must be made when building and training a neural network: which loss function to use, how many layers to include, what stride and kernel size to use for each layer, which optimization algorithm best suits the network, and so on. Given these choices, we study the effect of initializing neural network training with different weight initialization techniques, in combination with a randomly chosen learning rate, to obtain better results. We computed the mean test error for the newly proposed paradigm and for the traditional approach. The newly proposed paradigm, Xavier weight initialization, yields a lower error than the traditional approach of uniform and Gaussian weight initialization (random initialization).
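A minimal NumPy sketch of the three initialization schemes compared above; the layer sizes and scale parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(n_in, n_out):
    # Xavier/Glorot uniform: W ~ U(-limit, limit) with
    # limit = sqrt(6 / (n_in + n_out)), so the variance of the
    # activations is roughly preserved across layers.
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def random_uniform(n_in, n_out, scale=0.5):
    # Traditional uniform random initialization with a fixed,
    # layer-size-independent scale (assumed value).
    return rng.uniform(-scale, scale, size=(n_in, n_out))

def random_gaussian(n_in, n_out, std=0.5):
    # Traditional Gaussian random initialization with a fixed
    # standard deviation (assumed value).
    return rng.normal(0.0, std, size=(n_in, n_out))

# Example: weights for a 784 -> 256 fully connected layer.
W_xavier = xavier_uniform(784, 256)
W_unif = random_uniform(784, 256)
W_gauss = random_gaussian(784, 256)
```

Note that the Xavier limit shrinks as the layer gets wider, whereas the traditional schemes keep the same spread regardless of layer size, which is what tends to cause vanishing or exploding signals in deep networks.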
