TITLE

Relationship between phase and amplitude generalization errors in complex- and real-valued feedforward neural networks

AUTHOR(S)
Hirose, Akira; Yoshida, Shotaro
PUB. DATE
June 2013
SOURCE
Neural Computing & Applications;Jun2013, Vol. 22 Issue 7/8, p1357
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
We compare the generalization characteristics of complex-valued and real-valued feedforward neural networks. The task is function approximation under phase shift and/or amplitude change in signals of varying coherence. Experiments demonstrate that complex-valued neural networks show smaller generalization error than real-valued networks having doubled input and output neurons, in particular when the signals have high coherence, that is, a high degree of wave nature. We also investigate the relationship between amplitude and phase errors. We find in real-valued networks that an abrupt change in amplitude is often accompanied by a steep change in phase, a consequence of local minima in real-valued supervised learning.
ACCESSION #
87909736
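
The record above contrasts a complex-valued network with a real-valued one whose input and output layers are doubled so that each complex signal enters and leaves as a (real, imaginary) pair. As a minimal sketch of the two forward passes being compared (not the authors' implementation; the single hidden layer and the amplitude-phase activation common in the CVNN literature are assumptions here):

    import numpy as np

    rng = np.random.default_rng(0)

    def cv_forward(x, W1, W2):
        # Complex-valued network: amplitude-phase activation
        # tanh(|z|) * exp(i*arg(z)) -- an assumed choice for illustration,
        # not necessarily the activation used in the paper.
        z = W1 @ x
        h = np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))
        return W2 @ h

    def rv_forward(x, W1, W2):
        # Real-valued counterpart with doubled input/output neurons:
        # the complex signal is split into (real, imag) halves.
        xr = np.concatenate([x.real, x.imag])
        h = np.tanh(W1 @ xr)
        y = W2 @ h
        n = y.size // 2
        return y[:n] + 1j * y[n:]

    n_in, n_hid, n_out = 4, 8, 4
    Wc1 = rng.standard_normal((n_hid, n_in)) + 1j * rng.standard_normal((n_hid, n_in))
    Wc2 = rng.standard_normal((n_out, n_hid)) + 1j * rng.standard_normal((n_out, n_hid))
    Wr1 = rng.standard_normal((n_hid, 2 * n_in))
    Wr2 = rng.standard_normal((2 * n_out, n_hid))

    x = np.exp(1j * rng.uniform(0, 2 * np.pi, n_in))  # unit-amplitude test signal
    y_c, y_r = cv_forward(x, Wc1, Wc2), rv_forward(x, Wr1, Wr2)
    # The amplitude and phase generalization errors discussed in the abstract
    # would be measured as | |y| - |y*| | and |arg(y) - arg(y*)| against
    # target outputs y*.
    print(np.abs(y_c), np.angle(y_c))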


Related Articles

  • Generalisation over Details: The Unsuitability of Supervised Backpropagation Networks for Tetris. Lewis, Ian J.; Beswick, Sebastian L. // Advances in Artificial Neural Systems;4/27/2015, Vol. 2015, p1 

    We demonstrate the unsuitability of Artificial Neural Networks (ANNs) for the game of Tetris and show that their great strength, namely their ability to generalize, is the ultimate cause. This work describes a variety of attempts at applying the Supervised Learning approach to Tetris and...

  • STUDYING THE POSSIBILITY OF NEURAL NETWORK APPLICATION IN THE DIAGNOSTICS OF A SMALL FOUR-STROKE PETROL ENGINE BY WEAR PARTICLE CONTENT. Lisjak, Dragutin; Marić, Gojko; Štefanić, Nedjeljko // Tehnicki vjesnik / Technical Gazette;Oct-Dec2012, Vol. 19 Issue 4, p857 

    This paper presents the application of an artificial neural network (ANN) in engine diagnostics. A one-layer feed-forward neural network, trained with a back-propagation algorithm that updates the weight and bias values according to the Levenberg-Marquardt algorithm, has been established to...

  • Multivariate numerical approximation using constructive $L^{2}(\mathbb{R})$ RBF neural network. Muzhou, Hou; Xuli, Han // Neural Computing & Applications;Feb2012, Vol. 21 Issue 1, p25 

    For a multivariate continuous function, using a constructive feedforward $L^{2}(\mathbb{R})$ radial basis function (RBF) neural network, we prove that an $L^{2}(\mathbb{R})$ RBF neural network with n + 1 hidden neurons can interpolate n + 1 multivariate samples with zero error. Then, we...

  • Direct adaptive neural control of nonlinear systems with extreme learning machine. Rong, Hai-Jun; Zhao, Guang-She // Neural Computing & Applications;Mar2013, Vol. 22 Issue 3/4, p577 

    A direct adaptive neural control scheme for a class of nonlinear systems is presented in the paper. The proposed control scheme incorporates a neural controller and a sliding mode controller. The neural controller is constructed based on the approximation capability of the single-hidden layer...

  • Blind image deconvolution by neural recursive function approximation. Jiann-Ming Wu; Hsiao-Chang Chen; Chun-Chang Wu; Pei-Hsun Hsu // World Academy of Science, Engineering & Technology;Nov2010, Issue 47, p844 

    No abstract available.

  • RANDOM NEURAL NETWORK MODEL FOR SUPERVISED LEARNING PROBLEMS. Basterrech, S.; Rubino, G. // Neural Network World;2015, Vol. 25 Issue 5, p457 

    Random Neural Networks (RNNs) are a class of Neural Networks (NNs) that can also be seen as a specific type of queuing network. They have been successfully used in several domains during the last 25 years, as queuing networks to analyze the performance of resource sharing in many engineering...

  • The extreme learning machine learning algorithm with tunable activation function. Li, Bin; Li, Yibin; Rong, Xuewen // Neural Computing & Applications;Mar2013, Vol. 22 Issue 3/4, p531 

    In this paper, we propose an extreme learning machine (ELM) with tunable activation function (TAF-ELM) learning algorithm, which determines its activation functions dynamically by means of the differential evolution algorithm based on the input data. The main objective is to overcome the problem...

  • A Novel Multiple Instance Learning Method Based on Extreme Learning Machine. Wang, Jie; Cai, Liangjian; Peng, Jinzhu; Jia, Yuheng // Computational Intelligence & Neuroscience;2/3/2015, Vol. 2015, p1 

    Since real-world data sets usually contain large numbers of instances, it is meaningful to develop efficient and effective multiple instance learning (MIL) algorithms. As a learning paradigm, MIL differs from traditional supervised learning in that it handles the classification of bags comprising unlabeled...

  • Manifold regularized extreme learning machine. Liu, Bing; Xia, Shi-Xiong; Meng, Fan-Rong; Zhou, Yong // Neural Computing & Applications;Feb2016, Vol. 27 Issue 2, p255 

    Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. But ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM...
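
Several of the related entries above concern extreme learning machines. As background, a minimal sketch of the basic ELM recipe they build on: hidden weights are drawn at random and never tuned, and only the output weights are fit, by least squares (the sigmoid hidden activation is assumed here for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    def elm_train(X, T, n_hidden):
        # Random, untuned hidden layer; only the output weights beta are learned.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden outputs
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    # Toy regression: fit sin(x) on [0, 2*pi].
    X = np.linspace(0, 2 * np.pi, 200)[:, None]
    T = np.sin(X)
    W, b, beta = elm_train(X, T, n_hidden=50)
    mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
    print(mse)  # small training error with enough hidden neurons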
