Paper
Function approximation using a sinc neural network
22 March 1996
Wael R. Elwasif, Laurene V. Fausett
Abstract
Neural networks for function approximation are the basis of many applications. Such networks often use a sigmoidal activation function (e.g., tanh) or a radial basis function (e.g., Gaussian). Networks have also been developed using wavelets. In this paper, we present a neural network approximation of functions of a single variable, using sinc functions as the activation functions of the hidden units. Performance of the sinc network is compared with that of a tanh network with the same number of hidden units. The sinc network generally learns the desired input-output mapping in significantly fewer epochs, and achieves a much lower total error on the testing points. The original sinc network is based on theoretical results for function representation using the Whittaker cardinal function (an infinite series expansion in terms of sinc functions). Enhancements to the original network include an improved transformation of the problem domain onto the network input domain. Further work is in progress to study the use of sinc networks for mappings in higher dimensions.
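The Whittaker cardinal function mentioned in the abstract is the classical sinc expansion

C(f, h)(x) = \sum_{k=-\infty}^{\infty} f(kh)\,\operatorname{sinc}\!\left(\frac{x - kh}{h}\right), \qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t},

which, when truncated to finitely many terms, corresponds to a single-hidden-layer network whose hidden units apply sinc activations. The sketch below is not the authors' implementation; the evenly spaced centers, the step size h, the toy target function, and the plain gradient-descent training loop are all assumptions made purely for illustration of the idea.

# Illustrative sketch only (not the paper's code): a one-hidden-layer sinc network
# for approximating a function of a single variable, trained by gradient descent.
import numpy as np

def sinc(z):
    return np.sinc(z)                      # numpy's sinc is sin(pi*z)/(pi*z)

rng = np.random.default_rng(0)
H = 20                                     # number of hidden (sinc) units (assumed)
centers = np.linspace(-1.0, 1.0, H)        # assumed evenly spaced nodes with step h
h = centers[1] - centers[0]
w = rng.normal(scale=0.1, size=H)          # hidden-to-output weights

def predict(x):
    # hidden activations sinc((x - c_k)/h); the output is their weighted sum
    A = sinc((x[:, None] - centers[None, :]) / h)
    return A @ w

# toy target on [-1, 1] (assumption; the paper's test functions are not given here)
x_train = np.linspace(-1.0, 1.0, 200)
y_train = np.exp(-x_train**2) * np.cos(3.0 * x_train)

lr = 0.05
for epoch in range(2000):
    A = sinc((x_train[:, None] - centers[None, :]) / h)
    err = A @ w - y_train
    w -= lr * A.T @ err / len(x_train)     # gradient of mean squared error w.r.t. w

print("final MSE:", np.mean((predict(x_train) - y_train) ** 2))

Because the sinc basis interpolates exactly at its nodes, the hidden-layer design matrix is well conditioned, which is consistent with the faster training the abstract reports relative to a tanh network of the same size.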
© (1996) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Wael R. Elwasif and Laurene V. Fausett "Function approximation using a sinc neural network", Proc. SPIE 2760, Applications and Science of Artificial Neural Networks II, (22 March 1996); https://doi.org/10.1117/12.235959
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
Associative arrays
Network architectures
Wavelets
Control systems
Algorithm development
Applied mathematics
