A neural fuzzy system can learn an agent profile of a user when it samples user question-answer data. A fuzzy system uses if-then rules to store and compress the agent's knowledge of the user's likes and dislikes. A neural system uses training data to form and tune the rules. The profile is a preference map or a bumpy utility surface defined over the space of search objects. Rules define fuzzy patches that cover the surface bumps as learning unfolds and as the fuzzy agent system gives a finer approximation of the profile. The agent system searches for preferred objects with the learned profile and with a new fuzzy measure of similarity. The appendix derives the supervised learning law that tunes this matching measure with fresh sample data. We test the fuzzy agent profile system on object spaces of flowers and sunsets and test the fuzzy agent matching system on an object space of sunset images. Rule explosion and data acquisition impose fundamental limits on the system designs.
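A minimal sketch of the rule-patch idea, assuming Gaussian if-part sets and an additive (centroidal) combiner; the centers, widths, and preference levels below are hypothetical stand-ins for parameters the neural learning law would tune from question-answer samples:

```python
# Sketch (not the paper's system): an additive fuzzy system whose Gaussian
# rule patches cover the bumps of a 1-D preference profile u(x).
import numpy as np

def fuzzy_profile(x, c, s, w):
    """Centroidal output: then-part values weighted by rule firings."""
    a = np.exp(-((x[:, None] - c[None, :]) / s[None, :]) ** 2)  # rule firings
    return (a * w).sum(axis=1) / (a.sum(axis=1) + 1e-12)

x = np.linspace(0.0, 1.0, 200)       # space of search objects
c = np.array([0.2, 0.5, 0.8])        # hypothetical rule-patch centers
s = np.array([0.10, 0.15, 0.10])     # patch widths
w = np.array([0.9, 0.3, 0.7])        # preference levels (likes/dislikes)
u = fuzzy_profile(x, c, s, w)        # bumpy utility surface approximation
print(x[np.argmax(u)])               # most-preferred object under the profile
```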
Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.
Team robotics is slowly but surely emerging as a useful and effective solution to a number of practical problems. There are several important motivations for developing robotic teams of cooperating modules. Differences in task requirements, resources, and environments provide a strong motivation not to attempt to develop a single complicated robotic system that performs all functions, in all conditions, poorly. Teams of multiple cooperating robots offer an attractive, effective, and practical alternative. Individual robots in the team are typically simpler to design, which enhances system reliability. Teams are also more efficient and more fault tolerant. In our own research, we have investigated a number of important issues underlying the development of cooperative robotic teams, with an emphasis on experimental validation of the concepts. Highlights of these studies are presented below.
Genetic algorithms are a computational paradigm modeled after biological genetics. They allow one to efficiently search a very large optimization space for good solutions. In this paper we report on two methods of maintaining genetic diversity in a population of organisms being acted on by a genetic algorithm. In both cases the organisms are placed on a square grid and interact only with their nearest neighbors, with the number of interactions based on fitness. One method results in ecological niches ranging in size from a few organisms to several dozen. In the second method almost every organism in the population remains in a unique ecological niche while searching the fitness landscape. The two methods can be used to find multiple solutions. They have been applied to a semiconductor manufacturing process to develop robust plasma etch recipes that reduce the variance about a target mean and allow the dc bias to drift within 15% of a nominal value. The tapered via etch process in our production environment results in an oxide film with a mean value of about 7093 angstroms and a standard deviation of 730 angstroms. In simulations using real production data and a neural network model of the process, our new recipes reduced the standard deviation below 200 angstroms. These results indicate that significant improvement in the process can be realized by applying these techniques.
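A hedged sketch of the grid scheme (the replacement rule and mutation scale are assumptions, not the paper's exact method): organisms live on a toroidal grid, interact only with their four nearest neighbors, and the number of interactions grows with fitness, which lets local niches form:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 16, 8                                   # grid side, genome length
pop = rng.random((N, N, L))                    # one real-coded organism per cell
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

def fitness(g):                                # placeholder objective
    return -np.sum((g - 0.5) ** 2)

for _ in range(30):
    f = np.array([[fitness(pop[i, j]) for j in range(N)] for i in range(N)])
    spread = np.ptp(f) + 1e-12
    for i in range(N):
        for j in range(N):
            # fitter organisms take part in more neighbor interactions
            n_inter = 1 + int(2 * (f[i, j] - f.min()) / spread)
            for _ in range(n_inter):
                di, dj = moves[rng.integers(4)]
                mate = pop[(i + di) % N, (j + dj) % N]        # toroidal grid
                child = np.where(rng.random(L) < 0.5, pop[i, j], mate)
                child = child + rng.normal(0, 0.01, L)        # mutation
                if fitness(child) > f[i, j]:                  # local replacement
                    pop[i, j], f[i, j] = child, fitness(child)
```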
Genetic algorithms (GAs) have been found to be very effective in solving numerous optimization problems, especially those with many (possibly) conflicting and noisy objectives. However, there seems to be no consensus as to what fitness measure to use in such situations, and how to rank individuals in a population on the basis of several conflicting objectives. Fuzzy logic provides an effective and easy way of dealing with such a class of problems. In this work, we present a fuzzy genetic algorithm (FGA), which combines the parallel and robust search properties of the GA with the expressive power of fuzzy logic. In the proposed FGA, the fitness of individuals is evaluated based on fuzzy logic rules expressed over linguistic variables modeling the desired objective criteria of the problem domain. The FGA is compared to a weighted sum GA (WS-GA), where the fitness is set equal to a weighted sum of the objective criteria. Also, several fitness fuzzification approaches are evaluated. Experimental evaluation was conducted using the floorplanning of very large scale integrated (VLSI) circuits as a testbed.
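To make the contrast concrete, here is an invented two-objective example (the rule base is illustrative, not the paper's): fuzzy rules over normalized objectives produce a fitness by min/max inference and centroid defuzzification, beside the weighted-sum alternative:

```python
# Objectives are normalized to [0, 1], where 1 is best.
def fuzzy_fitness(area_good, wire_good):
    # Rule 1: IF area is good AND wirelength is good THEN fitness is high
    r_high = min(area_good, wire_good)
    # Rule 2: IF area is poor OR wirelength is poor THEN fitness is low
    r_low = max(1 - area_good, 1 - wire_good)
    # Defuzzify with centroids 0.9 (high) and 0.1 (low)
    return (0.9 * r_high + 0.1 * r_low) / (r_high + r_low + 1e-12)

def weighted_sum_fitness(area_good, wire_good, w=(0.5, 0.5)):
    return w[0] * area_good + w[1] * wire_good

print(fuzzy_fitness(0.9, 0.2), weighted_sum_fitness(0.9, 0.2))
```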
The methodologies of soft computing (fuzzy logic, neural networks, evolutionary computation, etc.) represent a new computational paradigm that may usefully complement conventional digital computing approaches in application to real-life problems. This paper considers the potential role of soft computing (SC) in electronic product engineering. The problem is first treated on general grounds, using electronic design as an example. Then a more specific discussion is presented concerning the use of SC methodologies (fuzzy logic and neural nets) in a representative testing problem: the analysis and interpretation of infrared thermal maps.
We investigate the optical implementation of fuzzy logic operations. After a brief introduction to fuzzy logic operations and fuzzy set theory, we discuss the optical implementations of fuzzy logic operations, fuzzy set reasoning, and fuzzy associative memory using various linear and nonlinear optical techniques. We also propose and discuss a fuzzy neural network model for optical pattern recognition. Experimental results achieved in our laboratory are presented and discussed.
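For reference, these are the standard fuzzy set operations (min for intersection, max for union, complement for negation) that such optical setups typically realize; the snippet fixes the notation digitally and is not a model of the optical hardware:

```python
import numpy as np

A = np.array([0.2, 0.7, 1.0, 0.4])   # membership degrees of set A
B = np.array([0.5, 0.6, 0.3, 0.9])   # membership degrees of set B
print(np.minimum(A, B))              # fuzzy intersection (AND)
print(np.maximum(A, B))              # fuzzy union (OR)
print(1.0 - A)                       # fuzzy complement (NOT)
```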
In this paper we suggest the application of connectionist models to information retrieval problems. In particular we propose the application of neural networks to the task of assigning a score to web pages in order to guide the user in navigating the web. Recent extensions of neural models to the processing of structured domains are a natural framework for building oracles capable of scoring hypertexts. Recursive neural networks can process directed acyclic graphs and thus can be employed to evaluate a hypertext, taking into account also information coming from the pages referenced by the hyperlinks it contains. Moreover, we can take advantage of the learning algorithms to adapt the scoring process to the user's preferences and habits. We show how different levels of the scoring process can be implemented using connectionist processing: we propose a method to summarize documents using multi-layer neural networks as autoassociators; we suggest using recurrent neural networks as predictors for the user's trajectories in the page domain; and we explore the application of recursive neural networks to score hypertextual pages with respect to the context they are in.
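A hedged sketch of the autoassociator idea (dimensions, data, and training details are assumptions): a network is trained to reproduce a page's term-frequency vector through a narrow hidden layer, whose activations then serve as a compact summary of the document:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((100, 50))                 # 100 pages, 50-term tf vectors
d_in, d_hid = X.shape[1], 8
W1 = rng.normal(0, 0.1, (d_in, d_hid))
W2 = rng.normal(0, 0.1, (d_hid, d_in))
lr = 0.01
for _ in range(500):                      # plain gradient descent on MSE
    H = np.tanh(X @ W1)                   # code layer = document summary
    Y = H @ W2                            # reconstruction of the input
    E = Y - X
    gW2 = H.T @ E / len(X)
    gW1 = X.T @ ((E @ W2.T) * (1 - H ** 2)) / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
summaries = np.tanh(X @ W1)               # 8-number summary per page
```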
The poor scaling behavior of grid-partitioning fuzzy systems as data dimensionality increases suggests using fuzzy systems with a scatter partition of the input space. Jang has shown that zero-order Sugeno fuzzy systems are equivalent to radial basis function networks (RBFNs). Methods for finding scatter partitions for RBFNs are available, and it is possible to use them for creating scatter-partitioning fuzzy systems. A fundamental problem, however, is the structure identification problem, i.e., the determination of the number of fuzzy rules and their positions in the input space. The supervised growing neural gas method uses classification or regression error to guide insertions of new RBF units. This leads to a more effective positioning of RBF units (respectively, fuzzy rule IF-parts) than achievable with the commonly used unsupervised clustering methods. Example simulations of the new approach are shown, demonstrating superior behavior compared with grid-partitioning fuzzy systems and the standard RBF approach of Moody and Darken.
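A minimal sketch of error-guided insertion (the bookkeeping of the actual supervised growing neural gas, with edges and error decay, is richer): fit an RBF net, then place each new unit, i.e. each new fuzzy rule if-part, where the current regression error is largest:

```python
import numpy as np

def rbf_design(X, C, s):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]            # toy target function
C = X[rng.integers(len(X), size=2)].copy()   # start with two units
for _ in range(10):
    A = rbf_design(X, C, s=0.4)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)   # output-layer weights
    err = (y - A @ w) ** 2
    C = np.vstack([C, X[np.argmax(err)]])       # insert unit at worst error
print(C.shape[0], "RBF units / fuzzy rules")
```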
In the first part of this paper a new on-line, fully self-organizing artificial neural network model (FSONN), featuring dynamic generation and removal of neurons and synaptic links, is proposed. The model combines properties of the self-organizing map (SOM), fuzzy c-means (FCM), growing neural gas (GNG) and fuzzy simplified adaptive resonance theory (Fuzzy SART) algorithms. In the second part of the paper experimental results are provided and discussed. Our conclusion is that the proposed connectionist model features several interesting properties: (1) the system requires no a priori knowledge of the dimension, size and/or adjacency structure of the network; (2) with respect to other connectionist models found in the literature, the system can be employed successfully in (a) vector quantization, (b) density function estimation, and (c) detection of structure in input data to be mapped topologically correctly onto an output lattice for dimensionality reduction; and (3) the system is computationally efficient, its processing time increasing linearly with the number of neurons and synaptic links.
This paper deals with the application of a new competitive, on-line, neuro-fuzzy architecture, the fully self-organizing simplified adaptive resonance theory (FOSART), to the analysis of remotely sensed Antarctic data in a classification experiment. FOSART employs fuzzy set memberships in the weight update rule, applies an ART-based vigilance test to control neuron proliferation, and employs a new version of the competitive Hebbian rule to dynamically generate and remove synaptic links between neurons, as well as the neurons themselves. As a consequence, FOSART can develop disjoint subnets. The results obtained with FOSART have been compared with those obtained with other unsupervised neuro-fuzzy architectures: Fuzzy SART, FLVQ, and SOM. The findings suggest that FOSART's performance at convergence is lower than that of FLVQ and SOM, even though it adapts faster to the structure of the input data thanks to its topological and on-line characteristics.
It has been previously shown that evolving modular programs can improve the efficiency of induction for a specific class of modular programs called automatically defined functions (ADFs). ADFs, a variant of genetic programming, induce hierarchically decomposed programs similar to those a human programmer might construct. This paper demonstrates that multiple interacting programs (MIPs), an evolutionary program that induces systems of equations, also provides an efficiency increase when multiple equations are evolved rather than a single equation. This result is demonstrated on two different Boolean problems using both a numerical and Boolean representation.
Modeling real-world data requires many choices to be made about the size and type of model used, parameter value settings, and validation criteria. The group method of data handling (GMDH) builds a data-driven polynomial model by constructing a hierarchy of increasingly complex terms. At each level, terms that perform badly on independent validation data are rejected. Thus the GMDH algorithm performs a search over a small set of different models to find the best. Drawbacks of this method are that the model is conditioned to fit the validation data set and so may not generalize well, and that its ability to find a good model is affected by the choice of polynomial terms used. In the work described in this paper, we demonstrate a new method of optimizing the basic GMDH approach using genetic algorithms, which avoids an exhaustive search of all possible polynomials. Specifically, multi-objective genetic algorithms can be used to optimize the model against several different constraints, encouraging a good bias-variance trade-off. To illustrate this, the method is tested on data arising from the financial markets and the weather.
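A compact sketch of one GMDH layer, simplified relative to the paper (which replaces this greedy selection with a multi-objective GA): a quadratic polynomial is fitted to every pair of inputs on training data, and only the pairs that validate best survive:

```python
import itertools
import numpy as np

def quad_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

def gmdh_layer(Xtr, ytr, Xva, yva, keep=3):
    survivors = []
    for i, j in itertools.combinations(range(Xtr.shape[1]), 2):
        F = quad_features(Xtr[:, i], Xtr[:, j])
        coef, *_ = np.linalg.lstsq(F, ytr, rcond=None)   # fit on training data
        pred = quad_features(Xva[:, i], Xva[:, j]) @ coef
        survivors.append((np.mean((yva - pred) ** 2), i, j, coef))
    survivors.sort(key=lambda t: t[0])   # reject badly validating terms
    return survivors[:keep]              # best (mse, i, j, coef) tuples
```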
We present an evolutionary approach for reconstructing CT images; the algorithm reconstructs two-dimensional unknown images from four one-dimensional projections. A genetic algorithm works on a randomly generated population of strings, each of which encodes an image. Traditional as well as new genetic operators are applied in each generation. The mean square error between the projection data of the image encoded in a string and the original projection data is used to estimate the string's fitness. A Laplacian constraint term is included in the fitness function of the genetic algorithm for handling smooth images. Two new modified versions of the original genetic algorithm are presented. Results obtained by the original algorithm and the modified versions are compared to those obtained by the well-known algebraic reconstruction technique (ART), and the evolutionary method was found to be more effective than ART in the particular case where the projections are limited to four directions.
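A hedged sketch of the fitness computation (taking the four projection directions to be row, column, and the two diagonal sums is an assumption): the mean square error between the candidate's projections and the measured ones, plus a Laplacian smoothness penalty:

```python
import numpy as np

def projections(img):
    diag1 = np.array([np.trace(img, k) for k in range(-img.shape[0] + 1, img.shape[1])])
    diag2 = np.array([np.trace(img[::-1], k) for k in range(-img.shape[0] + 1, img.shape[1])])
    return [img.sum(0), img.sum(1), diag1, diag2]

def fitness(candidate, measured, lam=0.1):
    mse = sum(np.mean((p - q) ** 2) for p, q in zip(projections(candidate), measured))
    lap = np.abs(4 * candidate
                 - np.roll(candidate, 1, 0) - np.roll(candidate, -1, 0)
                 - np.roll(candidate, 1, 1) - np.roll(candidate, -1, 1)).mean()
    return -(mse + lam * lap)   # negated cost, since the GA maximizes fitness
```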
Manual design of membership functions and rule bases for fuzzy systems often produces non-optimal controllers, in terms of both performance and rule-base complexity. Even algorithms for the automatic generation of these two components generally miss their simultaneous optimal determination, thereby producing fuzzy systems with lower performance. This paper addresses the use of a genetic algorithm for the optimization of a working fuzzy controller through the simultaneous tuning of membership functions and fuzzy rules. The parameter coding used by the method allows the fine tuning of membership functions and, at the same time, the simplification of the rule base by identifying the necessary rules and by selecting the relevant inputs for each of them. Results obtained by applying the method to a fuzzy controller implementing a wall-following task for a real mobile robot are shown and compared, in terms of both performance and rule-base complexity, with those of the original non-optimized version.
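One plausible parameter coding, sketched under stated assumptions (the paper's actual coding is not reproduced here): a chromosome concatenates triangular membership-function parameters with per-rule input-relevance flags, so the GA tunes shapes and prunes rules and inputs at once:

```python
import numpy as np

def decode(chrom, n_sets, n_rules, n_inputs):
    k = 3 * n_sets
    # (left, peak, right) per fuzzy set; sorting keeps triangles well-formed
    mf = np.sort(chrom[:k].reshape(n_sets, 3), axis=1)
    # boolean relevance flags: False means the rule ignores that input
    flags = chrom[k:k + n_rules * n_inputs].reshape(n_rules, n_inputs) > 0.5
    return mf, flags

rng = np.random.default_rng(5)
chrom = rng.random(3 * 5 + 4 * 2)            # 5 sets, 4 rules, 2 inputs
mf, flags = decode(chrom, n_sets=5, n_rules=4, n_inputs=2)
```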
This paper deals with the use of fuzzy logic for complex dynamical systems. If no analytical information about the system, such as differential equations, is available, but only numerical data, then fuzzy logic methods can be applied successfully.
Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or for monochrome, CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.
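The quoted figures are easy to check, since compression ratio is original bits per pixel divided by coded bits per pixel:

```python
# Compression ratio = original bpp / coded bpp.
print(24 / 0.48)   # 24-bit color at 0.48 bpp    -> 50.0, the quoted CR 50:1
print(8 / 0.16)    # 8-bit monochrome at 0.16 bpp -> 50.0, consistent
print(24 / 80)     # coded bpp implied by CR 80:1 -> 0.3 bpp
```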
This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes and alternative differencing methods. YIQ and HSV, also known as HSI, both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However, the compression ratio did not improve with the transformation from RGB into the other color plane models, since, in order to maintain lossless performance, YIQ and HSV both require more bits to store each pixel. This increase in file size is not offset by the increase in compression due to the higher correlation of the intensity value, so the best performance was achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from a single reference. The best method proved to be the first iteration of the cascaded difference from a single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated; then the difference between that neighbor and its next neighbor is calculated, and so on. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than those of the competing methods.
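A small sketch of the cascaded-difference scheme as described: each slice is stored as its difference from the previous one, so reconstructing slice k requires rebuilding every slice between it and the reference:

```python
import numpy as np

def cascade_encode(slices):
    """Reference slice first, then consecutive differences."""
    return [slices[0]] + [b - a for a, b in zip(slices, slices[1:])]

def cascade_decode(coded, k):
    img = coded[0].copy()
    for d in coded[1:k + 1]:        # all predecessors must be rebuilt first
        img += d
    return img

rng = np.random.default_rng(2)
slices = [rng.integers(0, 256, (4, 4), dtype=np.int32) for _ in range(5)]
coded = cascade_encode(slices)
assert np.array_equal(cascade_decode(coded, 3), slices[3])   # lossless
```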
Flexible micro-endoscopes as small as 1.0 mm in outer diameter and as long as 3.0 m produce images with a 'honeycomb' pattern due to the spaces between the individual collection optical fibers in the imaging conduit. This pattern exhibits a definable spatial frequency that is distinct from that of the desired information, the actual images of interest. By applying a filter and by sharpening the contrast between adjacent pixels, it was possible to remove the honeycomb pattern without significant degradation of the visual quality of the image. The technique described employs Fourier analysis to analyze the image and define the 'noise' component. A discrete band-reject frequency filter was then applied to both the original and sharpened images, resulting in the effective removal of the honeycomb pattern. The advantages and limitations of the image processing technique are discussed.
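A minimal sketch of the band-reject step (the radii are hypothetical; in practice they come from locating the fiber-pattern peaks in the spectrum): the annulus of spatial frequencies carrying the honeycomb lattice is zeroed and the image transformed back:

```python
import numpy as np

def band_reject(img, r_lo, r_hi):
    F = np.fft.fftshift(np.fft.fft2(img))        # centered 2-D spectrum
    h, w = img.shape
    yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy, xx)                         # radial frequency grid
    F[(r >= r_lo) & (r <= r_hi)] = 0             # reject the honeycomb band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```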
Currently, early detection of breast cancer is primarily accomplished by mammography, and suspicious findings may lead to a decision to perform a biopsy. Digital enhancement and pattern recognition techniques may aid in the early detection of patterns such as microcalcification clusters indicating the onset of DCIS (ductal carcinoma in situ), which accounts for 20% of all mammographically detected breast cancers and can be treated when detected early. These individual calcifications are hard to detect due to size and shape variability and inhomogeneous background texture. Our study addresses only the early detection of microcalcifications, which lets the radiologist interpret the x-ray findings in computer-aided enhanced form more easily than evaluating the x-ray film directly. We present an algorithm that locates microcalcifications based on local grayscale variability, tissue structures, and image statistics. Threshold filters with lower and upper bounds computed from the statistics of the entire image and of selected subimages were designed to enhance the entire image. This enhanced image was used as the initial image for identifying the microcalcifications with variable-box threshold filters at different resolutions. The test images came from the Texas Tech University Health Sciences Center and the MIAS mammographic database, and are classified into various categories including microcalcifications. Classification of other types of abnormalities in mammograms based on their characteristic features will be addressed in later studies.
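A hedged sketch of a variable-box threshold filter (the box size and the constant k are assumptions): the bounds come from local mean and standard deviation, so bright spots that stand out from their neighborhood survive:

```python
import numpy as np

def local_threshold(img, box=16, k=3.0):
    out = np.zeros_like(img, dtype=bool)
    for i in range(0, img.shape[0], box):
        for j in range(0, img.shape[1], box):
            sub = img[i:i + box, j:j + box]
            upper = sub.mean() + k * sub.std()   # bound from subimage stats
            out[i:i + box, j:j + box] = sub > upper
    return out                                   # candidate microcalcification pixels
```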
We report on an on-going study to assess the potential benefits of soft computing methods in forecasting problems. Our goal is to forecast natural phenomena represented by time series that show chaotic features. We use a neuro-fuzzy system for its ability to adapt to numerical data and for the possibility of inputting and extracting expert knowledge expressed in words. We present results of experiments designed to study how to shape a neuro-fuzzy system to forecast chaotic time series. Our main conclusions are: (1) the neuro-fuzzy system is able to forecast a synthetic chaotic time series with high accuracy if the number of inputs and the time delay between them are chosen adequately; (2) the Takens-Mañé theorem from chaos theory gives a useful lower bound on the minimal number of inputs; (3) the time delay between the inputs cannot be set a priori but has to be tuned for each time series; and (4) the number of fuzzy rules seems related to the size of the learning set and not to the structure of the chaotic dynamical system. We tentatively interpret the rules that the neuro-fuzzy system has learned. Finally we discuss the adequacy of the whole set of fuzzy rules for forecasting the dynamical system locally.
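A sketch of how such a forecaster's inputs are formed by delay embedding; m and tau are exactly the two choices the conclusions say must be set, with the Takens-Mañé theorem bounding m from below:

```python
import numpy as np

def embed(x, m, tau):
    """Rows are [x(t), x(t-tau), ..., x(t-(m-1)tau)]; targets are x(t+1)."""
    t0 = (m - 1) * tau
    n = len(x) - t0 - 1
    X = np.column_stack([x[t0 - i * tau : t0 - i * tau + n] for i in range(m)])
    y = x[t0 + 1 : t0 + 1 + n]
    return X, y

x = np.sin(np.linspace(0, 60, 600)) \
    + 0.1 * np.random.default_rng(6).standard_normal(600)   # toy series
X, y = embed(x, m=4, tau=3)   # X rows feed the neuro-fuzzy system
```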
A toolkit of software for advanced spectral methods has been in the public domain since 1995, and an extensive revision was made available on the World Wide Web in spring 1997. The toolkit emphasizes, besides the more traditional Blackman-Tukey and maximum-entropy methods, singular-spectrum analysis (SSA) and the multi-taper method (MTM); hence its name. The original developers of the (at first SSA-only) toolkit, the researchers involved in its recent extension into an SSA-MTM toolkit, and many independent investigators have applied successive versions of the toolkit to a variety of practical problems. Some of the methodological basics are reviewed, and applications to time-series prediction are outlined.
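A minimal textbook SSA sketch (not the toolkit's implementation): embed the series in a trajectory matrix of lagged copies, take its SVD, and recover components by diagonal averaging; the window length M is a user choice:

```python
import numpy as np

def ssa_components(x, M, k):
    N = len(x) - M + 1
    T = np.column_stack([x[i:i + N] for i in range(M)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    parts = []
    for j in range(k):
        E = s[j] * np.outer(U[:, j], Vt[j])               # rank-1 piece
        # diagonal (anti-diagonal) averaging back to a series of length len(x)
        comp = np.array([np.mean(E[::-1].diagonal(d)) for d in range(-N + 1, M)])
        parts.append(comp)
    return parts   # leading reconstructed components of the series
```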
Qualitative information about the structure of a mapping can surely help in learning the mapping from a collection of input-output pairs. However, there are conditions in which time and other constraints make guessing the only plausible means of interpreting data. In this paper, the problem of plasma boundary reconstruction in 'Tokamak' nuclear fusion reactors is addressed. The problem is formulated as an inverse 'identification' problem, and the mapping is derived from a properly generated database of simulated experiments. Real data from experiments are also available to validate both the numerically generated data and the extracted model. The identification problem is solved for two different databases using neural networks and more conventional models. The introduction of techniques derived from soft computing is shown to improve performance in various respects. Dynamic identification is rather demanding for such systems, given the need to interpret real-time data rapidly for discharge control; soft computing approaches may yet yield low-cost ways to make decisions during plasma evolution. The approximate analysis of experimental data could also improve knowledge of the particular problem, allowing the knowledge base to evolve. Experimental data from the ASDEX-Upgrade machine are presented in this work and given a preliminary processing. Soft computing techniques also offer simple insight into two other interesting problems in plasma engineering, namely fault tolerance and the minimization of the number of sensors.
Topological methods have recently been developed for the classification, analysis, and synthesis of chaotic time series. These methods can be applied to time series with a Lyapunov dimension less than three. The procedure determines the stretching and squeezing mechanisms which operate to create a strange attractor and organize all the unstable periodic orbits in the attractor in a unique way. Strange attractors are identified by a set of integers. These are topological invariants for a two-dimensional branched manifold, which is the infinite-dissipation limit of the strange attractor. It is remarkable that this topological information can be extracted from chaotic time series. The data required for this analysis need not be extensive or exceptionally clean. The topological invariants: (1) are subject to validation/invalidation tests; (2) describe how to model the data; and (3) do not change as control parameters change. Topological analysis is the first step in a doubly discrete classification scheme for strange attractors. The second discrete classification involves specification of a 'basis set' of periodic orbits whose presence forces the existence of all other periodic orbits in the strange attractor. The basis set of orbits does change as control parameters change. Quantitative models developed to describe time-series data are tested by the methods of entrainment. This analysis procedure has been applied to a number of data sets, and several analyses are described.
Evolutionary programming (EP) has been successfully applied to many parameter optimization problems. We propose a mean mutation operator, consisting of a linear combination of Gaussian and Cauchy mutations. Preliminary results indicate that both the adaptive and non-adaptive versions of the mean mutation operator are capable of producing solutions that are as good as, or better than those produced by Gaussian mutations alone. The success of the adaptive operator could be attributed to its ability to self-adapt the shape of the probability density function that generates the mutations during the run.
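A sketch of the operator as described, with a fixed mixing weight (the adaptive version would evolve alpha and sigma along with the solution):

```python
import numpy as np

def mean_mutation(x, sigma, rng, alpha=0.5):
    """Perturbation = linear combination of Gaussian and Cauchy variates."""
    gauss = rng.normal(0.0, 1.0, size=x.shape)
    cauchy = rng.standard_cauchy(size=x.shape)   # heavy tails allow big jumps
    return x + sigma * (alpha * gauss + (1.0 - alpha) * cauchy)

rng = np.random.default_rng(3)
print(mean_mutation(np.zeros(5), sigma=0.1, rng=rng))
```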
In this presentation, a fuzzy logic adaptive genetic algorithm (FLAGA) software engine is applied to hypercompression pre-processing. The FLAGA has a genetic algorithm (GA) engine that is tunable by fuzzy-logic rules. As a result, basic GA-engine operations, such as spanning, crossover, and mutation, have rates that are tuned according to progress in the convergence process. Since the rates of these operations are not fixed but optimized in real time, FLAGA convergence speed is at least an order of magnitude higher than that of a standard GA. In this paper, we present theoretical analysis and simulation results for this specific fuzzy logic application, as well as further considerations related to the application of FLAGA to video imaging and edge-extraction ATR (automatic target recognition).
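A hedged illustration of the tuning idea (the rule base is invented): a fuzzy rule maps convergence progress to an operator rate, so mutation rises when the population stagnates and falls while fitness is still improving:

```python
def tune_mutation_rate(improvement, lo=0.01, hi=0.2):
    """improvement in [0, 1]: normalized fitness gain over recent generations."""
    stagnating = 1.0 - improvement   # membership in 'stagnating'
    improving = improvement          # membership in 'improving'
    # IF stagnating THEN rate is high; IF improving THEN rate is low
    return (stagnating * hi + improving * lo) / (stagnating + improving + 1e-12)
```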
Genetic algorithm performance has been improved by adaptively modifying genetic operators, and by filtering out recurring chromosomes from the fitness evaluation process. The enhanced genetic algorithm has been applied to neural network topology selection and function optimization. The performance of the algorithm was evaluated in multiple function and problem domains, where it showed superior convergence speed.
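A sketch of the recurring-chromosome filter: fitness values are cached by genotype, so a chromosome that reappears in a later generation is not re-evaluated; the saving matters when each evaluation is expensive, as in neural network topology selection:

```python
fitness_cache = {}

def evaluate(chromosome, objective):
    key = tuple(chromosome)          # genotypes must be hashable
    if key not in fitness_cache:     # only new chromosomes cost an evaluation
        fitness_cache[key] = objective(chromosome)
    return fitness_cache[key]
```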
Artificial neural networks have proven to be powerful tools for sensor fusion, but they are not adaptable to sensor failure in a sensor suite. Physical Optics Corporation (POC) presents a new sensor fusion algorithm, applying fuzzy logic to give a neural network real-time adaptability to compensate for faulty sensors. Identifying data that originates from malfunctioning sensors, and excluding it from sensor fusion, allows the fuzzy neural network to achieve better results. A fuzzy logic-based functionality evaluator detects malfunctioning sensors in real time. A separate neural network is trained for each potential sensor failure situation. Since the number of possible sensor failure situations is large, the large number of neural networks is then fuzzified into a small number of fuzzy neural networks. Experimental results show the feasibility of the proposed approach -- the system correctly recognized airplane models in a computer simulation.
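A hedged sketch of the functionality-evaluator idea (the Gaussian membership is an assumption): each sensor gets a health degree in [0, 1] from its disagreement with the rest, and fusion weights per-sensor evidence by that degree so faulty sensors are effectively excluded:

```python
import numpy as np

def fuzzy_fuse(readings, residuals, tol=1.0):
    """readings: per-sensor estimates; residuals: disagreement with the rest."""
    health = np.exp(-(residuals / tol) ** 2)   # 1 = healthy, ~0 = faulty
    return (health * readings).sum() / (health.sum() + 1e-12)

# The third sensor disagrees wildly and is all but ignored in the fused value.
print(fuzzy_fuse(np.array([10.1, 9.9, 42.0]), np.array([0.1, 0.1, 30.0])))
```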
In classification tasks, the combination of high-dimensional feature vectors and small datasets is a common problem. It is well known that these two characteristics usually result in an oversized model with poor generalization power. In this contribution a new way to cope with such tasks is presented, based on the assumption that in high-dimensional problems almost all data points are located in a low-dimensional subspace. A way is proposed to design a fuzzy system in a unified framework and to use it to develop a new model for classification tasks. It is shown that the new model can be understood as an additive fuzzy system with parameter-based basis functions. Different parts of the model are defined only in a subspace of the whole feature space. The subspaces are not defined a priori but are subject to an optimization procedure, as are all other parameters of the model. The new model can thus cope with high feature dimensions, and it has similarities to projection pursuit and to the mixture-of-experts architecture. The model is trained in a supervised manner, via conjugate gradients and logistic regression or via backfitting and conjugate gradients, to handle classification tasks. An efficient initialization procedure is also presented. In addition, a technique based on oblique projections is presented that extends the model to data with missing features; such data can be used both in the training and in the classification phase. Based on the design of the model, certain basis functions can be pruned with an OLS (orthogonal least squares) based technique in order to reduce the model size. Results are presented on an artificial example and an application example.
The acoustic approach to speech recognition has an important advantage over the pattern recognition approach: lower complexity, because it does not require explicit structures such as hidden Markov models. In this work, we show how to characterize some phonetic classes of the Italian language in order to obtain a speaker- and vocabulary-independent speech recognition system. A phonetic database was built from 200 continuous-speech sentences from 12 speakers, 6 female and 6 male. The sentences were sampled at 8000 Hz and manually labelled with the Asystem Sound Impression software to obtain about 1600 units. We analyzed several speech parameters such as formants, LPC and reflection coefficients, energy, normal/differential zero-crossing rate, and cepstral and autocorrelation coefficients. The aim is a phonetic recognizer that eases the so-called lexical access problem, that is, decoding phonetic units into complete, meaningful word strings. The knowledge is supplied to the recognizer in terms of fuzzy systems. The software used, called the adaptive fuzzy modeler, belongs to the rule-generator family. A procedure has been implemented to integrate 'expert' knowledge into the fuzzy system in order to obtain significant improvements in recognition accuracy. So far the tests show a recognition rate of 92% for the vowel class, 89% for the fricative class, and 94% for the nasal class, using 1000 phonemes in the learning phase and 600 phonemes in the testing phase. We intend to complete the fuzzy recognizer by extending this work to the other phonetic classes.
Standard TV cameras introduce distortions into the captured images. These are due to optics and electronics and cause a systematic displacement of the image points with respect to their true positions as given by geometrical perspective. To correct this error, classical photogrammetric approaches transform the coordinates of the points through global polynomials whose coefficients are estimated from a set of reference points placed on regular grids. These approaches do not work well when, as in most cases, the distortions are highly irregular, and residual errors over the images can be relatively high. In this paper a solution based on RBF networks is proposed. It takes advantage of the quasi-local nature of the network elements to achieve a uniformly small residual distortion error over the whole TV image. The structural parameters of the network (namely the number and variance of its units) are set according to criteria inspired by linear filtering theory, and the weights are computed following a MAP criterion. Tests on simulated distortions and on real data have been carried out. The results reported here show that the RBF networks achieve a better reduction of the distortions in all the tested conditions.
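A minimal sketch of the correction idea under stated assumptions (Gaussian units and plain least squares for the weights, whereas the paper sets the number and variance of units by filtering-theory criteria and fits weights by a MAP criterion): learn the displacement field from reference grid points, then undo it everywhere:

```python
import numpy as np

def fit_rbf_correction(observed, true, centers, s):
    """observed, true: (n, 2) image coords; returns weights for (dx, dy)."""
    d2 = ((observed[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * s ** 2))
    W, *_ = np.linalg.lstsq(A, true - observed, rcond=None)
    return W

def correct(points, centers, s, W):
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return points + np.exp(-d2 / (2 * s ** 2)) @ W   # add learned displacement
```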
The application of wavelet transforms to edge detection has improved edge localization. The image produced by the local maxima of the wavelet modulus needs to be thresholded to extract the relevant edge pixels, which is currently done manually. In this paper, we apply a fuzzy thresholding approach for automatic determination of the threshold level for wavelet maxima. A membership function is used to characterize the candidate edges for a particular threshold, and the threshold which yields the best characterization, that is, the lowest uncertainty, is selected. Non-crisp thresholding is achieved by re-evaluating edge pixel membership values to identify those pixels that may have been improperly classified. This results in the closure of small gaps between edge segments and a reduction in the size and number of larger gaps. For disjoint edge segments with a separation of less than six pixels, endpoints can be linked by fuzzy reasoning based on membership values, distance, and wavelet angles. Experimental results on test images have demonstrated the effectiveness of this method.
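A hedged sketch of automatic threshold selection (the sigmoid membership and the entropy measure are common textbook choices, not necessarily the paper's): each candidate threshold induces edge memberships over the normalized wavelet modulus, and the threshold with the lowest fuzzy entropy, i.e. the crispest edge/non-edge split, wins:

```python
import numpy as np

def fuzzy_entropy(mu):
    mu = np.clip(mu, 1e-9, 1 - 1e-9)
    return -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

def best_threshold(modulus, bandwidth=0.2):
    m = (modulus - modulus.min()) / (np.ptp(modulus) + 1e-12)
    candidates = np.linspace(0.05, 0.95, 19)
    # S-shaped edge membership around each candidate threshold
    scores = [fuzzy_entropy(1 / (1 + np.exp(-(m - t) / bandwidth)))
              for t in candidates]
    return candidates[int(np.argmin(scores))]   # lowest-uncertainty threshold
```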