A simple but naive way to fit a model to given data, or to solve an inverse problem, is to match the sequence of observed data directly with the output of the model by minimizing some measure of the mismatch between them. This approach can be satisfactory when the number of unknown parameters describing the solution is very small with respect to the number of independent data. In other cases, prior knowledge of the solution is needed to find a satisfactory solution. Regularization theory then gives satisfactory solutions, but dealing with inaccuracies in the data and uncertainties in the models, and providing some measure of confidence in the solution, is easier in a probabilistic approach. However, these two approaches are intimately related. The main object of this work is to present this relation in a simple and unifying way, and to discuss the main limitations and advantages of each approach.
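The relation between the two approaches can be made concrete in a toy linear-Gaussian setting (our illustration, not the author's notation): Tikhonov regularization with weight lam = sigma2/p coincides with the MAP estimate (here also the posterior mean) under a zero-mean Gaussian prior of variance p.

```python
import numpy as np

# Toy linear inverse problem d = G m + noise (all names illustrative).
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 50))            # underdetermined forward model
d = G @ rng.normal(size=50) + 0.1 * rng.normal(size=20)

sigma2 = 0.01                            # noise variance
p = 0.1                                  # prior variance
lam = sigma2 / p                         # regularization weight

# Regularization view: minimize ||d - G m||^2 + lam * ||m||^2.
m_reg = np.linalg.solve(G.T @ G + lam * np.eye(50), G.T @ d)

# Bayesian view: Gaussian likelihood with prior N(0, p I); the posterior
# mean in its "data-space" form gives the same estimate.
m_map = p * G.T @ np.linalg.solve(p * (G @ G.T) + sigma2 * np.eye(20), d)
```

The two solves agree term by term: the push-through identity G.T (G G.T + lam I)^-1 = (G.T G + lam I)^-1 G.T converts one form into the other.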
Cryo-electron microscopy of viruses provides 2D projections of the scattering intensity of the viral particle, but the orientation of the projections is not known. We describe an approach to reconstructing the 3D scattering intensity in spite of the unknown projection orientations, using nonlinear least-squares ideas, where the reconstruction is guaranteed to have the icosahedral symmetry known to be present in the viral particle.
This paper presents a Bayesian framework for reconstructing missing regions of a color image sequence. Because the three color channels are not independent, a multichannel median image model is chosen. Since the model extends through time to previous and following frames, it incorporates motion estimation to compensate for the effects of motion in the original scene. The paper discusses methods for detecting the missing data which exploit the temporally uncorrelated nature of typical degradation. A Markov chain Monte Carlo Gibbs sampling scheme is adopted for drawing samples of the missing data, drawn from the full posterior distributions for the missing data in each of the YUV color channels. The nature of the model means that the multivariate probability distributions for the missing data are difficult to sample from; the paper shows how this can be overcome with a numerical approach to the sampling. The efficiency of this approach relies on the fact that the data can take only a small and finite number of values.
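The final point, that sampling stays cheap because the data take only finitely many values, can be sketched for a single missing pixel under a toy smoothness model (our illustration, not the paper's multichannel median model): the full conditional is a categorical distribution over the 8-bit alphabet, so it can be normalized and sampled exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
values = np.arange(256)                  # the finite 8-bit intensity alphabet

def sample_missing_pixel(neighbors, beta=0.05):
    """Draw one missing pixel from its full conditional under a toy
    smoothness prior: p(v | neighbors) is proportional to
    exp(-beta * sum_i (v - n_i)^2), normalized by enumeration."""
    diffs = values[:, None] - np.asarray(neighbors)[None, :]
    logp = -beta * (diffs ** 2).sum(axis=1)
    p = np.exp(logp - logp.max())        # stable unnormalized weights
    p /= p.sum()                         # exact normalization over 256 values
    return rng.choice(values, p=p)

v = sample_missing_pixel([120, 124, 118, 122])
```

Because the alphabet is small, each Gibbs draw is an exact categorical sample rather than an approximate continuous one.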
The structure completion problem in x-ray fiber diffraction is addressed from a Bayesian perspective. The experimental data are sums of the squares of the amplitudes of particular sets of Fourier coefficients of the electron density. In addition, a part of the electron density is known. The image reconstruction problem is to estimate the missing part of the electron density. A Bayesian approach is taken in which the prior model for the image is based on the fact that it consists of atoms, i.e., the unknown electron density consists of separated sharp peaks. The posterior for the Fourier coefficients typically takes the form of an independent and identically distributed multivariate normal density restricted to the surface of a hypersphere. However, the electron density often exhibits symmetry, in which case the Fourier coefficient components are no longer independent or identically distributed. A diagonalization process results in an independent multivariate normal probability density function, restricted to a hyperspherical surface. The analytical form for the mean of the posterior density function is derived. The mean can be expressed as a weighting function on the Fourier coefficients of the known part of the electron density. The weighting functions for the hyperellipsoidal and hyperspherical cases are compared.
In magnetic resonance imaging (MRI), three unobservable physical quantities are combined at the pixel level to produce the image. Control parameters can be pre-set to highlight contrast between different tissue types but the optimal values may be problem- and patient-specific and not known in advance. The aim in synthetic MRI is to estimate the underlying physical quantities from three images, taken at conventional settings, and to use these to synthesize images for arbitrary control parameters. Standard least squares methods are inadequate for this ill-conditioned inverse problem. The paper describes several forms of Bayesian reconstruction and suggests that these provide satisfactory alternatives.
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence. Adaptive optics (AO) offers real-time compensation of the turbulence. The correction is however only partial, and the long-exposure images must be deconvolved to restore the fine details of the object. The aim of this communication is to further study and validate on AO images a recently proposed "myopic" deconvolution scheme. This approach takes into account the noise in the image, the imprecise knowledge of the PSF, and the available a priori information on the object to be restored as well as on the PSF. The PSF is characterized by its ensemble mean and power spectral density, which can be derived from the turbulence statistics. Various object priors are tested (quadratic and L-norm regularization). The myopic deconvolution is first compared, on a simulated astronomical extended source, to classical deconvolution. It is shown to improve the object restoration, particularly in the case of poor PSF estimates due to rapidly evolving turbulence conditions. The myopic deconvolution is then applied to the experimental image of a triple star. Good astrometric and photometric precision is obtained, especially when using a multiple-star model for the object.
A problem of blind deconvolution arises when attempting to restore a short-exposure image that has been degraded by random atmospheric turbulence. We attack the problem by using two short-exposure images as data inputs. The Fourier transform of each is taken, and the two are divided. The unknown object spectrum cancels. What remains is the quotient of the two unknown transfer functions that formed the images. These are expressed, via the sampling theorem, as Fourier series in the corresponding PSFs, the unknowns of the problem. Cross-multiplying the division equation gives an equation that is linear in the unknowns. However, the problem is rank-deficient in the absence of prior knowledge. We use the prior knowledge that the object and the PSFs have finite support extensions, and also are positive. The linear problem is least-squares solved many times over, assuming different support values and enforcing positivity. The two support values that minimize the rms image-data inconsistency define the final solution. This regularizes the solution in the presence of 4-15 percent additive detection noise.
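The cancellation step can be checked on synthetic, noise-free data (illustrative arrays only): with both image spectra formed from the same object, cross-multiplying the division equation removes the object and leaves a relation linear in the two transfer functions.

```python
import numpy as np

rng = np.random.default_rng(2)
o = rng.random((32, 32))                           # unknown object
h1 = np.zeros((32, 32)); h1[:3, :3] = rng.random((3, 3))  # PSF 1, small support
h2 = np.zeros((32, 32)); h2[:3, :3] = rng.random((3, 3))  # PSF 2, small support

O, H1, H2 = (np.fft.fft2(z) for z in (o, h1, h2))
I1, I2 = H1 * O, H2 * O            # the two short-exposure image spectra

# I1 / I2 = H1 / H2 has no object term; cross-multiplying avoids dividing
# by near-zero spectral values and yields I1 * H2 = I2 * H1, linear in the
# unknown transfer functions.
lhs, rhs = I1 * H2, I2 * H1
```

In the noisy case this identity holds only approximately, which is why the paper's least-squares solution over candidate supports is needed.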
In blind image restoration the parameters of the imaging system are unknown and must be estimated along with the restored image. Assuming that the images are piecewise smooth, most of the information needed to estimate the degradation parameters is expected to be located across the discontinuities. In this paper we adopt a fully Bayesian approach which enables the joint MAP estimation of the image field and the ML estimation of the degradation parameters and the MRF hyperparameters. Owing to the presence of an explicit, binary line process, we exploit suitable approximations to greatly reduce the computational cost of the method. In particular, we employ a mixed-annealing algorithm for the estimation of the intensity and line fields, periodically interrupted for updating the degradation parameters and the hyperparameters based on the current estimate of the image field. The degradation parameters are updated by solving a least-squares problem of very small size. To update the hyperparameters we exploit MCMC techniques and saddle-point approximations to reduce the computation of expectations to low-cost time averages over binary variables only.
A numerically efficient approach to the problem of automatically segmenting images into regions of statistical stationarity is proposed in this paper. The technique is fully unsupervised, in that no prior knowledge of the number of regions, or their attributes, is required. Instead, this knowledge is inferred via a dynamic learning phase. Specifically, image features are extracted from windows forming a tessellation of the image, by fitting the realization in each window with a Gaussian Markov random field. An approach to cluster formation in feature space is described, based on a finite Gaussian mixture model. This phase of the algorithm permits a threshold parameter - and subsequently, the number of texture classes and their parameters - to be inferred. A very fast approach to fine segmentation - which uses the results of the clustering phase as inputs - is then implemented, yielding a class-label inference for a dynamically chosen sparse set of pixel sites. The scheme is iterated to convergence, yielding a label realization for all pixel sites. To further enhance the identification of textural borders, a post-processing algorithm, using ICM-based estimation, is activated in areas of high edge activity, using the results of the previous stages as estimates of the label realizations in such areas. The performance of the scheme in synthetic and real image contexts is considered.
A classical inverse problem arising in seismological and ultrasound imaging is the identification, from wave-field data, of those spatially varying parameters which determine the propagation of the acoustic wave field. This is often called the inverse scattering problem or the diffraction tomography problem. We define a physically based likelihood for 2D acoustic imaging of an inhomogeneous, isotropic acoustic medium. We then turn to the problem of decomposing this likelihood into a form that is amenable to efficient Markov chain Monte Carlo simulation. In particular, we give an approximation to the likelihood allowing those localized MCMC updates which are rejected to be computed with worst-case constant time complexity.
Since the beginning of the 1980s, starting with the work by L. Shepp and Y. Vardi, the maximum likelihood approach using the expectation-maximization (EM) algorithm has been a powerful tool for solving the estimation problem arising in emission computed tomography (ECT). Important drawbacks of this approach were the slowness of the EM algorithm and the inherent difficulty of extending it to handle a priori information. Recently, we presented a new EM-like algorithm, based on a decomposition by blocks, with one or more projections in each block, achieving a speed-up of two orders of magnitude. On the other hand, in 1995, we extended the EM algorithm, in a natural way, to allow regularization terms. In this article, we present the extension of our work to the case of regularized likelihood estimation; that is, a method that preserves the main properties of the original but is significantly faster, allowing fast Bayesian estimation in ECT. We illustrate the practical behavior of our method with PET simulations.
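For reference, the classical Shepp-Vardi EM iteration that this line of work builds on can be sketched in a few lines (a toy random system matrix, not a real scanner model; the authors' block-decomposed and regularized variants modify this basic update):

```python
import numpy as np

# MLEM for Poisson data y ~ Poisson(A @ lam): the multiplicative update
#   lam <- lam * (A.T @ (y / (A @ lam))) / (A.T @ 1)
rng = np.random.default_rng(3)
A = rng.random((40, 25))                 # toy projection (system) matrix
lam_true = 10.0 * rng.random(25)         # "true" emission intensities
y = rng.poisson(A @ lam_true)            # simulated projection counts

lam = np.ones(25)                        # positive starting point
sens = A.T @ np.ones(40)                 # sensitivity image A^T 1
for _ in range(50):
    lam = lam * (A.T @ (y / np.maximum(A @ lam, 1e-12))) / sens
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one of the properties the faster variants aim to preserve.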
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure the distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we measure them indirectly: by first measuring some related quantities x1,...,xn, and then using the known relation between the xi and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we show that in a natural statistical setting, the problem of estimating the error of an indirect measurement can be formulated as a simplified version of a tomography problem. We use the ideas of invariance to find the optimal algorithm for solving this simplified tomography problem, and thus for solving the statistical problem of error estimation for indirect measurements.
We examine sample-based Bayesian inference from impedance imaging data. We report experiments employing low-level pixel-based priors with mixed discrete and continuous conductivities. Sampling is carried out using Metropolis-Hastings Markov chain Monte Carlo, employing both large-scale Langevin updates and state-adaptive local updates. Computing likelihood ratios of conductivity distributions involves solving a second-order linear partial differential equation. However, our simulation is rendered computationally tractable by an update procedure which employs a linearization of the forward map and thereby avoids solving the PDE for those updates which are rejected.
Olga Kosheleva, Sergio D. Cabrera, Roberto A. Osegueda, Carlos M. Ferregut, Soheil Nazarian, Debra L. George, Mary J. George, Vladik Kreinovich, Keith Worden
The inverse problem is usually difficult because the signal that we want to reconstruct is weak. Since it is weak, we can usually neglect quadratic and higher-order terms, and consider the problem to be linear. Since the problem is linear, methods of solving it are also, mainly, linear. In most real-life problems, this linear description works pretty well. However, at some point, when we start looking for better accuracy, we must take non-linear terms into consideration. This may be a minor improvement for normal image processing, but these non-linear terms may lead to a major improvement and a great enhancement if we are interested in outliers such as faults in non-destructive evaluation or bumps in mammography. Non-linear terms give a great relative push to large outliers, and thus, in these non-linear terms, the effect of irregularities dominates. The presence of the non-linear terms can serve, therefore, as a good indication of the presence of irregularities.
Satellite imaging is nowadays one of the main sources of geophysical and environmental information. It is, therefore, extremely important to be able to solve the corresponding inverse problem: reconstructing the actual geophysics- or environment-related image from the observed noisy data. Traditional image reconstruction techniques have been developed for the case when we have a single observed image. This case corresponds to a single satellite photo. Existing satellites take photos at several wavelengths. To process this multi-spectral information, we can use known, reasonable multi-image modifications of the existing single-image reconstruction techniques. These modifications basically handle each image separately and try to merge the resulting information. Currently, a new generation of imaging satellites is being launched that will enable us to collect visual images at about 500 different wavelengths. This two-order-of-magnitude increase in the amount of data should lead to a similar increase in processing time, but surprisingly, it does not. An analysis and explanation of this paradoxical simplicity is given in the paper.
The problem of mixed signals occurs in many different contexts, one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes of the signals emitted by the sources and the relative locations of the detectors. Using this prior information, the algorithm finds the most probable source behavior and configuration. Thus, the inverse problem can be solved by simultaneously performing source separation and localization. It should be noted that this algorithm is not designed to account for the delay times that are often important in acoustic source separation. However, a possible application of this algorithm is in the separation of electrophysiological signals obtained using electroencephalography and magnetoencephalography.
Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative aesthetic measure introduced by G. Birkhoff as order/complexity.
In the blind inversion of digital images, the blur or point spread function (PSF) has to be estimated from the observed image. This general problem can be divided into several levels of difficulty, depending mainly on the properties of the blur. Here, the first level of difficulty is addressed and space-invariant analytic PSFs are considered. In this case, the generalized cross-validation (GCV) criterion, using an AR model of the original image, is well known to be a robust estimator. This study uses a weak smoothness constraint on the original image and applies the GCV criterion in a myopic scheme. The problem is to choose which PSF in a set, if any, has blurred the observed image. The minimum of the GCV criterion for each candidate PSF yields a vector of estimated parameters. The actual PSF and its parameters should correspond to the lowest value of the GCV criterion. If the observed image is not blurred, the GCV criterion should attain its lowest value when the candidate PSF is reduced to a single pixel. Simulation results show that this approach yields good results on various kinds of images with low signal-to-noise ratio and can discriminate between blurred and unblurred images. A near-optimal value of the regularization parameter is estimated at the same time. If the effective PSF does not belong to the set of candidates, the optimization of the regularization operator, as a means to compensate for the distance between the two PSFs, is investigated.
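The mechanics of the criterion can be sketched in a hedged 1D simplification (ours, not the paper's exact scheme): for each candidate PSF h and regularization weight lam, evaluate GCV = ||y - A y||^2 / (n - tr A)^2, where A is the influence (hat) matrix of Tikhonov deconvolution; in the circulant case everything diagonalizes in the Fourier domain.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.cumsum(rng.normal(size=n))            # smooth-ish unknown signal

def psf_fft(width):
    """Frequency response of a centered, normalized Gaussian PSF."""
    h = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / width ** 2)
    return np.fft.fft(np.roll(h / h.sum(), -n // 2))

y = np.real(np.fft.ifft(psf_fft(4.0) * np.fft.fft(x)))
y += 0.05 * rng.normal(size=n)               # blurred, noisy observation

def gcv(H, lam):
    Y = np.fft.fft(y)
    a = np.abs(H) ** 2 / (np.abs(H) ** 2 + lam)   # hat-matrix eigenvalues
    resid = np.sum(np.abs((1 - a) * Y) ** 2) / n  # ||y - A y||^2 by Parseval
    return resid / (n - a.sum()) ** 2

# Minimize over lam for each candidate width; the candidate attaining the
# lowest score is the one the criterion selects.
scores = {w: min(gcv(psf_fft(w), lam) for lam in 10.0 ** np.arange(-6, 1.0))
          for w in (1.0, 4.0, 16.0)}
```

Minimizing jointly over the candidate PSF and lam mirrors the paper's myopic scheme: the regularization parameter is estimated at the same time as the blur.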
In this article, we address the problem of Bayesian deconvolution of point sources in nuclear imaging under the assumption of Poissonian statistics. The observed image is the result of the convolution by a known point spread function of an unknown number of point sources with unknown parameters. To detect the number of sources and estimate their parameters we follow a Bayesian approach. However, instead of using a classical low-level prior model based on Markov random fields, we propose a high-level model which describes the picture as a list of its constituent objects, rather than as a list of pixels on which the data are recorded. More precisely, each source is assumed to have a circular Gaussian shape, and we set a prior distribution on the number of sources, on their locations, and on the amplitude and width deviation of the Gaussian shape. This high-level model has far fewer parameters than a Markov random field model, as only a small number of sources are usually present. The Bayesian model being defined, all inference is based on the resulting posterior distribution. This distribution does not admit any closed-form analytical expression; we present here a reversible-jump MCMC algorithm for its estimation. This algorithm is tested on both synthetic and real data.
The inverse problem of object detection in a turbid medium such as breast tissue has recently been studied extensively using advanced optical techniques and high-power lasers. We propose an alternative approach using photon-migration statistics and a Bayesian decision procedure to analyze the transmitted speckle profile provided by low-power lasers. The non-uniform turbid medium was imitated by an oil colloidal system embedded with millimeter-size tubes simulating the connective ducts observed in x-ray breast mammograms. The tube depth positions in the colloidal system were intentionally left as an unknown quantity with a prior probability. The object is a ceramic particle mimicking a calcified object in the tissue that is undetectable by x-ray mammograms. The Bayesian decision procedure, using object size as the action, was applied to a region of interest. The expected loss for each action was taken as the departure of the observed data from the predicted results obtained from the migration statistics with respect to the tubes' prior probabilities. It appears that the proposed Bayesian decision procedure is viable for detecting small objects embedded in non-uniform turbid media and would be a supplement to current x-ray mammography.
We present an unsupervised segmentation algorithm comprising a simulated annealing process on a single Markov chain to directly calculate the MAP segmentation over a variable number of regions. The algorithm is applied to both isotropic and Gaussian hierarchical Markov random field (MRF) models, which may be combined with low-level line processes. The annealing algorithm utilizes a sampling framework that unifies the processes of model selection, parameter estimation, and image segmentation in a single Markov chain. To achieve this, reversible jumps are incorporated to allow movement between different model spaces. A new method for generating jump proposals is given, which is more efficient than existing methodologies and is applicable to other, less specific model-selection problems. It is based on the use of partial decoupling, rather than the more traditional Gibbs sampler, to update the labels of the MRF. Partial decoupling is a derivative of the better-known Swendsen-Wang algorithm, in which an auxiliary-variable bond map is used to define regions of the image whose labels are then updated independently. We further propose a novel mechanism by which deterministic methods, such as Gabor filtering, may be incorporated into this algorithm to speed up the convergence of the MCMC sampling process and hence that of simulated annealing.
Image sequence restoration has been steadily gaining in importance with the arrival of digital video broadcasting. Automated treatment of archived video material typically involves dealing with replacement noise in the form of 'blotches' with varying intensity levels and additive 'grain' noise. In the case of replacement noise the problem is essentially one of missing data, which must be detected and then reconstructed based upon surrounding spatio-temporal information, while the additive noise can be treated as a noise reduction problem. This paper introduces a fully Bayesian specification of the problem; Markov chain Monte Carlo methodology is applied to the joint detection and removal of both the replacement and additive noise components. The work presented builds upon previously developed Bayesian image detection/interpolation methods, now including the ability to reduce noise in an image sequence as well as to reconstruct the image intensity information within missing regions.
This paper presents an automated approach to a low-level vision edge detector. The approach we have taken is to formulate the problem in terms of Bayesian inference, which provides meaningful performance functionals. The focus of this work is on the use of Markov random fields for specifying the a priori probability for an object or a scene. Local models for regions and edges in the image are generated, and by using a local MAP estimation approach, we find the edge configuration and the region intensity for each site in the image. The local results for regions and edges are combined using a Markov random field. The clique coefficients of the Markov random field that describes our model are estimated using the 'coding method' presented by Besag; a practical method to estimate the Gibbs distribution parameters is the histogram method presented by Derin and Elliott. Our approach is unsupervised, and the solution to the problems of interest is presented along with experimental results, including a comparison with the results of the Canny edge detector.
We present a new class of models, derived from classical Markov random fields, that may be used for the solution of ill-posed problems in image processing and computational vision. They lead to reconstruction algorithms that are flexible, computationally efficient, and biologically plausible. To illustrate their use, we present their application to the reconstruction of the dominant orientation field and to the adaptive quantization and filtering of images in a variety of situations.
In this paper we treat the problem of robust entropy estimation given a multidimensional random sample from an unknown distribution. In particular, we consider estimation of the Rényi entropy of fractional order, which is insensitive to outliers (e.g., high-variance contaminating distributions), using the k-point minimal spanning tree. A greedy algorithm for approximating the NP-hard problem of computing the k-minimal spanning tree is given, which is a generalization of the potential function partitioning method of Ravi et al. The basis for our approach is an asymptotic theorem establishing that the log of the overall length, or weight, of the greedy approximation is a strongly consistent estimator of the Rényi entropy. Quantitative robustness of the estimator to outliers is established using Hampel's method of influence functions. The structure of the influence function indicates that the k-MST is a natural extension of the one-dimensional α-trimmed mean to multidimensional data.
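The core length-based estimator can be sketched as follows. This uses the full MST (k = n) rather than the robust k-point variant, and it omits the density-independent additive constant β that calibrates the estimate, so it is a minimal illustration of the length-to-entropy mapping, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points, gamma=1.0):
    """Total gamma-weighted edge length of the Euclidean MST of the sample."""
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists)
    return float((mst.data ** gamma).sum())

def renyi_entropy_estimate(points, gamma=1.0):
    """Rényi entropy of order alpha = (d - gamma)/d, up to a
    density-independent additive constant: (d/gamma) * log(L / n^alpha)."""
    n, d = points.shape
    alpha = (d - gamma) / d
    return (d / gamma) * np.log(mst_length(points, gamma) / n ** alpha)
```

Trimming the tree to its k lightest-weight subset of points is what confers robustness: outliers contribute long edges and are excluded from the minimal k-point tree.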
Radiological experiments, designed to study absorbed dose in irradiated microscopic biological tissues, play a central role in microdosimetry. They yield data that cannot directly reveal the distribution of charge per event but, indirectly through appropriate models, can lead to estimates of the desired quantities. In particular, the measurements can be considered as independent random variables whose distribution is a mixture of Gamma densities with unknown but related parameters. The main data processing task is to estimate the weights of the components from the experimentally obtained measurements; these weights are subsequently used to quantify the physically meaningful distribution of ion pairs per particle crossing the irradiated tissue volume. In the paper, the processing of the mixtures is addressed, and a procedure for estimating all the unknown model parameters is proposed. A Bayesian approach to the problem is adopted, based on the reversible jump Markov chain Monte Carlo sampling scheme. Samples of the unknown parameters are obtained from their posterior distributions either by Gibbs sampling or by implementing the Metropolis-Hastings scheme. After convergence, the samples so obtained are used to find estimates of all the unknowns.
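One representative conditional step inside such a Gibbs sweep is the update of the mixture weights. Assuming a Dirichlet prior on the weights (an assumption for illustration; the abstract does not state the prior), the full conditional given the component allocations is again Dirichlet:

```python
import numpy as np

def update_weights(allocations, n_components, prior=1.0, rng=None):
    """Gibbs update of mixture weights.

    With a symmetric Dirichlet(prior, ..., prior) prior, the full
    conditional given allocations z_1..z_n with component counts
    n_1..n_K is Dirichlet(prior + n_1, ..., prior + n_K).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(allocations, minlength=n_components)
    return rng.dirichlet(prior + counts)
```

In the reversible jump setting, sweeps of such fixed-dimension updates alternate with jump moves that add or delete mixture components.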
We study the problem of simulating a class of Gibbs random field models, called morphologically constrained Gibbs random fields, using Markov chain Monte Carlo sampling techniques. Traditional single-site updating Markov chain Monte Carlo sampling algorithms, like the Metropolis algorithm, tend to converge extremely slowly when used to simulate these models, particularly at low temperatures and for constraints involving large geometrical shapes. Moreover, the morphologically constrained Gibbs random fields are not, in general, Markov. Hence, a Markov chain Monte Carlo sampling algorithm based on the Gibbs sampler is not possible. We propose a variant of the Metropolis algorithm that, at each iteration, allows multi-site updating and converges substantially faster than the traditional single-site updating algorithm. The set of sites that are updated at a particular iteration is specified in terms of a shape parameter and a size parameter. Computation of the acceptance probability involves a 'test ratio,' which requires computation of the ratio of the probabilities of the current and new realizations. Because of the special structure of our energy function, this computation can be done by means of a simple, local iterative procedure. Therefore, the lack of Markovianity does not impose any additional computational burden for model simulation. The proposed algorithm has been used to simulate a number of image texture models, both synthetic and natural.
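The 'test ratio' acceptance rule mentioned above is the standard Metropolis accept/reject step, sketched here generically (the function names and the scalar state are illustrative; the paper applies this with multi-site proposals over shaped site sets):

```python
import numpy as np

def metropolis_step(x, energy, proposal, temperature, rng):
    """One Metropolis step: accept x' with probability
    min(1, exp(-(E(x') - E(x)) / T)).  The exponential factor is the
    'test ratio' -- the ratio of Gibbs probabilities of new and current
    realizations."""
    x_new = proposal(x, rng)
    d_e = energy(x_new) - energy(x)
    if d_e <= 0 or rng.random() < np.exp(-d_e / temperature):
        return x_new
    return x
```

For the morphologically constrained models, the point is that this ratio can be evaluated by a local iterative procedure even though the field is not Markov, so the cost per step does not blow up.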
In this paper we present simple conditions related to geometric ergodicity of Markov chains which ensure the convergence in a given sense of the simulated annealing algorithm. We prove that convergence of the algorithm occurs for a proper sequence of temperatures when a local minorization condition of the transition kernels and a drift condition are satisfied. This result may be useful in a Bayesian framework, where it is possible to take advantage of the statistical structure of the problem in order to perform efficient optimization. This is illustrated on several examples.
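For readers unfamiliar with the algorithm whose convergence is being analyzed, a generic simulated annealing loop looks as follows. The geometric cooling schedule in the test is only one illustrative choice; the paper's results concern which temperature sequences guarantee convergence.

```python
import numpy as np

def simulated_annealing(energy, x0, proposal, schedule, rng):
    """Simulated annealing: Metropolis steps under a decreasing
    temperature sequence `schedule`; returns the best state visited."""
    x, best = x0, x0
    for t in schedule:
        x_new = proposal(x, rng)
        d_e = energy(x_new) - energy(x)
        if d_e <= 0 or rng.random() < np.exp(-d_e / t):
            x = x_new
        if energy(x) < energy(best):
            best = x
    return best
```

Minorization and drift conditions on the transition kernels are what let one prove that, for a suitable schedule, the chain concentrates on global minimizers of the energy.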
The problem of deconvolution is a classical problem in gamma imaging. Using the conventional statistical model introduced by Shepp and Vardi (1982), many authors have adopted a Bayesian approach to address this problem and to estimate the number of gamma rays λ = (λs), s ∈ T, emitted at each pixel s. As a prior on λ to regularize the picture, one uses a Gauss-Markov random field. However, this approach has one major drawback: the choice of the regularization parameter β. If β is too high, the discontinuities are oversmoothed, and if β is too low, the picture is not regularized enough. In this paper, we introduce a new hierarchical prior model for λ, in which β is not constant over the picture. This hierarchical prior model uses a Markov random field to describe the spatial variation of the logarithm of the smoothing parameter, log β = (log βs), s ∈ T, and a second random field which describes the spatial variation in λ. The coupled Markov random fields are used as prior distributions. Similar ideas have occurred in Aykroyd (1996), but our prior model is quite different. Our new hierarchical prior model is applied to the problem of deconvolution of radioactive sources in gamma imaging. The estimation of λ and β is based on their joint posterior density, following a Bayesian framework. This estimation is performed using a new SAGE EM algorithm (Hero and Fessler, 1995), where the parameters λ and β are updated sequentially. Our new prior model is tested on synthetic and real data and compared to the conventional Gauss-Markov random field prior model: our algorithm significantly improves on the results obtained with the classical prior model.
Keywords: adaptive smoothing, compound Gauss-Markov random fields, doubly stochastic random fields, SAGE EM algorithms.
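The unregularized starting point of this line of work is the Shepp-Vardi ML-EM iteration for the Poisson observation model y ~ Poisson(Hλ), sketched below. This shows only the classical fixed-point update; the paper's contribution is the hierarchical spatially varying prior and the SAGE updates layered on top of it.

```python
import numpy as np

def em_poisson_deconv(y, H, n_iter=50):
    """Shepp-Vardi ML-EM iteration for y ~ Poisson(H @ lam):

        lam <- lam * (H.T @ (y / (H @ lam))) / (H.T @ 1)

    Multiplicative updates preserve nonnegativity of the intensities.
    """
    lam = np.full(H.shape[1], y.mean() + 1e-9)
    sensitivity = H.T @ np.ones_like(y)          # H.T @ 1
    for _ in range(n_iter):
        lam = lam * (H.T @ (y / (H @ lam + 1e-12))) / (sensitivity + 1e-12)
    return lam
```

Without a prior, this iteration becomes noisy at high iteration counts, which is exactly why a Gauss-Markov (or, here, hierarchically adaptive) prior on λ is introduced.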
This paper deals with the problem of generating samples from a commonly used form of Laplacian distribution. The algorithm was developed particularly for use in generating samples from priors which define models for images. It is shown that by ranking the independent variables in the distribution, an analytic expression for the cumulative distribution function can be derived. This can be used to generate random samples by transforming a uniformly distributed random variable. Issues of scaling are addressed which make the numerical application of these functions possible on finite-precision machines. Some discussion is given of the convergence of the Gibbs sampler using this sampling method compared with using direct methods or the Metropolis algorithm.
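For the simplest one-dimensional case, inverse-CDF sampling of a Laplace(μ, b) variable has a closed form; the multivariate ranked construction in the paper generalizes this idea. A minimal sketch:

```python
import numpy as np

def sample_laplace(mu, b, size, rng):
    """Draw Laplace(mu, b) samples by inverting the CDF:

        F^{-1}(u) = mu - b * sign(u - 1/2) * ln(1 - 2|u - 1/2|)

    log1p is used for numerical accuracy near u = 1/2."""
    u = rng.random(size) - 0.5
    return mu - b * np.sign(u) * np.log1p(-2.0 * np.abs(u))
```

Direct inverse-CDF draws like this give independent samples per conditional, which is why they can improve Gibbs sampler convergence over Metropolis-within-Gibbs updates.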
This communication presents new results on the convergence of stochastic gradient algorithms for maximum likelihood estimation of Markov random fields. We first present theoretical results dealing with the convergence of a generalized Robbins-Monro procedure. These results provide rigorous justifications for simple numerical strategies which can be employed in practice; they are illustrated by numerical experiments.
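The classical Robbins-Monro procedure that the paper generalizes can be sketched in a few lines: follow noisy, unbiased gradient estimates with step sizes a/n that are square-summable but not summable.

```python
import numpy as np

def robbins_monro(noisy_grad, theta0, n_steps, a=1.0, rng=None):
    """Robbins-Monro stochastic approximation:

        theta_{n+1} = theta_n - (a / n) * g_n,

    where g_n is a noisy, unbiased estimate of the gradient at theta_n.
    The steps a/n satisfy sum a_n = inf, sum a_n^2 < inf."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    for n in range(1, n_steps + 1):
        theta = theta - (a / n) * noisy_grad(theta, rng)
    return theta
```

In MRF maximum likelihood, the noisy gradient is typically the difference between observed and MCMC-sampled sufficient statistics, which is where the generalized analysis is needed.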
In this paper, we introduce the L-hypersurface method for use in linear inverse problems. The new method is intended to select multiple regularization parameters simultaneously. It is a multidimensional extension of the classical L-curve method and hence does not require any specific knowledge about the noise level or the signal semi-norm. We give examples of the L-hypersurface method applied to linear inverse problems posed in the wavelet domain and evaluate the performance of the new method in a signal restoration experiment.
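The one-parameter object being generalized is the classical L-curve: the trace of (residual norm, solution norm) as a single Tikhonov parameter varies, whose corner balances data fit against regularity. A minimal sketch (the hypersurface version traces the same quantities over a grid of several parameters):

```python
import numpy as np

def l_curve(A, y, lambdas):
    """(residual norm, solution norm) pairs along a grid of Tikhonov
    parameters, for x(lam) = argmin ||Ax - y||^2 + lam^2 ||x||^2."""
    points = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(A.shape[1]), A.T @ y)
        points.append((np.linalg.norm(A @ x - y), np.linalg.norm(x)))
    return points
```

As λ grows, the residual norm is nondecreasing and the solution norm nonincreasing; the corner of the resulting curve is the classical parameter choice that the L-hypersurface method extends to several parameters at once.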
In this paper we examine the problem of estimating the hyperparameters in image restoration when the point-spread function (PSF) of the degradation system is partially known. For this problem the PSF is assumed to be the sum of a known deterministic component and an unknown random component. Two iterative algorithms are proposed that simultaneously restore the image and estimate the hyperparameters of the restoration filter using hyperpriors. These algorithms are based on evidence analysis within the hierarchical Bayesian framework. This work was motivated by the observation that it is not possible to simultaneously estimate all the necessary hyperparameters for this problem without any prior knowledge about them. More specifically, we observed in our previous work that we cannot accurately estimate all the hyperparameters at the same time; introducing hyperpriors facilitates this estimation problem. The proposed iterative algorithms can be derived in the discrete Fourier transform domain; therefore, they are computationally efficient even for large images. Numerical experiments are presented where the benefits of introducing hyperpriors are demonstrated.
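The computational efficiency of DFT-domain restoration comes from the filter acting pointwise on Fourier coefficients. As an illustration only, here is the familiar Wiener-type restoration filter with a single scalar hyperparameter standing in for the quantities the paper estimates (the function and parameter names are assumptions, not the paper's algorithm):

```python
import numpy as np

def wiener_restore(y, psf, noise_to_signal):
    """Restoration in the DFT domain:

        X = conj(H) * Y / (|H|^2 + nsr),

    applied coefficient-by-coefficient, so the cost is dominated by FFTs."""
    H = np.fft.fft2(psf, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(X))
```

In the paper's setting, both the regularization hyperparameters and the statistics of the random PSF component enter this frequency-domain expression, which is why hyperparameter estimation can also be carried out in the DFT domain.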
Hyperparameter estimation for incomplete data in Markov random field image restoration is investigated. Assuming linear dependence of the energies with respect to the hyperparameters, we use a classical cumulant expansion technique for maximum likelihood estimation of the hyperparameters of the prior (pixel regularization) probability density function. The particular case where the prior potential is a homogeneous function of the pixels is fully analyzed. This approach is then extended to an explicit joint boundary-pixel process aimed at preserving discontinuities. A generalized stochastic gradient (GSG) algorithm with a fast sampling technique is devised, aiming to achieve simultaneous hyperparameter estimation and pixel and boundary restoration. The image restoration performances of the posterior mean computed during GSG convergence and of simulated annealing performed after GSG convergence are compared experimentally. Results and perspectives are given.
Non-Gaussian Markov image models are effective in the preservation of edge detail in Bayesian formulations of restoration and reconstruction problems. Included in these models are coefficients quantifying the statistical links among pixels in local cliques, which are typically assumed to have an inverse dependence on the distance between the corresponding neighboring pixels. Estimation of these coefficients is a nontrivial task for non-Gaussian models. We present rules for coefficient estimation for edge-preserving models, which are particularly effective for edge preservation and noise suppression, using a predictive technique analogous to estimation of the weights of optimal weighted median filters.
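The weighted median that the predictive technique is analogous to is a standard operator: the value at which the cumulative sorted weight first reaches half of the total. A minimal sketch:

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: sort the values, accumulate the corresponding
    weights, and return the value where the cumulative weight first
    reaches half the total weight."""
    order = np.argsort(values)
    v = np.asarray(values)[order]
    w = np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]
```

With all weights equal this reduces to the ordinary median; increasing a pixel's weight pulls the output toward its value, which is the sense in which clique coefficients and median weights play analogous roles.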