With the gradual increase in the spatial and spectral resolution of hyperspectral images, image data are growing ever larger and processing algorithms ever more complex, which poses a major challenge for efficient processing of massive hyperspectral images. Cloud computing distributes computing tasks across a large number of computing resources, handling large data sets without the memory and compute limitations of a single machine. This paper proposes, for the first time in the literature, a parallel pixel purity index (PPI) algorithm for unmixing massive hyperspectral images based on the MapReduce programming model. Taking the characteristics of hyperspectral images into account, we describe the design principle of the algorithm, illustrate the main cloud-based unmixing steps of PPI, and analyze the time complexity of the serial and parallel algorithms. Experimental results demonstrate that the parallel implementation of the PPI algorithm on the cloud can effectively process big hyperspectral data and accelerate the algorithm.
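As a hedged illustration of the map/reduce split described above, the following single-process NumPy sketch mimics the two phases of a MapReduce PPI run: each mapper projects its pixel block onto random skewers and emits block-local extrema, and the reducer merges them into global purity counts. The function names and this in-memory simulation are illustrative assumptions, not the paper's Hadoop implementation.

```python
import numpy as np

def ppi_map(block, offset, skewers):
    """Map phase: project one pixel block onto every skewer and emit
    the global index and value of the block-local min and max."""
    proj = block @ skewers.T                       # (n_pix, n_skewers)
    lo, hi = proj.argmin(axis=0), proj.argmax(axis=0)
    cols = np.arange(skewers.shape[0])
    return (offset + lo, proj[lo, cols]), (offset + hi, proj[hi, cols])

def ppi_reduce(partials, n_pixels, n_skewers):
    """Reduce phase: merge block-local extrema into global extrema per
    skewer, then count how often each pixel is extreme (its purity)."""
    best_lo = np.full(n_skewers, np.inf)
    best_hi = np.full(n_skewers, -np.inf)
    idx_lo = np.zeros(n_skewers, int)
    idx_hi = np.zeros(n_skewers, int)
    for (ilo, vlo), (ihi, vhi) in partials:
        m = vlo < best_lo
        best_lo[m], idx_lo[m] = vlo[m], ilo[m]
        m = vhi > best_hi
        best_hi[m], idx_hi[m] = vhi[m], ihi[m]
    counts = np.zeros(n_pixels, int)
    np.add.at(counts, idx_lo, 1)
    np.add.at(counts, idx_hi, 1)
    return counts  # pixels with high counts are candidate endmembers

# Usage: split the pixel matrix into blocks, map each, reduce once.
pixels = np.random.rand(1000, 50)       # 1000 pixels, 50 bands
skewers = np.random.randn(200, 50)      # 200 random skewers
parts = [ppi_map(pixels[i:i+250], i, skewers) for i in range(0, 1000, 250)]
purity = ppi_reduce(parts, 1000, 200)
```

In a real deployment the partials would be emitted as key-value pairs keyed by skewer index rather than collected in a Python list.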
Hyperspectral image (HSI) analysis is attracting growing interest in real-world applications, many of which can ultimately be cast as classification tasks. Traditional spectral-spatial HSI classification methods assume that identical spatial information is available everywhere, but this is not always the case, especially near class boundaries. We propose a method for HSI classification based on spectral information and an adaptive spatial context. First, we introduce a high-dimensional steering kernel to describe the adaptive spatial context and to select the spatially correlated pixels of a given test pixel according to that context. The selected pixels can then be simultaneously sparsely represented as linear combinations of a few common training samples. A classifier that imposes the adaptive spatial context then determines the final label of the test pixel. Experimental results on real HSIs show that our algorithm outperforms other state-of-the-art algorithms.
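The joint sparse-representation step can be sketched as follows. The steering-kernel neighbor selection is omitted, and a simultaneous orthogonal matching pursuit (SOMP) solver stands in for whichever sparse solver the paper actually uses, so all names below are assumptions:

```python
import numpy as np

def somp(D, X, k):
    """Simultaneous OMP: all neighborhood pixels (columns of X) share
    one common support of at most k dictionary atoms (columns of D)."""
    support, R = [], X.copy()
    A = np.zeros((0, X.shape[1]))
    for _ in range(k):
        corr = np.linalg.norm(D.T @ R, axis=1)  # joint correlation per atom
        corr[support] = 0.0                     # never pick an atom twice
        support.append(int(corr.argmax()))
        A, *_ = np.linalg.lstsq(D[:, support], X, rcond=None)
        R = X - D[:, support] @ A
    return np.array(support), A

def classify(D, labels, X, k=5):
    """Assign the label of the class whose atoms best explain the
    adaptive neighborhood X (smallest joint residual)."""
    sup, A = somp(D, X, k)
    errs = {}
    for c in np.unique(labels[sup]):
        m = labels[sup] == c
        errs[c] = np.linalg.norm(X - D[:, sup[m]] @ A[m])
    return min(errs, key=errs.get)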
For tomographic reconstruction, iterative methods based on sparse regularization have recently emerged and proven effective, especially when projection data are insufficient or noisy at low radiation doses. However, iterative reconstruction algorithms are computationally demanding, especially for clinical data sets, and remain far from real-time, so there is a strong incentive to develop fast algorithms for the underlying optimization problem. We present new accelerated iterative shrinkage algorithms for sparsity-based tomographic reconstruction, which build on existing shrinkage algorithms and combine them with traditional algebraic methods, a line search method, and preconditioning techniques for solving large dense linear systems. We give two different weighting matrices as preconditioners and obtain different convergence speeds. The experimental results show that using sparsity in the transform domain as the regularization term greatly improves the visual quality of the reconstructed images compared with the corresponding algebraic algorithms, and that the line search method noticeably accelerates the convergence rate.
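As a minimal sketch of preconditioned iterative shrinkage (not the paper's specific weighting matrices or line search), the following solves the l1-regularized least-squares form of the reconstruction problem, using a diagonal Gershgorin majorizer as a simple preconditioner:

```python
import numpy as np

def soft(v, t):
    """Soft thresholding: the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def precond_ista(A, b, lam, n_iter=300):
    """Iterative shrinkage for 0.5*||Ax - b||^2 + lam*||x||_1, where A
    is the system (projection) matrix. The diagonal majorizer M, with
    diag(M) >= A^T A by Gershgorin's theorem, acts as a preconditioner:
    coordinates with larger M entries take smaller, safer steps."""
    B = A.T @ A
    M = np.abs(B).sum(axis=1)          # absolute row sums of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)          # gradient of the data term
        x = soft(x - g / M, lam / M)   # coordinate-scaled shrinkage
    return x
```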
KEYWORDS: Radio over Fiber, Image denoising, Image processing, Visualization, Signal to noise ratio, Signal processing, Image enhancement, Image filtering, Electronic imaging, Computer science
The iterative regularization method proposed by Osher et al. for total variation based image denoising preserves textures well and has received considerable attention in the signal and image processing community in recent years. However, the iteration sequence generated by this method converges monotonically to the noisy image, so the iteration must be terminated at an "optimal" stopping index, which is difficult to choose in practice. To overcome this shortcoming, we propose a novel fractional-order iterative regularization model by introducing the fractional-order derivative. The new model can be viewed as an interpolation between the traditional total variation model and the traditional iterative regularization model. Numerical results demonstrate that, with a suitable order of derivative, the denoised image sequence generated by this model converges to a denoised image with high peak signal-to-noise ratio and high structural similarity index after a few iteration steps, so the iteration can be terminated with the most commonly used termination conditions. Moreover, we propose an empirical method to choose the order of derivative adaptively for partly textured images, improving both noise removal and texture preservation. The adaptive method has low computational cost and efficiently improves the visual quality of the results.
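For orientation, here is a minimal sketch of the classical (integer-order) iterative regularization loop that the paper generalizes, together with the discrepancy-principle stop it seeks to make unnecessary. It assumes scikit-image's denoise_tv_chambolle as the inner TV solver and a known noise level sigma:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def iterative_regularization(f, weight, sigma, max_iter=20):
    """Osher-type iterative regularization: TV-denoise the noisy image
    plus the accumulated residual v, adding the removed "noise" back at
    each step. Without a stopping rule the iterates drift back toward
    the noisy input f, hence the discrepancy-principle test on sigma."""
    v = np.zeros_like(f)
    u = f.copy()
    for _ in range(max_iter):
        u = denoise_tv_chambolle(f + v, weight=weight)
        v += f - u                                   # residual feedback
        if np.linalg.norm(u - f) <= sigma * np.sqrt(f.size):
            break                                    # noise level reached
    return u
```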
Image deconvolution is an ill-posed, low-level vision task that restores a clear image from a blurred and noisy observation. From a statistical perspective, previous work on image deconvolution has been formulated as maximum a posteriori or general Bayesian inference, with Gaussian or heavy-tailed non-Gaussian prior image models (e.g., a Student's t distribution). We propose a Parseval frame-based nonconvex image deconvolution strategy that penalizes the l0-norm of the coefficients of multiple different Parseval frames. These frames provide flexible filtering operators that adaptively capture the point singularities, curvilinear edges, and oscillating textures in natural images. The proposed optimization problem is solved by borrowing the idea of the recent penalty decomposition method, resulting in a simple and efficient iterative algorithm. Experimental results show that the proposed deconvolution scheme is highly competitive with state-of-the-art methods, in terms of both signal-to-noise ratio improvement and visual perception.
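A compact sketch of the penalty-decomposition idea follows, with two simplifications that are mine, not the paper's: a single orthonormal DCT stands in for the multiple Parseval frames, and the penalty weight rho is held fixed rather than increased across iterations:

```python
import numpy as np
from scipy.fft import fft2, ifft2, dctn, idctn

def hard(v, t):
    """Hard thresholding: the proximal map of the l0 penalty."""
    return np.where(np.abs(v) >= t, v, 0.0)

def l0_frame_deconv(f, h, lam, rho=1.0, n_iter=50):
    """Penalty decomposition for min 0.5*||h*u - f||^2 + lam*||Wu||_0
    with a periodic blur h. The z-step hard-thresholds the frame
    coefficients; the u-step is an exact quadratic solve in the Fourier
    domain (W is orthonormal here, so W^T W = I). A full penalty
    decomposition method would gradually increase rho."""
    H = fft2(h, s=f.shape)                        # circular blur operator
    u = f.copy()
    for _ in range(n_iter):
        z = hard(dctn(u, norm='ortho'), np.sqrt(2.0 * lam / rho))
        rhs = np.conj(H) * fft2(f) + rho * fft2(idctn(z, norm='ortho'))
        u = ifft2(rhs / (np.abs(H) ** 2 + rho)).real
    return u
```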
In this paper, we propose an improved method for simultaneous bias field estimation and tissue segmentation in magnetic resonance images, which is an extension of a previously published method. First, the bias field is modeled as a linear combination of a set of basis functions, and is thereby parameterized by the coefficients of those basis functions. We then model the intensity distribution of each tissue as a Gaussian, and use the maximum a posteriori probability and total variation (TV) regularization to define our objective energy function. Finally, an efficient iterative algorithm based on the split Bregman method is used to minimize the energy function quickly. Comparisons with other approaches demonstrate the superior performance of this algorithm.
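As an illustration of one inner step of such an alternating scheme, the bias-field coefficients can be updated by least squares once the tissue memberships and class means are held fixed. The multiplicative bias model and all names below are assumptions, and the TV/split Bregman segmentation step is omitted:

```python
import numpy as np

def update_bias_field(img, memb, means, basis):
    """One inner step of an alternating scheme: with soft tissue
    memberships and class means fixed, fit the bias-field coefficients
    by least squares under a multiplicative bias model,
        img ~= (basis @ w) * (memb @ means).
    img:   (n_pix,) intensities        memb:  (n_pix, K) memberships
    means: (K,) class means            basis: (n_pix, M) basis functions
    """
    pred = memb @ means                 # bias-free intensity estimate
    A = basis * pred[:, None]           # scale each basis column by pred
    w, *_ = np.linalg.lstsq(A, img, rcond=None)
    return basis @ w                    # reconstructed smooth bias field
```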
Chlorophyll-a (Chl-a) retrieval in case II waters is currently an area of intense research. Owing to the high turbidity of case II waters, much of the retrieved Chl-a information is actually the signal of suspended sediment concentrations. To improve accuracy, we not only study new retrieval algorithms but also acquire additional in-situ data sets. This paper therefore analyzes in-situ data collected in the Changjiang River Estuary and the adjacent sea from April 5 to May 5, 2007. The results show that the Changjiang diluted water (CDW) extended offshore with a bimodal structure during the observation period, one branch extending toward the southeast and the other toward the northeast, with the main axis of the CDW extending toward the northeast. Two centers of higher Chl-a concentration exist near the Changjiang River mouth, at (122.45°E, 31.75°N) and (123.2°E, 30.5°N), with maximum concentrations reaching 6.5 μg/L and 6.3 μg/L, respectively. Chl-a concentration increased significantly under continual strong winds. The horizontal distribution of the Chl-a maximum is closely related to the position of the CDW and the current structure.
Sea surface temperature (SST) is both an important variable for weather and ocean forecasting and a key indicator of climate change. Predicting future SST at different time scales is an important scientific problem. The traditional approach to prediction is numerical simulation, but it is difficult to obtain detailed knowledge of ocean initial conditions and forcing. This paper proposes an improved prediction system based on the SOFT system proposed by Alvarez et al. and studies the predictability of SST at different time scales, i.e., 5 days, 10 days, 15 days, 20 days, and one month ahead. The method is used to forecast SST in the Yangtze River estuary and its adjacent areas. The period from January 1, 2000 to December 31, 2005 is used to build the prediction system, and the period from January 1, 2006 to December 31, 2007 is used to validate its performance. The results indicate that the prediction errors at 5 days, 10 days, 15 days, 20 days, and one month ahead are 0.78°C, 0.86°C, 0.90°C, 1.00°C, and 1.45°C, respectively: the longer the prediction time scale, the worse the prediction capability. Compared with the SOFT system proposed by Alvarez et al., the improved prediction system is more robust. By merging more satellite data and better reflecting the real state of ocean variables, the predictive precision at long time scales can be greatly improved.
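For readers unfamiliar with SOFT-style systems, the following hedged sketch shows the general EOF-decompose/forecast/recompose pattern on a (space x time) SST matrix. A least-squares AR(1) model is used as a simple stand-in for the learned amplitude forecasters, so this is an illustration of the pattern, not the improved system described above:

```python
import numpy as np

def eof_forecast(sst, n_modes=5, lead=1):
    """Illustrative EOF-based SST forecast: decompose the (space x time)
    anomaly matrix, advance each temporal amplitude with an AR(1) fit,
    and recompose the field at t + lead."""
    clim = sst.mean(axis=1, keepdims=True)          # temporal mean field
    U, s, Vt = np.linalg.svd(sst - clim, full_matrices=False)
    amp = s[:n_modes, None] * Vt[:n_modes]          # temporal amplitudes
    future = np.empty(n_modes)
    for k in range(n_modes):
        a = amp[k]
        phi = (a[:-1] @ a[1:]) / (a[:-1] @ a[:-1])  # AR(1) coefficient
        x = a[-1]
        for _ in range(lead):
            x = phi * x                             # step forward in time
        future[k] = x
    return clim[:, 0] + U[:, :n_modes] @ future     # forecast field
```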
A complete data set is crucial for many applications of satellite images. This paper therefore reconstructs missing data by combining Empirical Orthogonal Function (EOF) decomposition with Kriging methods. The EOF-based method is an effective way of reconstructing missing data over large gaps and can preserve the macro-scale and meso-scale information of oceanographic variables. In sparse-data areas (areas with little or no data at any time), however, the EOF-based method breaks down, whereas Kriging interpolation remains effective. The main steps of the EOF-Kriging (EOF-K) method are as follows: first, the data sets are decomposed by EOF analysis to obtain spatial and temporal EOFs; second, the temporal EOFs are analyzed with Singular Spectrum Analysis (SSA); third, the sparse-data areas of the spatial EOFs are interpolated using Kriging; finally, the missing data are reconstructed from the modified spatial-temporal EOFs. The EOF-K method has been applied to a large data set, namely 151 daily sea surface temperature satellite images of the East China Sea and its adjacent areas. After reconstruction with EOF-K, the root mean square error (RMSE) of cross-validation against the original data sets is 0.58°C, and the RMSE against in-situ Argo data is 0.68°C. These results demonstrate that the EOF-K reconstruction method is robust for reconstructing missing satellite data.
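The EOF core of this procedure can be sketched as a DINEOF-style iteration. The SSA smoothing and Kriging steps listed above are omitted, and all names are illustrative:

```python
import numpy as np

def eof_fill(data, mask, n_modes=10, n_iter=30):
    """Iteratively reconstruct the missing entries of a (space x time)
    field from a truncated EOF expansion; mask is True where data is
    observed. The paper's further steps -- SSA smoothing of temporal
    EOFs and Kriging of spatial EOFs in sparse areas -- would refine
    the modes obtained here."""
    X = np.where(mask, data, data[mask].mean())      # fill gaps with mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X = np.where(mask, data, recon)              # keep observations
    return X
```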
It is shown that the watermarking algorithm presented in an earlier paper [Ganic and Eskicioglu, J. Electron. Imaging 14, 043004 (2005)] has a very high probability of false-positive detection, which limits its use in practice. The intrinsic reasons for the high false-alarm probability are as follows: the basis space of the singular value decomposition depends on the image content, and there is no one-to-one correspondence between a singular value vector and the image content, because singular value vectors carry no information about the structure of the image. The most important cause is thus the mistaken notion that the watermark's singular value vectors can be embedded without any information about the watermark's structure. Finally, examples are given to confirm our theoretical analysis.
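The claim that singular values carry no structural information admits a short demonstration: permuting an image's rows and columns multiplies it by orthogonal permutation matrices, which leaves the singular values unchanged while destroying the content, so a detector matching singular values alone cannot distinguish the two. This is a minimal sketch of the underlying fact, not of the attacked algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# Row/column permutations are orthogonal transforms, so they preserve
# singular values while completely scrambling the image structure.
scrambled = img[rng.permutation(64)][:, rng.permutation(64)]

s1 = np.linalg.svd(img, compute_uv=False)
s2 = np.linalg.svd(scrambled, compute_uv=False)
print(np.allclose(s1, s2))  # True: identical spectra, unrelated images
```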
Many magnification algorithms have been proposed in past decades, most of which concentrate on the smooth reconstruction of edge structures. Edge reconstruction, however, can destroy corners, producing perceptually unpleasant rounded corner structures. In this work, a corner shock filter is designed to enhance corners, in analogy to the known edge shock filter, based on a new measure of corner strength and the theory of level-set motion under curvature. By combining directional diffusion, edge shock filtering, and corner shock filtering, a regularized partial differential equation (PDE) approach to magnification is proposed that simultaneously reconstructs edges and preserves corners. Moreover, the proposed PDE approach is robust to random noise. Experimental results on both grayscale and color images confirm the effectiveness of our approach.
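For reference, the classical edge shock filter that the corner variant extends can be sketched in a few lines; the paper's corner-strength measure is not reproduced here, and the parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def edge_shock_filter(u, n_iter=30, dt=0.1, sigma=1.0):
    """Classical Osher-Rudin edge shock filter,
        u_t = -sign(L_sigma(u)) * |grad u|,
    with a Gaussian-smoothed Laplacian L_sigma as the edge detector.
    The corner shock filter in the paper swaps in a corner-strength
    measure for the detector; only the edge variant is shown here."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        grad = np.hypot(gx, gy)
        L = gaussian_laplace(u, sigma)
        u -= dt * np.sign(L) * grad   # transport intensity toward edges
    return u
```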