In this presentation, we deal with the design of a filter based on the geodesic distance affinity that can suppress low-frequency artifacts as a post-processing step after the main Wiener filter restoration. We consider a multispectral signal model in which the Wiener filter restoration is performed independently in each image channel. The impulse response of the obtained filter is a linear combination of generalized filters optimized with respect to different criteria. The main idea of the proposed algorithm is based on the assumption that low-frequency outliers and additive noise in different image channels are not correlated with each other, so the affinity space formed by the other channels can effectively suppress the main restoration artifacts. The performance of the proposed filter is analyzed and compared in terms of PSNR accuracy. The proposed method demonstrates the ability to suppress distortion due to low-frequency artifacts.
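For orientation, the per-channel Wiener restoration step can be sketched as follows. This is a minimal illustration assuming a known PSF and a known noise-to-signal ratio; the geodesic affinity post-filter itself is not reproduced here.

    # Minimal sketch (not the authors' code): per-channel Wiener restoration
    # of a multispectral image; psf and noise-to-signal ratio nsr are assumed known.
    import numpy as np

    def wiener_restore(channel, psf, nsr):
        # Transform the PSF, zero-padded to the image size.
        H = np.fft.fft2(psf, s=channel.shape)
        G = np.fft.fft2(channel)
        # Classical Wiener deconvolution: H* / (|H|^2 + NSR).
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * G))

    def restore_multispectral(image, psf, nsr):
        # Restore each spectral channel independently, as in the described model.
        return np.dstack([wiener_restore(image[..., c], psf, nsr)
                          for c in range(image.shape[-1])])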
The paper deals with the design of a composite correlation filter from noisy training images for reliable recognition and localization of distorted targets embedded in cluttered, linearly degraded, and noisy scenes. We consider a nonoverlapping signal model for the input scene and an additive noise model for the reference. The impulse response of the obtained filter is a linear combination of generalized filters optimized with respect to the peak-to-output energy. The performance of the proposed composite correlation filter is analyzed in terms of discrimination capability and accuracy of target location when the reference objects and input scenes are degraded.
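The correlation-based localization step can be sketched as below; a plain matched filter stands in for the paper's composite POE-optimized filter, so the filter construction is a placeholder only.

    # Illustrative sketch only: locating a target with a correlation filter
    # (here a simple matched filter built from a reference image; the paper's
    # composite POE-optimized filter is more elaborate).
    import numpy as np

    def correlation_peak(scene, reference):
        S = np.fft.fft2(scene)
        H = np.conj(np.fft.fft2(reference, s=scene.shape))  # matched filter
        c = np.real(np.fft.ifft2(S * H))                    # correlation plane
        return np.unravel_index(np.argmax(c), c.shape)      # estimated target location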
The paper deals with correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of the scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of the nonuniform illumination using a human visual perception approach. The performance of the proposed algorithm is compared to that of common algorithms for correction of color images containing shadow regions.
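A hedged sketch of the overall idea follows, with illustrative thresholds and gains rather than the paper's tuned values and perception model.

    # Sketch: segment dark (shadow) pixels with a rank-order statistic and
    # brighten them multiplicatively; quantile and target are illustrative.
    import numpy as np

    def correct_shadows(rgb, quantile=0.2, target=0.5):
        luma = rgb.mean(axis=-1)                       # simple luminance proxy
        thr = np.quantile(luma, quantile)              # rank-order threshold
        mask = luma < thr                              # shadow segmentation
        gain = target / max(luma[mask].mean(), 1e-6)   # gain toward mid-gray
        out = rgb.copy()
        out[mask] = np.clip(rgb[mask] * gain, 0.0, 1.0)
        return out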
We propose a new method for estimating motion blur parameters based on the autocorrelation function of a blurred image. The blurred image is considered as a superposition of M shifted copies of the original nonblurred image. In this case, the convolution of the blurred image with itself can be considered as M² pairwise convolutions, which contribute to the resultant autocorrelation function and produce a distinguishable line corresponding to the motion blur angle. The proposed method demonstrates accuracy of motion blur angle estimation comparable to that of state-of-the-art methods, while possessing lower computational complexity than popular accurate methods based on the Radon transform. The proposed model also allows accurate estimation of the motion blur length; our length estimation results generally outperform those of Radon-transform-based methods.
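The core of the idea can be sketched as follows (a rough illustration, not the authors' implementation): compute the autocorrelation via the FFT and take the principal axis of its strongest responses as the blur direction.

    # Rough sketch: autocorrelation via FFT, then blur direction from the
    # second moments of the strongest autocorrelation responses.
    import numpy as np

    def blur_angle(image):
        F = np.fft.fft2(image - image.mean())
        ac = np.real(np.fft.ifft2(np.abs(F) ** 2))     # autocorrelation
        ac = np.fft.fftshift(ac)
        ys, xs = np.nonzero(ac > 0.9 * ac.max())       # strongest ridge samples
        y = ys - ys.mean()
        x = xs - xs.mean()
        # Orientation of the dominant line from second moments.
        return 0.5 * np.degrees(np.arctan2(2 * (x * y).mean(),
                                           (x * x).mean() - (y * y).mean()))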
The paper deals with restoration of decimated images degraded by space-variant distortions. Such distortions occur in real conditions when the camera is shaken and rotated in three dimensions while its shutter is open. The proposed method is locally adaptive image restoration in the domain of a sliding orthogonal transform. It is assumed that the signal distortion operator is spatially homogeneous within a small sliding window. A fast preview restoration algorithm for degraded images is proposed. To achieve image restoration at low resolution and a high rate, a fast recursive algorithm for computing the sliding discrete cosine transform with an arbitrary step is utilized. The proposed algorithm is tested with spatially nonuniform distortion operators, and the obtained results are discussed.
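A simplified sketch of locally adaptive processing in a sliding transform domain is given below; it uses overlapping blockwise DCTs from scipy rather than the paper's fast recursive sliding-DCT update, and the local thresholding rule is illustrative only.

    # Sketch: process each sliding window in the DCT domain and average
    # overlapping reconstructions; the per-window operation is a placeholder.
    import numpy as np
    from scipy.fft import dctn, idctn

    def local_dct_filter(image, win=8, step=4, keep=0.9):
        out = np.zeros_like(image, dtype=float)
        weight = np.zeros_like(image, dtype=float)
        for i in range(0, image.shape[0] - win + 1, step):
            for j in range(0, image.shape[1] - win + 1, step):
                block = image[i:i + win, j:j + win]
                coef = dctn(block, norm='ortho')
                # Illustrative local operation: hard-threshold small coefficients.
                coef[np.abs(coef) < (1 - keep) * np.abs(coef).max()] = 0.0
                out[i:i + win, j:j + win] += idctn(coef, norm='ortho')
                weight[i:i + win, j:j + win] += 1.0
        return out / np.maximum(weight, 1.0)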
Two consecutive frames of a lateral navigation camera video sequence can be considered an appropriate approximation to an epipolar stereo pair. To overcome matching inaccuracy at edges caused by occlusion, we propose a model that matches the current frame both to the next frame and to the previous one. A positive disparity in matching to the previous frame has a symmetric negative disparity in matching to the next frame. The proposed algorithm performs a probabilistic choice for each matched pixel between the positive-disparity cost and its symmetric-disparity cost. A disparity map obtained by optimization over the cost volume composed of these probabilistic choices is more accurate than the traditional left-to-right and right-to-left disparity map cross-check, and our algorithm requires half as many computational operations per pixel as the cross-check technique. The effectiveness of our approach is demonstrated on synthetic data and real video sequences with ground-truth values.
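Schematically, the per-pixel choice can be sketched as below; the softmin fusion is an illustrative stand-in for the paper's probabilistic choice, and the two cost volumes (matching to the next and to the previous frame, with symmetric disparities) are assumed precomputed.

    # Sketch: fuse the symmetric cost volumes per pixel, then winner-take-all.
    import numpy as np

    def fuse_costs(cost_next, cost_prev, beta=1.0):
        # cost_* have shape (H, W, D); disparity d in one equals -d in the other.
        w = np.exp(-beta * cost_next) / (np.exp(-beta * cost_next)
                                         + np.exp(-beta * cost_prev))
        return w * cost_next + (1 - w) * cost_prev

    def disparity_map(cost_next, cost_prev):
        return np.argmin(fuse_costs(cost_next, cost_prev), axis=-1)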
Captured real images are often affected by environmental and technical interferences such as linear homogeneous distortion, nonuniform illumination, sensor noise, geometrical scene distortion, etc. Among these issues, the first is particularly interesting because various physical problems can be modeled by such degradations. In this work we propose a blind algorithm for identification of the linear distortion operator based on the analysis of zero crossings and the phase distribution of the distorted image spectrum. The algorithm is tested with common real linear distortion operators, and its identification rate is discussed.
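The zero-crossing analysis can be roughly sketched as follows; the threshold is ad hoc and the paper's decision rule is not reproduced.

    # Sketch: find characteristic near-zero patterns in the magnitude spectrum
    # of the degraded image (e.g., the periodic sinc zeros of uniform motion blur).
    import numpy as np

    def spectral_zero_mask(image, rel_thresh=1e-3):
        F = np.fft.fftshift(np.fft.fft2(image))
        mag = np.abs(F)
        return mag < rel_thresh * mag.max()   # candidate zero crossings

    # A motion blur of length L along x produces near-zero lines spaced ~N/L
    # apart in this mask; measuring that spacing hints at the blur length.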
The paper describes a strategy we use to translate an existing conventional archive into digital form. The method is directed at large archives comprising documents with an essential graphic constituent (handwritten texts, photographs, drawings, etc.) that result in images. Our technology for image digitization, storage, and presentation in multiple resolutions is specifically discussed. The main structural components of the digital archive are a relational database and an image bank, physically separated but logically linked together. The components make up a three-level distributed structure consisting of the primary archive, its regional replicas, and various secondary archives (among them subsets presented on the Web and CD/DVD-ROM collections). Only authorized users are allowed to access the two upper levels, while the bottom level is open for free public access. A secondary archive is created and updated automatically without special development. Images in the bank are stored in multiple resolutions, and linking the proper image to a database record is dynamic, dependent on the user interaction context (e.g., channel bandwidth, user permissions, etc.). Such a construction allows us to combine reliable storage with easy access and to avoid intellectual property protection issues. We also present several digital archives already implemented on this basis in the Archive of the Russian Academy of Sciences.
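Purely as a hypothetical illustration of the dynamic, context-dependent linking, the resolution choice might look like the sketch below; the names and thresholds are invented for illustration and are not taken from the archive's actual software.

    # Hypothetical sketch: pick the stored image resolution from user context.
    RESOLUTIONS = ('thumbnail', 'preview', 'full')   # invented tier names

    def select_resolution(bandwidth_kbps, can_view_full):
        # Slow channels always get the smallest rendition.
        if bandwidth_kbps < 256:
            return 'thumbnail'
        # Full resolution only for permitted users on fast channels.
        if can_view_full and bandwidth_kbps >= 2048:
            return 'full'
        return 'preview'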
There are different techniques available for solving the restoration problem, including Fourier domain techniques, regularization methods, and recursive and iterative filters, to name a few. But without knowing at least approximate parameters of the blur, these methods often show poor results. If an incorrect blur model is chosen, the image will be further distorted rather than restored. An original solution to the problem of identifying the blur and its parameters is presented in this paper. A neural network based on multi-valued neurons is used for the blur and blur-parameter identification. It is shown that it is possible to identify the type of the distorting operator using a simple single-layer neural network. Four types of blur operators are considered: defocus, rectangular, motion, and Gaussian. The parameters of the corresponding operator are identified using a similar neural network. After identification of the blur type and its parameters, the image can be restored using different methods. Some fundamentals of image restoration techniques are also considered.
As a rule, blur is a form of bandwidth reduction of an ideal image owing to the imperfect image formation process. It can be caused by relative motion between the camera and the original scene, or by an optical system that is out of focus. Today there are different techniques available for solving the restoration problem, including Fourier domain techniques, regularization methods, and recursive and iterative filters, to name a few. But without knowing at least approximate parameters of the blur, these filters show poor results. If an incorrect blur model is chosen, the image will be further distorted rather than restored. An original solution to the problem of identifying the blur and its parameters is presented in this paper. A neural network based on multi-valued neurons is used for the blur and blur-parameter identification. It is shown that, using a simple single-layer neural network, it is possible to identify the type of the distorting operator. Four types of blur are considered: defocus, rectangular, motion, and Gaussian. The parameters of the corresponding operator are identified using a similar neural network. After identification of the blur type and its parameters, the image can be restored using various methods. Some fundamentals of image restoration are also considered.
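A loose sketch of the identification setup described in the two abstracts above is given below; it uses a plain single-layer linear classifier on spectral features rather than the multi-valued-neuron network itself, and the feature design is illustrative.

    # Sketch: classify the blur type from features of the log-magnitude spectrum
    # with a single-layer network; weights W and biases b are assumed trained.
    import numpy as np

    BLUR_TYPES = ['defocus', 'rectangular', 'motion', 'gaussian']

    def spectrum_features(image, n=32):
        mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        logmag = np.log1p(mag)
        # Coarse n x n average-pooled spectrum as a fixed-length feature vector.
        h, w = logmag.shape
        pooled = logmag[:h - h % n, :w - w % n].reshape(n, h // n, n, w // n)
        return pooled.mean(axis=(1, 3)).ravel()

    def classify(features, W, b):
        scores = W @ features + b        # single layer, as in the abstracts
        return BLUR_TYPES[int(np.argmax(scores))]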
The paper describes a methodology we use to translate an existing conventional archive into a digital one. The method works well for large archives comprising documents with an essential graphic constituent (handwritten texts, photographs, drawings, etc.). The main structural components of our digital archive are a relational database and an image bank, which are physically separated but logically linked together. The components make up a three-level distributed structure consisting of the primary archive, its regional replicas, and various secondary archives (among them subsets presented on the Web and collections of compact discs). Only authorized users are allowed to access the two upper levels, while the bottom level is open for free public access. A secondary archive is created and updated automatically without special development. Such a construction allows us to combine reliable storage, easy access, and protection of intellectual property. The paper also presents several digital archives already implemented in the Archive of the Russian Academy of Sciences.
CD-ROM, CD-I, VCD, CD-DA, Photo-CD, and DVD form a partial list of storage media widely used in multimedia applications. The broad possibilities available to multimedia developers require an adequate understanding of the capabilities, advantages, and disadvantages of each of them. This report gives an analysis of available hardware and software for developing and authoring multimedia projects. Digital image processing methods are considered. The described methods were implemented in several multimedia projects.
Research results and practical developments performed to create an automated database of archival photo-documents are presented. The proposed approach is realized in a system consisting of three components: (1) a subsystem for document input (for transforming pictures into either gray-scale or color digital images); (2) a subsystem for digital image processing (for effective noise suppression, defect elimination, and image enhancement); and (3) an archiving subsystem (for effective lossless image compression supplemented with an interface to a universal database management system). On the basis of the proposed approach, a database prototype of archival photo-documents of the Russian Academy of Sciences has been created.
The paper presents results of research and development carried out to create an automated database of archival photo-documents. The proposed approach is realized in a system consisting of three components: (1) a subsystem for document input; (2) a subsystem for digital image processing; and (3) an archiving subsystem. On the basis of the proposed approach, a database prototype of archival photo-documents of the Russian Academy of Sciences has been created. The work focuses on the development of economical and portable solutions suitable for both centralized and distributed database architectures allowing Web access.
KEYWORDS: Holograms, Computer generated holography, 3D image reconstruction, 3D displays, Holography, 3D modeling, Mathematical modeling, Solids, Digital holography, Photography
Two computer-generated display macro holograms (CGDMH) have been synthesized to demonstrate the possibility of holographic display of 3D objects given by their mathematical descriptions only. Three-dimensional models of the objects and shaded 2D projections in varying viewing directions were generated using the methods of computer graphics. For each projection, a Fourier hologram was synthesized and encoded by the kinoform method. The recording of the obtained digital kinoforms on a commercially available photographic film was done by a computer-controlled laser device. This process produces, after film development and bleaching, a facet CGDMH. The complete CGDMHs have a size of 672 × 672 mm² and consist of 900 elementary holograms of 256 × 256 samples each, calculated for different directions within the solid angle of ±90°. They allow the visual representation of 3D objects with good quality.
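The kinoform encoding step admits a compact sketch (a minimal illustration assuming a random diffuser phase and 256 quantization levels, not the authors' exact procedure): the Fourier transform of each projection is reduced to its phase, which is then quantized for recording.

    # Sketch of kinoform (phase-only) encoding of a Fourier hologram for one
    # projection: keep only the phase of the Fourier transform.
    import numpy as np

    def kinoform(projection, levels=256):
        # A random diffuser phase spreads the object energy across the spectrum.
        diffuser = np.exp(2j * np.pi * np.random.rand(*projection.shape))
        F = np.fft.fft2(projection * diffuser)
        phase = np.angle(F)                               # discard amplitude
        q = np.round((phase + np.pi) / (2 * np.pi) * (levels - 1))
        return q.astype(np.uint8)                         # quantized phase record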
This paper presents an application of digital image processing in the historical sciences. It deals with the processing of X-ray recordings of watermark images taken from Middle Ages codices. A sequence of processing steps for image enhancement, geometrical transformations, watermark extraction, and binarization is suggested. The watermarks are stored together with alphanumeric information in a database, allowing the historian to retrieve and compare watermarks and to measure parameters of watermarks such as height, length, distance between special points, and radii.
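A condensed sketch of such a processing chain follows, with illustrative steps and parameters rather than the paper's specific operators.

    # Sketch: normalize the X-ray image, flatten the slowly varying background,
    # and binarize to extract the watermark.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def extract_watermark(xray, bg_size=51):
        img = (xray - xray.min()) / max(xray.max() - xray.min(), 1e-6)
        background = uniform_filter(img, size=bg_size)   # estimate background
        flat = img - background                          # remove uneven exposure
        thr = flat.mean() + flat.std()                   # simple global threshold
        return flat > thr                                # binary watermark mask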
Two large-scale computer-generated holograms (CGH) were synthesized using methods of computer graphics for calculating 3D object models and 2D projections from them. 900 elementary projections (views) of the objects were calculated and subsequently encoded as CGHs using the kinoform method. The holograms were recorded on customary photographic material with subsequent bleaching. The whole multiplex large-scale CGH has a size of 600 × 600 mm² and requires 3.6 Mbytes of storage. All software was written in C under the UNIX operating system. The CGHs are appropriate for representation of 3D objects with high quality.