Unlike most previous work, which used a random number, a sequence of bits, or an image directly as the watermark, this paper proposes a new image vectorisation method for digital image watermarking.
A watermark image (the image to be embedded) is first contourised into a sequence of contour curves by constructing a "vector" for each of the grey-level values. In the contourisation process, a topology analysis method is applied to locate local maxima, minima and saddle points and to build a topology table. It is well known that the volume of "vector" data produced by contourising a real image may be up to an order of magnitude greater than that of the raster representation.
Therefore, a simplification method is adopted to handle this large expansion of data and to determine which contours must be preserved in a given image and which can be discarded. With the help of the previously obtained topology table, the image is decomposed into a number of adjacent sub-regions known as catchment basins, each of which typically surrounds a local maximum or minimum and is bounded by a contour referred to as a watershed or watershed boundary. The simplified contour points are then embedded as the watermark into the cover image using the well-known spread spectrum technique. After the contour points are extracted from the watermarked image, the watermark image is reconstructed by building the triangle mesh defined by the contour map and rendering it with a conventional rendering method.
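As background to the embedding step only, the minimal sketch below shows additive spread spectrum watermarking of a bit sequence (such as serialised contour-point coordinates) in the spatial domain. The contourisation, topology analysis and mesh-reconstruction stages of the paper are not reproduced, and the function names, the key-seeded Gaussian patterns and the non-blind detector are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def embed_spread_spectrum(cover, bits, key=0, alpha=2.0):
    """Additive spread spectrum embedding: each payload bit modulates a
    key-seeded pseudo-random pattern that is added to the cover image."""
    rng = np.random.default_rng(key)
    marked = cover.astype(np.float64).copy()
    for bit in bits:
        pattern = rng.standard_normal(cover.shape)     # PN pattern for this bit
        marked += alpha * (1.0 if bit else -1.0) * pattern
    return np.clip(marked, 0, 255)

def detect_spread_spectrum(marked, cover, n_bits, key=0):
    """Non-blind detection: correlate the residual (marked minus cover)
    with the same key-seeded patterns and take the sign."""
    rng = np.random.default_rng(key)
    residual = marked.astype(np.float64) - cover.astype(np.float64)
    bits = []
    for _ in range(n_bits):
        pattern = rng.standard_normal(cover.shape)
        bits.append(1 if np.sum(residual * pattern) > 0 else 0)
    return bits
```

In the paper the payload would be the simplified contour points of the watermark image; here any bit sequence serves to illustrate the spread spectrum principle.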
A design method for synthetic discriminant functions is described that optimizes filter performance by appropriately weighting the phase and amplitude components of the training-set images. The optimization criteria are the quality of discrimination of in-class images and rejection of out-of-class images, filter efficiency, and robustness in the presence of noise. The results are compared with the POF/fSDF. A practical demonstration is provided by applying a filter optimized in this way in a hybrid correlator.
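The abstract does not give the phase/amplitude weighting formula; as a point of reference, the conventional equal-correlation-peak SDF that such weighted designs extend can be synthesised as in the sketch below. The function name and the unit peak constraints are assumptions for illustration, not the authors' design.

```python
import numpy as np

def classical_sdf(training_images, peaks=None):
    """Conventional equal-correlation-peak SDF: the filter is a linear
    combination h = X a of the vectorised training images, with the
    coefficients a chosen so that x_i^H h = c_i for every training image,
    i.e. a = R^{-1} c with R the Gram matrix of the training set."""
    X = np.stack([np.asarray(img, dtype=float).ravel()
                  for img in training_images], axis=1)
    n_train = X.shape[1]
    c = np.ones(n_train) if peaks is None else np.asarray(peaks, dtype=float)
    R = X.conj().T @ X                 # Gram matrix (training-image inner products)
    a = np.linalg.solve(R, c)          # linear-combination coefficients
    return (X @ a).reshape(np.shape(training_images[0]))
```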
A synthetic discriminant function (SDF) fringe-adjusted joint transform correlator is proposed that provides a high degree of image distortion invariance and classifies different objects in the input scene. The SDF reference function, which is displayed alongside the input scene, is a linear combination of the training image set. An iterative algorithm is presented and used to obtain the linear-combination coefficients from the nonlinear equations of the fringe-adjusted joint transform correlation (JTC) system. Compared with the SDF-based classical JTC and binary JTC, the SDF fringe-adjusted JTC is better able to produce localized, equal correlation peak heights for objects of the same class. Furthermore, when the input scene contains objects from different classes, the SDF fringe-adjusted JTC is shown to efficiently classify the target objects and reject the nontarget object, whereas the SDF-based classical JTC and binary JTC fail to do so.
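The abstract does not specify the iterative algorithm; the hypothetical sketch below only illustrates the general idea of solving the nonlinear peak-equalisation equations by fixed-point iteration: the composite reference is re-formed from the current coefficients, the fringe-adjusted correlation peak of each training image is evaluated, and each coefficient is rescaled toward a common target peak. The simplified fringe-adjusted filter B/(A + |R|^2), the multiplicative update and all names are assumptions, not the authors' algorithm.

```python
import numpy as np

def fringe_adjusted_peak(reference, scene, A=1e-3, B=1.0):
    """Peak of a simplified fringe-adjusted JTC: only the cross term of the
    joint power spectrum is kept and is multiplied by the fringe-adjusted
    filter B / (A + |R|^2) before the inverse transform."""
    R = np.fft.fft2(reference)
    S = np.fft.fft2(scene)
    cross = R * np.conj(S)                    # cross-correlation term of the JPS
    faf = B / (A + np.abs(R) ** 2)            # fringe-adjusted filter (assumed form)
    corr = np.fft.ifft2(faf * cross)
    return float(np.max(np.abs(corr)))

def iterate_sdf_coefficients(training, target=1.0, n_iter=50):
    """Hypothetical fixed-point iteration: rescale each linear-combination
    coefficient so that the fringe-adjusted peak obtained for its training
    image approaches the common target (equal correlation peaks)."""
    a = np.ones(len(training))
    for _ in range(n_iter):
        reference = sum(ai * xi for ai, xi in zip(a, training))  # SDF reference
        peaks = np.array([fringe_adjusted_peak(reference, x) for x in training])
        a *= target / peaks                                      # multiplicative update
    return a
```

Because the fringe-adjusted filter itself depends on the composite reference, the peak constraints are nonlinear in the coefficients, which is why an iterative rather than a closed-form solution is needed.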