A novel spectral imaging sensor based on dual direct vision prisms is described. The prisms project a spectral image onto
the focal plane array of an infrared camera. The prism set is rotated about the camera axis and the resulting spectral
information is extracted as an image cube (x, y, λ), using tomographic techniques. The sensor resolves more than 40
spectral bands (channels) at wavelengths between 1.2 μm and 2.5 μm. The sensor's dispersion characteristic is
determined by the vector sum of the dispersions of the two prisms. The number of resolved channels, and the related
signal strength per channel, varies with the angle between the prism dispersion axes. This is a new capability for this
class of spectral imaging sensor. Reconstructed short-wave imagery and spectral data are presented for field and
laboratory scenes and for standard test sources.
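The abstract states that the net dispersion is the vector sum of the two prism dispersions and varies with the angle between their axes. The paper's actual model is not reproduced here; the following is a minimal sketch of that vector-sum relation, with illustrative, unit-free dispersion values (the function name and inputs are assumptions, not from the paper):

```python
import numpy as np

def net_dispersion(d1, d2, theta_deg):
    """Magnitude of the vector sum of two prism dispersions whose
    dispersion axes are separated by theta_deg degrees."""
    theta = np.radians(theta_deg)
    # Place d1 along the x-axis; d2 at angle theta; take the resultant magnitude.
    return np.hypot(d1 + d2 * np.cos(theta), d2 * np.sin(theta))

# For equal prisms: aligned axes add, opposed axes cancel.
print(net_dispersion(1.0, 1.0, 0.0))    # 2.0
print(net_dispersion(1.0, 1.0, 180.0))  # ≈ 0
print(net_dispersion(1.0, 1.0, 90.0))   # ≈ 1.414
```

Since the number of resolved channels scales with the net dispersion while the collected flux is spread across them, this single angle parameter trades channel count against per-channel signal strength, as the abstract describes.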
Unlike straightforward registration problems encountered in broadband imaging, spectral imaging in fielded instruments
often suffers from a combination of imaging aberrations that make spatial co-registration of the images a challenging
problem. Depending on the sensor architecture, typical problems to be mitigated include differing focus, magnification,
and warping between the images in the various spectral bands due to optics differences; scene shift between spectral
images due to parallax; and scene shift due to temporal misregistration between the spectral images. However, spectral
images typically contain scene commonalities that can be exploited by traditional registration techniques. As a first step toward
automatic spatial co-registration for spectral images, we exploit manually-selected scene commonalities to produce
transformation parameters in a four-channel spectral imager. The four bands consist of two mid-wave infrared channels
and two short-wave infrared channels. Each of the four bands is differently blurred (owing to the differing focal
lengths of the imaging optics), magnified, warped, and translated. Centroid location techniques are
applied to the scene commonalities to generate sub-pixel coordinates for the fiducial markers used in the
transformation polygons, and conclusions are drawn about the effectiveness of such techniques in spectral imaging
applications.
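The centroid-based sub-pixel fiducial approach described above can be sketched as follows. This is not the authors' implementation: the function names are assumptions, and a least-squares affine model is used as a stand-in for whatever transformation the paper fits to its polygons.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of an image patch around a fiducial,
    returning sub-pixel (row, col) coordinates."""
    patch = np.asarray(patch, dtype=float)
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

def fit_affine(src, dst):
    """Least-squares affine transform mapping src fiducials (N, 2) onto
    dst fiducials (N, 2). Returns a 2x3 matrix A with dst ≈ A @ [x, y, 1];
    needs at least three non-collinear point pairs."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous points
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return sol.T                                   # 2 x 3 affine matrix
```

An affine fit per band pair captures the relative translation, magnification, and shear; the warping the abstract mentions would need a higher-order (e.g. polynomial or piecewise) model fitted the same way from more fiducials.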
KEYWORDS: Deconvolution, Video, Super resolution, Data modeling, Computer simulations, Cameras, Video surveillance, Computed tomography, Tomography, Point spread functions
Super-resolution based on sequences of low-resolution images has many applications, among them improving the quality of video imagery, particularly footage of historical interest and footage from security cameras. Successive frames present slightly different views, or projections, of the object. Much as in computed tomography, these projections can be combined to produce an image with better resolution than any of the individual low-resolution views. We observe that in real image sequences even the simplest objects are warped from frame to frame. We estimate the warping parameters of each frame and then estimate the object by iterative deconvolution, forcing an appropriate match between a model for the data and the actual data. We present computer simulations of the method and some experimental results.
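The multi-frame scheme this abstract describes (combine differently warped low-resolution frames, then iteratively force the object model to match the data) can be sketched, under strong simplifying assumptions, as iterative back-projection. This is an illustrative stand-in, not the authors' algorithm: warps are reduced to known integer circular shifts, and the camera is modeled as plain block-averaging with no point spread function.

```python
import numpy as np

def super_resolve(frames, shifts, scale=2, n_iter=20):
    """Multi-frame super-resolution by iterative back-projection.
    frames: list of low-res 2D arrays; shifts: per-frame (dy, dx) in
    high-res pixels (assumed known here; in practice they are estimated).
    Forward model: circularly shift the high-res estimate, then
    block-average by `scale`."""
    h, w = frames[0].shape
    # Initial guess: pixel-replicated upsampling of the frame average.
    est = np.kron(np.mean(frames, axis=0), np.ones((scale, scale)))
    for _ in range(n_iter):
        correction = np.zeros_like(est)
        for f, (dy, dx) in zip(frames, shifts):
            sim = np.roll(est, (-dy, -dx), axis=(0, 1))
            lo = sim.reshape(h, scale, w, scale).mean(axis=(1, 3))  # simulate camera
            err = f - lo                                            # data mismatch
            up = np.kron(err, np.ones((scale, scale)))              # back-project
            correction += np.roll(up, (dy, dx), axis=(0, 1))
        est += correction / len(frames)
    return est
```

Each iteration simulates the low-resolution data from the current object estimate, back-projects the residuals through the inverse warps, and updates the estimate, which is the "match between a model for the data and the actual data" idea in its simplest form. Real footage requires sub-pixel warp estimation and a PSF in the forward model, which is where the deconvolution enters.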