This PDF file contains the front matter associated with SPIE Proceedings Volume 6810, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Although digital multispectral imaging, particularly ultraviolet-induced fluorescence imaging, is a very common
examination tool, its interpretation remains fraught with difficulties. Interpretation is strongly dependent on the capture
methodology, requires an understanding of the physical and chemical characteristics of, and interactions among, materials in
artworks, and is affected by data-analysis procedures.
The present research, which began with imaging of paint materials of known composition and proceeded to a range of
representative case studies, confirmed that fluorescence emissions by painting materials (such as organic binders or
colorants) are generally severely affected by the presence of absorbing, non-fluorescing materials such as inorganic
pigments. Applying a mathematical model based on Kubelka-Munk theory made it possible to
distinguish between real and apparent fluorescence emissions. Real emissions correspond to the presence of materials
which de facto exhibit fluorescent properties (typically organic binders and colorants), while apparent emissions relate to
the optical interactions among fluorescent materials and surrounding non-fluorescent materials (typically inorganic
pigments). Correction for the 'pigment-binder interaction' can also provide useful information on the presence of
materials whose fluorescence is almost obliterated by absorbing pigmented particles. Therefore, this image-processing
methodology can be used to characterise and reveal emissions that are dimmed or altered by re-absorption. This capacity
to reveal the presence of weakly fluorescing emitters has important conservation implications and informs the sampling
strategy for further analytical investigations.
Examples of the application of this data analysis to images made at the Grotto Site in Dunhuang, China, and at the
British Museum are presented.
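The Kubelka-Munk relation at the heart of such corrections can be sketched in a few lines of Python. This is an illustrative toy, not the authors' actual model: the scaling law in `corrected_emission` is a hypothetical stand-in showing how a low-reflectance (strongly absorbing) pigment layer dims a real emission, and how a reflectance-based correction can compensate.

```python
# Illustrative sketch (not the authors' exact model) of correcting observed
# UV-induced fluorescence for re-absorption by non-fluorescing pigment,
# using the Kubelka-Munk relation K/S = (1 - R)^2 / (2R).

def km_ks(reflectance):
    """Kubelka-Munk absorption/scattering ratio from diffuse reflectance (0-1)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def corrected_emission(observed, reflectance, eps=1e-6):
    """Boost observed fluorescence where the paint layer absorbs strongly.

    A pixel with low reflectance has a high K/S, so its measured emission is
    scaled up; a weakly absorbing pixel is left almost unchanged. The scaling
    law here is hypothetical, for illustration only.
    """
    r = max(min(reflectance, 1.0 - eps), eps)
    return observed * (1.0 + km_ks(r))

# Two pixels with a similar underlying emitter, one under an absorbing pigment.
bright = corrected_emission(observed=0.30, reflectance=0.80)  # weak absorber
dim = corrected_emission(observed=0.10, reflectance=0.20)     # strong absorber
```

The strongly absorbing pixel receives a much larger correction factor, which is the mechanism by which emissions "almost obliterated by absorbing pigmented particles" can be revealed.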
We present a fast, low-cost technique to gather high-contrast 'relightable' photographs of desktop-sized objects.
Instead of an elaborate light stage, we follow Mohan et al.; we place the object and a digitally steered spotlight
inside a white cardboard box, aim the spotlight at the box interior, and move the spot to light the object from
N repeatable lighting directions. However, strong ambient lighting from box interreflections causes 'shallow'
shadows and reduces contrast in all basis images. We show how to remove this ambient lighting computationally
from the N images by measuring an N × N matrix of coupling factors between lighting directions using a mirror-sphere
light probe. This linear method, suitable for any light stage, creates physically accurate 'deep shadow'
basis images, yet imposes only a modest noise penalty, and does not require external light metering or illumination
angle measurements. Results from our demonstration system support these claims.
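The linear correction can be simulated directly in numpy (variable names are ours): each measured basis image is a mixture of the true "deep shadow" images, measured_i = Σ_j C[i, j] · true_j, where C is the N × N coupling matrix estimated from the light probe; inverting C recovers the basis images.

```python
# Simulate capture with box interreflections and undo it by inverting the
# N x N coupling matrix, as in the paper's linear method.
import numpy as np

N, H, W = 3, 4, 4
rng = np.random.default_rng(0)

true = rng.random((N, H, W))               # ground-truth deep-shadow images
C = np.eye(N) + 0.2 * rng.random((N, N))   # diagonally dominant coupling matrix

# Capture: every measured image mixes in ambient light from other directions.
measured = np.einsum('ij,jhw->ihw', C, true)

# Recovery: one matrix inverse applied to all pixels at once.
recovered = np.einsum('ij,jhw->ihw', np.linalg.inv(C), measured)
err = np.abs(recovered - true).max()
```

The modest noise penalty mentioned in the abstract corresponds to the conditioning of C: the closer C is to the identity (weak interreflections), the less measurement noise is amplified by the inverse.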
The Archimedes Palimpsest is a thousand-year-old overwritten parchment manuscript, containing several treatises by
Archimedes. Eight hundred years ago, it was erased, overwritten and bound into a prayer book. In the middle of the
twentieth century, a few pages were painted over with forged Byzantine icons. Today, a team of imagers, scholars and
conservators is recovering and interpreting the erased Archimedes writings. Two different methods have been used to
reveal the erased undertext. Spectral information is obtained by illuminating the manuscript with narrow-band light from
the ultraviolet, through the visible wavebands and into the near-infrared wavelengths. Characters are extracted by
combining pairs of spectral bands or by spectral unmixing techniques adapted from remote sensing. Lastly, since all of
the text was written with iron gall ink, X-Ray fluorescence has been used to expose the ink underneath the painted icons.
This paper describes the use of color to enhance the erased text in the processed images and to make it visible to the
scholars. Special pseudocolor techniques have been developed that significantly increase the contrast of the erased text
and make it readable by the scholars despite the presence of the obscuring, overlaid text.
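One simple pseudocolor recipe in this spirit can be sketched as follows (the exact channel assignments used on the Palimpsest may differ): place an undertext-enhancing processed band in one channel and the ordinary visible image in the other two. Overtext, dark in every band, stays neutral; the erased undertext, visible only in the processed band, appears with a distinct color cast.

```python
# Toy pseudocolor composite: 2 x 3 pixel images with invented values.
import numpy as np

visible = np.array([[0.9, 0.2, 0.9],
                    [0.9, 0.9, 0.9]])       # 0.2 = dark overtext stroke
processed = np.array([[0.9, 0.2, 0.4],
                      [0.9, 0.9, 0.4]])     # 0.4 = recovered undertext stroke

# Processed band -> red channel; visible image -> green and blue channels.
pseudocolor = np.stack([processed, visible, visible], axis=-1)

undertext_pixel = pseudocolor[0, 2]   # channels differ -> coloured, high contrast
overtext_pixel = pseudocolor[0, 1]    # channels equal -> neutral grey/black
```

Because the eye is far more sensitive to hue differences than to small luminance differences, this kind of composite significantly raises the apparent contrast of the erased text.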
The Archimedes palimpsest is one of the most significant early texts in the history of science that has survived to the
present day. It includes the oldest known copies of text from seven treatises by Archimedes, along with pages from other
important historical writings. In the 13th century, the original texts were erased and overwritten by a Christian prayer
book, which was used in religious services probably into the 19th century. Since 2001, much of the text from treatises of
Archimedes has been transcribed from images taken in reflected visible light and in visible fluorescence generated by
exposure of the parchment to ultraviolet light. However, these techniques do not work well on all pages of the
manuscript, including the badly stained colophon, four pages of the manuscript obscured by icons painted during the
first half of the 20th century, and some pages of non-Archimedes texts. Much of the text on the colophon and
overpainted pages has been recovered from X-ray fluorescence (XRF) imagery. In this work, the XRF images of one of
the other pages were combined with the bands of optical images to create hyperspectral image cubes, which were
processed using standard statistical classification techniques developed for environmental remote sensing, to test
whether this improved the recovery of the original text.
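The band-stacking step can be sketched as follows. This is our own minimal stand-in: co-registered optical bands and XRF element maps are concatenated into one cube, and each pixel's spectral vector is classified, here with a toy nearest-mean rule in place of the full statistical classifiers used in remote sensing.

```python
# Build a tiny hyperspectral cube from optical + XRF bands and classify pixels.
import numpy as np

H, W = 2, 2
optical = np.zeros((H, W, 3))     # e.g. UV / visible / IR reflectance bands
xrf = np.zeros((H, W, 2))         # e.g. iron and calcium element maps
optical[0, 0] = [0.2, 0.3, 0.2]   # invented values for an inky pixel
xrf[0, 0] = [0.9, 0.1]            # strong iron signal (iron gall ink)

cube = np.concatenate([optical, xrf], axis=-1)           # H x W x 5 cube

means = np.array([[0.0, 0.0, 0.0, 0.0, 0.0],             # class 0: parchment
                  [0.2, 0.3, 0.2, 0.9, 0.1]])            # class 1: ink
vectors = cube.reshape(-1, cube.shape[-1])
labels = np.argmin(((vectors[:, None] - means[None]) ** 2).sum(-1), axis=1)
labels = labels.reshape(H, W)
```

The potential gain of the combined cube is that XRF bands separate iron-gall ink from stains and paint even where the optical bands alone are ambiguous.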
Nowadays, photographs are one of the most widely used media for communication. Images are used to represent
documents, cultural goods, and so on: they pass on a piece of the historical memory of society.
Since its origin, the photographic technique has seen several improvements; nevertheless, photos are liable to
several kinds of damage, both to the physical support and to the colors and figures depicted
in them: consider, for example, scratches or rips in a photo, or the fading or red (or yellow)
toning of its colors. In this paper, we propose a novel method to recover the original appearance of digital
reproductions of aged photos, as well as digital reproductions of faded goods. The method is based on comparing the
degraded image with a non-degraded one showing similar content, so that the colors of the non-degraded image can be
transplanted into the degraded one. The key idea is a dualism between analytical mechanics and color theory: for each
of the degraded and non-degraded images we first compute a scatter plot of the x and y normalized chromaticity
coordinates of their colors; these scatter diagrams can be regarded as systems of point masses, and thus possess inertia
axes and an inertia ellipsoid. By moving the scatter diagram of the degraded image onto that of the non-degraded
image, the colors of the degraded image can be restored.
Compared with colorimetric imaging, multispectral imaging has the advantage of retrieving the spectral reflectance factor
of each pixel of a painting. Using this spectral information, pigment mapping is concerned with decomposing the
spectrum into its constituent pigments and their relative contributions. The output of pigment mapping is a series of
spatial concentration maps of the pigments comprising the painting. This approach was used to study Vincent van
Gogh's The Starry Night. The artist's palette was approximated using ten oil pigments, selected from a large database of
pigments used in oil paintings and from a priori analytical research on one of his self-portraits, executed during the
same time period. The pigment mapping was based on single-constant Kubelka-Munk theory. It was found that the
region of blue sky where the stars were located contained predominantly ultramarine blue, while the swirling sky and
the region surrounding the moon contained predominantly cobalt blue. Emerald green, used in light bluish-green
brushstrokes surrounding the moon, was not used to create the dark green in the cypresses. A measurement of lead
white from Georges Seurat's La Grande Jatte was used as the white when mapping The Starry Night. The absorption
and scattering properties of this white were replaced with those of a modern dispersion of lead white in linseed oil and
used to simulate the painting's appearance before the natural darkening and yellowing of lead white oil paint. Pigment
mapping based on spectral imaging was found to be a viable and practical approach for analyzing pigment
composition, providing new insight into an artist's working method, the possibility of aiding restorative inpainting, and
lighting design.
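Single-constant Kubelka-Munk mapping can be sketched as a least-squares problem: the K/S of a mixture is modeled as the concentration-weighted sum of each pigment's K/S, so concentrations follow from fitting the measured K/S spectrum against the pigment database. The pigment spectra below are invented for illustration; a real pipeline would fit measured database spectra and constrain concentrations to be non-negative.

```python
# Toy single-constant Kubelka-Munk pigment unmixing over 5 wavelengths.
import numpy as np

def ks(r):
    """K/S from reflectance via the Kubelka-Munk relation."""
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical reflectance spectra for three database pigments.
pigments = np.array([[0.2, 0.3, 0.5, 0.7, 0.8],   # "ultramarine-like"
                     [0.3, 0.5, 0.6, 0.5, 0.4],   # "cobalt-like"
                     [0.9, 0.9, 0.8, 0.8, 0.9]])  # "lead-white-like"
A = ks(pigments).T                                 # wavelengths x pigments

true_conc = np.array([0.7, 0.0, 0.3])
mixture_ks = A @ true_conc                         # simulated pixel measurement

# Recover concentrations by least squares (one pixel of a concentration map).
conc, *_ = np.linalg.lstsq(A, mixture_ks, rcond=None)
```

Running this fit at every pixel yields exactly the per-pigment concentration maps the abstract describes.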
This paper presents image processing algorithms designed to analyse the colour CIE Lab histogram of high resolution
images of paintings. Three algorithms are illustrated which attempt to identify colour clusters, cluster shapes due to
shading and, finally, to identify pigments. Using the image collection and pigment list of the National Gallery, London,
large numbers of images within a restricted period have been classified with a variety of algorithms. The image
descriptors produced were also used with suitable comparison metrics to obtain content-based retrieval of the images.
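The colour-cluster step can be illustrated with a minimal k-means in numpy. This is a stand-in for the paper's algorithms (which additionally model cluster shape to handle shading); the two synthetic "paint colours" in Lab space are invented for the demonstration.

```python
# Minimal k-means clustering of Lab-space pixels into colour clusters.
import numpy as np

def kmeans(points, init_centers, iters=20):
    centers = init_centers.astype(float).copy()
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

rng = np.random.default_rng(2)
blue = rng.normal([40.0, 10.0, -40.0], 2.0, size=(100, 3))   # synthetic blue paint
ochre = rng.normal([65.0, 10.0, 40.0], 2.0, size=(100, 3))   # synthetic ochre paint
points = np.vstack([blue, ochre])
centers, labels = kmeans(points, init_centers=points[[0, -1]])
```

The cluster centers then serve as compact image descriptors, which is the basis for the content-based retrieval mentioned above.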
From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to
photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of
continuous tone. Line densities are generally 100-2000 lines per square centimeter, and a print can contain more than a
million engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a
single copperplate over decades, variation among prints can have historical value. The largest source of variation is
plate-related: the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated
corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints.
Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can
introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this
variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation.
Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
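The Print Index as defined above is straightforward to compute: binarize the image and report the percentage of pixels belonging to engraved lines. The fixed threshold below is a guess for illustration; a real pipeline would calibrate it against the imaging conditions.

```python
# Print Index: percentage of a patch's area covered by (dark) engraved lines.
import numpy as np

def print_index(gray, threshold=0.5):
    """Percentage of pixels darker than the threshold."""
    return 100.0 * (gray < threshold).mean()

# Toy 4 x 4 patch: one vertical one-pixel-wide line -> 4 of 16 pixels -> 25%.
patch = np.ones((4, 4))
patch[:, 1] = 0.1
pi = print_index(patch)
```

Line thinning from plate polishing reduces the dark-pixel fraction, so later editions of the same plate yield systematically lower Print Index values, which is what makes the measure usable for dating.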
The aim of our work is the development of image-analysis tools and methods for the investigation of drawings and
drawn drafts, in order to investigate authorship, to identify copies or, more generally, to allow a comparison of
different types of drawings. It was, and still is, common for artists to draw their design as several drafts on paper.
These drawings can show how some elements were adjusted until the artist was satisfied with the composition; they
can therefore bring insights into the practice of artists and of painting and/or drawing schools. This information is
useful for art historians because it can relate artists to each other. The goal of this paper is to describe a
stroke-classification algorithm which can recognize the drawing tool based on the shape of the endings of an open
stroke. In this context, "open" means that both endings of a stroke are free-standing, uncovered, and do not pass into
another stroke. These endings are prominent features whose shape carries information about the drawing tool, and they
are therefore used as features to distinguish different drawing tools. Our results show that it is possible to use these
endings as input to a drawing-tool classifier.
Pencil drawings like portraits or landscapes comprise dozens of strokes. The segmentation and identification of
individual strokes is an interesting question in analyzing the drawings since it allows art historians to analyze
the development of the stroke formations in the picture in more detail. In this study we identify
individual strokes in stroke formations and reconstruct the original drawing trace of the artist. The method
is based on a thinning algorithm and a subsequent analysis of the resulting skeleton. To detect the original
stroke and the natural drawing trace we use the curvilinearity information of the thinned sub-strokes. A sub-stroke
runs from either a real end point to a crossing point, or between two crossing points. The selection of
corresponding strokes at crossing points is based on the angle at the end points of the sub-strokes. Each individual
stroke is then represented by a one-pixel-wide line which approximates the original drawing trace
of the artist with a cubic B-spline. The whole process is parameter-free: the automatically calculated stroke
width is used for the skeleton-pruning process, for the calculation of the angles at the sub-stroke endings, and as the
spacing of the spline control points.
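The pairing rule at a crossing point can be sketched as follows (our own formulation): each sub-stroke contributes the direction of its tangent where it enters the crossing, and the two sub-strokes whose tangents are closest to opposite (angle nearest 180 degrees) are joined into one continuous stroke.

```python
# Pair sub-strokes at a crossing by finding the most nearly opposite tangents.
import numpy as np

def pair_substrokes(directions):
    """directions: unit tangents pointing into the crossing, one per sub-stroke.

    Returns the index pair whose tangents are most nearly opposite, i.e. the
    pair with the most negative dot product (angle closest to 180 degrees).
    """
    n = len(directions)
    best, best_cos = None, 2.0
    for i in range(n):
        for j in range(i + 1, n):
            c = float(np.dot(directions[i], directions[j]))
            if c < best_cos:
                best, best_cos = (i, j), c
    return best

# Four sub-strokes at an X-crossing: 0/1 form one stroke, 2/3 the other.
d = np.array([[1.0, 0.05], [-1.0, -0.02], [0.1, 1.0], [-0.05, -1.0]])
d /= np.linalg.norm(d, axis=1, keepdims=True)
pair = pair_substrokes(d)
```

Repeating this greedily at every crossing point links the skeleton's sub-strokes back into the artist's continuous drawing traces.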
We used digital image processing and statistical clustering algorithms to segment and classify brush strokes in
master paintings based on two-dimensional space and
three-dimensional chromaticity coordinates. For works
executed in sparse overlapping brush strokes our algorithm identifies candidate clusters of brush strokes of the
top (most visible) layer and digitally removes them. Then, it applies modified inpainting algorithms based on
the statistical structure of strokes to fill in or "inpaint" the remaining, partially hidden brush strokes. This process
can be iterated to reveal and fill in successively deeper (partially hidden) layers of brush strokes, a process we
call "de-picting." Of course, the reconstruction of strokes at each successively deeper layer is based on less and
less image data from the painting and requires cascading estimates and inpainting; as such our methods yield
poorer accuracy and fidelity for such deeper layers. Our current software is semi-automatic; the operator, such
as a curator or art historian, guides certain steps. Future versions of our software will be fully automatic and will
estimate more accurate statistical models of the brush strokes in the target painting to yield better estimates of
hidden brush strokes. Our software tools may aid art scholars in characterizing the images of paintings as well
as the working methods of some master painters.
Recent work has shown that the mathematics of fractal geometry can be used to provide a quantitative signature
for the drip paintings of Jackson Pollock. In this paper we discuss the calculation of a related quantity, the "entropy
dimension," and the possibility of its use as a measure or signature for Pollock's work. We furthermore
raise the question of the robustness or stability of the fractal measurements with respect to variables like mode
of capture, digital resolution, and digital representation and include the results of a small experiment in the step
of color layer extraction.
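The box-counting dimension underlying such fractal signatures can be sketched in a few lines (the entropy dimension replaces the simple box count with the Shannon entropy of the per-box paint fractions, but the covering-and-regression structure is the same):

```python
# Box-counting fractal dimension of a binary "paint" mask: slope of
# log(box count) against log(1/box size).
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Group pixels into s x s boxes and count boxes containing any paint.
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                        # a one-pixel stroke: dimension ~ 1
filled = np.ones((64, 64), dtype=bool)    # fully painted: dimension ~ 2
d_line = box_count_dimension(line)
d_filled = box_count_dimension(filled)
```

The robustness questions raised above (capture mode, resolution, representation) amount to asking how stable this fitted slope is when `mask` is extracted from differently digitized images of the same painting.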
Hans Memling's 1487 diptych Virgin and Child and Maarten van Nieuwenhove is one of the most celebrated Early
Netherlandish paintings, but little is known about the practical use of such objects in late medieval devotional
practice. A particular point of debate, spurred by the reflection in the painted convex mirror behind the Virgin,
concerns the question of whether the two hinged panels were to be used while set at an angle and, if so, at what angle.
It was recently discovered that the mirror was not part of the painting's initial design, but instead added later
by Memling. We created a simple computer graphics model of the tableau in the diptych to test whether the
image reflected in the mirror conformed to the image of the model reflected in the mirror. We find two significant
deviations of the depicted mirror from that predicted from our computer model, and this in turn strongly suggests
that Memling did not paint the mirror in this diptych while viewing the scene with a model in place, but that
the mirror was more likely painted without a model present. In short, our findings support the notion that the
mirror was an afterthought. This observation might have implications for the understanding of how the diptych
was used in devotional practice, since it affects the ideal viewing angle of the wings for the beholder.
A recent theory claims that the late-Italian Renaissance painter Lorenzo Lotto secretly built a concave-mirror
projector to project an image of a carpet onto his canvas and trace it during the execution of Husband and
wife (c. 1543). Key evidence adduced to support this claim includes "perspective anomalies" and changes in
"magnification" that the theory's proponents ascribe to Lotto refocusing his projector to overcome its limitations
in depth of field. We find, though, that there are important geometrical constraints upon such a putative optical
projector not incorporated into the proponents' analyses, and that when properly included, the argument for the
use of optics loses its force. We used Zemax optical design software to create a simple model of Lotto's studio
and putative projector, and incorporated the optical properties proponents inferred from geometrical properties
of the depicted carpet. Our central contribution derives from including the 116-cm-wide canvas screen; we found
that this screen forces the incident light to strike the concave mirror at large angles (≥ 15°) and that this, in
turn, means that the projected image would reveal severe off-axis aberrations, particularly astigmatism. Such
aberrations are roughly as severe as the defocus blur claimed to have led Lotto to refocus the projector. In short,
we find that the projected images would not have gone in and out of focus in the way claimed by proponents,
a result that undercuts their claim that Lotto used a projector for this painting. We speculate on the value of
further uses of sophisticated ray-tracing analyses in the study of fine arts.
The problem in computer vision of inferring the illumination direction is well studied for digital photographs of
natural scenes and recently has become important in the study of realist art as well. We extend previous work
on this topic in several ways, testing our methods on Jan Vermeer's Girl with a pearl earring (c. 1665-1666).
We use both model-independent methods (cast-shadow analysis, occluding-contour analysis) and model-based
methods (physical models of the pearl, of the girl's eyes, of her face). Some of these methods provide an estimate
of the illuminant position in the three dimensions of the picture space, others in just the two dimensions of the
picture plane. Our key contributions are a Bayesian evidence integration scheme for such disparate sources of
information and an empirical demonstration of the agreement, or at least consistency, among such estimates in a
realist painting. Our methods may be useful to humanist art scholars addressing a number of technical problems
in the history of art.
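A toy version of the evidence-integration idea (our own simplification, not the paper's full scheme): treat each method's estimate of the illuminant direction as an independent Gaussian likelihood, in which case the combined estimate is the precision-weighted mean and the combined uncertainty shrinks below that of any single method.

```python
# Fuse independent Gaussian estimates of the illuminant direction.
import numpy as np

def fuse(estimates, sigmas):
    """Precision-weighted fusion of 1-D angle estimates (degrees)."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = float((w * estimates).sum() / w.sum())
    sigma = float(np.sqrt(1.0 / w.sum()))
    return mean, sigma

# Hypothetical per-method estimates: cast shadows, occluding contour, pearl
# highlight, each with its own uncertainty.
angles = np.array([28.0, 34.0, 30.0])
sigmas = np.array([4.0, 8.0, 2.0])
fused_angle, fused_sigma = fuse(angles, sigmas)
```

The "agreement, or at least consistency" claim corresponds to checking that each individual estimate lies within a few fused standard deviations of the combined mean.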
Computer graphics models of tableaus in paintings provide a principled and controlled method for exploring
alternate explanations of artists' praxis. We illustrate the power of computer graphics by testing the recent
claim that Georges de la Tour secretly built an optical projector to execute Christ in the carpenter's studio,
specifically that he traced projected images in two "exposures," with the illuminant in a different position in
each. The theory's originator adduces as evidence his informal impressions that the shadows and highlights in the
depicted image imply that the illuminant is in positions other than that of the depicted candle. We tested this
projection claim by creating a computer graphics model of the tableau and adjusting the location of the model's
illuminants so as to reproduce as closely as possible the pattern of shadows and highlights in the depicted scene.
We found that for one "exposure" the model illuminant was quite close to the depicted candle, rather than in
the position demanded by the projection theory. We found that for the other "exposure" no single illuminant
location explained all highlights perfectly but the evidence was most consistent with the illuminant being in the
location of the candle. Our simulation evidence therefore argues against the projection theory for this painting, a
conclusion that comports with those from earlier studies of this and other paintings by de la Tour. We conclude
with general lessons and suggestions on the use of computer graphics in the study of two-dimensional visual art.
Over the past quarter century, measures of statistical regularities of natural scenes have emerged as important tools in
explaining the coding properties of the mammalian visual system. Such measures have recently been extended to the
study of art. Our own work has shown that a log nonlinearity is a reasonable first approximation of the type of luminance
compression that artists perform when they create images. But how does this nonlinearity compare to those that artists
actually use? In this paper, we propose a model of the global luminance compression strategy used by one artist. We also
compare the curves required to transform natural scenes so that the scene luminance histograms match the histograms
of a number of collections of art, and we test the response of observers to those scenes. The collections included a
group of Hudson River School paintings; a group of works deemed to be "abstract" works in a forced-choice paradigm;
collections of paintings from the Eastern and Western hemispheres; and other classes. If a single transform were
sufficient to compress images in the way artists do, we would expect these transforms all to be log-like and on average,
there should be little or no difference in observer preference for the collection of natural scenes when they are
compressed according to these transforms. We find instead that these groupings of art have distinct transforms and that
Western observers prefer many of these transforms over a log transform. Together these findings offer evidence that a
painter's global luminance compression strategy, or "artist's look-up table," may be a fundamental property of a given
painter or grouping of paintings, though further study is needed to establish what factors determine the shape of this
transform. We discuss a number of possible factors.
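The two operations discussed above can be sketched directly: a log-like luminance compression, and the monotone transform that matches one luminance histogram to another (an "artist's look-up table" is, in effect, such a transform). The synthetic luminance data here are invented for illustration.

```python
# Log compression and histogram matching of luminance images.
import numpy as np

def log_compress(lum, eps=1e-3):
    """Log-like compression, rescaled to the unit interval."""
    out = np.log(lum + eps)
    return (out - out.min()) / (out.max() - out.min())

def match_histogram(source, template):
    """Monotone map taking source luminances onto the template's distribution."""
    s_sorted = np.sort(source.ravel())
    t_sorted = np.sort(template.ravel())
    ranks = np.searchsorted(s_sorted, source.ravel(), side='left')
    return t_sorted[np.clip(ranks, 0, len(t_sorted) - 1)].reshape(source.shape)

rng = np.random.default_rng(3)
scene = rng.random((32, 32)) ** 2        # skewed "natural scene" luminances
painting = rng.random((32, 32))          # flatter "painting" histogram
compressed = log_compress(scene)
matched = match_histogram(scene, painting)
```

Fitting the rank-to-rank map recovered by `match_histogram` for a collection of paintings, and comparing it with a pure log curve, is the comparison the abstract describes.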