Khao Dawk Mali 105 (KDML105), internationally known as "Jasmine rice", is one of the most famous and commercially important rice varieties in Thailand. The physical appearance of the polished grain is a key factor influencing the price of rice, and one of the major appearance traits is the degree of chalkiness. Rice breeders therefore invest considerable effort in reducing grain chalkiness to meet market quality standards and consumer preferences. A routine task in the breeding process is visual inspection of the chalkiness level of rice grains. Since human visual inspection is slow, subjective, and inconsistent over long periods, we propose using global thresholding methods to automatically segment chalky areas, improving the speed of chalkiness inspection and providing objective, consistent results. However, the characteristics of rice chalk cause several difficulties for a thresholding mechanism, so we propose a new histogram improvement method that makes the global threshold value easier to compute. The proposed histogram improvement method has several desirable properties: very low computational cost, effective handling of low contrast, and insensitivity to the size and location of objects. Its effectiveness is evaluated using four well-known global thresholding methods on 96 real chalky grain images with varying degrees and characteristics of chalkiness. The accuracy of the segmented chalk areas was verified by comparing them with manual segmentations produced by rice researchers. Experimental results demonstrate that the proposed histogram improvement method significantly improves the segmentation results.
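To make the thresholding step concrete, below is a minimal sketch of one standard global thresholding method (Otsu's method) applied to the grayscale histogram of grain pixels. The background mask, threshold of the brighter class as chalk, and usage names here are illustrative assumptions; the paper's specific histogram improvement step, which reshapes the histogram before a threshold like this is computed, is not reproduced.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Global threshold (Otsu): pick the gray level that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(values, bins=nbins, range=(0, 256))
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                           # weight of the "dark" class
    w1 = hist.sum() - w0                           # weight of the "bright" class
    cum = np.cumsum(hist * centers)
    mu0 = cum / np.maximum(w0, 1e-12)              # mean of the dark class
    mu1 = (cum[-1] - cum) / np.maximum(w1, 1e-12)  # mean of the bright class
    between_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_var)]

# Hypothetical usage on a grayscale grain image `img` (uint8 numpy array):
# restrict the histogram to grain pixels (assumed near-black background),
# then label the brighter class inside the grain as chalk.
# grain_mask = img > 10
# t = otsu_threshold(img[grain_mask])
# chalk_mask = (img >= t) & grain_mask
# chalk_ratio = chalk_mask.sum() / grain_mask.sum()
```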
Generally, the purposes of saliency detection models for salient object detection and for fixation prediction are complementary: models for salient object detection aim to discover as many true positives as possible, while models for fixation prediction aim to generate as few false positives as possible. In this work, we attempt to combine their strengths. We accomplish this by, first, replacing the high-level features frequently used in fixation prediction models with our new saliency location map, making the model more general; and second, training the saliency detection model with human eye-tracking data so that it corresponds well to human eye fixations (without the use of top-down attention). We evaluate the performance of our saliency location map on both salient object detection and fixation prediction datasets, in comparison with six state-of-the-art saliency detection models. The experimental results show that the proposed method is superior to the other methods for salient object detection on the MSRA dataset [1]. For fixation prediction, the results show that our saliency location map performs comparably to high-level features while requiring much less computation time.
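As a hedged illustration of the fixation prediction side of such an evaluation, the sketch below scores a saliency map against a binary fixation map with the commonly used pixel-wise AUC (fixated pixels as positives, all other pixels as negatives). The exact metrics and protocol used in the paper are not stated in this abstract, so the function name, inputs, and this choice of metric are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fixation_auc(saliency_map, fixation_map):
    """Pixel-wise AUC for fixation prediction: saliency values are the scores,
    fixated pixels are the positive class, all remaining pixels are negatives."""
    scores = np.asarray(saliency_map, dtype=float).ravel()
    labels = (np.asarray(fixation_map).ravel() > 0).astype(int)
    return roc_auc_score(labels, scores)

# Hypothetical usage: `sal` is a predicted saliency map and `fix` is a binary
# map of recorded human fixations on the same image.
# auc = fixation_auc(sal, fix)
# print(f"AUC: {auc:.3f}")
```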