Keratoconus is a chronic degenerative disease that results in progressive corneal thinning and steepening, leading to irregular astigmatism and decreased visual acuity; in severe cases it may cause debilitating visual impairment. In recent years, different Machine Learning methods have been applied to distinguish between normal and keratoconic eyes. These methods utilize both corneal curvature maps and their corresponding numeric indices to perform the classification. The main objective of this study is to evaluate the performance of features extracted with Histograms of Oriented Gradients (HOG) and with Convolutional Neural Networks (CNN) in the classification of normal and keratoconic eyes, using axial maps of the anterior corneal surface. Two distinct models were trained using the same Multilayer Perceptron (MLP) architecture: one using the HOG features as input, and the other using the CNN features. The Topographic Keratoconus Classification index (TKC) provided by Pentacam™ was used as the label, and KC2-labeled maps were defined as keratoconus. Each model was trained using 3,000 images of normal and 3,000 of keratoconic eyes, and then validated and tested on 1,000 images of each label. The model trained with HOG features exhibited a sensitivity of 99.1% and specificity of 98.7%, with an Area Under the Curve (AUC) of 0.999143. The model trained with CNN features showed both sensitivity and specificity of 99.5%, with AUC = 0.999778. The results suggest that the classifier performs similarly with both types of features.
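The HOG-plus-MLP pipeline described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the image sizes, HOG parameters, MLP hyperparameters, and the synthetic stand-ins for the Pentacam axial maps are all assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for anterior-surface axial maps. The actual study
# used Pentacam images labeled with the TKC index; the sizes, sample
# counts, and distributions here are illustrative assumptions only.
rng = np.random.default_rng(0)
normal = rng.normal(0.4, 0.05, size=(20, 64, 64))
kerato = rng.normal(0.6, 0.05, size=(20, 64, 64))
images = np.concatenate([normal, kerato])
labels = np.array([0] * 20 + [1] * 20)  # 0 = normal, 1 = keratoconus (KC2)

# Extract a HOG descriptor from each map (parameter values assumed).
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# Train a small MLP on the HOG features (architecture assumed).
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))
```

Swapping the feature extractor for a pretrained CNN's penultimate-layer activations, while keeping the same MLP head, would mirror the study's second model.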
Precision Agriculture stands out as one of the most promising areas for the development of new technologies around the world. Advances in this area include the mapping of productivity areas and the development of sensors for climate and soil analysis, improving the smart use of resources during crop management and helping farmers during the decision-making stages. Among the problems of modern agriculture, the intensive and non-localized use of herbicides causes environmental issues, contributes to elevated costs in farmers’ budgets, and results in the application of chemical substances to non-target organisms. Although many selective herbicide spraying systems are available, most operate by detecting chlorophyll and therefore cannot distinguish crop plants from weeds with high accuracy in post-emergence herbicide applications (“green-on-green” applications). The main objective of this study is to develop a multispectral camera system for in-crop weed recognition using Computer Vision techniques. The system was built with four monochromatic CMOS sensor cameras fitted with monochromatic wavelength bandpass filters (green, red, near infrared, and infrared) and an RGB camera. Images of soybean and weed plants were captured in a controlled environment using an automated v-slot rail system to simulate the movement of a spray tractor in the field. Infrared images presented higher precision (90.5%) and recall (89.3%) values compared to the other monochromatic bands, followed by RGB (87.0% and 86.1%, respectively) and near infrared images (83.6% and 87.9%), suggesting that infrared wavelengths play an important role in plant detection and classification. Our results indicate that combining Computer Vision with multispectral images of plants is a more efficient approach for targeting weeds among crop plants in post-emergence herbicide applications.
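The per-band comparison above comes down to computing precision and recall for each band's classifier. A minimal sketch of that evaluation step is below; the labels and predictions are toy data (not the study's measurements), and the band names merely echo the abstract.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Illustrative assumption: y_true marks weed (1) vs. soybean (0) detections,
# and each band's classifier has produced its own predictions over the same
# samples. All values below are synthetic.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
preds = {
    "infrared":      np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0]),
    "rgb":           np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0]),
    "near_infrared": np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 1]),
}

# Precision: fraction of predicted weeds that are truly weeds (wasted spray
# if low). Recall: fraction of true weeds that were caught (missed weeds
# if low). Both matter for a selective sprayer.
for band, y_pred in preds.items():
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"{band}: precision={p:.3f} recall={r:.3f}")
```

Ranking bands by these two metrics, as in the study, makes the trade-off explicit: a band with high precision but low recall sprays accurately yet misses weeds, and vice versa.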
Dry eye is one of the most frequently reported eye health conditions and is characterized by dryness, decreased tear production, or increased tear film evaporation. Middle-aged and elderly people are most commonly affected because of the high prevalence of contact lens usage, systemic drug effects, autoimmune diseases, and refractive surgeries. Corneal topography images have recently been used for noninvasive assessment based on the Placido rings pattern: the rings in normal eyes are smooth and undistorted, whereas they are distorted in affected eyes. We developed a method of analysis that processes the corneal topography image to determine the Tear Break-up Time (TBUT), using the Tear Film Surface Quality (TFSQ) measurement. To avoid distortions not caused by tear film break-up, the method dynamically removes eyelash shadows from the image-processing area. The results show that the proposed analysis is able to determine the TBUT from the graphical analysis, and it can be used to help eye care specialists diagnose dry eye disease.
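Conceptually, TBUT is the elapsed time until the tear film surface quality first drops below a break-up threshold. The sketch below illustrates only that final step; the TFSQ values, frame rate, and threshold are synthetic assumptions, and in the actual method each TFSQ value would come from analyzing ring distortion in one topography frame after masking out eyelash shadows.

```python
import numpy as np

# Assumed frame rate of the topographer video and break-up threshold;
# neither value comes from the study.
FPS = 8
THRESHOLD = 0.80

# Synthetic per-frame TFSQ values: quality degrades as the tear film
# breaks up and the Placido rings distort.
tfsq = np.array([0.95, 0.94, 0.93, 0.91, 0.88, 0.72, 0.55, 0.40])

def tbut_seconds(tfsq, threshold, fps):
    """Return the time (s) of the first frame whose TFSQ falls below
    the threshold, or None if no break-up occurs in the recording."""
    below = np.flatnonzero(tfsq < threshold)
    if below.size == 0:
        return None
    return below[0] / fps

print(tbut_seconds(tfsq, THRESHOLD, FPS))
```

The useful property of this formulation is that eyelash-shadow masking only changes which pixels contribute to each frame's TFSQ value; the break-up detection itself stays a simple threshold crossing over the TFSQ time series.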