Operational cleaning robots face numerous challenges when attempting to grasp objects deftly and steadily in cluttered scenes, owing to factors such as limited workspace, stacked items, and restricted sensor perception. To address this issue, we propose CR-Graspnet, a six-degree-of-freedom (6-DoF) grasp generation network that utilizes point cloud contact features. This approach decouples the grasp pose in high-dimensional space by defining contact points, allowing for joint learning of contact point sampling, grasp parameter regression, and grasp quality classification. Our experimental results demonstrate the effectiveness and feasibility of this method, with a success rate of 93% in single-target grasping scenarios.
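As an illustration of the decoupled contact-point formulation, the minimal PyTorch sketch below shows how per-point features could drive three parallel heads for contact point sampling, grasp parameter regression, and grasp quality classification. The layer sizes and the grasp parameterization (approach vector, in-plane rotation, width, depth) are assumptions for illustration, not details taken from CR-Graspnet.

```python
# Hypothetical contact-point grasp head in the spirit of the described approach.
# Layer sizes and grasp parameterization are assumptions, not from the paper.
import torch
import torch.nn as nn

class ContactGraspHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        def head(out_dim):
            return nn.Sequential(
                nn.Conv1d(feat_dim, 128, 1), nn.ReLU(), nn.Conv1d(128, out_dim, 1))
        self.contact_score = head(1)   # per-point contact sampling logit
        self.grasp_params = head(6)    # approach dir (3), in-plane rotation, width, depth
        self.quality = head(1)         # grasp quality classification logit

    def forward(self, point_feats: torch.Tensor):
        # point_feats: (B, feat_dim, N) per-point features from a point cloud backbone
        contact = self.contact_score(point_feats)                  # (B, 1, N)
        params = self.grasp_params(point_feats)                    # (B, 6, N)
        approach = nn.functional.normalize(params[:, :3], dim=1)   # unit approach vector
        rot, width, depth = params[:, 3:4], params[:, 4:5], params[:, 5:6]
        quality = self.quality(point_feats)                        # (B, 1, N)
        return contact, (approach, rot, width, depth), quality

# Dummy usage with random per-point features
feats = torch.randn(2, 256, 1024)
contact, grasp, quality = ContactGraspHead()(feats)
```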
Compared with flat ground, objects in indoor environments present a variety of curved surfaces, and cleaning these surfaces is a higher-level task than traditional ground cleaning. To accomplish this task, we propose a new method that wipes the curved surface of an object and completes the cleaning work through the end tool of a robotic arm. Our method consists of two parts: an attention-based Feature Fusion Network (AFFNet) for RGB-D semantic segmentation, which effectively fuses encoder and decoder features to improve segmentation accuracy; and a point-cloud-based path planning algorithm, which autonomously generates the robotic arm's operation path. Experimental results show that AFFNet achieves 46.24% mIoU on the SUNRGBD dataset, and the robotic arm completes the curved-surface cleaning operation along a continuous, smooth path.
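To illustrate what attention-based fusion of encoder and decoder features can look like, the sketch below implements a channel-attention fusion block in PyTorch. The squeeze-and-excitation-style weighting and the layer sizes are assumptions for illustration, not AFFNet's actual architecture.

```python
# Illustrative channel-attention fusion of encoder and decoder features.
# The attention design and channel reduction ratio are assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-and-excitation style channel attention over the concatenated features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, 2 * channels, 1), nn.Sigmoid())
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor):
        # enc_feat, dec_feat: (B, C, H, W) encoder and decoder features at the same scale
        fused = torch.cat([enc_feat, dec_feat], dim=1)   # (B, 2C, H, W)
        fused = fused * self.attn(fused)                 # re-weight channels
        return self.project(fused)                       # (B, C, H, W)

out = AttentionFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```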
Fabric defect segmentation is an important part of ensuring product quality. Using fabrics with surface defects degrades both the quality and the reputation of the resulting products. In previous studies, model compression methods have helped semantic segmentation models to be deployed on resource-limited devices. However, reducing model capacity usually leads to a decline in detection performance. We propose a knowledge distillation method that combines traditional KD loss with contrastive relational distillation, which lets student models learn a discriminative representation of the various defects while receiving knowledge transfer from the teacher model. We use DeepLabV3+ and PSPNet with MobileNetV2 backbone networks as student models to validate our method. Experiments show that our method performs better than direct training and traditional knowledge distillation methods on the DAGM and AITEX datasets. Our method enables lightweight models to achieve higher performance on fabric defect segmentation tasks.
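The following PyTorch sketch shows how such a combined objective could be assembled: a temperature-scaled KD term on the teacher's softened logits plus a contrastive relational term that aligns the student's pairwise embedding similarities with the teacher's. The specific contrastive formulation, temperatures, and weights are assumptions made for illustration, not the paper's exact definitions.

```python
# Sketch of a combined KD objective: soft-label distillation + a contrastive
# relational term. Formulation details and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 4.0):
    # Classic KD: KL divergence between temperature-softened distributions.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def contrastive_relational_loss(student_emb, teacher_emb, tau: float = 0.1):
    # Align the student's pairwise similarity structure with the teacher's,
    # encouraging separation between embeddings of different defect types.
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    sim_s = s @ s.t() / tau
    sim_t = t @ t.t() / tau
    return F.kl_div(F.log_softmax(sim_s, dim=1), F.softmax(sim_t, dim=1),
                    reduction="batchmean")

def total_loss(student_logits, teacher_logits, student_emb, teacher_emb,
               targets, alpha: float = 0.5, beta: float = 0.5):
    # Supervised cross-entropy plus the two distillation terms.
    ce = F.cross_entropy(student_logits, targets)
    return ce + alpha * kd_loss(student_logits, teacher_logits) \
              + beta * contrastive_relational_loss(student_emb, teacher_emb)
```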
In low-illumination environments, insufficient visible light and the presence of near-infrared light cause photon noise and color distortion in the imaging of night-vision CMOS sensors. The light source strongly affects surveillance camera imaging and reduces the accuracy of semantic segmentation. In this work, we report a modified convolutional neural network based on DeepLabV3+. We change the backbone of the network from Xception to MobileNetV2 to handle the real-time vision task of night-vision surveillance cameras. MobileNetV2 adopts linear bottlenecks and inverted residuals, which greatly reduce the number of network parameters. A real-world low-light dataset with fine annotations for night-vision surveillance cameras is proposed to train and evaluate the new framework. To address the problem of insufficient training samples, transfer learning and a new image enhancement strategy are applied during training. We also replace the loss function with a joint loss function to further improve the segmentation results. Compared with existing state-of-the-art algorithms, the modified network shows competitive performance in both subjective and objective assessments. An ablation study against the baseline model demonstrates the effectiveness of the modifications.
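The abstract does not specify the terms of the joint loss, so the sketch below assumes a common combination of cross-entropy and Dice loss purely to illustrate the idea of a joint segmentation objective; the actual components and weighting used in the paper may differ.

```python
# Illustrative joint segmentation loss (cross-entropy + Dice); the exact terms
# used in the paper are not stated and are assumed here for illustration.
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps: float = 1e-6):
    # logits: (B, C, H, W); targets: (B, H, W) integer class labels
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def joint_loss(logits, targets, dice_weight: float = 1.0):
    # Pixel-wise cross-entropy plus a region-overlap (Dice) term.
    return F.cross_entropy(logits, targets) + dice_weight * dice_loss(logits, targets)
```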
Structured light fields can be corrupted by noisy environmental light. Defects occur in the reconstruction process due to large changes in surface reflectivity, which can ruin the measurement results. Thus, this paper proposes a structured light measuring system that takes advantage of blue structured light to reduce the disturbance from noise. A set of geometric feature parameters is proposed to characterize the assembly errors of assembled parts, and the corresponding computation algorithms are presented based on the measured scattered point data. The proposed method can effectively reduce the influence of reflectivity deficiencies. Experimental studies were conducted by measuring an assembly part made of aluminum alloy, and the results are compared with those obtained by a robotic coordinate measuring machine from Hexagon. The results show that the proposed measurement method and the developed system provide an efficient, high-precision, non-contact way to analyze feature parameters of assembly parts with highly reflective surfaces.
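As a simple illustration of computing one geometric feature parameter from scattered point data, the sketch below fits planes to two measured surfaces via SVD and reports the dihedral angle between them as an assembly-error indicator. The actual feature set and computation algorithms in the paper may differ; this only shows the principle.

```python
# Illustrative feature-parameter extraction from scattered measurement points:
# least-squares plane fitting and the angle between two fitted planes.
import numpy as np

def fit_plane(points: np.ndarray) -> np.ndarray:
    # Plane normal via SVD of the centered point set (N, 3).
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # singular vector of the smallest singular value

def plane_angle_deg(points_a: np.ndarray, points_b: np.ndarray) -> float:
    # Angle between two fitted planes, usable as an assembly-error indicator.
    n_a, n_b = fit_plane(points_a), fit_plane(points_b)
    cos = abs(np.dot(n_a, n_b)) / (np.linalg.norm(n_a) * np.linalg.norm(n_b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: two noisy, nominally perpendicular measured surfaces.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
surf_a = np.c_[xy, 0.001 * rng.standard_normal(500)]                         # ~ z = 0
surf_b = np.c_[xy[:, :1], 0.001 * rng.standard_normal((500, 1)), xy[:, 1:]]  # ~ y = 0
print(plane_angle_deg(surf_a, surf_b))  # close to 90 degrees
```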