This PDF file contains the front matter associated with SPIE Proceedings Volume 12158 including the Title Page, Copyright information, and Table of Contents.
Xichou County, Wenshan Prefecture, Yunnan Province suffered serious ecological damage from the 1960s to the 1980s, which sharply reduced the county's forest coverage and significantly expanded the area of rocky desertification. Over the past decade or so, the county has achieved notable results through ecological governance. With the help of the GEE platform, this paper uses an NDRI-based pixel dichotomy model to extract rocky desertification information for Xichou County in 2005, 2010, 2016 and 2021. The results show that although local rocky desertification had intensified, timely measures prevented further deterioration. Owing to strengthened control and prevention efforts, the non-rocky-desertification area increased continuously from 2005 to 2021, and the area of very severe rocky desertification was greatly reduced. This remarkable improvement also provides favourable conditions for improving the local living environment and for economic development.
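A minimal sketch of the pixel dichotomy (dimidiate pixel) model referred to here, applied to an NDRI image: bedrock exposure fraction from percentile endpoints, then ordinal grading. The 5%/95% endpoints and the grade thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bedrock_fraction(ndri, low_pct=5, high_pct=95):
    """Per-pixel fraction of exposed rock from an NDRI array (NaN = no data)."""
    ndri_min = np.nanpercentile(ndri, low_pct)    # assumed "pure non-rock" endpoint
    ndri_max = np.nanpercentile(ndri, high_pct)   # assumed "pure rock" endpoint
    frac = (ndri - ndri_min) / (ndri_max - ndri_min)
    return np.clip(frac, 0.0, 1.0)

def grade_desertification(frac):
    """Map bedrock fraction to grades 0 (none) .. 4 (very severe); thresholds are assumed."""
    return np.digitize(frac, [0.2, 0.4, 0.6, 0.8])

ndri = np.random.uniform(-0.2, 0.6, size=(100, 100))   # stand-in for a GEE export
grades = grade_desertification(bedrock_fraction(ndri))
print(np.bincount(grades.ravel(), minlength=5))         # pixel count per grade
```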
In view of the fact that current methods for estimating the quality of depth generalization in nautical chart cartography are simple and coarse, a set of quantitative quality indices and evaluation methods is proposed for evaluating the quality of depth generalization. On the basis of the "Specification for Chinese Nautical Charts", three kinds of quality indices, namely the probability of adequate depth, the representativeness of depth, and the distribution of depth, are designed to evaluate, respectively, navigational safety, the available navigable resource, and the arrangement of charted depths. Correspondingly, quantitative evaluation methods for the comprehensive quality of a nautical chart are proposed. The experimental results demonstrate that: (1) compared with the traditional method, the proposed method can accurately analyze and evaluate the quality of each depth for ensuring navigational safety; (2) compared with the accuracy assessment method, the proposed method can accurately identify the depths that represent navigable resources such as sea channels and narrow channels; (3) compared with the average spacing method, the proposed method can quantitatively evaluate the location distribution of each depth annotation on the chart and accurately determine whether the distribution of each depth point is clear and reasonable.
Image segmentation is an important step in object-oriented information extraction and directly affects the accuracy of information extraction from high-resolution remote sensing images. This paper studies the optimal segmentation scales, determined with the RMAS indicator in different layers, for nine types of ground features used to extract land use information, such as cultivated land, woodland and grassland. At the same time, the global optimal segmentation scale constructed with the RMNE indicator was used to extract land use information under a single scale layer with five classifiers: Bayes, nearest neighbor, decision tree, random forest and SVM. The classification results of the two approaches were compared and analyzed. The results show that the multiscale optimal segmentation method adopted in this paper can effectively solve the problems of object confusion and ground object fragmentation that occur in classification under a single scale layer, and achieves better classification accuracy.
To explore the application of image ranging technology in geotechnical testing, the deformation measurement of a root-soil composite was studied by taking the simple shear test as an example, and the concrete implementation steps and optimization measures of the image ranging technique are described. Comparative analysis shows that image ranging technology can effectively monitor the shear deformation of the shear box, with the error basically controlled within 10%. The results further enrich the displacement monitoring methods available for laboratory model tests.
Computed Tomography (CT) is one of the essential techniques for non-destructive testing, and accurate reconstructed images are the basis for subsequent analysis tasks. This paper proposes a convolutional neural network-based CT reconstruction algorithm that generates reconstructed CT images directly from the sinogram using feature encoding and decoding. The method was applied to the reconstruction of abdominal scanning data, and the results show that the corresponding reconstructions can be obtained quickly. During network training, we designed different data pre-processing methods and analyzed the role of each module in the network by visualizing its output features; the roles of different modules in feature extraction and image generation are then further analyzed. We found that the conversion from projection to image can be effectively achieved using only convolution operations, which is essential for reconstructing CT images with deep learning techniques.
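A hedged PyTorch sketch of the general idea described here: a purely convolutional encoder-decoder that maps a sinogram tensor to a reconstructed slice. The layer sizes and the 1x256x256 shapes are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SinogramToImage(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(            # extract features from the sinogram
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(            # generate the image-domain output
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, sinogram):
        return self.decode(self.encode(sinogram))

model = SinogramToImage()
fake_sinogram = torch.randn(1, 1, 256, 256)      # batch of one synthetic sinogram
print(model(fake_sinogram).shape)                # torch.Size([1, 1, 256, 256])
```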
It has been shown that acoustic landmarks, whether derived from phonetic knowledge or from data-driven methods, are useful in detecting mispronunciation. The acoustic landmarks obtained by the two methods are not completely consistent, so it remains to be studied which method is better, and the role that acoustic landmarks play in the mispronunciation detection task needs further exploration. This paper compares the consistency of different acoustic landmark detection methods, compares mispronunciation detection based on the TNDD-GOP architecture and the hybrid CTC/Attention model, and verifies the role of the acoustic landmark method in mispronunciation detection with a weighting scheme. The experimental results show that the higher the weight assigned to the acoustic landmarks, the better the detection performance, and that detection with landmarks updated from phonetic knowledge outperforms the data-driven method: DA and FRR are improved by 3.38% and 1.38%, respectively.
We introduce a method that improves the feature maps and anchor boxes of the YOLO v3 network on the VOC dataset, so as to improve detection accuracy on a specific dataset, and also improves the detection accuracy of small targets to some extent. We improved the YOLO v3 network by increasing the number of feature extraction layers and deepening its structure so that it can detect targets better; for different datasets, the anchor box sizes need to be adjusted accordingly. We added the feature-layer extraction scheme of SSD into the YOLO v3 network to match the corresponding datasets, and used the characteristics of ResNet to solve the problem of small-target distortion after multiple convolutions. The modified model improves mAP by 2.93% on VOC2007. We put forward three points: (1) change the original network structure, increase the number of feature layers and add convolution layers to deepen the network; (2) change the scale of the prior boxes to match the target scale of the dataset; (3) deepen the ResNet in the shallow network to extract small targets, select an appropriate anchor box size for the specific small targets, augment the small-target data and then train.
Recent advances in Generative Adversarial Networks (GANs) have shown impressive improvements for facial expression manipulation. However, previous methods still generate undesired artifacts and blurs in large-gap and large-angle situations. To address these problems, we propose a novel Landmark Guided Attentive GAN (LGA-GAN). A novel Expression Extraction Network (EENet) is proposed to extract expression-related features. At the heart of our method is a new landmark guided attentive (LGA) matrix that calculates where the expression of a pixel in the reference image should be applied in the synthesized result. With the help of LGA matrix and source image, the Expression Injection Network (EINet) decodes the transferred feature and outputs the synthesized image. Extensive experiments on both quantitative and qualitative evaluation demonstrate the improvements of our proposed approach.
Image semantic segmentation plays an important role in driver-assistance and autonomous driving systems. Due to the complexity of outdoor scenes and driving scenarios, algorithms that use only texture images have low robustness; depth images can therefore be used to assist texture images and improve segmentation performance. In addition, driver-assistance systems require real-time operation, but existing algorithms are limited by the complexity of semantic segmentation and run inefficiently. To address these problems, a cross-scale feature extraction module for efficient RGBD image semantic segmentation is proposed. The module has a small number of parameters, a large receptive field and the ability to merge multi-scale features, so it can efficiently extract context features. The proposed model achieves a segmentation accuracy of 69.4% mIoU on the original-resolution RGBD images of the outdoor scene dataset Cityscapes and runs at up to 120 frames per second. Compared with related algorithms, the proposed model has an obvious advantage in running speed and achieves a good balance between performance and efficiency.
A moiré pattern is the interference fringe produced by two equal-amplitude sinusoids with close frequencies. In digital imaging, images captured in certain scenes are vulnerable to moiré, for example images of knitted fabrics or images taken of LED screens, and their visual quality is often seriously degraded. The difficulty of demoiréing is that moiré patterns are distributed across different frequency bands and vary in color and shape. To fully learn the global information of moiré images and remove moiré patterns over a wide range of frequency bands, we propose a multi-stage and multi-patch network that recovers non-homogeneous moiré images by aggregating the features of different spatial regions of patches at different stages. To increase the receptive field, we also introduce a novel Atrous Fusion Module with different atrous rates to learn multi-scale information. With these improvements, our proposed network achieves higher accuracy than state-of-the-art approaches on the public dataset of the NTIRE2020 Single Image Demoiréing Challenge.
Sea surface temperature (SST), as one of the most important ocean variables, is widely used in research on climate change, sea-air heat exchange, and ocean-atmosphere numerical simulation and prediction, so constructing a real-time, global-coverage, high-resolution SST dataset is very important for weather forecasting and climate prediction. Because of the spatial limitations of in situ SST observations, satellite remote sensing SST data are increasingly used as the main input to global SST datasets. As a first step, quality assessment of satellite-retrieved SST is necessary to understand its performance and quantify its error. In this paper, FY-3C/VIRR SST, Metop-B/AVHRR SST and GCOM-W1/AMSR2 SST were collected as the main input data and evaluated against buoy-observed SST from iQuam. The results show that the RMSE of the Metop-B/AVHRR SST is 0.4059 °C in daytime and 0.4329 °C at night. The RMSE of the AMSR2 SST is slightly larger than that of AVHRR, at 0.5823 °C in daytime and 0.5224 °C at night. Although the RMSE of the VIRR SST is somewhat larger, the time series and spatial distribution show that VIRR SST can still provide useful information over the global region. With the development of new instruments, Fengyun satellite SST retrievals will improve and play an increasingly important role in ocean science.
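A minimal sketch of the matchup statistics used in this kind of SST quality assessment: bias and RMSE of satellite retrievals against collocated buoy observations. The arrays below are synthetic stand-ins, not iQuam data.

```python
import numpy as np

def sst_stats(sat_sst, buoy_sst):
    """Bias and RMSE of satellite SST against collocated buoy SST."""
    diff = np.asarray(sat_sst) - np.asarray(buoy_sst)
    return diff.mean(), np.sqrt((diff ** 2).mean())

buoy = np.array([18.2, 25.1, 29.4, 12.8])               # buoy SST, degrees C
sat = buoy + np.random.normal(0.0, 0.4, buoy.size)      # retrieval with ~0.4 C noise
print("bias=%.3f C, rmse=%.3f C" % sst_stats(sat, buoy))
```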
Due to the complex and varying underwater environment, the optical images obtained are low-resolution and noisy, which makes them difficult to register and stitch. To solve these problems, we propose an improved image registration method based on MSRCR and SIFT. First, multi-scale retinex with color restoration (MSRCR) is applied to improve the low-quality underwater image, and the contrast is improved by contrast-limited adaptive histogram equalization (CLAHE). Then the scale-invariant feature transform (SIFT) algorithm is used to extract feature points from the reference and sensed images, the feature points are coarsely matched with a K-nearest-neighbor (KNN) ratio matching algorithm, and random sample consensus (RANSAC) is used to eliminate mismatched feature points and improve matching accuracy. Finally, the transformation matrix is calculated and the mosaic image is output. The experimental results show a better underwater image registration and stitching effect, demonstrating that the algorithm can improve the accuracy of underwater image registration through image enhancement.
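A sketch of the registration stage described here, using OpenCV: CLAHE enhancement, SIFT keypoints, KNN ratio matching and RANSAC homography estimation. The MSRCR step is assumed to have been applied to the inputs beforehand; the file paths and the 0.75 ratio threshold are illustrative.

```python
import cv2
import numpy as np

def register(ref_path, sensed_path, ratio=0.75):
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    sensed = cv2.imread(sensed_path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ref, sensed = clahe.apply(ref), clahe.apply(sensed)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(sensed, None)

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)  # sensed points
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)  # reference points
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)                    # drop mismatches
    return cv2.warpPerspective(cv2.imread(sensed_path), H, ref.shape[::-1])
```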
Music has been an indispensable part of human society since ancient times. Its evolution is shaped by human history and in turn has a wide impact on humans. This paper establishes a musical influence model based on factor analysis and a music similarity model based on Euclidean distance to quantify and analyze the evolution of music. First, using graph theory we connect all artists, establish a network and visualize it. To measure the influence of each musician, we select three indicators and use TOPSIS to establish a factor-based music influence model; a subnet of the network is then generated with five artists as parent nodes. Next, we retain 14 music characteristics related to artist similarity in the complete music data and use factor analysis to reduce these 14 features to five representative principal components. Finally, a music similarity model based on Euclidean distance is established in SPSS, and the similarity of artists within the same genre and among artists overall is calculated as 0.7842 and 0.7804, respectively, indicating that artists of the same genre are more similar.
This article describes, analyzes and models three data sets to quantify the influence of, and changes between, different artists and music genres, providing a reference for the significance of music to the development of society. First, the paper uses the data to build a directed graph from influencers to followers and calculates the influence of musicians by node degree. Second, a music influence evaluation model is established with the fuzzy comprehensive evaluation method, using the influence of each genre, the number of musicians in each genre and other data, and all genres are evaluated and scored. Then the five principal components obtained by principal component analysis are used as new attributes. Finally, the attributes are vectorized and the similarity between artists is calculated with the cosine similarity method.
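A small sketch of the cosine-similarity step: after dimensionality reduction each artist is a short feature vector, and similarity is the cosine of the angle between vectors. The two example vectors are made up for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

artist_a = [0.8, -0.1, 0.3, 0.5, 0.0]   # five principal-component scores (made up)
artist_b = [0.7,  0.0, 0.2, 0.6, 0.1]
print(round(cosine_similarity(artist_a, artist_b), 4))
```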
Computer Technology and Human-Computer Interaction Design
Alignment of the acquired projections is necessary for accurate reconstruction in nano computed tomography (nano-CT) because of thermal drift. In this paper, a method based on feature outlier elimination (OE) is proposed to reduce drift artifacts in the reconstructed slices; it requires a series of sparse reference projections. A rough alignment is first obtained by extracting Speeded Up Robust Features (SURF) from both the original and the reference projections, with structural similarity (SSIM) used to eliminate outlier features. The remaining features are then used for further alignment before reconstruction. Simulation results show that the proposed method is more accurate and robust than image registration based on the entropy correlation coefficient (ECC) and than traditional SURF. Scanning results of a bamboo stick show that the proposed method can preserve the details of the slices.
Agricultural losses caused by the difficulty and unscientific nature of pest detection are increasing year by year. Traditional detection and identification methods for agricultural pests have low accuracy and cannot meet the pest control needs of growers, so improving detection accuracy is the key problem. This paper therefore proposes an agricultural pest detection algorithm based on an improved Faster R-CNN. First, an improved FPN is combined with the backbone network to expand the low-level receptive field and enhance the algorithm's feature extraction for small targets. Then the bilinear interpolation of ROI Align replaces the rounding quantization of ROI Pooling, thereby improving small-target detection accuracy. Finally, a Convolutional Block Attention Module (CBAM) is added to the backbone network to enhance the effectiveness of feature extraction. In experiments on a natural-scene dataset that we compiled, the mean average precision (mAP) reaches 87.7%, a large improvement over the original algorithm.
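A hedged PyTorch sketch of a Convolutional Block Attention Module (CBAM) of the kind added to the backbone here: channel attention followed by spatial attention. The reduction ratio 16 and 7x7 kernel follow the original CBAM paper and are not taken from this article.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # channel attention: pool over space, run through the MLP, gate channels
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # spatial attention: pool over channels, 7x7 conv, gate locations
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

feat = torch.randn(1, 64, 32, 32)
print(CBAM(64)(feat).shape)   # torch.Size([1, 64, 32, 32])
```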
In power line inspection tasks performed by on-line robots, machine vision is needed to identify and locate the cables, but strong light interference makes image detection very difficult. This paper proposes a scheme based on laser structured light and optical filters to meet the requirements of on-line power cable inspection. The scheme recognizes and extracts cables through feature-operator convolution and morphological processing, and obtains the depth of the cable position through single-line laser structured-light marking and curve fitting. Experiments show that the scheme solves the problem of strong outdoor light interference and satisfies the needs of on-line robots for power cable inspection.
License plate recognition is very important in intelligent transportation. Under uncontrollable conditions such as illumination, rain, snow, smog, blur and deformation, license plate recognition still has difficulties and shortcomings. To solve this problem, this paper makes several improvements to the YOLOv3 algorithm: (1) a K-means++ algorithm based on mean shift is proposed to select the anchor boxes; (2) CIoU is used as the regression loss so that the fine-tuned detection result is as close as possible to the ground-truth box; (3) adaptive spatial fusion is used for feature fusion, which avoids conflicts between features in the same layer and improves the efficiency of feature fusion. Experiments show that the improved algorithm has higher accuracy than the original algorithm.
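A sketch of IoU-based k-means anchor clustering of the kind adapted here; the mean-shift-guided K-means++ initialisation from the paper is replaced by a simple random initialisation for brevity, and the box data are made up.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (N,2) width/height boxes and (K,2) anchors, both corner-aligned."""
    inter = np.minimum(boxes[:, None, :], anchors[None, :, :]).prod(axis=2)
    union = boxes.prod(axis=1)[:, None] + anchors.prod(axis=1)[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = iou_wh(boxes, anchors).argmax(axis=1)      # nearest anchor = highest IoU
        new = []
        for i in range(k):
            members = boxes[assign == i]
            new.append(members.mean(axis=0) if len(members) else anchors[i])
        anchors = np.array(new)
    return anchors

plates = np.abs(np.random.default_rng(1).normal([120, 40], [30, 10], (500, 2)))  # fake plate boxes
print(np.round(kmeans_anchors(plates, k=3), 1))
```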
In the Rayleigh fading channel, this paper compares the communication performance of a reconfigurable intelligent surface (RIS)-assisted communication system with traditional relay systems using the amplify-and-forward (AF) and decode-and-forward (DF) protocols. First, closed-form expressions for the outage probability (OP), average bit error rate (BER) and average channel capacity under different conditions are given. We then compare the signal-to-noise ratio (SNR) gain probability of the three modes. Numerical results show that the RIS-assisted communication system has obvious performance advantages over traditional relays, and that system performance improves noticeably as the number of reflecting elements increases.
To address the lack of an elevation datum for offshore uninhabited islands, a GNSS leveling fitting method is proposed to establish the elevation datum of the uninhabited Zhoushan islands "Dongdanzhi Island" and "Sheshan Island". Based on analysis of the particular terrain distribution of the survey area, quadratic curve fitting is selected as the fitting model. The experimental results show that the internal coincidence accuracy of the GNSS leveling fitting method is within ±2 mm, and the external coincidence accuracy compared with trigonometric leveling is ±4.5 mm. The quality of the cross-sea elevation transfer is qualified and reliable, realizing a new method of offshore cross-sea elevation transfer and providing a reference for island elevation fitting.
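An illustrative least-squares sketch of GNSS/leveling fitting: the height anomaly at benchmarks with both GNSS and leveled heights is fitted with a quadratic polynomial and interpolated at new GNSS points. The quadratic-surface form, coordinates and anomaly values below are assumptions for illustration, not the paper's data or exact model.

```python
import numpy as np

def design_matrix(xy):
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_quadratic(xy, zeta):
    coef, *_ = np.linalg.lstsq(design_matrix(xy), zeta, rcond=None)
    return coef

# benchmark coordinates (km) and height anomalies (m), made up for the example
bench_xy = np.array([[0.0, 0.0], [1.2, 0.3], [0.5, 1.1], [1.8, 1.6],
                     [2.2, 0.4], [0.9, 2.0], [2.5, 2.3]])
bench_zeta = np.array([11.231, 11.245, 11.238, 11.260, 11.252, 11.249, 11.271])

coef = fit_quadratic(bench_xy, bench_zeta)
residuals = bench_zeta - design_matrix(bench_xy) @ coef
print("internal fit RMS = %.4f m" % np.sqrt((residuals ** 2).mean()))
```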
With the rapid economic development of recent years, the demand for resources is gradually increasing, and mineral resource security has become a significant problem, so researchers have continued to improve geophysical exploration technology. Exploration at depth remains difficult in practice: even when the depth of a mine keeps increasing, it can be hard to find the mineral sources that are needed. Geophysical exploration applies advanced technologies and methods to basic geological research and mineral resource exploration, provides a range of basic data, and is also used in hydrological, engineering and environmental work. However, it has certain limitations and yields useful results only under certain conditions, although it has played a great role in geological prospecting. To date, China has achieved remarkable results in resource exploration in the field of geophysical prospecting. This paper discusses the application of geophysical prospecting technology and the main geophysical prospecting methods.
Prefabricated buildings offer stable quality, fast construction and low environmental pollution during construction. At present, however, their construction cost is higher than that of traditional cast-in-situ concrete, especially in component production, transportation, installation and design, so prefabricated building has developed slowly. Based on system dynamics and prefabricated building cost theory, this paper constructs a causal loop model and a stock-flow model of prefabricated building cost. The results show that in the early stage of the current market, production technology is limited and the direct cost cannot be reduced; to reduce the cost of prefabricated buildings, policy support is needed to activate market demand, which then feeds back into production technology and government policy to form a sustainable development route.
First, this paper constructs a supplier evaluation system from three aspects, overall strength, enterprise reputation and enterprise stability, makes a quantitative analysis of the supplier enterprises, uses the entropy weight method to calculate the weight of each index, and then uses the TOPSIS method to rank supplier importance. Next, a planning model is established. Given that enterprise inventory must cover at least two weeks of production, an opening inventory of five weeks is reasonable; compared with the supplier importance ranking, the accuracy is 92%. The most-economical objective is then decomposed into two cost factors, storage cost and minimum procurement cost, and a greedy algorithm is used to solve the procurement scheme with this objective. Finally, by predicting the 24-week supply and the loss rate of the transshippers, the transshipment cost is obtained using the greedy algorithm and a dynamic combinatorial programming algorithm, and the cost of the procurement scheme and the loss rate of the transshipment scheme are calculated.
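A sketch of the entropy-weight plus TOPSIS ranking used for the suppliers. All criteria are treated as benefit-type, and the 4x3 decision matrix is made-up example data, not the paper's supplier indicators.

```python
import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0)                              # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(len(X))            # entropy per criterion
    d = 1.0 - e                                        # degree of divergence
    return d / d.sum()

def topsis(X, w):
    V = w * X / np.linalg.norm(X, axis=0)              # weighted, vector-normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)         # ideal and negative-ideal solutions
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                # closeness coefficient

X = np.array([[0.82, 120, 0.95],
              [0.64, 300, 0.88],
              [0.91,  80, 0.70],
              [0.75, 150, 0.92]], float)               # 4 suppliers x 3 benefit criteria
scores = topsis(X, entropy_weights(X))
print("supplier ranking (best first):", np.argsort(-scores))
```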
Autoencoders and deep autoencoders have been widely used for dimensionality reduction and anomaly detection, and ensemble learning based on autoencoders further improves anomaly detection accuracy. However, neural networks are prone to overfitting, and current autoencoder-based ensemble methods cannot effectively diversify the autoencoders to avoid this problem. For this reason, this paper proposes an ensemble method for autoencoders. The algorithm builds a cascaded model of deep autoencoders and resamples the training set of the next network according to the anomaly detection results of the previous one, thereby improving the accuracy of the overall model. Experimental results show that the accuracy of the model is significantly better than that of current mainstream anomaly detection algorithms.
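A hedged sketch of the cascade idea: each stage is an autoencoder scored by reconstruction error, and the next stage is trained on a resampled set that drops points the previous stage already flags as anomalous. The resampling rule (keep the 80% most "normal" points) and the tiny scikit-learn MLP autoencoder are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def reconstruction_error(model, X):
    return ((model.predict(X) - X) ** 2).mean(axis=1)

def cascade_autoencoders(X, n_stages=3, keep=0.8, seed=0):
    models, train = [], X
    for s in range(n_stages):
        ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=seed + s)
        ae.fit(train, train)                          # autoencoder: reconstruct the input
        models.append(ae)
        err = reconstruction_error(ae, train)
        train = train[err <= np.quantile(err, keep)]  # resample: drop likely anomalies
    return models

def anomaly_score(models, X):
    return np.mean([reconstruction_error(m, X) for m in models], axis=0)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(4, 1, (5, 8))])  # 5 injected outliers
scores = anomaly_score(cascade_autoencoders(X), X)
print("top-5 anomaly indices:", np.argsort(-scores)[:5])
```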
The development of the mold industry has made an outstanding contribution to the development of China's industry, and the mechanical design industry is also gradually rising. With the expanding application of computer technology, it is clear that computer technology plays an important role in daily life [1]. On this basis, it is found that the efficiency of traditional mould design is very low, and new mechanical mould design techniques based on computer-aided technology are becoming more and more popular. This paper explains the relevant theories of computer-aided technology and finally puts forward the main steps of mould design.
UWB signals are widely used in indoor precise positioning because of their good temporal resolution. Motion capture can be realized by tracking human joints in 3D space using the precise positioning capability of a UWB system. Compared with traditional motion capture technology, UWB technology has advantages in cost and system complexity. This paper reviews existing research on motion capture based on UWB technology and analyzes the design of a whole-body motion capture system from three aspects: algorithms, application scenarios and technical difficulties. The review shows that UWB technology can substitute for existing motion capture technologies and is competitive to a certain extent. However, research on motion capture systems based entirely on UWB technology is still limited to individual limbs of the human body, so large-scale application of UWB motion capture under complex indoor conditions still needs further research.
Vehicle detection from traffic monitoring video, which lays the foundation for subsequent operations such as vehicle counting and accident detection, is an essential part of a traffic monitoring system. Traditional target detection methods all have drawbacks of one kind or another, while target detection based on deep learning extracts richer target features and therefore achieves much higher detection accuracy, of which YOLO v3 is a typical representative. In this paper, the model structure and principles of YOLO v3 are first analyzed in depth, and its application to vehicle detection in road surveillance video is then carried out, along with a discussion of some remaining problems.
Generative Adversarial Networks (GANs) have shown impressive achievements in computer graphics applications and are now widely used. A GAN is composed of a generator and a discriminator. GANs can generate 3D models and graphics required for animated movies or video game characters; for example, a generative model can generate a new picture that looks like a specific type of animal, while the discriminative model tries to distinguish such a generated picture from a real one. GAN variants include progressive GAN and conditional GAN. Since GANs can be applied in many different areas, this paper focuses on how GANs are applied in image processing and computer vision, specifically face synthesis, image-to-image translation and super-resolution. In conclusion, GANs have made significant contributions to various areas and have boosted advances in the domain of computer graphics.
Big Data Modeling and Intelligent Model Recognition
In recent years, the development of digital technology has driven continuous improvement in the digitalization of the international logistics industry. Shipping companies, port companies and freight forwarders have all explored digital transformation, and international logistics business is trending toward full visibility, online transactions and smart operations. In this context, the international logistics industry has created demands for convenience, efficiency, green environmental protection, smart flexibility, safety and reliability. At present, logistics data exchange in China lacks a unified standard and a standardized data exchange platform. At the same time, there have been some attempts at data exchange, such as the "four reports" of a transport vehicle, where cooperation and data exchange among the four departments involved have improved the operating efficiency of the vehicle carrier. There are two major directions for future optimization: one is to improve the quality and efficiency of data exchange through collaboration among all parties in the whole process; the other is to increase the utilization rate of data through government disclosure of the data it holds. There is reason to believe that the future is the era of data, and with the development of China's Internet and information technology, data sharing and exchange will become the direction of development and a future point of economic growth.
Farmland protection compensation policy is grounded in environmental protection and mobilizes farmers' enthusiasm for protecting the ecological environment through economic incentives. Based on survey data from farmers, this article analyzes the impact of farmland protection compensation policies on the household income, household expenditure, labor supply and non-agricultural labor transfer of the interviewed farmers. The study shows that implementing farmland protection compensation policies can significantly increase the per capita income and per capita expenditure of beneficiary farmers, increase the agricultural labor input of beneficiary households, and reduce the transfer of non-agricultural labor from those households. The purpose of farmland environmental protection is to encourage relevant stakeholders to cherish and rationally use the land, and to ensure the sustainable and reasonable use of the ecological environment.
To establish a seepage analysis model under the control of rainfall, a seepage analysis model is derived from the Boussinesq equation based on the mechanism of seepage flow variation. Compared with a regression model, the seepage model has higher accuracy and fewer parameters and can well simulate the seepage flow process controlled by rainfall. It can be used to analyze dam seepage monitoring data under the control of rainfall.
To improve the management and control of large-span steel structure projects, an evaluation model for construction cost management based on a variable-weight two-dimensional cloud model is proposed. First, the main factors affecting construction cost management were analyzed from the perspectives of physical improvement, practical optimization and human coordination using the WSR methodology, and an evaluation index system for construction cost management in large-span steel structure projects was constructed. Then, the vector included-angle cosine method and variable weight theory were adopted to assign weights to each evaluation index, and the economy and effectiveness of the project were comprehensively evaluated with the two-dimensional cloud model. Finally, taking a theater as the research object, the level of construction cost management was determined to be good by drawing the comprehensive two-dimensional cloud map in MATLAB, and the proximity was calculated to further confirm the evaluation result, which verified the effectiveness and applicability of the model and provides a theoretical basis for subsequent construction cost management in large-span steel structure projects.
To reduce the error of traditional whole-life-cycle building energy consumption detection methods, this paper designs a building life-cycle energy consumption detection method based on digital twin technology. Primary and secondary indicators for whole-life-cycle energy consumption detection are set up to determine the detection index system, and the holographic mirroring capability of digital twin technology is used to fit the energy consumption detection data over the whole life cycle of the building. The detection method is adjusted across four stages to realize energy consumption detection over the entire building life cycle. Experimental results show that the detection error of the designed method does not exceed 0.05 kJ, which is significantly lower than that of the traditional method, indicating that the proposed method can solve the problem of large errors in traditional whole-life-cycle building energy consumption detection.
Scientific estimation of the energy consumption of urban buildings is of great significance for reducing industry energy consumption and building low-carbon cities. This paper establishes a method for measuring the operational energy consumption of urban buildings and uses the STIRPAT model to analyse the factors affecting the operational energy consumption of buildings in Beijing. The results show that the factor with the greatest impact is the output value of the tertiary industry, followed by the urbanization rate and disposable income per capita, while the impact of total population is relatively small. The results also show that sound urban planning and raising residents' awareness of low-carbon environmental protection are of great significance for reducing urban building operational energy consumption.
The calculation of engineering quantities is the premise and foundation of cost control in the design and construction phases of building engineering. The traditional calculation method involves a heavy workload and a complicated, time-consuming process, and its accuracy decreases as project scale increases. The emergence of Building Information Modeling (BIM) technology has brought a new revolution to architectural engineering design and construction. Through the development of three-dimensional quantity calculation for building engineering, this paper analyzes the advantages and disadvantages of current BIM quantity calculation tools and, based on the engineering practice of a complex building, selects BIM calculation software to carry out applied research on three-dimensional quantity calculation. It also provides a reference for promoting the application of BIM technology in the field of building engineering quantity calculation.
With the wide application of new power services and the continuous strengthening of plant-network coordination and interaction, the data network has expanded greatly and network security protection has become more difficult. We propose a continuous trust evaluation model for power system terminals based on fine-grained data flow analysis, which effectively addresses the weak anti-jamming ability and unstable results of traditional trust evaluation through analysis of the access subject's contextual behavior, evidence reasoning, and identification of confidence propagation intent. Natural language processing (NLP) technology is innovatively applied to Web application traffic intrusion detection, with multi-level, multi-granularity deep traffic analysis and dynamic, intelligent correlation and drill-down analysis of network traffic data, reducing the miss rate of traditional feature-based and reputation-based detection and enabling the location, tracking and tracing of abnormal traffic; the accuracy of the detection results reaches 96.59%.
3D modeling is a long-standing task in computer vision. However, it is very difficult to obtain complete 3D data directly, so recovering a 3D model from low-dimensional information is an important problem, and researchers have developed many methods to solve it. This paper presents a comprehensive review of state-of-the-art work on this problem. We first introduce image-driven 3D modeling methods, including single-image-based and multi-image-based approaches. We then summarize completion-based methods, which build a 3D shape from fragmentary observations such as a partial point cloud. Quantitative and qualitative comparisons of the reviewed methods are also presented.
Face attribute recognition plays a vital role in face-related tasks. Common face attributes include age, gender, mask-wearing, glasses-wearing, etc. Using one network to predict all attributes can save considerable computation. However, these attributes can hardly all be labelled on every image of the same dataset, given the labelling cost and the requirement of sample balance; in many cases each dataset is labelled with a single attribute. With several such datasets, how to use one network to generate multi-task predictions for all attributes is a problem. In this paper, we propose a two-level iteration training method for multi-task face attribute learning with task-isolated labels. The method comprises a task-level inner iteration and the regular outer iteration; with this scheme, the network receives gradients from all tasks after each inner iteration, and after training it can predict all attributes. Experiments show the effectiveness of the method and the advantages of multi-task learning over single-task learning in accuracy and efficiency, demonstrating the broad applicability of the proposed approach.
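A hedged PyTorch sketch of the two-level iteration: the outer loop is a normal training iteration, and the inner loop visits every task (each with its own single-attribute dataset), accumulating gradients before a single optimizer step. The tiny backbone, heads and random tensors are placeholders, not the paper's network or data.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
heads = nn.ModuleDict({"age": nn.Linear(64, 1), "gender": nn.Linear(64, 2), "mask": nn.Linear(64, 2)})
losses = {"age": nn.MSELoss(), "gender": nn.CrossEntropyLoss(), "mask": nn.CrossEntropyLoss()}
opt = torch.optim.Adam(list(backbone.parameters()) + list(heads.parameters()), lr=1e-3)

def fake_batch(task):  # stand-in for one task-isolated dataset loader
    x = torch.randn(8, 3, 32, 32)
    y = torch.randn(8, 1) if task == "age" else torch.randint(0, 2, (8,))
    return x, y

for outer in range(5):                       # regular outer iteration
    opt.zero_grad()
    for task in heads:                       # task-level inner iteration
        x, y = fake_batch(task)
        loss = losses[task](heads[task](backbone(x)), y)
        loss.backward()                      # gradients from every task accumulate
    opt.step()                               # one update sees all tasks
```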
CAD-based geometric design has been widely used in various industries. With the continuous increase in design complexity, higher requirements are placed on high-precision, high-confidence model simulation design. Based on research on the Open CASCADE platform, this paper analyzes the simulation process of 3D models and improves the feature extraction method in that process. Firstly, the CAD geometry model is used to develop the model concept map, and the model's three-view projections are used to extract feature information; meanwhile, complex-surface adjustment and description are used to represent the model feature information. Finally, the graph feature vectors obtained from the complex facets and their transformations are compared using maximum and minimum approximations as the distance between feature references, so as to evaluate the similarity between models. The experiments show that the projection mapping of complex-surface adjustment can accurately identify the fluctuations of complex surfaces in the model and facilitates more precise acquisition of feature parameters. The functions studied in this paper will help to further improve the capability of developing 3D models based on Open CASCADE and enable faster realization of high-precision simulation design requirements.
This paper covers the adjustment and improvement of optimized adaptable prediction using non-model-based methods and a neural network. The project combines the advantages of conventional model-based adaptable prediction with an updated adaptable prediction model based on a trained neural network. The updated model improves prediction performance when the data slope is large, while the conventional model corrects the prediction error when the data slope is small. All of the obtained results are analyzed and compared with model-based results, and the limitations of each model are described. Overall, this paper proposes a half-model-based prediction method for vehicle interactions in unstructured environments.
With the wide use of face tasks on mobile terminals, facial landmark detection faces new challenges in real-time performance and occlusion handling. This paper therefore designs a real-time facial landmark detection model that can deal with occlusion. The model consists of a backbone network and an auxiliary network. The backbone network locates the facial landmarks: it uses the lightweight ShuffleNetV2 module to ensure real-time inference, and employs dynamic convolution and the Wing loss function to improve positioning accuracy. The auxiliary network predicts landmark visibility and helps the backbone network focus its attention on visible landmarks, so as to deal with occlusion. Experimental results show that the model is 1.3 MB in size, runs at 64.63 images per second on an Intel i5-8250U CPU, and reaches 85.72% prediction accuracy on the WFLW dataset. The model outperforms PFLD in size, speed, and accuracy.
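For reference, the Wing loss mentioned above has a standard closed form (logarithmic near zero, L1-like for large errors); the sketch below implements it in PyTorch with typical default parameters, which may differ from the values used in the paper.

```python
# Standard Wing loss for landmark regression; w and epsilon are typical
# defaults, not necessarily the paper's settings.
import math
import torch

def wing_loss(pred, target, w=10.0, epsilon=2.0):
    """Logarithmic penalty for errors below w, shifted L1 penalty above it."""
    diff = (pred - target).abs()
    C = w - w * math.log(1.0 + w / epsilon)        # keeps the two branches continuous at |x| = w
    return torch.where(diff < w,
                       w * torch.log(1.0 + diff / epsilon),
                       diff - C).mean()
```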
Point cloud registration is widely used in computer vision, robotics, and other fields. It is the process of finding the spatial transformation (such as scaling, rotation, and translation) that aligns two point clouds. The purposes of finding this transformation include merging multiple data sets into a globally consistent model (or coordinate system) and mapping new measurements onto a known data set to identify features or estimate their pose. A popular solution is the iterative closest point (ICP) algorithm: two identical 3D models can be made to coincide by translating and rotating one of them to match the other. ICP-based registration can be divided into two steps, rough registration and fine registration. Rough registration matches point clouds when the transformation parameters between them are completely unknown; its main purpose is to provide initial transformation parameters for fine registration. Fine registration then refines this initial transformation into a more accurate one. Through experiments, this paper verifies that ICP can achieve desirable point cloud registration results.
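As a concrete illustration of the fine-registration step, the following is a minimal point-to-point ICP sketch in NumPy/SciPy, assuming a rough initial alignment is already available; production systems typically rely on dedicated libraries such as PCL or Open3D rather than this simplified loop.

```python
# Minimal point-to-point ICP sketch (fine registration only).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iters=50, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3); returns R, t and the moved cloud."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(src)                 # closest-point correspondences
        matched = target[idx]
        # Best-fit rigid transform via SVD (Kabsch), no scaling
        src_c, tgt_c = src.mean(0), matched.mean(0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t                          # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:                # converged: error no longer improves
            break
        prev_err = err
    return R_total, t_total, src
```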
The construction of ecological networks is important for improving the urban ecological environment under rapid urbanization. Taking Zhengzhou City as the study area, this paper extracted well-connected core areas as ecological sources using morphological spatial pattern analysis (MSPA) and landscape indices. The ecological network was then constructed with the minimal cumulative resistance (MCR) model and analyzed quantitatively using network analysis, after which an optimized ecological network was constructed.
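As an illustration of the MCR idea, the sketch below accumulates least-cost (minimal cumulative resistance) distance from ecological source cells over a resistance raster using scikit-image; the resistance values and source locations are toy placeholders, not the Zhengzhou data.

```python
# Hedged sketch of an MCR surface: least-cost accumulation from source cells
# over a per-cell resistance raster.
import numpy as np
from skimage.graph import MCP_Geometric

def mcr_surface(resistance, source_cells):
    """resistance: 2-D array of per-cell resistance; source_cells: list of (row, col)."""
    mcp = MCP_Geometric(resistance)                  # diagonal moves weighted by sqrt(2)
    cumulative_cost, _ = mcp.find_costs(source_cells)
    return cumulative_cost                           # minimal cumulative resistance to any source

# Toy example: 5 x 5 raster with a high-resistance barrier row and one source
resistance = np.ones((5, 5))
resistance[2, :] = 5.0
print(mcr_surface(resistance, [(0, 0)]))
```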
Aiming at the problem of optimal cost planning in raw material supply, this paper studies, designs, and develops a supply chain planning system. Based on an analysis of the actual demands of supply chain planning, mathematical models for the optimal ordering scheme and the optimal transportation scheme are constructed. First, indicators such as supply stability rate, supply continuity, and supply quantity are defined, and the analytic hierarchy process is used to determine their weights. An evaluation model of each supplier's importance to guaranteeing enterprise production is then established with the TOPSIS comprehensive evaluation method; suppliers with a higher importance index are preferred for cooperation. Next, taking whether a supplier is selected as the decision variable and minimizing the number of suppliers as the objective, a 0-1 programming model shows that at least 127 suppliers are needed to supply the goods. The weekly supply volumes of these 127 suppliers are then taken as decision variables, and the most economical raw material ordering plan is formulated as a mathematical programming model. Finally, a 0-1 programming model with supplier and transporter selection as decision variables and the lowest loss rate as the objective yields the optimal transportation scheme. In analyzing the ordering and transportation schemes, membership functions are constructed from normal-distribution intervals and assessed by the fuzzy comprehensive evaluation method.
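The first 0-1 programming step can be illustrated with a small sketch: choose the fewest suppliers whose combined weekly capacity still covers demand. The PuLP formulation below uses made-up capacities and demand; it only mirrors the structure of the model, not the paper's data or its result of 127 suppliers.

```python
# Hedged sketch of the minimum-supplier-count 0-1 program; data are illustrative.
import random
import pulp

def min_suppliers(capacities, weekly_demand):
    prob = pulp.LpProblem("min_supplier_count", pulp.LpMinimize)
    x = [pulp.LpVariable(f"select_{i}", cat="Binary") for i in range(len(capacities))]
    prob += pulp.lpSum(x)                                            # objective: fewest suppliers
    prob += pulp.lpSum(c * xi for c, xi in zip(capacities, x)) >= weekly_demand
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i, xi in enumerate(x) if xi.value() == 1]

# Toy example: 10 suppliers with random capacities, weekly demand of 30 units
caps = [random.randint(2, 8) for _ in range(10)]
print(min_suppliers(caps, 30))
```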
In this paper, based on real experiments and experimental data on the ethanol coupling reaction for producing C4 olefins, a model for determining the best combination of catalyst and temperature is constructed using regression, interpolation, a Bayesian neural network, and a genetic algorithm. First, the relationships between temperature and ethanol conversion or C4 olefin selectivity under different catalyst combinations are described and analyzed mathematically; 42 equations with high goodness of fit are constructed for the different temperatures, ethanol conversions, and C4 olefin selectivities, and the analytic hierarchy process is used to evaluate all quantities that satisfy the conditions. Controlled variables are then used to analyze the influence of individual factors, and correlation analysis is used to evaluate the degree of interaction among them. Next, the Bayesian neural network is used to fit the functional relationship between the dependent variables (ethanol conversion and C4 olefin selectivity) and the independent variables (temperature and catalyst combination). Finally, the paper discusses how to find the optimal catalyst combination and temperature that maximize the C4 olefin yield.
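As an illustration of the final optimisation step, the sketch below runs a small genetic algorithm over temperature and catalyst-composition variables to maximise a fitted yield surrogate; `yield_model` stands in for the Bayesian neural network fit described above, and the bounds, population size, and operators are hypothetical choices, not the paper's.

```python
# Hedged sketch of a genetic search over a fitted yield surrogate.
import numpy as np

def genetic_search(yield_model, bounds, pop_size=40, generations=100, mut_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))          # random initial population
    for _ in range(generations):
        fitness = np.array([yield_model(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2 + rng.normal(0, mut_sigma, size=len(bounds)) * (hi - lo)
            children.append(np.clip(child, lo, hi))                  # crossover + mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([yield_model(ind) for ind in pop])]
    return best, yield_model(best)
```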