This PDF file contains the front matter associated with SPIE Proceedings Volume 12511, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Smart city evaluation is an effective way to measure the level of smart city development and to promote the construction process. Cities are complex dissipative systems that spontaneously become more disordered without intervention. Material, energy, and information must therefore be introduced into the urban system to keep cities stable and ordered, which is consistent with the core idea of information theory. However, existing smart city evaluation methods do not consider urban entropy. In this paper, we propose a smart city evaluation method from the perspective of information entropy, defining smart city construction as a kind of "negative entropy" that flows into the urban system and reduces urban disorder. We conduct experiments on a real-world dataset collected from China's practical smart city evaluations in 2016 and 2018. Experimental results show that our method can effectively evaluate the level of smart city development, as smart city entropy correlates strongly with the benchmark results.
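The entropy perspective can be illustrated with the classical entropy-weight scheme: an indicator whose values are spread evenly across cities has high entropy and little discriminating information, so it receives a low weight. A minimal sketch (the indicator matrix and the weighting details are illustrative, not the paper's exact formulation):

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight scores for an (n_cities, m_indicators) matrix.

    Returns (indicator weights, per-city composite score).
    """
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    # Min-max normalise each indicator column to [0, 1].
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    Z = (X - col_min) / (col_max - col_min + 1e-12)
    # Proportion of city i under indicator j.
    P = Z / (Z.sum(axis=0) + 1e-12)
    # Shannon entropy of each indicator, with 0*log(0) taken as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)
    # Weight = normalised information utility (1 - entropy).
    d = 1.0 - e
    w = d / d.sum()
    return w, Z @ w
```

The paper's "negative entropy" framing would then track how these scores change as smart city construction injects information into the system.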
The key to RGB-D salient object detection is the effective fusion of the different modal features of RGB images and depth maps. This study proposes an RGB-D salient object detection method based on multimodal feature information fusion. First, in the encoding stage, essential features are extracted from the depth map using spatial and channel attention modules and then merged with the RGB features to improve the expression of salient objects. Second, in the decoding stage, a multimodal multilevel feature fusion module and a global context-feature guidance module are proposed to reduce missed and false detections, allowing the network to decode the spatial structure of multiple objects and small objects more accurately. Experimental results on four datasets show that our method outperforms 15 other deep learning detection methods on multiple evaluation metrics.
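The encoding-stage idea — reweight depth features with channel and spatial attention before merging them into the RGB stream — can be sketched as follows. This is a NumPy toy with sigmoid gating and element-wise sum fusion; the paper's actual module designs are not specified here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F):
    # F: (C, H, W). Gate each channel by its global average response.
    g = F.mean(axis=(1, 2))                  # (C,) global average pool
    w = sigmoid(g - g.mean())                # squash to (0, 1)
    return F * w[:, None, None]

def spatial_attention(F):
    # Gate each pixel by its mean activation across channels.
    m = F.mean(axis=0)                       # (H, W)
    w = sigmoid(m - m.mean())
    return F * w[None, :, :]

def fuse_rgbd(F_rgb, F_depth):
    """Refine depth features with channel then spatial attention, then
    merge into the RGB stream (element-wise sum, one common fusion choice)."""
    F_d = spatial_attention(channel_attention(F_depth))
    return F_rgb + F_d
```

Because both gates lie in (0, 1), the refined depth contribution is always attenuated relative to the raw depth features, which suppresses noisy depth responses before fusion.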
Facial expression recognition is a technique that recognizes a person's emotions from a static picture or a dynamic video, so it has great potential application value in psychology, intelligent robotics, intelligent monitoring, virtual reality, and synthetic animation. In this paper, we propose a feature fusion method to address the low accuracy of traditional facial expression recognition algorithms in real-world environments. The fused features are classified with a support vector machine, which raises the model's accuracy to 67% on FER2013, surpassing any single-feature method. Furthermore, we use a GAN to augment the data and minimize the impact of irregular data on the results, finally reaching 72% accuracy.
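Before the SVM stage, heterogeneous descriptors have to be combined on a comparable scale, or the largest-magnitude descriptor dominates the fused vector. A minimal fusion helper (z-score each feature set per dimension, then concatenate; the specific descriptors are assumptions, since the abstract does not list them):

```python
import numpy as np

def fuse_features(*feature_sets):
    """Concatenate heterogeneous feature matrices (each n_samples x d_i)
    after per-feature z-scoring, so no descriptor dominates by scale alone."""
    normed = []
    for F in feature_sets:
        F = np.asarray(F, dtype=float)
        mu, sd = F.mean(axis=0), F.std(axis=0) + 1e-12
        normed.append((F - mu) / sd)
    return np.hstack(normed)
```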
During criminal investigations, chat logs on a suspect's mobile phone are an important information source. However, some clues involved in a case are hidden among many irrelevant chat messages. Case investigators can generally locate the specific information they need through keyword search, but at the initial stage they may not know the right keywords, so some messages are missed in the search results. Therefore, a query strategy based on keyword expansion is proposed in this paper. First, relevant documents are retrieved using the initial keywords, and the ratio of document frequency (rdf) is defined from the frequency distribution of words in those documents. The rdf values are then sorted to obtain the expanded keywords, and finally the weighted BM25 algorithm is used for further queries. Experiments show that the proposed method improves the query results.
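A sketch of the pipeline under one plausible reading of rdf (a term's document frequency within the keyword-matched subset divided by its document frequency in the whole collection), paired with a BM25 ranker whose per-term weights let expanded keywords count less than the seed keywords; the paper's exact definitions may differ:

```python
import math
from collections import Counter

def rdf_scores(relevant_docs, all_docs):
    """rdf: df in the keyword-matched subset over df in the collection.
    High-rdf terms concentrate in relevant documents -> expansion candidates."""
    def df(docs):
        c = Counter()
        for d in docs:
            c.update(set(d))
        return c
    df_rel, df_all = df(relevant_docs), df(all_docs)
    return {w: df_rel[w] / df_all[w] for w in df_rel}

def bm25(query, docs, weights=None, k1=1.5, b=0.75):
    """Weighted BM25: each query term's contribution is scaled by its weight."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()
    for d in docs:
        df.update(set(d))
    weights = weights or {}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for q in query:
            if df[q] == 0:
                continue
            idf = math.log(1 + (N - df[q] + 0.5) / (df[q] + 0.5))
            s += weights.get(q, 1.0) * idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

In use, the top-rdf terms from the first retrieval round would be appended to the query with weights below 1.0, then `bm25` reranks the collection.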
Balise uplink signal IQ (I: in-phase; Q: quadrature) acquisition is an important basis for signal demodulation and uplink signal parameter estimation. Traditional acquisition methods struggle to meet the requirements of automation, accuracy, and real-time online analysis. To solve these problems and improve IQ acquisition efficiency, a small balise test bench is built with NI's PXI (PCI eXtensions for Instrumentation) system, using LabVIEW as the development platform. Combining the Hilbert transform principle with the characteristics of the uplink IQ signal, an uplink IQ acquisition system is designed. Debugging the LabVIEW acquisition system on an actual project, combined with online analysis of the collected data, shows that the LabVIEW-based uplink IQ acquisition system effectively solves the problems of traditional acquisition methods, and that the acquisition program can be extended according to actual data-processing requirements.
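The Hilbert-transform step recovers I and Q from a single real-valued channel: the analytic signal's real part is I and its imaginary part is Q. A NumPy sketch on a tone near the nominal 4.234 MHz balise uplink centre frequency (the sample rate and record length are illustrative, not the bench's settings):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT one-sided-spectrum construction; its
    real and imaginary parts are the in-phase (I) and quadrature (Q)
    components of the real input."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Demo on a bin-aligned tone near the 4.234 MHz uplink centre.
fs, N = 20e6, 2048
f0 = 433 / N * fs                      # ~4.229 MHz, exactly on an FFT bin
x = np.cos(2 * np.pi * f0 * np.arange(N) / fs)
z = analytic_signal(x)
i_comp, q_comp = z.real, z.imag        # recovered I and Q
# Instantaneous frequency from the unwrapped phase of the analytic signal.
freq = np.diff(np.unwrap(np.angle(z))).mean() * fs / (2 * np.pi)
```

From `z` one can read off the envelope (`np.abs(z)`) and instantaneous frequency, which is what an FSK uplink demodulator needs.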
The proportional guidance method has been widely used in various missiles. However, with the continuous improvement of target maneuverability and the increasingly complex combat environment, the classical proportional guidance method can no longer meet the requirements on ballistic parameters, so improving and perfecting the proportional guidance law is necessary. This paper studies the construction of the proportional guidance process based on the guidance principle: the trajectory produced by the classical proportional guidance method and the proportional guidance trajectory based on the guidance principle are designed and simulated, and the simulation results are analyzed. A simulation accuracy analysis on an example proves that the proposed method effectively reduces the method error in the calculation of the guidance trajectory.
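For reference, classical planar proportional navigation commands a lateral acceleration proportional to the line-of-sight rate, a_c = N·V·λ̇, i.e. a turn rate of N·λ̇. A minimal Euler-integrated sketch (the geometry, speeds, and navigation gain are illustrative, not the paper's scenario):

```python
import math

def pn_intercept(nav_gain=4.0, dt=0.001):
    """Planar pure proportional navigation against a straight-flying
    target. Returns (hit?, final miss distance in metres)."""
    mx, my, vm, hdg = 0.0, 0.0, 300.0, 0.0              # missile state
    tx, ty, vt, thdg = 3000.0, 2000.0, 100.0, math.pi   # crossing target
    lam = math.atan2(ty - my, tx - mx)                  # line-of-sight angle
    r = math.hypot(tx - mx, ty - my)
    for _ in range(100_000):                            # 100 s time limit
        rx, ry = tx - mx, ty - my
        r = math.hypot(rx, ry)
        if r < 5.0:
            return True, r                              # within lethal radius
        new_lam = math.atan2(ry, rx)
        lam_dot = (new_lam - lam) / dt                  # LOS rate
        lam = new_lam
        hdg += nav_gain * lam_dot * dt                  # turn rate = N * LOS rate
        mx += vm * math.cos(hdg) * dt
        my += vm * math.sin(hdg) * dt
        tx += vt * math.cos(thdg) * dt
        ty += vt * math.sin(thdg) * dt
    return False, r
```

Driving the LOS rate to zero puts the missile on a collision triangle; the "method error" the paper analyzes comes from how this continuous law is discretized and constructed.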
When installing software with yum, a yum source must be specified; the source can come from the network, a local CD, a local removable disk, or a directory in the local system. If a network yum source is used, installation speed varies with network speed, and an installation can take anywhere from a few minutes to many hours. To solve this problem, this paper studies how, when installing software from a network or CD-ROM yum source, the RPM packages accumulated in the cache can be turned into a local yum source, so that the required software can subsequently be installed quickly from the local machine.
To shorten the time people spend completing vehicle registration when buying a new car, a Vehicle Digital Registration System, promoted by the Ministry of Public Security and the Equipment Industry Development Center of the Ministry of Industry and Information Technology, is proposed to provide a more convenient and efficient registration service. The system changes the original mode in which the car had to be driven to the vehicle management station to complete registration. It allows vehicle manufacturers to assist with inspection when vehicles leave the factory, so owners can be exempted from a vehicle check when applying for registration. Owners can apply online at any time and place to complete vehicle registration, and the license plate and other certificates can be delivered by mail. The system proposes an innovative, optimized registration mode and is an important measure to promote the reform of vehicle sales and vehicle registration models, greatly facilitating the purchase and registration of vehicles.
To support comprehensive vulnerability analysis of armored vehicles, we developed armored vehicle target vulnerability database software using the Visual Studio 2019 development tool and the SQLite database. The software consists of modules for target function and structure analysis, target damage level and damage tree, and target equivalence model and damage criterion, which contain the various conditions needed for vulnerability analysis of armored vehicles and lay the foundation for damage assessment of armored vehicle targets.
To explore the efficiency with which cylindrical fragments of different materials penetrate a target in different contact attitudes, Workbench/LS-DYNA is used to simulate cylindrical fragments under different working conditions, assuming a fragment speed of 1800 m/s and a vertically held target struck in different attitudes. Simulation results show that a cylindrical fragment striking side-edge first penetrates most strongly, while one striking on its cylindrical side penetrates most weakly. Among fragments of different materials with the same structural size, the greater the density, the stronger the penetration, with less energy loss and a larger residual velocity after perforating the target; however, the influence of structural size on penetration should also be considered when designing cylindrical fragments.
To study the penetration of a long-rod projectile into non-explosive reactive armor, a numerical model of long-rod penetration at different target angles is established in LS-DYNA, and the influence of target angle on the penetration process is analyzed. The simulation results show that the impact of the target plate on the projectile body is most severe when the target angle reaches 40°, and that for target angles between 30° and 40° the damage to the projectile body is much greater than for angles below 30°.
Carbon price forecasting is used to assist emission-regulated firms in decision-making in trading. Based on the ABM method and trading rules of China’s emission trading markets, in this study, we build a decision-making model for emission trading. The model takes into account factors such as risk preference and emission reduction costs for different types of emission-regulated firms. By setting initial values, adjustable parameters, and input variables, the model simulates emission trading decision-making strategies and carbon price trends in different scenarios. In this article, we selected 2,162 emission-regulated firms in China’s national emission trading market for simulation. The results show that the proportion of sellers participating in the market and the proportion of free allowance allocation have a significant impact on the carbon price fluctuations; the introduction of the auction mechanism will cause short-term fluctuations in carbon prices. Through simulation of emission trading decision-making and carbon price forecasting, the model assists emission-regulated firms to minimize their emission regulation compliance costs, and prevent in advance the price and liquidity risks in emission trading markets.
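The decision-making levers the abstract names (share of long firms willing to sell, free allowance allocation ratio) can be illustrated with a much simpler agent-based toy: each round, short firms generate demand, willing sellers generate supply, and the price moves with the imbalance. This is a hedged sketch of the modelling style only; the numbers and update rule are invented, not the paper's calibrated model of 2,162 firms:

```python
import random

def simulate_price(n_firms=200, rounds=50, free_ratio=0.9,
                   seller_share=0.3, seed=7):
    """Toy emission-trading ABM. Demand comes from the allowance shortfall
    (scaled by 1 - free_ratio); supply comes from the fraction of long
    firms willing to sell. Price moves with the demand/supply imbalance."""
    rng = random.Random(seed)
    price, path = 50.0, []
    for _ in range(rounds):
        demand = sum(rng.uniform(0, 1) * (1 - free_ratio)
                     for _ in range(n_firms))
        supply = sum(rng.uniform(0, 1) * free_ratio * seller_share
                     for _ in range(n_firms))
        # Relative imbalance in [-1, 1] nudges the price up or down.
        price *= 1 + 0.1 * (demand - supply) / max(demand + supply, 1e-9)
        path.append(price)
    return path
```

Even this toy reproduces the qualitative finding that a larger seller share depresses the price path, which is the kind of scenario comparison the paper's model performs.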
Multi-label classification of judicial text data is a hot issue in judicial artificial intelligence. However, judicial data is highly specialized and consists of long texts, so the BERT model alone performs poorly on multi-label classification of judicial text. To address these problems, this paper proposes a BERT-TextCNN model, which uses BERT to extract text vectors and introduces TextCNN to build multi-label classifiers for training, extracting semantic features at different levels of abstraction and improving classification precision. In addition, because BERT's input length limit affects the classification results, this paper rebuilds the dataset so that every sample fits within BERT's maximum input length. The method is tested on the multi-label classification dataset of the 2021 "China Legal Research Cup". Experimental results show that, compared with the BERT model, the proposed method significantly improves performance and can effectively improve multi-label classification of Chinese judicial texts.
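The dataset-rebuilding step amounts to splitting over-long documents into windows that fit BERT's 512-token input (510 content tokens plus [CLS] and [SEP]). A sketch, assuming character-level tokenization as in Chinese BERT and an overlapping stride; the paper's exact splitting rule is not given:

```python
def rebuild_dataset(texts, labels, max_len=510, stride=128):
    """Split each over-long text into overlapping character windows that
    fit BERT's input limit; every window inherits its document's labels."""
    out = []
    for text, labs in zip(texts, labels):
        tokens = list(text)                  # character-level tokenization
        if len(tokens) <= max_len:
            out.append(("".join(tokens), labs))
            continue
        start = 0
        while start < len(tokens):
            out.append(("".join(tokens[start:start + max_len]), labs))
            if start + max_len >= len(tokens):
                break
            start += max_len - stride        # overlap preserves context
    return out
```

At inference time, the window predictions for one document would be merged (e.g. by max-pooling the label scores).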
This paper analyzes the correlation between Bitcoin, oil price fluctuations, and the Dow Jones Industrial Average in a time-frequency framework. The wavelet coherence method is applied to recent daily data for the United States (1,863 observations in total). Our research has several implications for policy makers and asset managers. We find that oil prices lead the U.S. market at both low and high frequencies throughout the observation period. This result suggests that sanctions against Russia by a number of countries, including the U.S., are influencing oil prices; oil remains a major source of systemic risk to the U.S. economy, and international economic uncertainty is exacerbated by tensions between Russia and Ukraine.
This study aims to explore the current status of pharmaceutical industry innovation (PII) research through visual analysis of the journal papers related to PII. We analyzed 405 papers retrieved from the China National Knowledge Infrastructure (CNKI), with the period defined as "all years", using CiteSpace software. By analyzing the basic characteristics of the 405 publications, we found that the number of publications peaked in 2020. The keyword clustering results showed four hotspots in PII research: "innovation ability", "innovation system", "innovation efficiency", and "collaborative innovation". From the keyword timeline, PII research has passed through three stages: germination, outbreak, and stable development. Moreover, "biomedicine" has received greater attention recently, indicating that innovative research on biomedicine is the current research hotspot. The findings may help new researchers grasp the frontier of PII research.
Because many factors affect enterprise financial risk, the risk is difficult to assess, and traditional methods struggle to provide accurate early warning. To solve this problem, this paper proposes an enterprise financial risk early-warning method based on data mining. First, a financial risk evaluation system covering the enterprise's profitability, operation, debt repayment, development, and cash flow is constructed. Then a BP neural network is used to analyze each index in the evaluation system and, combined with the actual market situation, to evaluate the enterprise's financial risk; corresponding early warnings are issued according to the evaluation results. Test results show that the method's accuracy for different degrees of financial crisis reaches more than 80.0%, giving a reliable early-warning effect.
Aiming at the poor dexterity of service robots when grasping objects in arbitrary postures in the home environment, a dexterous grasping method adapted to arbitrarily posed objects is proposed. First, the YOLACT instance segmentation network recognizes and segments the target object, and the segmented object is registered with the depth image to obtain the target point cloud. Then the target point cloud is matched against templates in a template library using the ICP algorithm to estimate the object's accurate pose. Finally, according to the obtained pose, the grasping pose of the robotic arm is standardized to achieve dexterous grasping. Experimental tests show that the proposed method achieves a high overall success rate when grasping objects in different postures; it improves the dexterity of home service robots in grasping objects and is significant for their development.
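At the core of each ICP iteration is the closed-form least-squares rigid transform between matched point sets (the Kabsch/SVD solution). A NumPy sketch of that single step (the correspondence search and the iteration loop of full ICP are omitted):

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    both (n, 3), assuming row i of P corresponds to row i of Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Full ICP alternates this step with nearest-neighbour matching between the scene cloud and the template cloud until the pose converges.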
To address the difficulty of measuring the emergence of weapon equipment capability, an analysis method based on the incidence matrix is proposed. According to the way new capabilities emerge from the coordination of various types of equipment in a weapon equipment system, the relationship between equipment and capabilities is analyzed using the incidence matrix. Based on this correlation analysis, the capabilities of individual equipment and the capabilities emerging from equipment coordination are unified into an integrated equipment-capability matrix. Finally, the effectiveness and rationality of the method are verified by an example. The results show that the method can measure the capability emergence of each piece of equipment in the weapon equipment system.
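The construction can be illustrated with boolean incidence matrices: stack single-equipment capability rows with coordination-pattern rows into the integrated matrix, then read off the capabilities the integrated system has that no single piece of equipment provides on its own. The matrices below are toy data, not the paper's example:

```python
import numpy as np

# Rows: equipment; columns: capabilities. A[i, j] = 1 if equipment i
# alone provides capability j.
A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

# C[k, j] = 1 if coordination pattern k (a set of equipment working
# together) yields capability j. Pattern 0 pairs, say, equipment 0 and 1.
C = np.array([[0, 0, 0, 1]])

# Integrated capability matrix: single-equipment rows plus coordination rows.
M = np.vstack([A, C])

# Emergent capabilities: provided by the integrated system but by no
# single piece of equipment on its own.
emergent = M.any(axis=0) & ~A.any(axis=0)
```

Here capability 3 exists only through coordination, which is exactly the emergence the incidence-matrix analysis is meant to expose.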
To meet the visualization requirements for the motion state of unmanned ground vehicles (UGVs) in a geospatial environment, we designed a Cesium-based geospatial information framework. First, the ROS-based motion data structure and communication method are studied. Then the real-time motion information of the UGV is displayed in a 3D geographic environment with basic information such as maps and terrain loaded. As a result, the fusion and expression of motion data and geospatial data are accomplished, supporting applications and research in internal and external information visualization, path planning, automatic control, and other aspects of UGVs.
Based on the B/S architecture, a design scheme was proposed to develop a data management system of ordnance support work with certification. Containing such hardware components as printing and attendance devices, the system could not only realize the management of the ordnance support work with certification, but also support the immediate production of a work pass in the form of IC card, and realize the daily attendance management of technical support personnel. An information-based solution was therefore given for the problems of work with certification encountered by the ordnance support forces.
Based on the bibliometric method, this paper sorts out the development history of China's medical device field. With "medical device", "medical equipment", and "medical consumables" as subject words, papers published in CSSCI and SCDC journals in China's medical device field from 2010 to 2021 were retrieved from the China National Knowledge Infrastructure (CNKI). Using CiteSpace software, visual analyses of authors, research institutions, and keywords were conducted to explore the research status and to analyze the research hotspots of China's medical device field from 2010 to 2021. The results show that the number of publications in the field has exhibited a fluctuating upward trend, while cooperation between research institutions and between authors was infrequent over 2010-2021. The main research hotspots include medical device supervision, cleaning, and disinfection. It is concluded that research in China's medical device field is in a period of steady development, and that pressure injury, clinical trials, and artificial intelligence in medicine will be future research hotspots of the field.
We present the results of three studies on the visualization of sleep data on two forms of wearable device: smartwatches and fitness bands. We aimed to understand preferences for, and the outcomes of, various movement visualizations according to their shape. An initial questionnaire covered participants' usage of wearable technology and their preferences when choosing a visualization. The findings demonstrated that visual representations were superior to plain text in boosting people's comprehension and perception of their movement data, and that preferences about movement data also affected the choice of visual representation. In the pilot study and subsequent perception studies, we examined the effects of smartwatches and fitness bands using in-person tests and online simulated visualizations. The findings indicate that smartwatches are better at visualizing more complicated information than fitness bands. Given the study's limitations, we cannot conclude that smartwatches are inherently better than fitness bands, but the results pose new questions for further study.
In China, Network Engineering was officially established as an undergraduate major in 2012, and hundreds of thousands of Network Engineering graduates have entered the job market since 2016. What are their salary levels and employment quality? These are major concerns of in-college Network Engineering students. Only with comprehensive, timely, and reliable frontline recruitment data can they know which directions offer higher salaries and better growth space, and thereby allocate their time more reasonably to the most beneficial courses. How can such data be obtained? This research introduces an automatic recruitment data collection, warehousing, and analysis program based on big data. The program takes 51job.com as the target website and scrapes all network-related recruitment data in Guangzhou from June 2021 to June 2022, then stores and visually analyzes the data, finally producing statistics including the total number of postings, the average salary, and the curve of salary over working years for various posts in the network specialty. The statistical results show that Linux maintenance engineers and information security engineers have higher wages and better development prospects, which helps guide Network Engineering students in choosing their own direction. In addition, the collection, storage, and analysis process of this study is widely applicable and can be extended to other situations where data is needed to assist decision-making.
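After collection and warehousing, the per-post statistics reduce to a group-by aggregation over the scraped rows. A sketch (the record layout here is an assumption for illustration, not 51job.com's actual schema):

```python
from collections import defaultdict

def salary_stats(postings):
    """Average advertised salary per post title from (post, salary) rows —
    the aggregation step that follows scraping and warehousing."""
    totals = defaultdict(lambda: [0.0, 0])   # post -> [salary sum, count]
    for post, salary in postings:
        totals[post][0] += salary
        totals[post][1] += 1
    return {post: s / n for post, (s, n) in totals.items()}
```

The same pattern, grouped additionally by years of experience, yields the salary-over-working-years curves the study reports.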
This study uses visualization software to explore the development process and research hotspots of Chinese consumer purchasing behavior. We analyzed 4,751 articles retrieved from CNKI using CiteSpace. Based on the analysis results, we reached the following conclusions. First, the number of publications shows an upward trend, and the development process can be divided into four periods: budding, formation, development, and adjustment. Second, the highly cited literature falls into three categories: research reviews, model construction, and empirical research. Third, the research hotspots focus on four dimensions: external stimuli, consumer satisfaction, consumer psychology, and e-commerce. This study provides a systematic and objective perspective that will help scholars understand the research hotspots.
Metadata management plays an important role in enterprise information management: a complete metadata management system directly affects the flexibility and scalability of a platform. This paper summarizes several key stages of metadata management technology in data warehouses and large-scale distributed file systems. It organizes the metadata management standards, compares various metadata management architectures, and summarizes metadata management characteristics and strategies. It also introduces the applicability of the current mainstream open-source metadata management tools in detail, focuses on research into cognitive metadata catalogs based on machine learning, and outlines future research directions.
Using a VAR-DCC-GARCH model, this paper studies the dynamic correlation between the Ruble's closing price and the WTI oil price, and between the Euro and the WTI oil price, during the Russia-Ukraine war. The results show that before the war there was a strong positive correlation between the Ruble and WTI oil prices, and between the Euro and WTI oil prices; during the conflict, these connections deteriorated sharply and turned negative. We speculate that European stock market investors will flee risky assets for safe-haven assets, and that China may use Euros in oil trade settlement in the future.
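DCC-GARCH estimation requires a dedicated econometrics package, but the qualitative finding, a correlation that is strongly positive before a structural break and negative after it, can be illustrated with a plain rolling Pearson correlation on toy return series (not the paper's data):

```python
def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rolling_corr(x, y, window):
    """Correlation in each trailing window, the discrete analogue of a
    dynamic-correlation track."""
    return [pearson(x[i - window:i], y[i - window:i])
            for i in range(window, len(x) + 1)]

# Toy daily returns: co-moving at first, then inversely related.
oil   = [0.010, 0.020, -0.010, 0.015, 0.020, -0.020, 0.030, -0.010]
ruble = [0.012, 0.018, -0.008, 0.013, -0.021, 0.019, -0.028, 0.012]
corrs = rolling_corr(oil, ruble, window=4)
```

The rolling estimate moves from near +1 in the early windows to near -1 in the late ones, the sign flip the abstract reports around the outbreak of the conflict.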
With the increasing complexity of urban scale and social structure, problems such as "data islands", repeated data collection, and difficulty in sharing occur frequently in government data. This paper studies knowledge extraction technology for knowledge graphs to provide data support for the subsequent construction of a government-affairs knowledge graph. Knowledge extraction comprises two sub-tasks: named entity recognition and relation extraction. Machine learning methods are used to extract entities and relations, and the government-affairs knowledge graph is constructed from the extracted entity and relation data. A well-built government-affairs knowledge graph plays a crucial role in mining the connections between different types of government-affairs entities and is also of great significance for information extraction in downstream application fields.
The construction of urban deep foundation pits inevitably affects nearby buildings. Taking the Guangzhou Dayuan station project as the research object, this paper carries out theoretical analysis based on an extensive literature review and uses numerical simulation for image processing and calculation, analyzing the deformation of surrounding buildings caused by deep foundation pit excavation. The main contents are as follows: (1) with the MIDAS GTS finite element software, the deep foundation pit and building piles are numerically simulated, the pile foundation images are processed, and the influence of excavation depth on pile deformation is studied; (2) the simulated data and the variation law of horizontal settlement and displacement of the pile foundation are analyzed; (3) through the deformation analysis, the maximum settlement of the pile foundation closest to the pit is obtained, providing a theoretical basis and engineering guidance for the design and construction of similar deep foundation pit projects.
All services provided by robots to humans rest on navigation control, which comprises positioning and navigation; path planning is a key part of navigation, and the navigation control algorithm is at the heart of determining the robot's behaviour. The navigation control module includes global path planning and local path planning. Global path planning creates a feasible path from the start point to the target point using an existing electronic map as the reference. Local path planning, also known as local obstacle avoidance, is the process by which sensors scan for unknown obstacles during the robot's operation and re-plan a local path around the obstacles towards the target point. This paper describes some of the main algorithms widely used in motion planning, including sampling-based search methods such as RRT and its family of optimisation variants, each with its own search process and results. We also consider the Markov decision process model, which we attempted to combine with RRT; the attempt failed because the two approaches target different application areas.
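The RRT procedure described above can be sketched in a few lines. This version plans in an empty 2-D workspace, so it omits the obstacle check a real planner would add after each extension:

```python
import math, random

def rrt(start, goal, step=1.0, goal_bias=0.1, bounds=(0.0, 10.0),
        max_iters=5000, seed=0):
    """Minimal RRT: grow a tree from `start` by repeatedly steering the
    nearest node one `step` toward a random sample (or, with probability
    `goal_bias`, toward the goal) until the goal is within one step."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        sample = goal if rng.random() < goal_bias else (
            rng.uniform(*bounds), rng.uniform(*bounds))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = sample if d <= step else (
            near[0] + step * (sample[0] - near[0]) / d,
            near[1] + step * (sample[1] - near[1]) / d)
        parent[new] = near
        nodes.append(new)
        if math.dist(new, goal) <= step:
            path, n = [], new          # walk back to the root
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

RRT* and the other optimisation variants mentioned above add a rewiring step here, reconnecting nearby nodes whenever the new node offers a cheaper path.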
With the rapid development of digital media technology in recent years, the fast-food culture represented by fragmented reading and short videos has become increasingly popular. Meanwhile, as media change and develop, contemporary fast-food culture takes on different forms and characteristics and reshapes human thinking ever more profoundly. From the perspective of speculative design, this article explores the influence of fast-food culture on public life through the design of interactive installations, and provides a reference for future research on speculative design in interactive contexts.
Highly accurate day-ahead forecasts of PV output are an effective way to deal with the uncertainty that a high proportion of renewable generation introduces into a growing power system. At the same time, PV output is highly intermittent and stochastic, making accurate forecasting difficult. In this paper, a hybrid neural network model consisting of a convolutional neural network with an attention mechanism and a bidirectional long short-term memory (BiLSTM) network is used to forecast the PV system data of the town of Yulara for the first hour of the day. The correlation between the historical data and the data to be predicted was measured with the MIC metric; the correlation with same-time data from one week to one quarter earlier was found to be very high, so these lagged series were added to the dataset to improve prediction accuracy. Root mean square error and mean absolute error were used as evaluation metrics. Compared with multivariate LSTM, support vector regression, and decision tree regression, the proposed method is better on all evaluation metrics.
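The paper scores candidate lags with the MIC metric; the underlying idea, that history aligned with the daily cycle correlates far more strongly with the target than out-of-phase history, can be illustrated with plain Pearson correlation on a toy series with a clean 24-hour cycle:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lag_correlation(series, lag):
    """Correlation between the series and its copy shifted back `lag` steps."""
    return pearson(series[lag:], series[:-lag])

# Toy PV-like series: clipped sinusoid with a 24-"hour" period, 14 "days".
series = [max(0.0, math.sin(2 * math.pi * t / 24)) for t in range(24 * 14)]
same_hour_last_week = lag_correlation(series, 24 * 7)  # aligned with the cycle
half_day_off = lag_correlation(series, 12)             # out of phase
```

A lag that lands on the same hour a week earlier scores near 1, while a half-day lag is anti-correlated, which is why same-time lagged features are worth adding to the input window.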
This paper analyzes the interaction mechanism of the drug-risk knowledge, attitude, and behavior model among residents of Jilin Province to provide a scientific basis for further medication-safety intervention strategies. A total of 1176 residents of Jilin Province were surveyed, and structural equation models were constructed for analysis. The results showed that, in general, residents' medication cognition had a strong positive effect on their attitudes (r=0.732, p<0.01); both cognition and attitude had a strong direct effect on behavior, but the effect of attitude on behavior (r=0.345, p<0.01) was stronger than that of cognition on behavior (r=0.171, p<0.01). Residents of Jilin Province face some medication-use risk; targeted popularization of drug-use knowledge and cultivation of a positive, correct drug-use mindset are effective ways to reduce it.
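In a path model of this kind, the indirect effect along a chain is the product of its coefficients, and the total effect adds the direct path. Applying that rule to the standardized coefficients reported above:

```python
# Standardized path coefficients from the abstract.
cog_to_att = 0.732        # cognition -> attitude
att_to_beh = 0.345        # attitude  -> behavior
cog_to_beh_direct = 0.171 # cognition -> behavior (direct path)

# Indirect effect of cognition on behavior runs through attitude;
# the total effect is the direct path plus the indirect one.
indirect = cog_to_att * att_to_beh
total = cog_to_beh_direct + indirect
print(round(indirect, 3), round(total, 3))  # 0.253 0.424
```

So cognition influences behavior more through attitude (0.253) than directly (0.171), which is consistent with the paper's emphasis on cultivating a correct drug-use mindset rather than knowledge alone.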
Automated Systems and Visual Information Monitoring
To solve the problem that a single camera cannot capture both a wide imaging field and high-resolution detail, we propose a novel cross-resolution image mosaic framework that embeds local images into a global image to synthesize gigapixel images with a wide field of view and high-definition detail. First, the framework adopts a color transfer algorithm to maintain color consistency between images; second, it applies multi-level feature point matching and homography transformation for image registration; third, the registered local images are segmented into small patches and fused into the global image to provide detailed information. The experimental results show that the proposed framework alleviates color and content inconsistency between images and generates large-field images with excellent visual quality even when the resolution gap is huge.
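The first step, color transfer, is commonly done by matching per-channel statistics; below is a Reinhard-style mean/standard-deviation match on a single channel. The paper does not specify its exact algorithm, so this is a generic sketch:

```python
from statistics import mean, pstdev

def transfer_channel(src, ref):
    """Shift and scale one channel of the local image so its mean and
    standard deviation match the reference (global) image."""
    ms, ss = mean(src), pstdev(src)
    mr, sr = mean(ref), pstdev(ref)
    scale = sr / ss if ss else 1.0
    return [(v - ms) * scale + mr for v in src]

# Toy single-channel data: the local image is darker than the global one.
local_red = [90, 110, 100, 120]
global_red = [140, 160, 150, 170]
out = transfer_channel(local_red, global_red)
```

After the transfer the local channel has the global channel's mean and spread, so patches fused into the global image no longer stand out as color seams.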
The wavefront of a vortex electromagnetic wave (VEMW) is helical in its spatial distribution, and different orbital angular momentum (OAM) modes l are orthogonal to each other, which effectively improves the imaging and detection capability of radar. For high-speed targets, the traditional "stop-and-go" echo-signal hypothesis produces filter mismatch and a Doppler-coupled time shift caused by the intrapulse Doppler term. In addition, the azimuth of the target couples with the OAM mode and becomes blurred, degrading the imaging quality. To address this problem, this paper analyzes high-speed target imaging based on vortex electromagnetic waves to explore the factors that affect imaging quality. First, the imaging model for a high-speed target is derived and the echo-signal expression is analyzed; on this basis, the phase term introduced into the echo by the high-speed motion is examined. Simulation results show that imaging quality is effectively improved after the error phase term obtained from the theoretical analysis is compensated. This work can inform subsequent work on vortex imaging of high-speed targets.
Event extraction is a key research direction in the field of information extraction. To improve extraction performance and address the inability of generic event extraction methods to make full use of textual feature information, an event extraction method integrating trigger-word features is proposed. Building a remote trigger thesaurus provides additional feature information for the event-type classification model and enhances its ability to discover event trigger words. The event argument extraction model then integrates event-type and trigger-distance features to improve its representation learning. Finally, the event-type classification model and the event argument extraction model are connected in series to complete event extraction. Experiments on the DuEE dataset show that our model outperforms the other models compared.
Biometric methods are widely used for privacy protection: biometric traits are unique and random, so they can better protect people's privacy and security. In this paper, a biometric system based on photoplethysmography (PPG) signals and a fuzzy min-max neural network is developed. Unlike fingerprint recognition and face recognition, PPG signals inherently support liveness detection. The system consists of three stages: signal preprocessing, feature extraction, and model classification. Experiments on the CapnoBase database achieved an accuracy of 97.62%.
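The classifier's core is a hyperbox membership function: a feature vector inside a class's hyperbox has full membership, and membership decays with distance outside it. A Simpson-style sketch (the sensitivity parameter `gamma` and the box bounds are illustrative, not the paper's values):

```python
def membership(x, vmin, wmax, gamma=4.0):
    """Fuzzy min-max membership of point x in the hyperbox [vmin, wmax]:
    1 inside the box, decaying with distance outside along each dimension."""
    score = 0.0
    for xi, v, w in zip(x, vmin, wmax):
        below = max(0.0, v - xi)   # how far below the box this dimension falls
        above = max(0.0, xi - w)   # how far above the box it falls
        score += max(0.0, 1.0 - gamma * (below + above))
    return score / len(x)

box_v, box_w = [0.2, 0.3], [0.5, 0.6]     # min/max corners of one class box
inside = membership([0.30, 0.40], box_v, box_w)
nearby = membership([0.55, 0.40], box_v, box_w)
```

Classification assigns a PPG feature vector to the class whose hyperbox gives the highest membership; training grows and, when boxes of different classes overlap, contracts the hyperboxes.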
In recent years, with social and economic development and unprecedented growth in computer hardware performance, artificial intelligence has advanced rapidly, and breakthroughs in deep learning algorithms have brought new opportunities for human-computer interaction and autonomous vehicles. Autonomous driving is divided into three parts: perception, planning, and control. Perception is where computer vision is most widely applied, and the camera, one of the indispensable sensors in an intelligent vehicle, provides important image information to the self-driving system. From it, the system can obtain the location of, and distance to, the vehicles and pedestrians appearing ahead of the car, providing accurate road information, helping avoid traffic accidents, and reducing casualties and property losses.
This paper studies monocular-vision object detection and depth estimation in the visual distance-perception system of autonomous driving. The emergence of deep learning has driven rapid progress in computer vision, and deep convolutional neural networks have greatly improved the accuracy of vision-based object detection and depth estimation. However, these networks are computationally heavy, while an autonomous driving system must control the car in real time: the visual perception algorithm has to run in real time on the vehicle's on-board computing platform, a mobile platform with very limited computing power. This paper therefore designs a multi-task deep convolutional neural network that detects, in real time and from monocular images, the targets appearing in the autonomous vehicle's field of view and estimates the distance between each target vehicle or pedestrian ahead and the vehicle's camera.
To solve the problem of missed detections caused by small targets and to improve small-target detection accuracy, this paper proposes a small-target detection algorithm based on Faster R-CNN. To overcome the vanishing and exploding gradients caused by an overly deep network, the ResNet50 residual network replaces the VGG16 backbone feature-extraction network, and a soft non-maximum suppression method is additionally used to improve the recognition rate of overlapping objects. The algorithm was trained and tested on the PASCAL VOC dataset. Comparative experiments across several networks showed good detection performance and high accuracy under local occlusion and for very small targets, with a detection accuracy of 83.26% on the test set, on average 8.45% higher than the traditional Faster R-CNN.
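Soft non-maximum suppression keeps overlapping detections but decays their scores instead of discarding them outright, which is what improves recall on occluded objects. A Gaussian-penalty sketch (the `sigma` and threshold are typical defaults, not necessarily the paper's settings):

```python
import math

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay each remaining score by exp(-iou^2 / sigma)
    relative to the currently selected box, rather than deleting overlaps."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    keep = []
    while dets:
        box, score = dets.pop(0)
        if score < score_thresh:
            continue
        keep.append((box, score))
        dets = [(b, s * math.exp(-iou(box, b) ** 2 / sigma)) for b, s in dets]
        dets.sort(key=lambda d: -d[1])
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = soft_nms(boxes, scores)
```

Classic NMS would delete the second box (IoU ≈ 0.68 with the first); soft-NMS only down-weights it, so a genuinely overlapping object can still survive thresholding.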
Skin cancer is one of the most commonly diagnosed types of cancer and poses a great threat to people's health. Computer-aided diagnosis can classify and segment the corresponding skin cancer lesions. With the development of deep learning in recent years, convolutional neural networks provide a better diagnostic aid than traditional manual feature selection and heuristic, experience-based judgment. This paper proposes an improved m-VGG16 network based on the VGG16 architecture: it adopts the U-shaped structure of the U-net model and initializes the m-VGG16 network with the weights of a VGG16 model pretrained on the ImageNet dataset. Experiments were performed on the dermoscopy image dataset from Part 1 of the ISIC 2017 challenge, evaluating the model with the corresponding metrics. The model achieves a certain improvement in Jaccard index over the comparison model, providing a worthwhile reference for research on dermoscopic image segmentation.
In target detection, small-target detection has always been a thorny problem, because a small target occupies few pixels in an image. In the feature-extraction phase of most detection architectures, the deep feature maps lack information on the edges and geometric details of small targets, while the shallow feature maps, though rich in geometric detail, lack semantic features. To solve these problems, this paper proposes F-SRAF, a small-target detection model that combines feature enhancement with an attention mechanism. Specifically, a feature enhancement module first extracts credible geometric details by combining the shallow high-resolution features with the deep super-resolution semantic features. A channel attention mechanism is then applied to the deep super-resolution semantic features to enhance features useful to the target and suppress background features. Finally, the detail feature map learned by the network is fused with the attention-weighted semantic feature map to output a high-resolution feature map carrying semantic information. In our experiments, the method achieves good results on the MS COCO and Tsinghua-Tencent 100K datasets, demonstrating that the algorithm enhances the semantic information of shallow feature maps and detects small targets well.
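The channel attention step can be sketched in squeeze-and-excitation style: pool each channel to a scalar, pass it through a gate, and rescale the whole channel by the gate's output. The single gate weight per channel below stands in for the learned fully connected layers of a real module:

```python
import math

def channel_attention(feature_maps, weights):
    """Illustrative channel attention: global-average-pool each channel,
    gate the pooled value through a sigmoid, and rescale the channel."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    out = []
    for fmap, w in zip(feature_maps, weights):
        pooled = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        gate = sigmoid(w * pooled)   # w plays the role of the learned layers
        out.append([[v * gate for v in row] for row in fmap])
    return out

# Two 2x2 channels; the second is suppressed by a negative gate weight,
# mimicking attention down-weighting a background channel.
fmaps = [[[1.0, 2.0], [3.0, 4.0]],
         [[1.0, 1.0], [1.0, 1.0]]]
scaled = channel_attention(fmaps, weights=[2.0, -5.0])
```

In F-SRAF this reweighted semantic map is then fused with the detail feature map, so useful channels dominate the fused high-resolution output.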
Null broadening is an effective way to suppress moving interference, but most existing methods apply only to uniform linear arrays. To address this, an adaptive null-broadening method based on a uniform rectangular array (URA) is proposed in this paper. The method first estimates the interference power and adds virtual interferers of the same power near the real interference. It then estimates the noise power and reconstructs an interference-plus-noise covariance matrix (INCM) that encodes the null-broadening information. Finally, the desired-signal steering vector is corrected to obtain the optimal weighting vector. Simulation results show that the method forms wide nulls in the interference directions, suppressing the moving interference, and is robust to desired-signal steering-vector mismatch.
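The virtual-interference construction can be sketched on a one-dimensional array slice. The paper uses a URA; a half-wavelength ULA keeps the example small, and the angles, powers, and MVDR-style weighting below are illustrative rather than the paper's exact corrected-steering-vector formulation:

```python
import cmath, math

def steer(n, theta_deg):
    """Steering vector of an n-element half-wavelength ULA."""
    k = math.pi * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * k * m) for m in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for complex systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0j] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

n = 8
look = steer(n, 0.0)
# Reconstructed INCM: unit-power noise plus a cluster of virtual interferers
# (power 100) around the real one at 30 deg, which broadens the null.
R = [[(1.0 if i == j else 0.0) + 0j for j in range(n)] for i in range(n)]
for theta in (26, 28, 30, 32, 34):
    a = steer(n, theta)
    for i in range(n):
        for j in range(n):
            R[i][j] += 100.0 * a[i] * a[j].conjugate()

w = solve(R, look)                       # w proportional to R^-1 a0
norm = sum(l.conjugate() * wi for l, wi in zip(look, w))
w = [wi / norm for wi in w]              # distortionless at the look direction

def response_db(theta_deg):
    g = sum(wi.conjugate() * ai for wi, ai in zip(w, steer(n, theta_deg)))
    return 20 * math.log10(abs(g))
```

Because the INCM contains the whole cluster, the adapted pattern keeps unit gain at 0 degrees while a null spans roughly 26-34 degrees, so a moving interferer drifting within that sector stays suppressed.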
With the development of deep neural networks, the ability to generate fake faces has improved significantly. Although many generation algorithms appear to produce realistic faces, they leave artefacts in some regions that are invisible to the naked eye. In this paper, we use a feature extraction algorithm from image steganalysis to extract these features and a simple classifier to perform the classification. Compared with previous systems that require a large amount of labeled input data, our method achieves good results with only a small number of annotated training samples.
In unsupervised cross-domain named entity recognition, texts from different domains have different features and contain many domain-specific vocabularies, so some target-domain-specific words are rarely seen in the source domain or carry different meanings there. To solve these problems, we propose, to our knowledge for the first time, embedding a hierarchical vector representation into the multi-cell compositional LSTM-CRF model: sentence vectors are added on top of the character-word vectors to form a character-word-sentence hierarchical representation. Based on the different contributions of words to a sentence, the model constructs sentence vectors with a label attention mechanism, so that they draw on more comprehensive information to infer the features of domain-specific vocabularies and reduce their interference with the model's understanding. The multi-cell compositional LSTM encodes the various entities and uses the relationship between words and entities to transfer cross-domain knowledge from the word-sequence level to the entity-sequence level. Finally, a CRF layer refines the boundaries of the label sequence to produce the final result. In addition to the main named entity recognition task, the model uses a language modeling (LM) task to assist in learning the domain features of the target domain. Experimental results show that the F1 score of the proposed model improves substantially across different cross-domain datasets.
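The label attention construction of a sentence vector reduces to a softmax-weighted sum of word vectors. A sketch with a query vector standing in for a label embedding (the vectors are toy values, not learned embeddings):

```python
import math

def attention_sentence_vector(word_vecs, query):
    """Score each word vector against the query, softmax the scores, and
    return the weighted sum as the sentence vector (plus the weights)."""
    scores = [sum(q * w for q, w in zip(query, vec)) for vec in word_vecs]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(word_vecs[0])
    sent = [sum(weights[i] * word_vecs[i][d] for i in range(len(word_vecs)))
            for d in range(dim)]
    return sent, weights

# Three 2-d "word vectors"; the query attends most to words aligned with it.
words = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
sent, weights = attention_sentence_vector(words, query=[1.0, 0.0])
```

Words that agree with the label query dominate the sentence vector, which is how the mechanism lets sentence-level context compensate for rarely seen domain-specific words.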
We propose a new task, fashion image aesthetic captioning: describing apparel in an aesthetic way. It can benefit e-commerce, where vast numbers of garments need captions that catch customers' eyes, and it can also help people understand fashion better. We adopt an encoder-decoder architecture as our baseline and introduce two classifiers, a color harmony classifier pretrained on the AVA dataset and a clothes-type classifier, to help the encoder extract more accurate features from clothing images. For the decoder we use an LSTM with an attention mechanism to generate sentences. Additionally, we build a new dataset containing 79,105 fashion images with aesthetic descriptions and attributes. Experiments on this dataset show strong results for our model.
To address the low accuracy of chip detection and location during chip testing, this paper proposes an optimized YoloV2 model. First, we built a chip image acquisition platform to collect an image dataset; because the amount of collected data was small, five different image transformations were used to expand the dataset, mitigating the overfitting, poor generality, and weak robustness caused by a small dataset. Second, an optimized MobileNetV2 model replaces the original Darknet19 backbone of YoloV2, reducing the number of parameters and increasing the network's recognition speed. Finally, the bounding-box regression of the original YoloV2 model is modified to accelerate regression and improve its accuracy. Experimental results show that the optimized YoloV2 model is only 82.3 MB in parameters, and its recognition accuracy exceeds both the original YoloV2 model and a template matching algorithm. It can effectively resolve the missed inspections, false inspections, and poor positioning accuracy encountered in chip testing.
Combining traditional culture with modern technology is of great significance for the inheritance and development of humanity's precious cultural heritage. Taking puppet performance as an example, we applied modern robot control technology to innovate the way puppets are controlled and designed a programmable puppet performance robot. The programmable module is implemented on the Android platform: we built an application with a concise interface and convenient operation in which users can freely combine puppet actions. To reduce cost, the electromechanical control system uses an STM32-series single-chip microcomputer as the core control chip, and to ensure reliable communication between the software and the control chip, we designed a serial-port communication protocol.
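A serial protocol of this kind typically frames each command with a header, a payload, and a checksum so the microcontroller can reject corrupted bytes. A hypothetical frame layout (the abstract does not give the actual fields, so every name and width below is an assumption):

```python
import struct

# Hypothetical frame: header byte, action id, 16-bit duration in ms
# (little-endian), then a one-byte additive checksum.
HEADER = 0xA5

def pack_frame(action_id, duration_ms):
    """Serialize one puppet command into a 5-byte frame."""
    body = struct.pack("<BBH", HEADER, action_id, duration_ms)
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def unpack_frame(frame):
    """Validate header and checksum, then recover (action_id, duration_ms)."""
    if len(frame) != 5 or frame[0] != HEADER:
        raise ValueError("bad frame")
    if sum(frame[:-1]) & 0xFF != frame[-1]:
        raise ValueError("checksum mismatch")
    _, action_id, duration_ms = struct.unpack("<BBH", frame[:-1])
    return action_id, duration_ms

frame = pack_frame(action_id=3, duration_ms=500)
```

On the robot side, the STM32 firmware would implement the mirror of `unpack_frame`, resynchronizing on the header byte when noise corrupts the serial stream.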
Detection and morphological classification of galaxies are important steps in studying their formation, structure, and evolution. However, with large-scale sky surveys such as the Dark Energy Spectroscopic Instrument (DESI), galaxy image data are growing rapidly, and the low classification efficiency caused by this volume of data is difficult for traditional methods to overcome. Machine learning algorithms can help astronomers efficiently and automatically detect and classify galaxies in large astronomical datasets. Therefore, building on the popular YOLO family of detectors, this paper proposes a galaxy shape detection model based on YOLOv5. The images are first augmented with affine transformations, then enhanced with bilateral filtering and sharpening; for small-target detection, the convolution stride is reduced. Finally, the YOLOv5 model detects and classifies spiral galaxies, elliptical galaxies, lensed galaxies (barred galaxies), and stars in the galaxy images. The model reaches an mAP@0.5 of 87.63% and can accurately locate and classify the different galaxies.
In recent years, machine learning algorithms have performed well in many fields. On one hand, their predictive ability has greatly improved; on the other hand, as model complexity increases, the interpretability of the algorithms worsens. In this paper, we propose a novel method for improving tree ensemble models by balancing predictive performance and interpretability. Rule extraction turns tree models into "if-then" rules, rule pruning removes redundant constraints, and rule selection chooses the optimal rule subset with a genetic algorithm. The proposed method is evaluated on a regression problem; experiments on acute toxicity datasets demonstrate its effectiveness.
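The rule-extraction and rule-pruning steps can be sketched on a toy tree; the nested-dict tree representation and function names below are illustrative assumptions, not the paper's implementation.

```python
# A split node is {"feat", "thr", "left", "right"}; a leaf is {"value"}.

def extract_rules(node, path=()):
    """Walk the tree and emit one (conditions, prediction) rule per leaf."""
    if "value" in node:
        return [(list(path), node["value"])]
    rules = []
    f, t = node["feat"], node["thr"]
    rules += extract_rules(node["left"], path + ((f, "<=", t),))
    rules += extract_rules(node["right"], path + ((f, ">", t),))
    return rules

def prune_rule(conditions):
    """Drop redundant constraints: keep the tightest bound per (feature, op)."""
    best = {}
    for feat, op, thr in conditions:
        key = (feat, op)
        if key not in best:
            best[key] = thr
        elif op == "<=":
            best[key] = min(best[key], thr)   # tightest upper bound wins
        else:
            best[key] = max(best[key], thr)   # tightest lower bound wins
    return [(f, op, t) for (f, op), t in best.items()]
```

On a path like `x0 <= 5 and x0 <= 3`, pruning keeps only `x0 <= 3`, which is the redundancy-removal effect described above.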
To address the false and missed detections caused by the varied shapes and scales of commodities in commodity image detection, a commodity image detection algorithm based on an improved YOLOv5 is proposed. First, the algorithm strengthens feature extraction by adding a Transformer structure to the YOLOv5 backbone network. Then, the original PANet structure of the neck network is replaced with a BiFPN structure, and a coordinate attention mechanism is introduced so that the new network model (CA-BiFPN) can detect and locate commodities in an image more accurately. To verify the effectiveness of these improvements, a comparative experiment is conducted against YOLOv5. The experimental data show that the improved algorithm achieves multi-scale target detection and reaches an mAP of 99.5% on a self-made dataset, 0.2% higher than the original YOLOv5 algorithm.
Fire detection models based on computer vision suffer from long inference and training times, excessive model parameters, and low detection accuracy. We propose ES-YOLO, which can quickly and accurately detect flames and smoke. First, the original YOLOv5s backbone network is replaced with EfficientNetV2, which reduces the computational complexity of the network and improves detection accuracy. Second, the CIoU loss function is replaced with SIoU, which speeds up model convergence. Finally, a 9-Mosaic data augmentation is designed to enrich the dataset. Experimental results on the PASCAL VOC2007 dataset demonstrate that the mAP@0.5 and recall of ES-YOLO are 20% and 15% higher than those of YOLOv5s, while the model size is compressed to half that of YOLOv5s. ES-YOLO meets the requirements of lightweight, real-time detection.
Combining digital animation technology with virtual reality, this study extends the application of digital animation in medical treatment and rehabilitation training by designing and implementing a virtual training and rehabilitation system for the upper limbs. The system uses Maya and Unity3D to build an upper-limb virtual training and rehabilitation project in the form of digital animation, and relies on VR technology to turn it into a virtual training scene with high immersion, high accuracy, and high engagement, relieving the tedium of the traditional training process. With the visual motion recognition of a Kinect somatosensory device, the accuracy of patients' training movements is measured through a joint-coordinate-sequence recognition algorithm and a DTW movement similarity algorithm, providing the data support needed for rehabilitating upper-limb motor dysfunction. The whole system is written in C# in the ASP.NET environment, which encapsulates the system's API interfaces and functional modules; its flexible application forms and convenient operation help doctors complete their daily work, improving the effect of rehabilitation training while promoting the construction and development of digital medicine.
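The DTW movement-similarity step can be sketched with the classic dynamic-programming recurrence; per-frame joint-coordinate features from the Kinect sequence are abstracted to scalars here for brevity.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a)*len(b)) DTW: D[i][j] is the cheapest alignment cost
    of the first i frames of a with the first j frames of b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the best of: skip a frame of a, skip a frame of b, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW warps time, a patient performing the reference motion more slowly still scores a low distance, which is exactly why it suits movement-accuracy scoring.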
The Progressive Mean (PM) control chart is a widely recognized tool for noticing small and standard variations in the process location parameter. The PM chart has one deficiency: it generates out-of-control signals, and when the process standard deviation is unstable this deficiency distorts the results. To overcome this problem, we propose a chart for the case of an unstable process standard deviation, enabling more robust monitoring of the process. The suggested chart is a competitor to existing location charts. The numerical results show that the proposed chart is superior at detecting small and standard shifts in the process parameter. An illustrative application is also provided to support the study.
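Assuming the standard PM formulation, where at stage i the chart plots the running mean of the first i subgroup means, the plotted statistic is simply:

```python
def pm_statistics(subgroup_means):
    """PM_i = (1/i) * sum_{j<=i} xbar_j for each stage i, the progressive
    (running) mean of the subgroup means plotted by a PM chart."""
    pm, total = [], 0.0
    for i, xbar in enumerate(subgroup_means, start=1):
        total += xbar
        pm.append(total / i)
    return pm
```

Because PM_i averages over more and more subgroups, its variance shrinks like 1/i, which is why the chart's control limits tighten over time and why it is sensitive to small, persistent shifts.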
3D object detection from lidar point clouds is a key technology in autonomous driving. Lidar point cloud data are sparse, irregular, and voluminous, which slows neural network operation and lowers detection accuracy. To address this problem, we study vehicle object detection from lidar point clouds. In the data preprocessing stage, we use a point cloud reduction method based on density clustering (DBSCAN) to remove sparse outlier points and noise from the point cloud while better retaining target features. The simplified point cloud makes the network converge faster during training, effectively reduces computing overhead, and cuts training time by roughly 40%. To give the network better detection ability, we also add an attention network (Point Attention) that learns key features from the target point cloud. Experimental results show that the proposed method improves network efficiency and raises vehicle detection accuracy to 89.5%.
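The density-based thinning idea can be sketched with a simplified radius-count filter; the paper uses full DBSCAN, and the `eps` and `min_pts` values below are illustrative assumptions.

```python
import numpy as np

def density_filter(points, eps=1.0, min_pts=3):
    """points: (N, 3) array. A point is kept only if its eps-neighbourhood
    (including itself) contains at least min_pts points, i.e. it is not a
    DBSCAN-style sparse outlier/noise point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbour_counts = (d <= eps).sum(axis=1) - 1   # exclude the point itself
    return points[neighbour_counts >= min_pts - 1]
```

Isolated returns far from any cluster fail the density test and are dropped, which is the mechanism that shrinks the cloud while keeping the dense vehicle surfaces intact.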
Drones are used in many practical applications, such as agriculture, aerial photography, and surveillance. Automatically understanding the visual data drones collect is difficult for machines, which makes the connection between computer vision and drones inseparable. Identifying targets in aerial images is hampered by targets that are usually too small and too dense, whose size relative to the image is not fixed, and which are blurred by the relative motion of the drone and traffic. To address these challenges, this paper proposes an improved YOLOv5 network model. Building on YOLOv5, we integrate the Convolutional Block Attention Module (CBAM) and the attention mechanism of the Swin Transformer so the model can focus on the relevant regions in dense small-object scenes while reducing computation. We also adopt a BiFPN structure for the neck network, which captures more effective feature information and removes some unnecessary connections, and we add an extra detection head for small objects. To further improve the model, we adopt a data augmentation strategy. Extensive experiments on the VisDrone 2019 dataset show that the improved YOLOv5 reaches an AP of 32.83%, about 6.5% higher than the baseline YOLOv5, indicating that the improved model is effective.
Synthetic Aperture Radar (SAR) has become one of the primary means of earth observation thanks to unique technical advantages such as all-weather, all-time operation and extended operating distance. However, most deep-learning SAR detectors use outdated ResNet backbone networks, and their detection accuracy is low. This paper proposes a new network called Dynamic IoU R-CNN (DIoU R-CNN), which transfers the MoBY self-supervised learning method, based on the Swin Transformer, to the complex downstream task of SAR ship detection. DIoU R-CNN adds a dynamic IoU module and the advanced Balanced L1 loss function to Faster R-CNN, achieving relatively high-accuracy SAR ship detection on the SSDD dataset without much increase in parameter count or training time. In comparison experiments, the Swin Transformer trained with self-supervised learning even outperforms its supervised counterpart.
Knowledge Graphs (KGs) are composed of structured information in the form of entities and relations, and the process of extracting entities and relations from data is called knowledge extraction. Knowledge extraction is a fundamental task in Natural Language Processing (NLP) and a key part of knowledge graph construction. In this paper, we provide a comprehensive survey of knowledge extraction for knowledge graph construction. We first introduce the technical architecture of KGs and the classification of knowledge extraction. Then, we systematically categorize existing works by how knowledge extraction has developed. Finally, we review current open-source tools for knowledge extraction and summarize their advantages and disadvantages.
Because current channel scene reconstruction methods cannot meet the requirements of depth recovery, this paper studies a 3D modeling method for transmission channel scenes based on multi-view vision. According to each image's intersection ratio and the positions of its intersection points, the orientation of the collected image in the transmission channel scene is judged and the marker points are identified, and a YOLOv3 model is used to match the image features of the scene. For each matched image, the point closest to the corresponding image point on every ray in the multi-view region is taken as the optimal estimate of the spatial point coordinates, completing the 3D model of the channel scene. Test results show that the proposed method performs 3D restoration of the channel scene from multi-view image data with a ranging error below 2 m and an RMSE below 0.1, achieving a good modeling effect. The method improves the adaptability of 3D modeling to complex scene environments when identifying and detecting hidden dangers in transmission channels.
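The "closest point to all rays" step admits a standard least-squares solution; the sketch below assumes each camera has been reduced to a ray with an origin and a direction, which is a simplification of the full multi-view geometry.

```python
import numpy as np

def triangulate(origins, directions):
    """Return the 3D point minimising the summed squared distance to a set
    of rays. Each ray contributes the normal equations (I - d d^T)(p - o) = 0,
    where d is its unit direction and o its origin."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

With two or more non-parallel rays the system is well-posed; when the rays nearly intersect, the solution is the familiar "midpoint" estimate of the spatial point.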
For the task of SAR ship detection, improvements are made on the basis of YOLOv5. Considering the characteristics of ship targets, the loss function is improved, a coordinate attention (CA) mechanism is added to the backbone, and a layer of feature fusion branches is added to the path aggregation network (PANet). Compared with the unmodified YOLOv5 detection network, these improvements raise the precision from 93.5% to 96.1%, the recall from 93.4% to 95.3%, and the mAP from 93.9% to 97.3%, a significant improvement in detection performance.
An aircraft descends quickly and its descent is brief, so measuring the instantaneous descent speed has become a research hotspot. At the moment of landing, on-board sensors and other equipment often produce intermittent signals, leaving data gaps that cannot meet the needs of practical research. This paper therefore designs a high-speed camera array whose viewing angles overlap through a crossed layout. The camera hardware is programmed to receive GPS time uniformly, unifying the cameras' clocks. Multi-view vision measurement is then used to measure the descent speed of the aircraft, and simulation verification yields good results. Finally, the error is evaluated: the speed measurement accuracy meets practical requirements, showing that the camera array measurement system can quickly and effectively solve the problem of measuring aircraft descent speed.
Safety wire status detection is of great significance to aircraft operation safety. To address the inefficiency and missed detections of current, largely manual inspection of safety wire twisting direction, we propose a machine-vision-based method for detecting the twisting direction of safety wires on aviation fasteners. First, the YOLOv4-MSC object detection algorithm is proposed for localizing the fastener and safety wire regions. After segmenting the localized regions, we pinpoint the precise location of the fastener with the gradient Hough transform, detect the safety wire braiding using Maximally Stable Extremal Regions (MSER) with Non-Maximum Suppression (NMS), and fit the safety wire centerline with the Random Sample Consensus (RANSAC) algorithm. Finally, the twisting direction is obtained from the positional relationship between the safety wire centerline and the fastener. Experimental results show that the proposed method is effective, accurate, and robust, achieving fastener and safety wire localization and twisting-direction detection in complex aircraft maintenance scenarios.
Most current object-level semantic mapping systems lack real-time performance. Combining a highly real-time instance segmentation network with a visual simultaneous localization and mapping (VSLAM) algorithm, we propose a real-time construction system for object-oriented semantic maps. First, the system segments each color image with the instance segmentation network, obtains the objects, and combines the feature and spatial information provided by VSLAM to form instance descriptions; instances are then matched by feature consistency and spatial consistency. A Bayes filter estimates the existence state of each instance, and spatial constraints filter out erroneous instance detections. Finally, a real-time instance-level semantic mapping system based on VSLAM is realized and tested on the TUM datasets. The results show that the system can build object-level semantic maps in real time while reducing the error rate of instance segmentation.
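One standard way to realize a per-instance existence estimate is a binary Bayes filter in log-odds form; the formulation and the hit/miss rates below are assumptions for illustration, not values from the paper.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def bayes_update(log_odds, detected, p_hit=0.7, p_miss=0.4):
    """Add one measurement's evidence to the log-odds that the instance
    exists. p_hit: probability of a detection given the object exists;
    p_miss: probability of a (spurious) detection given it does not.
    Both rates are illustrative assumptions."""
    return log_odds + (logit(p_hit) if detected else logit(p_miss))

def exists_probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Repeated consistent detections drive the belief toward 1, while repeated misses drive it down, so transient segmentation errors are filtered out instead of polluting the map.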
Toward the "dual carbon" target, it is a foregone conclusion that new energy vehicles, represented by electric vehicles, will replace fuel vehicles, and the transition to electric vehicles often starts with public transportation. Accurately predicting the energy consumption of pure electric buses helps with bus itinerary planning. This article takes two years of driving data from 15 pure electric buses, divides them into two-level driving segments, and extracts and studies driving behavior, vehicle factors, and road traffic conditions. Machine learning models are trained on the extracted key factors and used for actual energy consumption prediction. The real-trip test results indicate that the root mean square error is 0.19 lower, and the mean absolute error 16.7% lower, than those of the traditional multiple linear model.
As artificial intelligence takes on a greater role, binocular vision technology is increasingly used in machine vision, and binocular camera calibration is an essential step whose accuracy directly affects all subsequent work. To overcome the low accuracy and poor robustness of common approaches, Halcon is used for binocular camera calibration and stereo rectification. This method fully accounts for the influence of camera distortion on calibration and determines the transformation relationships between the coordinate systems. The internal and external parameters of the cameras were obtained and validated by image stereo-rectification experiments. The results show that the calibration method is highly accurate and can be widely used in the binocular vision field.
The study of new warhead structures is of great significance to anti-missile warheads. Using a typical shrapnel warhead structure as the background, a simulation analysis compared a flat head structure with an isosceles-trapezoidal-cross-section head structure to study the effects of the two head structures on shrapnel fragmentation. The simulation software LS-DYNA was used to simulate the whole explosive-drive process of the warhead section. The simulation results show that the isosceles trapezoidal head structure significantly increases the dispersion velocity and dispersion angle of the prefabricated fragments and significantly increases their axial blockage area, greatly enhancing the axial force of the shrapnel warhead section. The analysis shows that the isosceles trapezoidal head structure destroys the target warhead more effectively, providing a reference for the design of anti-missile warheads.
To investigate the influence of the main design parameters on the sensitivity of the measuring system, this paper derives expressions for the sensitivity in both the x and y directions based on two mutually perpendicular measurement devices. The position of the laser source, the position of the measuring center, and the position of the workpiece are selected as the main parameters affecting the measurement; the functional relationship between the sensitivity of the workpiece diameter and these parameters is established, and their influence on sensitivity is analyzed by simulation. The results show that the measured diameter is most sensitive to offsets of the workpiece position, and that the system's sensitivity to the parameters can be decreased, and its anti-interference performance improved, by appropriately increasing the distance between the laser source and the linear-array CCD or by moving the measurement center closer to the CCD panel.
New energy electric vehicles play an important role in reducing carbon emissions, cutting fossil energy consumption, and promoting electrified transportation. As the key energy storage and driving source of pure electric vehicles, the safety of power batteries during charging has always attracted much attention. In use, the battery's thermal behavior affects its temperature and electrochemical properties, greatly affecting its safety and service life. Using historical operating data from real electric buses, this article selects real driving data collected from two electric buses and proposes a battery temperature prediction method for the charging stage based on a CNN-LSTM hybrid neural network. Comparison with other models shows that the proposed model can effectively predict short-term future changes in battery temperature.
Computational Science and Advanced Algorithm Research
With the rapid development of Internet technology, web applications are widely used, and ensuring their reliability and high performance has become the focus of web site management. A web application in service generates massive web logs containing a great deal of information about users' access to the application, and real-time analysis of these logs yields system performance indicators and bottlenecks. To improve the reliability and performance of web applications, a real-time web log analysis platform based on stream computing is designed and implemented. The platform collects web log data with Flume, moves the data through a Kafka message queue, analyzes the logs on the Flink stream computing platform, and stores the results in Redis for real-time queries and in Doris for historical queries. Operation in a real environment demonstrates the platform's availability and its ability to improve the performance and reliability of web applications.
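As an illustrative stand-in for the stream-analysis stage (the real platform uses Flink; the log field layout and window size below are assumptions), a tumbling-window aggregator can turn raw access-log lines into the kind of per-window status-code counts that would be pushed to Redis:

```python
from collections import Counter

def window_counts(log_lines, window_size=3):
    """Yield a Counter of HTTP status codes for each tumbling window of
    `window_size` lines. Assumes each line ends "... <status> <bytes>"."""
    window = []
    for line in log_lines:
        status = line.split()[-2]      # second-to-last field: status code
        window.append(status)
        if len(window) == window_size:
            yield Counter(window)
            window = []
    if window:                          # flush the final partial window
        yield Counter(window)
```

A spike of `500` counts in a window is exactly the kind of real-time bottleneck indicator the platform is built to surface.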
Software defect prediction is a means of software quality assurance that aims to find potential defects through historical data and software characteristics. Feature selection is an important step in defect prediction: as the number of features expands and their dimensionality grows, multicollinearity among multiple types of features can make models unstable and reduce their accuracy. To solve this multicollinearity problem, the least absolute shrinkage and selection operator (LASSO) algorithm is introduced into defect prediction. The algorithm performs feature selection, after which linear regression is used for defect prediction, improving classification accuracy, reducing model overfitting, and accelerating model convergence.
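The LASSO's feature-selection effect can be sketched with a compact coordinate-descent implementation (the assumed textbook form, not the paper's code); the L1 penalty drives the weights of redundant or weakly informative features to exactly zero.

```python
import numpy as np

def soft_threshold(z, g):
    """The L1 proximal operator: shrink z toward zero by g."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso(X, y, lam, n_iter=200):
    """Coordinate descent on (1/2n)||y - Xw||^2 + lam*||w||_1.
    Assumes roughly standardised columns of X."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # residual with feature j removed
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return w
```

Features whose partial correlation with the residual stays below `lam` end up with weight exactly zero, which is the mechanism that prunes collinear features before the downstream regression.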
In recent years, with the continuous development of battlefield situation technology and improving battlefield information collection, the types and quantities of battlefield targets have diversified. This complicates and clutters the presentation of the plot, so the plot information is no longer expressed clearly. To let the plot fulfil its prompting and labeling function, resolve the display clutter, and delineate formations of unknown units, a clustering algorithm is used to aggregate neighboring plot symbols. Because the map is displayed at multiple resolutions, the aggregation range must adapt to the current resolution. In this paper, a double mean-shift clustering algorithm clusters the plot symbols into unit formations and displays the clusters. By designing an adaptive-bandwidth aggregation of formations based on changes in unit density and an adaptive bandwidth based on changes in scale, the map plot can be displayed intelligently.
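The core mean-shift step can be sketched in one dimension with a flat kernel; this is an illustrative simplification of the paper's double mean shift, and an adaptive version would vary `bandwidth` with local density or map scale.

```python
def mean_shift_1d(points, bandwidth, n_iter=50, tol=1e-6):
    """Shift each point to the mean of its bandwidth-neighbourhood until it
    stops moving; points converging to the same mode form one cluster."""
    modes = []
    for x in points:
        for _ in range(n_iter):
            neigh = [p for p in points if abs(p - x) <= bandwidth]
            new_x = sum(neigh) / len(neigh)
            if abs(new_x - x) < tol:
                break
            x = new_x
        modes.append(round(x, 3))        # round so equal modes compare equal
    return modes
```

Plot symbols whose positions converge to the same mode are drawn as one aggregated formation symbol; widening the bandwidth at coarser map scales merges more of them, which is the adaptive behaviour described above.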
With the development of unmanned sensing technology, using lidar sensors to obtain 3D information about obstacles has become a research hotspot. Because a laser point cloud is dense near the sensor and sparse far away, target detection easily under-segments nearby objects and misses distant ones, producing inaccurate results. To solve this problem, a combination of spectral clustering and an improved Euclidean clustering algorithm is proposed, which effectively addresses under-segmentation and missed detection. In experiments, the clustering effect of this algorithm is compared with traditional Euclidean clustering and the latest point cloud detection algorithms. Real-vehicle experiments show that the average true detection rate of the proposed algorithm is 86.9%, an improvement over the other methods, and its average running time of 87 ms makes it practical on a real vehicle platform.
Considering the non-differentiability and multimodality of fractional delay (FD) FIR filter design, this paper proposes an improved moth-flame optimization (IMFO) algorithm that combines the moth-flame optimization (MFO) algorithm with Lévy flight to improve its search ability. To increase the flexibility of traditional filter design, an amplitude- and phase-weighted fitness function is proposed. A simulation example verifies that the optimization ability of the algorithm exceeds both the traditional design method and the standard MFO algorithm.
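The Lévy flight ingredient is commonly implemented with Mantegna's algorithm; a minimal sketch of one Lévy-distributed step (how the paper injects the step into the MFO position update is an assumption not reproduced here) is:

```python
import math, random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0, sigma_u)   # heavy-tailed numerator
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

The occasional long jumps produced by such steps are what let the hybrid escape the local optima of the multimodal FD filter error surface.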
This paper studies PBFT and its improved consensus mechanisms. PBFT, a classic blockchain consensus mechanism for solving the Byzantine fault tolerance problem, tolerates up to one third of faulty nodes, but it lacks dynamics and scalability, and its consensus efficiency decreases as the number of nodes grows. Although improved PBFT variants alleviate these problems to some extent, they still suffer from high energy consumption and latency and are not ready for practical use.
To address these problems, this paper proposes a Byzantine fault-tolerant consensus mechanism based on multi-group voting. PBFT's biggest weakness is its lack of dynamics: when the nodes in the system change dynamically, it crashes and must be restarted. The proposed method introduces a multi-group voting mechanism that enhances fault tolerance by grouping the nodes in the network and electing production nodes through two-stage voting. In the production-node consensus stage, a production-node replacement protocol improves the security of the mechanism and makes the improved PBFT dynamic. Evaluation shows that the improved design outperforms PBFT in energy consumption, latency, and fault tolerance.
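The "one third" fault tolerance mentioned above comes from the classic PBFT bound n >= 3f + 1 with quorums of 2f + 1 matching messages; a tiny sketch of that arithmetic (not the paper's multi-group protocol) is:

```python
def pbft_quorums(n):
    """For n replicas, return (f, q): the number of Byzantine faults PBFT
    tolerates (n >= 3f + 1) and the prepare/commit quorum size 2f + 1."""
    f = (n - 1) // 3
    return f, 2 * f + 1

# the classic 4-replica network tolerates one faulty node with quorum 3
assert pbft_quorums(4) == (1, 3)
```

Any grouping scheme layered on top, such as the multi-group voting proposed here, must still respect this bound inside each deciding quorum.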
Images collected in bad weather such as fog and haze are seriously degraded by atmospheric scattering: colors turn grayish white, contrast drops, object features become hard to identify, and downstream image processing suffers. Studying more effective dehazing methods is therefore of theoretical and practical value. The dark channel prior dehazing algorithm works well for most scenes, but it is unsuitable for sky regions, where it causes severe color distortion and heavy noise. This paper presents an improved algorithm that combines the bright and dark channels with the mean value of the hazy image. First, the atmospheric light is estimated by combining the bright and dark channels, making the estimate more robust. Second, the haze-retention parameter is set to the ratio of the hazy-image mean to the bright-channel mean, solving the single-weight problem and making the dehazed result clear and natural. Finally, the transmittance threshold is constrained by the dark-channel mean of the normalized hazy image to complete the restoration. Experimental results show that the improved algorithm removes the color distortion and noise in sky regions and produces natural, clear images.
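As context for the improvement, the dark channel that the prior is built on is simply a per-pixel minimum over color channels followed by a patch-wise minimum filter; a minimal (unoptimized, edge-padded) sketch, with the patch size as an assumed parameter, is:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel min over the color channels, then a min filter over a
    patch x patch window (edge-padded). img has shape (H, W, 3)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    h, w = mins.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

# a haze-free scene with a zero channel has a near-zero dark channel
clear = np.zeros((6, 6, 3))
clear[:, :, 0] = 200.0
clear[:, :, 1] = 180.0
```

In sky regions the dark channel is large everywhere, which is why the unmodified prior misestimates transmittance there.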
Stereo matching is a key step in 3D reconstruction based on binocular vision, and the choice of matching algorithm directly determines reconstruction quality. The semi-global matching algorithm SGBM has a limitation: its Census transform relies too heavily on the center pixel and is easily disturbed by noise. This paper proposes an improved SGBM-based stereo matching algorithm. In the initial cost-calculation stage, the algorithm replaces the gray value of the center pixel with the minimum-error gray mean of a multichannel neighborhood before applying the Census operation, and combines the result with the pixel's AD cost to form the initial matching cost. This removes the dependence on the center pixel and resolves the matching ambiguity of a single Census cost in repetitive regions. After the improved cost calculation, the disparity map is recovered by multipath cost aggregation and left-right consistency checking. The improved algorithm is validated on the Middlebury standard dataset; experimental comparison shows that the disparity maps it generates are clearly better.
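To make the center-pixel dependence concrete, here is a toy Census transform with an optional mean-of-window reference value; this is a simplification in the spirit of the improvement (the paper's minimum-error multichannel mean is not reproduced), and the 3x3 window size is an assumption:

```python
import numpy as np

def census(img, win=3, use_mean_center=False):
    """Bitstring per pixel: each window pixel is compared to a reference.
    use_mean_center=True swaps the raw center pixel for the window mean,
    making the code robust to noise on the center pixel."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint32)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            ref = window.mean() if use_mean_center else img[y, x]
            bits = (window < ref).flatten()
            codes[y, x] = int("".join("1" if b else "0" for b in bits), 2)
    return codes
```

The matching cost between two codes is then their Hamming distance, e.g. `bin(c_left ^ c_right).count("1")`, which the improved method fuses with the AD cost.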
As a rapidly developing research area, virtual plant simulation has received extensive attention in recent years. The leaf is one of the plant's important organs, and the quality of leaf simulation directly affects the overall plant model. In this work, corner detection was combined with B-spline curves to establish the leaf contour, leaf veins were simulated with a fractal L-system grammar, and a two-dimensional geometric model of the leaf was built. The method preserves the leaf contour well, highlights the role of the corner-detected feature points in virtual leaf modeling, and produces a good simulation effect.
In the complex urban environment, human vision has blind spots, and existing police helmets and similar equipment lack a self-warning function and therefore cannot adequately protect special service squads. To protect team members and improve the task completion rate, a personnel vigilance system based on Tengine was designed. First, the high-performance RK3399 computing platform and a dual MIPI wide-angle camera for real-time image acquisition were selected according to the design requirements. Second, simulation experiments were carried out on virtual machines, and lightweight Tengine-based image recognition algorithms were trained and built on the VOC2012 dataset. Finally, the most suitable algorithm was selected after analyzing and comparing the experimental results. Experiments show that the Tengine-based helmet recognition algorithm meets the deployment and real-time target detection requirements of the embedded RK3399 platform, effectively improving the survival rate of special agents and ensuring efficient task completion.
To address the low accuracy of greenhouse gas detection and the difficulty of selecting model parameters, this paper proposes a Support Vector Regression (SVR) algorithm tuned by Particle Swarm Optimization (PSO). By comparing the performance of four common SVR kernel functions on the test set, the best-performing kernel is selected. On this basis, grid search (GridSearchCV) is compared with PSO to select the optimal combination of the hyperparameters C and gamma. The results show that PSO efficiently finds the optimal hyperparameter combination and greatly improves modeling efficiency. Finally, greenhouse gas concentrations are estimated with the SVR models optimized by both algorithms; the PSO-optimized SVR reaches an accuracy of 94.42%, improving the model's prediction accuracy.
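A minimal PSO of the kind used for this hyperparameter search can be sketched as follows; for a self-contained example, a smooth toy error surface stands in for the SVR cross-validation error over (C, gamma), and the inertia and acceleration constants are assumed values, not the paper's:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimization; returns (best position, best value)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # clamp to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# stand-in for SVR validation error over (C, gamma); minimum at (2, 0.5)
err = lambda p: (p[0] - 2) ** 2 + (p[1] - 0.5) ** 2
best, best_val = pso(err, [(0.01, 10), (0.001, 1)])
```

In the actual pipeline, `objective` would train an SVR at the candidate (C, gamma) and return its validation error.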
With the rapid development of software and hardware technology, the technical requirements for coordinated task actions are constantly updated, and the ordering of coordinated task actions must be determined more quickly, effectively, and automatically to meet task requirements. To improve the efficiency and effectiveness of cooperative task planning, this paper constructs a cooperative task action network diagram and computes its basic characteristics, thereby automating task processing and determining the sequence of cooperative actions.
Aiming at the low localization accuracy of the Distance Vector-Hop (DV-Hop) algorithm, a localization algorithm based on adaptive particle swarm optimization (APSO) is proposed for wireless sensor networks. First, the average hop distance is corrected with the average single-hop error; each unknown node then estimates its distance to the anchors from the corrected average hop distances it receives, optimizing the estimated distances. Next, APSO refines the node coordinates obtained by the least squares method; an adaptive operator keeps the algorithm from being trapped in local optima so that the global optimum is reached. Simulation results show that the proposed algorithm outperforms DV-Hop and PSO-DVHop in positioning accuracy under different anchor ratios, node counts, and communication radii.
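The least-squares stage that APSO then refines can be sketched as linearized multilateration: each anchor's circle equation is subtracted from the last anchor's to get a linear system in the unknown coordinates. This is a generic sketch under exact-distance assumptions, not the paper's corrected hop-distance pipeline:

```python
import numpy as np

def least_squares_position(anchors, dists):
    """Linearized multilateration: subtract the last anchor's circle equation
    from the others and solve the resulting linear system."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    xn, yn = anchors[-1]
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - xn ** 2 - yn ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# with exact distances the solution is recovered; DV-Hop would feed in
# hop_count * corrected average hop distance instead
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = np.array([1.0, 1.0])
dists = [np.linalg.norm(np.array(a) - true_pos) for a in anchors]
pos = least_squares_position(anchors, dists)
```

Because the hop-based distances are noisy in practice, this solution is only an initial estimate, which is why the paper hands it to APSO for refinement.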
For the calibration of lidar and vision, point cloud data is irregular and noisy, and outliers caused by occlusion from a simple calibration plate must be removed. A joint calibration method for binocular cameras and lidar based on an improved calibration plate and the DON algorithm is proposed. First, circular bulges of different sizes are added to the rectangular calibration plate, and the point cloud is filtered and segmented with the Difference of Normals (DON) algorithm. The random sample consensus (RANSAC) algorithm then estimates the plane and edge parameters of the triangular plate from the retained points to obtain the 3D positions of its vertices. Finally, the projection matrix between camera and lidar is estimated from 2D-3D point correspondences at different positions. Projection errors and root mean square errors are computed over different frames and correspondences: averaging over 100 frames reduces the error by 5.3% compared with a single frame, and the root mean square error (RMSE) of the method is 1.415 cm. Comparison with other state-of-the-art methods verifies the method's reliability and superiority.
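The RANSAC plane-estimation step can be illustrated with a minimal sketch on synthetic data; the iteration count and inlier threshold are assumed values, and the paper's edge-parameter estimation is not reproduced:

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.02, seed=0):
    """Fit a plane (unit normal n, offset d with n.p + d = 0) by sampling
    3-point hypotheses and keeping the one with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    best_model = None
    for _ in range(n_iters):
        i, j, k = rng.choice(len(points), 3, replace=False)
        n = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ points[i]
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# synthetic calibration-plate plane (z = 0) plus occlusion outliers
rng = np.random.default_rng(1)
plate = np.column_stack([rng.uniform(-1, 1, 60), rng.uniform(-1, 1, 60), np.zeros(60)])
outliers = rng.uniform(-1, 1, (10, 3)) + np.array([0.0, 0.0, 2.0])
model, inliers = ransac_plane(np.vstack([plate, outliers]))
```

On real scans, the DON segmentation would first discard non-plate points so that RANSAC sees mostly plate returns.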
Solving pattern recognition problems from training samples with unknown (unlabeled) categories is called unsupervised learning, and clustering is one kind of unsupervised learning. Although many clustering algorithms have been studied and applied in many fields, they share a common limitation: the number of clusters must be given in advance. This paper proposes a model-based algorithm for quantity and parameters of clusters discovery (QPCD) that determines the number and parameters of clusters from the characteristics of the data themselves, addressing this shortcoming of existing clustering algorithms. The paper proposes an elementary judgment rule for whether a cluster center is appropriate; using this rule, the algorithm finds the correct number of clusters and derives the corresponding clustering parameters from the data. Monte Carlo simulation is used to evaluate the algorithm's effectiveness. The experimental results show that, starting from arbitrary initial cluster centers, the algorithm converges to centers close to the actual cluster centers of the data, completing the clustering without supervision.
This paper applies support vector machines, random forests, logistic regression, and other machine learning algorithms to the core data of listed companies in order to monitor their operating conditions and predict their bankruptcy probability. The prediction performance of the three machine learning methods is evaluated with the confusion matrix, the ROC curve, and related indicators, and the best-performing method is identified. By predicting the bankruptcy probability of listed companies, this study further extends the application of machine learning to quantitative economic analysis.
Among natural disasters, meteorological disasters cause the greatest economic losses. They have multiple causes, and their systemic nature, social amplification, unpredictability, and urgency make the resulting losses incalculable. Risk assessment and management of meteorological disasters are therefore very important, yet no universally practical and systematic theory of meteorological disaster risk assessment and management exists so far. Research on this topic in China started relatively late and focuses mainly on floods and droughts; other disasters, such as typhoons and hurricanes, are rarely studied. Through case analysis, this paper summarizes some underlying patterns, hoping to enrich the research in this field.
With the rapid growth of the national economy, people's requirements for food quality are rising. Among foodstuffs, agricultural products place the highest demands on transportation and storage. Production of different agricultural products is scattered across the country, and most products circulate through natural, unorganized channels. In the traditional logistics system, the circulation of agricultural products from producers to consumers suffers long cycles, high logistics costs, and low efficiency. Establishing modern agricultural products logistics parks can solve a series of problems in transportation, storage, processing, and sales, reducing logistics costs and improving efficiency. This paper therefore proposes a site-selection clustering model for agricultural products logistics parks based on a density peak fuzzy clustering algorithm. Taking Guiyang city as an example, site selection analysis combining web-crawled data and desensitized data verifies the feasibility of the model. Finally, constructive suggestions on the site selection of agricultural products logistics parks are offered as a reference for government construction planning.
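The density peak idea underlying the model can be sketched with the two Rodriguez-Laio scores: local density rho and the distance delta to the nearest point of higher density; candidate park sites are points where both are large. The cutoff distance `dc` and the toy coordinates below are illustrative assumptions, and the paper's fuzzy-membership extension is not shown:

```python
import numpy as np

def density_peaks(points, dc):
    """Return (rho, delta): local density within dc, and distance to the
    nearest point of strictly higher density (max distance for the densest)."""
    D = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1          # exclude the point itself
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = rho > rho[i]
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return rho, delta

# two toy demand clusters; each centre point dominates its satellites
offsets = np.array([[0.5, 0.0], [-0.5, 0.0], [0.0, 0.5], [0.0, -0.5]])
pts = np.vstack([[0.0, 0.0], offsets,
                 [10.0, 10.0], np.array([10.0, 10.0]) + offsets])
rho, delta = density_peaks(pts, dc=0.6)
```

Ranking points by the product rho * delta then surfaces one representative site per demand cluster.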
Owing to the storage and computing capacity of cloud technology, many protocols are suitable for deployment on cloud servers. Private set intersection (PSI) is a practical technology in data mining, similar-document detection, and other applications, yet in some cloud-based IoT systems, mobile devices are too weak to compute the intersection themselves. For this scenario, we design a protocol, called Efficient-DPSI, that delegates part of the intersection computation to the cloud server and supports flexible parameters. Experimental results show that Efficient-DPSI is more efficient than existing delegated PSI protocols.
Aiming at the practical difficulty of recognizing shooting poses quickly and accurately, this paper proposes a human shooting pose recognition algorithm based on an optimized YOLOv5. First, the YOLOv5 detection algorithm is adopted in the target detection module of AlphaPose, and a four-scale feature fusion structure with an added small-target detection layer enhances the detection of smaller objects. Second, the C3 module in the original small-scale detection layer is improved by stacking three Transformer encoders in place of the original bottleneck module, improving YOLOv5's ability to learn small-target features. In addition, a new algorithm based on the alignment and matching of key points is proposed to evaluate shooting pose accuracy. Test results show that the improved YOLOv5 model achieves a 90.2% recognition rate and 89.5% average accuracy in human detection, and the key point recognition efficiency rises from 28 to 105 frames per second. Combined with the key point based shooting evaluation algorithm, the proposed scheme meets the accuracy and real-time requirements of human shooting pose recognition.
With the rapid development of deep learning, computer vision based vegetable detection in smart supermarkets can save a great deal of manpower. To address the failure of reported target detection algorithms to balance detection accuracy against model size, this work proposes a lightweight vegetable detection method based on the YOLOv5 model. The method first improves detection accuracy by adding a fused attention mechanism (Convolutional Block Attention Module) to the backbone, then reduces the parameter count while preserving accuracy by replacing ordinary convolutions with Ghost convolutions, and finally improves localization accuracy by using Alpha-IoU as the bounding box regression loss. Experimental results show that the improved method achieves an average accuracy of 92.5%, 3.2% higher than YOLOv5, with a total model size of 6.1 MB, 1.0 MB smaller than YOLOv5, meeting the requirement of balancing detection accuracy and model size.
A multi-scale UNet-attention neural network image reconstruction method is presented to increase the reconstruction quality of degraded images. First, a "graph-to-graph" U-Net image reconstruction network is built. Second, a multi-scale input with an image pyramid structure extracts information at more scales while retaining image detail. In addition, an attention mechanism selects the important information, yielding a reconstructed image with better visual quality. Experimental results show that the algorithm recovers images with better visual quality and improves significantly on traditional methods: its SSIM score is 0.6 higher than the improved Wiener filtering algorithm, and its PSNR is 3 dB higher than the UNet structure alone.
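For reference, the PSNR metric cited above is defined directly from the mean squared error between the reference and reconstructed images; a minimal sketch (assuming 8-bit images with peak value 255) is:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10 * np.log10(peak ** 2 / mse)

# a uniform +10 gray-level error gives MSE = 100
ref = np.full((8, 8), 100.0)
noisy = np.full((8, 8), 110.0)
```

A 3 dB gain, as reported here, corresponds to halving the mean squared reconstruction error.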
With the promotion of an energy-saving economy, the smart grid is developing in a green, environmentally friendly direction, while abnormal power consumption behavior by users causes serious loss of power resources. Traditional anomaly detection methods for power consumption suffer from low accuracy and slow operation. We build a digital twin for fast, high-precision detection of abnormal power consumption. The virtual model includes an LSTM model that effectively extracts and detects abnormal consumption characteristics. We continuously update the historical database with twin data from multi-dimensional sensors (such as electricity meters) and the surrounding environment, then predict anomalies from the collected twin data. The proposed digital twin stays synchronized with the physical power system in real time, producing more accurate detection results than traditional prediction methods. The results show that, compared with traditional detection methods, this method detects abnormal users quickly and effectively, with a detection accuracy of 98.4%.
With the rapid growth of the economy and consumption, customer churn is becoming increasingly common, so predicting customer churn behavior has become essential for enterprises, and for the wider economic system, to keep cash flow unblocked. The data used for customer prediction are huge in volume and diverse in dimension, so organizations often invest heavily in feature engineering to capture and analyze feature variables; yet with such large and complex data, the accuracy of feature engineering is hard to guarantee. Motivated by this, this paper proposes a high-quality, well-performing RF-MLP algorithm: random forest training filters and screens the data, and an artificial neural network then transforms the screening results, capturing high-level, nonlinear feature variables and achieving a stronger model fit. The feasibility of the model is verified on a real customer churn dataset. The AUC score of RF-MLP on the test set is 84.9%, which is 9.1% and 21.7% higher than RF and MLP alone, respectively.
To display the phased information of infrastructure projects, visualize their real-time data, and study the application of three-dimensional digital handover technology to such projects, this paper divides the processing stages of infrastructure projects, partitions the grid structure of infrastructure buildings, and realizes real-time visual display of values by changing grid operation items, completing the application of three-dimensional digital handover technology to infrastructure projects. Test results on an example show that adjusting infrastructure project parameters online and displaying them visually is of practical significance.
The identification of marine fish species is significant for marine fish research and conservation and for the development and utilization of marine fish resources. To increase the speed and accuracy of detecting marine fish species in natural environments, this paper proposes a fish recognition method based on an improved YOLOv5 model. First, an efficient channel attention module is incorporated into the backbone network to enhance the learning of local and global features of the feature map; then the GIoU loss function is replaced with the Focal-EIoU loss function to reduce the impact of low-quality samples on detection. Under the same training conditions, the improved YOLOv5s model achieves 94.6% detection precision, 93.8% recall, and 92.4% mean average precision. The enhanced YOLOv5-based fish detection method has good accuracy and effectiveness and can meet the needs of marine fish species detection in natural environments.
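For context on the loss being replaced, the GIoU term used by the baseline can be computed for axis-aligned boxes as below; this is the standard definition, not the Focal-EIoU variant the paper adopts:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest box enclosing both, which penalizes separated boxes
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area
```

Unlike plain IoU, GIoU stays informative (and negative) for disjoint boxes; EIoU-style losses go further by separately penalizing center distance and width/height error.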
The generative adversarial network, proposed in 2014, is at its core a two-player zero-sum game that improves the quality of generated images through the contest between generator and discriminator. In 2018, Ma et al. applied generative adversarial networks to infrared and visible image fusion: the generator produces an image retaining both infrared intensity and the detailed visible-light textures, and feeding the visible image into the discriminator as "real data" preserves more visible texture detail in the fusion result, yielding a good effect. However, because that model uses only one discriminator, it loses some infrared intensity and detail information from the infrared image; the gradient of its loss function cannot adequately describe visible-light texture, and brightness and contrast are not considered, so the final fusion has poor visual quality. This paper builds an infrared and visible image fusion network on the LSGAN framework with dual discriminators and introduces a convolutional attention module into the generator so that the fused image attends more to infrared intensity information. An MS-SSIM loss is proposed to constrain the generated image so that the fused image has higher structural similarity to the source images. Qualitative and quantitative analysis shows that this method retains the main infrared intensity information while preserving the detailed texture of the visible image.
Small objects occupy little of an image, have weak features, and are easily disturbed by noise, so few features are available for detection. Moreover, the Faster R-CNN object detection model, based on a deep convolutional neural network, applies multiple pooling operations during feature extraction, which makes it even harder to extract small-object features effectively and is unfavorable for small object detection. To address this problem, this paper proposes a Faster R-CNN-based small object detection model that uses a GAN to enhance small-object feature expression. First, exploiting the strong deep feature extraction ability of ResNet-152, ResNet-152 replaces VGG16 in the original Faster R-CNN. Second, the GAN is trained using appropriate high-resolution object features as the supervision signal for the generator. Finally, a small-object size threshold is set so that object features larger than the threshold enter the detection network directly, reducing the model's complexity.
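The size-threshold routing in the last step can be sketched as follows; the 32-pixel cutoff and the box format are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def route_proposals(boxes, small_thresh=32.0):
    """Split region proposals by size: boxes whose sqrt(area) is at or above
    `small_thresh` (pixels) go straight to the detection head; smaller boxes
    are first sent through the GAN branch for feature enhancement.
    `boxes` is an (N, 4) array of [x1, y1, x2, y2] coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    sizes = np.sqrt((boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]))
    small = sizes < small_thresh
    return boxes[~small], boxes[small]   # (direct-to-detector, to-GAN-branch)

direct, enhance = route_proposals([[0, 0, 100, 100], [0, 0, 10, 10]])
print(len(direct), len(enhance))  # 1 1
```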
Maritime ship detection technology is valuable in both the military field and maritime supervision. Because traditional maritime ship detection methods have low accuracy in complicated situations, this paper adopts a new detection approach based on an improved YOLOv4 to realize automatic detection of maritime ships under complex circumstances through deep learning. The lightweight GhostNet is adopted as the feature extraction network, and standard convolution is replaced by depthwise-separable convolution, split into pointwise and depthwise convolutions, reducing network parameters while maintaining detection accuracy. Detection accuracy is further improved by replacing the activation function with SMU and redesigning the CIoU loss in combination with Alpha-IoU. To verify performance in foggy environments, interference from foggy weather is fully considered when generating the maritime ship training dataset. During training, Mosaic data augmentation is applied to the samples to improve robustness, and label smoothing is used in the loss function to prevent overfitting. Experimental results show that at a confidence threshold of 0.5, compared with the original YOLOv4, the proposed algorithm reaches an average accuracy of 99.97% while reducing the number of parameters by nearly 84.92%, and detection remains highly accurate even for tiny ship targets. The method can therefore meet the accuracy requirements of real-time maritime vessel detection.
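To see why depthwise-separable convolution shrinks the network, the following sketch counts weights for the common MobileNet-style factorization (a per-channel depthwise convolution plus a 1x1 pointwise convolution); the kernel size and channel counts are illustrative, not the paper's.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel) followed by a
    1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)                 # 589824 weights
sep = depthwise_separable_params(3, 256, 256)  # 67840 weights
print(f"reduction: {1 - sep / std:.1%}")       # reduction: 88.5%
```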
We collect a total of 1830 observations from January 2020 to June 2022 and use R for data processing and wavelet analysis. We analyze the interactions between the COVID-19 pandemic, the Russian-Ukrainian war, the crude oil price, the S&P 500, and economic policy uncertainty within a time-frequency framework. The results show that the COVID-19 pandemic and the Russian-Ukrainian war have extraordinary effects on the three indexes, and that the effect of the Russian-Ukrainian war on the crude oil price and US stock prices is greater than its effect on US economic policy uncertainty.
Deploying deep learning models on embedded terminals is essential for applications with real-time inference requirements. To make models run efficiently on resource-limited embedded devices, we propose a model compression method combining multi-factor channel pruning and knowledge distillation. During network sparsification, the method uses both factors of the BN layer to improve the pruning criterion and guides local pruning of the model according to the new criterion to guarantee the compression rate. To further improve accuracy, we fine-tune the model with knowledge transfer, in the spirit of knowledge distillation, and exploit the smoother parameter distribution of the student model to preserve accuracy. We test the method on several deep learning models. The experimental results show that the proposed compression method yields fewer parameters and higher accuracy, reducing the resources the model occupies on the embedded device. More importantly, the method not only enables various models to run efficiently on embedded devices but also maintains high model accuracy.
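A minimal sketch of channel pruning guided by both BN factors might look like the following; the product score |gamma|*|beta| and the keep ratio are assumptions for illustration, since the abstract does not spell out the exact criterion.

```python
import numpy as np

def prune_channels(gamma, beta, keep_ratio=0.7):
    """Rank channels by a combined BN importance score |gamma| * |beta|
    (a sketch of using both BN factors rather than gamma alone) and keep
    the top `keep_ratio` fraction; returns the kept channel indices."""
    score = np.abs(gamma) * np.abs(beta)
    n_keep = max(1, int(round(len(score) * keep_ratio)))
    kept = np.argsort(score)[::-1][:n_keep]
    return np.sort(kept)

gamma = np.array([0.9, 0.01, 0.5, 0.02, 0.8])
beta  = np.array([0.3, 0.40, 0.2, 0.01, 0.7])
print(prune_channels(gamma, beta, keep_ratio=0.6))  # [0 2 4]
```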
The blasting cut size and blasting parameters are both key factors in the process of chimney collapse. To verify their influence on the collapse process, the blasting notch form and blasting parameter values are designed according to empirical formulas for blasting engineering parameters, and the chimney is numerically simulated with ANSYS. From the simulation we obtain and analyze the stress change diagram, the time-history curve of the coordinates of the node at the top of the chimney, and the time-history curve of velocity during collapse; the simulation results are basically consistent with the actual situation. This numerical simulation design method improves the scientific rigor of blasting scheme design.
Artificial Intelligence and Neural Network Applications
Natural scene text detection refers to locating and representing text in natural scene images. Existing methods are based on convolutional neural networks (CNNs), but because a CNN's convolution kernel is fixed in size and rectangular in shape, the extracted features of curved text instances are vulnerable to useless background noise. To solve this problem, this paper proposes a novel Transformer-based Feature Fusion Module (TFFM) that integrates the transformer structure into the feature pyramid network to reduce the influence of background noise during feature fusion. On this basis, combined with a transformer backbone and detection head, a natural scene text detection network with a full transformer structure is constructed. The proposed method achieves state-of-the-art results on the CTW1500 and Total-Text datasets, and the TFFM can, in principle, be easily applied to other object detection frameworks.
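The transformer ingredient that avoids a fixed rectangular receptive field is attention, in which every query position can aggregate features from anywhere in the image rather than from a local window. A minimal numpy sketch of scaled dot-product attention (shapes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core transformer operation: each query attends to all keys, so a
    feature can aggregate context from the whole image instead of a fixed
    rectangular neighbourhood as with a CNN kernel."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (3, 4)
```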
With the development of science and technology and the deepening of China's health system reform, most provinces, municipalities, and autonomous regions have gradually established a new medical service system with a reasonable division of labor among community health service organizations, general hospitals, and specialized hospitals. At the same time, taking health informatization as a breakthrough point, they have begun planning and piloting electronic health record (EHR) projects to improve the level and quality of medical and health services. Starting from the storage and management of EHRs, this paper analyzes the current state of research at home and abroad and outlines a regional EHR service mode: a medical and health system model composed of a regional medical information service platform and hospital-level medical information service platforms.
In recent years, many Tibetan input methods have been developed based on the Input Method Manager (IMM) and use components to input Tibetan, resulting in poor compatibility and low input speed. The Text Services Framework (TSF) is a newer input framework from Microsoft with the advantages of extensibility, device independence, higher performance, and security. Using Latin transcoding, users can type Tibetan directly on an English keyboard without changing the keyboard or memorizing key mappings, enabling fast Tibetan input. Based on an analysis of the working principle of TSF, we designed functions for responding to and parsing keyboard input, intelligent input correction, and filtering and sorting candidate words in response to keystrokes. This paper realizes a Tibetan intelligent input method based on TSF and Tibetan Latin transcoding on the Windows platform. Experiments show that this input method significantly reduces search time and the number of keystrokes needed when typing Tibetan, improving input efficiency.
As a newly developed technology, virtual reality (VR) has been widely used in psychological therapy in many fields due to its immersion, interaction, and imagination. Post-traumatic stress disorder (PTSD) arises from directly experiencing, witnessing, or being repeatedly exposed to the details of potentially traumatic events such as war or major accidents. PTSD seriously harms individuals' physical and mental health, so early and timely intervention is particularly critical. Studies have found that VR exposure therapy combined with traditional exposure therapy has a significant clinical effect on PTSD. This paper explores the application of virtual reality exposure therapy to PTSD.
Fingerprints are used in many fields as important biometric data, and as a form of personal privacy they are prone to leakage; images and videos posted on social media exacerbate this risk. Research on protecting fingerprint privacy is nearly vacant. This paper therefore presents a technical implementation for locating and erasing human fingerprints in videos. The implementation adopts Google's open-source MediaPipe machine learning framework to identify the hand in each frame and obtain key landmark information. The fingerprint core region and finger state are deduced from the landmarks' positional relationships and distances. The method can successfully erase fingerprints in as many as 88% of frames in which one hand is present.
Attention deficit hyperactivity disorder (ADHD) is one of the most prevalent mental disorders in childhood. Apart from its main symptoms, the disorder causes major difficulties in education, social performance, and interpersonal relationships. Because rehabilitation is important for patients combating these issues, virtual reality (VR) technology is useful. This study aims to highlight the possibilities of VR in the rehabilitation of children with ADHD, based on a literature review of the application of VR technology to ADHD. By reviewing relevant research at home and abroad, the findings to date are summarized, along with the potential and opportunities of VR technology for ADHD.
Virtual antenna array technology focuses on methods for transforming a real antenna array into a virtual antenna array. Virtual array transformation (VAT) can be used to realize virtual antenna array beamforming: the degrees of freedom of the antenna array are increased and more interference can be suppressed. The robustness of VAT beamforming against jammer motion can be improved by forming broad nulls, and VAT null-broadening beamforming outperforms conventional VAT beamforming. This paper proposes a modified beamforming approach that further improves VAT null-broadening performance. The reference position of the virtual antenna array is adjusted: the position one element spacing away from the antenna array is chosen as the reference instead of the position at one end of the array. In addition, the conjugate covariance matrix is introduced into an expanded covariance matrix, constructed as the Kronecker product of the virtual array's covariance matrix and its conjugate. Theoretical analysis shows that more information can be obtained by the modified approach, so the performance of VAT null-broadening beamforming is improved.
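The expanded covariance construction described here can be sketched directly with numpy's Kronecker product; the toy 2x2 Hermitian covariance below is illustrative, not data from the paper.

```python
import numpy as np

def expanded_covariance(r_virtual):
    """Build the expanded covariance as the Kronecker product of the virtual
    array covariance with its conjugate, as described for the modified
    VAT null-broadening beamformer."""
    return np.kron(r_virtual, np.conj(r_virtual))

# Toy 2x2 Hermitian covariance; the expansion is 4x4 and stays Hermitian.
r = np.array([[2.0, 0.5 + 0.1j], [0.5 - 0.1j, 1.0]])
r_ext = expanded_covariance(r)
print(r_ext.shape)                         # (4, 4)
print(np.allclose(r_ext, r_ext.conj().T))  # True
```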
With the rapid development of artificial intelligence technology, automatic recognition of students' learning state and emotion through object detection and expression recognition has attracted increasing attention. To address the problem that detection accuracy and speed cannot both be achieved in practical applications, an intelligent, lightweight classroom learning-state analysis solution is proposed. First, faces are located by face detection. Then, small convolution kernels applied in successive convolutions extract the key facial feature points in parallel and output intensities for five kinds of learning expressions. Finally, a learning emotion score, the weighted sum of the students' overall head-up rate (from face detection) and the expression intensity (from expression recognition), serves as the evaluation result of the learning emotion analysis.
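A minimal sketch of the weighted-sum scoring described in the last sentence; the weight values and expression labels are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def learning_emotion_score(head_up_rate, expr_intensity, expr_weights,
                           w_head=0.4, w_expr=0.6):
    """Combine the class's overall head-up rate (from face detection) with a
    weighted score over the five learning-expression intensities (from
    expression recognition). All weights here are assumed for illustration."""
    expr_score = float(np.dot(expr_intensity, expr_weights))
    return w_head * head_up_rate + w_expr * expr_score

# Five expressions, e.g. [focused, happy, confused, tired, distracted]
intensity = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
weights   = np.array([1.0, 0.8, 0.4, 0.2, 0.0])  # positive expressions count more
print(round(learning_emotion_score(0.9, intensity, weights), 3))  # 0.792
```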
With the continuous development of the economy, China, the largest agricultural country in the world, still has a low level of agricultural automation and a large demand for labor. It is therefore necessary to improve production efficiency and the environment by developing smart agriculture, realizing benefits for both agriculture and the environment. Traditional CPUs and single-chip microcomputers have weak processing speed and limited timing capability. Based on a survey of the research status and development trends of many domestic and foreign intelligent orchards, this paper proposes a design and implementation scheme for an intelligent agricultural soil acquisition system based on an FPGA, which offers high integration, fast speed, strong reliability, and low power consumption. The paper presents the overall design of the system: the hardware part covers the selection of the various devices, and the software part covers the language, software development, and simulation. The system mainly comprises soil information collection, data transmission, data storage, and data processing, which simplifies the circuit design and conforms to the development trend of the modern intelligent orchard.
Low-resolution object detection can be challenging. In this paper, we propose a GAN-based real-time data augmentation algorithm for the transfer learning task of UAV vehicle detection from ImageNet, with improvements including replacing the commonly used cross-entropy loss with focal loss and redesigning the detection head combination, improving detection accuracy by 4% over the original YOLOv5 model. This makes deployment feasible on UAV-carried ARM systems.
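Focal loss, the replacement for cross-entropy mentioned above, down-weights easy examples so that hard (small, low-resolution) objects dominate the gradient. A numpy sketch of the standard binary form, with the usual default hyperparameters:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples by (1 - p_t)^gamma so
    training focuses on hard (e.g. small, low-resolution) objects.
    `p` holds predicted foreground probabilities, `y` the 0/1 labels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))

# A confident correct prediction contributes far less than a hard one.
print(focal_loss(np.array([0.9]), np.array([1])) <
      focal_loss(np.array([0.3]), np.array([1])))  # True
```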
To meet the demand for visual detection equipment in miniature target detection systems and to improve the calculation accuracy and speed of the target detection platform, a target detection method based on an FPGA convolutional neural network is developed. The characteristics and structure of convolutional neural networks are analyzed, a convolutional neural network is applied to miniature target detection, and an appropriate network model is selected according to the requirements and its parameters obtained. The algorithm is embedded into a hardware platform whose underlying core is a high-performance FPGA chip, achieving efficient operation. The experimental results show that the method's real-time performance is good and the workload of porting to hardware is relatively small; it effectively detects targets, addresses the high power consumption and insufficient real-time performance of current GPUs, and meets the requirements of fast, accurate detection.
Cross-modal retrieval has been widely used in the vision-language field and has achieved many results, but research in the trajectory-text field is lacking. Meanwhile, popular cross-modal retrieval models not only lack fine-grained semantic alignment between modalities but also ignore the influence of the text's grammatical structure on retrieval. To solve these problems, this paper proposes a dual-stream trajectory-text retrieval model combined with a graph neural network that combines two cross-modal interaction methods, local and global: (1) local alignment, in which trajectory points and words are encoded separately after passing through a masking module and then semantically aligned; and (2) global alignment, which introduces momentum contrastive learning to achieve trajectory-text retrieval learning. Experimental results show that this hierarchical matching method retains the efficiency of the dual-stream model while achieving higher accuracy than other cross-modal retrieval models, improving R@1 on the dataset by 3.2%-4.7%.
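The momentum-contrastive global alignment relies on an InfoNCE-style loss, in which matching trajectory-text pairs attract and mismatched pairs in the batch repel. A numpy sketch (batch size, embedding dimension, and temperature are illustrative assumptions):

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.07):
    """Contrastive (InfoNCE) loss as used in momentum contrastive learning:
    the i-th trajectory embedding should match the i-th text embedding
    (positive) and repel all other texts in the batch (negatives)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))       # positives lie on the diagonal

rng = np.random.default_rng(1)
traj, text = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
# Perfectly aligned embeddings give a much lower loss than random pairs.
print(info_nce_loss(traj, traj) < info_nce_loss(traj, text))  # True
```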
According to United Nations projections, the global elderly population will increase to 2.1 billion by 2050, and this population structure will have a far-reaching impact on China's labor market. To explore the application of artificial intelligence in new blue-collar recruitment software for the manufacturing industry, this paper uses VOSviewer to visually examine 287 relevant documents published in CNKI (China National Knowledge Infrastructure) from 1994 to 2022. The paper analyzes the blue-collar recruitment market against the background of the epidemic, puts forward three existing problems of blue-collar recruitment software, and probes the design and application of an artificial intelligence blue-collar recruitment platform combined with big data.
In recent years, owing to continuous advances in computer software and hardware and in artificial intelligence, human-machine communication systems have made considerable progress. The natural language understanding module is the basic element of a human-computer communication system; its semantic understanding results significantly affect downstream components and directly affect the process and success rate of human-computer interaction. This paper studies a neural-network-based natural language understanding and interaction engine for human-computer interaction. Building on the literature, the relevant knowledge of human-computer interaction is reviewed, a neural-network-based natural language understanding design is implemented on the interaction engine, and the neural network reasoning structure model it employs is tested. The test results show that the RMNN model used in this paper achieves an accuracy of 70.24%, with a significance-test p-value of 0.001.
Given the small sample set of plant cell images and their unclear cell boundaries and artifacts, deep learning models achieve a low recognition rate. In this study, a YOLOv5s parameter transfer method is used to improve the recognition accuracy of plant cells. First, data augmentation operations such as denoising, rotation, translation, and scaling are performed on the plant cell training samples; then the YOLOv5s network is pre-trained on the BCCD dataset, the model parameters are transferred to the network, and the model parameters are further trained on the plant cell dataset. The experimental results show that the transfer learning method converges faster than the original YOLOv5s network on the training and validation sets, with a smaller loss function value. This method can provide a precise cell localization solution for measuring plant cell geometric parameters.
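The parameter-transfer step can be sketched as copying weights between model state dicts wherever the layer name and tensor shape match; the layer names and shapes below are hypothetical, purely for illustration.

```python
import numpy as np

def transfer_parameters(pretrained, target):
    """Copy pretrained weights into a target model's state dict wherever the
    layer name and tensor shape match (shape-mismatched layers, e.g. a
    detection head for a different class count, keep their fresh init)."""
    transferred = []
    for name, weight in target.items():
        if name in pretrained and pretrained[name].shape == weight.shape:
            target[name] = pretrained[name].copy()
            transferred.append(name)
    return transferred

# Hypothetical layers: the backbone shape matches, the head does not.
pretrained = {"backbone.conv1": np.ones((16, 3, 3, 3)), "head.fc": np.ones((3, 16))}
target     = {"backbone.conv1": np.zeros((16, 3, 3, 3)), "head.fc": np.zeros((80, 16))}
print(transfer_parameters(pretrained, target))  # ['backbone.conv1']
```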
To address the relatively backward and inefficient state of apple fruit grading technology, computer vision-based classification methods are widely adopted, but traditional visual classification networks suffer from many parameters, high computational cost, and unsatisfactory classification accuracy. This paper therefore proposes a lightweight residual-network-based method for grading the external quality of apples. First, building on the traditional residual neural network, group convolution replaces the standard convolution in the original residual units to reduce the number of model parameters and the computational cost. Second, to address the lack of information flow between group channels caused by group convolution, a channel shuffle operation mixes inter-group features to improve model performance. Finally, a parallel pooling structure is proposed to solve the information loss of traditional pooling. A dataset of apple images with extensive coverage of external quality information is built, data augmentation is performed on the limited dataset, and experiments on the augmented dataset compare the improved model with common neural network models. The experimental results show that the improved lightweight residual network has only 2.97M parameters, its FLOPs are only 1/5 of the traditional model's, and its classification accuracy is 96.5%, which should help bring apple grading to low-performance mobile terminals.
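The channel shuffle operation that restores information flow between groups is a pure reshape-transpose-reshape, sketched here in numpy on a toy 6-channel tensor:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so that the next group convolution
    sees features from every group (fixing the lack of cross-group flow).
    `x` has shape (N, C, H, W) with C divisible by `groups`."""
    n, c, h, w = x.shape
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

# 6 channels labelled by group of origin: [0 0 0 1 1 1] -> [0 1 0 1 0 1]
x = np.array([0, 0, 0, 1, 1, 1], dtype=float).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=2).ravel())  # [0. 1. 0. 1. 0. 1.]
```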
Breast cancer is a tumor disease with a high incidence among women, but it is also a disease whose mortality can be reduced through early diagnosis and treatment. A method combining the gray-level co-occurrence matrix with a BP neural network is proposed to improve the recognition rate of breast tumors. Using computer intelligence to determine whether breast tumors are benign or malignant is a pattern recognition problem on breast microscopic images, and it can help diagnose disease in advance, improving treatment outcomes. Accurate diagnosis of breast cancer patients is the key to the prevention and early detection of breast tumors. Ultrasound imaging plays an important role in the diagnosis and treatment of breast tumors because it involves no radiation damage and is simple, effective, and low-cost. Traditional ultrasonic image segmentation usually performs only the most basic segmentation, and subsequent analysis often depends on the manual work of doctors or technicians; interpretation of near-infrared breast images is subjective, and doctors inevitably differ in their readings. To reduce the subjective influence of individual doctors on the analysis results and to ease doctors' workload, this paper proposes an automatic segmentation method for ultrasonic images that incorporates target recognition algorithms from the field of pattern recognition.
This paper studies medical images of breast cancer tumor cells and, based on a BP neural network, proposes a segmentation scheme for breast cancer medical images and a tumor cell detection scheme based on feature scale expansion. The approach successfully realizes automatic segmentation of breast cancer ultrasound images, effectively identifies the tumor regions in the images, has good stability and accuracy, and offers new ideas for BP-neural-network-based diagnosis systems.
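The gray-level co-occurrence matrix underlying the texture features can be sketched as follows; the 4-level toy image and the horizontal offset are illustrative, and the contrast feature is one of several statistics typically derived from the normalized matrix.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix: counts how often gray level j occurs
    at offset (dy, dx) from gray level i; texture features such as contrast
    and energy are then derived from the normalized matrix."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]])
p = glcm(img, levels=4)
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(round(contrast, 3))  # 0.333
```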
Analyzing individual consumption behavior and making targeted recommendations is a research topic of great significance. In this paper, we propose using a graph convolutional neural network to analyze individual consumption behavior. Compared with previous methods, the proposed method offers clear advantages in speed, accuracy, and model size.
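The core of a graph convolutional network is the propagation rule H' = ReLU(Â H W), where Â is the adjacency matrix with self-loops, symmetrically normalized by node degree. A minimal sketch on a toy three-node graph follows; the graph, features, and weight are illustrative assumptions, not the paper's model.

```python
# One graph-convolution layer, H' = ReLU(Â H W), with
# Â = D^{-1/2}(A + I)D^{-1/2}. Toy graph and weights are illustrative.
import math

def gcn_layer(A, H, W):
    n = len(A)
    # add self-loops and compute degrees
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]
    # symmetric degree normalization
    norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # message passing: (norm @ H) @ W, then ReLU
    AH = [[sum(norm[i][k] * H[k][j] for k in range(n))
           for j in range(len(H[0]))] for i in range(n)]
    return [[max(0.0, sum(AH[i][k] * W[k][j] for k in range(len(W))))
             for j in range(len(W[0]))] for i in range(n)]

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # 3-node path graph (e.g., user-item links)
H = [[1.0], [0.0], [1.0]]              # one feature per node
W = [[1.0]]                            # trivial weight for clarity
embeddings = gcn_layer(A, H, W)
```

Stacking such layers lets each node's embedding aggregate its neighborhood, which is what makes the consumption graph useful for recommendation.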
Vulnerable plaques in the carotid arteries are a crucial factor in screening for atherosclerosis with ultrasound. However, manual plaque segmentation is time-consuming and variable, and unstable plaques are contaminated by artifacts, speckle, and other noise. This paper proposes an automatic convolutional neural network (CNN) method for plaque segmentation in carotid ultrasound images using a small dataset. First, a parallel network with three independent scale decoders is used as the base segmentation network, with pyramid dilated convolutions enlarging the receptive fields in the three decoder sub-networks. Second, the merged feature maps from the three decoders are rectified by an SENet. Third, at test time, the initial segmented plaque is refined by a maximal-contour postprocessing step to obtain the final segmentation result. The dataset consists of 30 carotid ultrasound images with severe-stenosis plaques from 30 patients. Under 10-fold cross-validation, the proposed method yields a Dice of 0.820, IoU of 0.701, accuracy of 0.969, and modified Hausdorff distance (MHD) of 1.43, outperforming several CNN-based methods on these metrics. An ablation experiment further shows the validity of each proposed module. The method may be useful in practice for segmenting unstable (easily ruptured or severely stenotic) carotid plaques from ultrasound images.
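The Dice and IoU figures reported above are standard overlap metrics between a predicted mask and the ground truth; a minimal sketch on flat binary masks is shown below (the 4-pixel masks are illustrative).

```python
# Dice = 2|P∩G| / (|P| + |G|); IoU = |P∩G| / |P∪G| for flat binary masks.

def dice_iou(pred, gt):
    inter = sum(p and g for p, g in zip(pred, gt))
    p_sum, g_sum = sum(pred), sum(gt)
    union = p_sum + g_sum - inter
    dice = 2 * inter / (p_sum + g_sum) if p_sum + g_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred = [1, 1, 0, 0]   # predicted plaque pixels (flattened)
gt   = [1, 0, 0, 0]   # ground-truth plaque pixels
d, i = dice_iou(pred, gt)
```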
Detecting tiny defects in cigarettes is currently a major concern for manufacturers. To address this issue, this paper investigates a hybrid model based on a lightweight ViT and an RCNN that offers a better balance between speed and accuracy. Experiments show that the model achieves an mAP of 85.7% on tiny defects occupying 1% of the cigarette appearance and an inference speed of 82 FPS in an acquisition scenario with a camera resolution of 1280×280, meeting the needs of high-speed acquisition at industrial sites. The results indicate that the hybrid model can be used to detect flaws in cigarette appearance.
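The mAP figure above is the mean over classes of average precision (AP), the area under the precision-recall curve built by sweeping detections in score order. A minimal all-point AP sketch follows; the scores and TP/FP labels are illustrative, not the paper's results.

```python
# AP as the area under the precision-recall curve (rectangle rule), computed
# from detections sorted by confidence. Illustrative scores and labels.

def average_precision(scores, labels, n_pos):
    """scores: detection confidences; labels: 1 = true positive, 0 = false
    positive; n_pos: number of ground-truth objects."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / n_pos
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under the curve
        prev_recall = recall
    return ap

# three detections against two ground-truth defects
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1], n_pos=2)
```

mAP is then the mean of such per-class AP values (here, over the defect categories).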
Predicting depth from a single image has recently become an important research topic in computer vision. The self-supervised strategy for learning depth is especially attractive because it requires no ground-truth labels. Within the framework of self-supervised learning, we propose a CA-depth network to improve the accuracy of single-image depth estimation. We add an attention mechanism to the monocular depth estimation network to address the observable artifacts and inaccurate geometry in monocular depth predictions: the spatial position information in the high-dimensional feature map is used to attend to essential features and to suppress artifacts in the depth prediction map. We use ResNet as the encoder to extract the input image's feature map, a coordinate attention mechanism to optimally allocate the convolutional feature map weights, and a decoding network to predict the depth. Experimental results on public datasets show that the depth prediction accuracy of the CA-depth network is higher than that of state-of-the-art methods.
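The coordinate attention idea can be sketched as: pool the feature map along each spatial axis separately, turn the pooled vectors into per-row and per-column gates, and reweight the map so position-aware features are emphasized. The sigmoid gating below stands in for the learned transforms of the actual module, and a single channel is shown for clarity; both are simplifying assumptions.

```python
# Simplified coordinate-attention sketch: axis-wise pooling -> gates ->
# position-aware reweighting. One channel; sigmoid replaces learned layers.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coordinate_attention(fmap):
    """fmap: H x W feature map (single channel for clarity)."""
    h, w = len(fmap), len(fmap[0])
    pool_h = [sum(row) / w for row in fmap]                              # avg over width
    pool_w = [sum(fmap[i][j] for i in range(h)) / h for j in range(w)]   # avg over height
    gate_h = [sigmoid(v) for v in pool_h]
    gate_w = [sigmoid(v) for v in pool_w]
    return [[fmap[i][j] * gate_h[i] * gate_w[j] for j in range(w)]
            for i in range(h)]

out = coordinate_attention([[1.0, 2.0], [3.0, 4.0]])
```

Because the two poolings keep one spatial axis each, the gates retain positional information that a global pooling (as in plain channel attention) would discard.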
The advancement of computer technology in image generation opens up opportunities for the creation of art. However, owing to the complexity of artistic expression and the difficulty of data collection, generating abstract ink paintings from photos remains a major challenge. This paper proposes a machine learning method to generate abstract ink paintings. First, a dataset of photos and Chinese abstract ink paintings is collected. Then, statistical information from the photos is obtained via color segmentation to draw abstract images. On this unpaired dataset, an image-to-image translation network with contrastive learning is trained to learn a mapping from abstract images to abstract ink paintings. Experiments show that this method can effectively generate abstract ink paintings from photos.
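The color-segmentation step can be sketched as a tiny k-means on pixel intensities: cluster the values and flatten each pixel to its cluster mean, producing the flat color blocks of an abstract intermediate image. Using k = 2 and grayscale values is an illustrative simplification of whatever segmentation the paper actually uses.

```python
# Tiny 1-D k-means for color segmentation: cluster pixel intensities, then
# replace each pixel with its cluster mean ("abstracted" color). Illustrative.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def segment(values, centers):
    """Map each pixel to its nearest cluster center."""
    return [min(centers, key=lambda c: abs(v - c)) for v in values]

pixels = [10, 12, 11, 200, 205, 198]       # illustrative gray values
centers = kmeans_1d(pixels, [0.0, 255.0])  # two clusters: dark and light
flat = segment(pixels, centers)
```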
With the rapid development of advanced technologies such as artificial intelligence and the Internet of Things, intelligent traffic management has been widely adopted. As the unique identifier of a vehicle, the license plate makes plate recognition particularly important in intelligent traffic management. However, due to factors such as the natural environment, viewing angle, image clarity, and plate size, traditional license plate recognition is not robust and has difficulty reading plate numbers accurately. In this paper, we describe a license plate recognition technique built on Convolutional Neural Networks (CNNs) that recognizes plate numbers more accurately and holds significance for further research on deep learning in intelligent traffic management.
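The building block of such a CNN is the 2D convolution; a minimal valid-mode sketch (single channel, no padding or stride) follows. The vertical-edge kernel is an illustrative hand-picked filter, whereas the paper's network learns its kernels during training.

```python
# Minimal valid-mode 2D convolution (single channel, stride 1, no padding).
# The kernel is an illustrative edge detector, not learned weights.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]              # responds to left-to-right intensity changes
response = conv2d(img, edge)  # strong response at the character-like edge
```

Stacks of such layers, interleaved with nonlinearities and pooling, let the network localize the plate and classify each character.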
To solve the problem that features extracted by a single-size convolution kernel in CNN-based super-resolution reconstruction are not accurate, a network structure combining multi-scale features is proposed. The structure consists of a multi-scale feature extraction module and a reconstruction module. Multiple convolution kernels are adopted to extract multi-scale features in the feature extraction module, and a sub-pixel convolution layer enlarges the feature maps to the high-resolution image size in the reconstruction module. The deep network model fully exploits multi-scale features and better reconstructs the high-frequency details of the image. The experimental results show that the improved network structure enhances the quality of image reconstruction and better handles the problem of image super-resolution.
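The sub-pixel convolution layer mentioned above ends with a pixel-shuffle step that rearranges r² low-resolution channels into an image r times larger in each dimension. A minimal sketch for scale factor r = 2 and a single output channel follows; the tiny 1×1 maps are illustrative.

```python
# Pixel shuffle (depth-to-space): r*r channels of size H x W become one
# (H*r) x (W*r) map. This is the upscaling step of sub-pixel convolution.

def pixel_shuffle(channels, r):
    """channels: list of r*r maps, each H x W; returns an (H*r) x (W*r) map."""
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, fmap in enumerate(channels):
        dy, dx = c // r, c % r   # sub-pixel position this channel fills
        for i in range(h):
            for j in range(w):
                out[i * r + dy][j * r + dx] = fmap[i][j]
    return out

chans = [[[1]], [[2]], [[3]], [[4]]]  # four 1x1 maps, r = 2
hr = pixel_shuffle(chans, 2)          # -> [[1, 2], [3, 4]]
```

Doing the upscaling this way keeps all convolutions in low-resolution space, which is what makes sub-pixel convolution cheap.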
A camouflage design strategy based on battlefield environment twinning is proposed to address the camouflage and concealment of military equipment in realistic battlefield environments. The strategy builds a three-dimensional digital twin of the battlefield environment and, by synthesizing digital camouflage from the background's primary colors together with natural camouflage decoration, generates camouflage design schemes with a higher success rate of blending into the realistic battlefield environment. A neural-network-based image segmentation network and a target detection network are used to evaluate the performance of each camouflage design scheme. The experimental results show that the proposed strategy interferes with and counters detection technology more effectively, and can provide the most adaptable camouflage design for a target object in a real battlefield environment, giving it strong practical value and strategic significance.
Traditional pressure testing systems suffer from problems such as low data accuracy and inconvenient data storage. This paper introduces a dynamic pressure testing system based on an STM32 microcontroller. Proteus is used for the hardware design of the testing system, while Keil uVision 5 and LabVIEW are used for the software design of the lower and upper computers, respectively. The system is simulated in Proteus, and the simulation results show that it completes the dynamic pressure testing task well and achieves the expected goals.
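The kind of conversion the lower computer performs can be sketched as mapping a raw 12-bit ADC reading from the pressure sensor to a pressure value. The 3.3 V reference and 0-1 MPa sensor range below are illustrative assumptions, not the paper's specification.

```python
# Linear ADC-counts -> pressure conversion for a 12-bit ADC.
# VREF and P_RANGE are assumed values, not from the paper.

VREF = 3.3        # ADC reference voltage (V), assumed
ADC_MAX = 4095    # 12-bit ADC full scale
P_RANGE = 1.0e6   # sensor full-scale pressure (Pa), assumed

def adc_to_pressure(raw):
    """Counts -> voltage -> pressure (linear sensor assumed)."""
    voltage = raw * VREF / ADC_MAX
    return voltage / VREF * P_RANGE

p = adc_to_pressure(2048)  # roughly mid-scale
```

On the real hardware this arithmetic would run in the STM32 firmware, with the result streamed to the LabVIEW upper computer for display and storage.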
FPGAs offer high flexibility, strong real-time performance, low cost, low risk, and so on, and are therefore widely used in spacecraft. They have evolved from initially implementing only interface timing control to now replacing most of the computation performed by CPUs and DSPs on board, which shows their influence and importance in spacecraft. In addition, the single event upset (SEU) effect in the space environment can easily cause spacecraft functional failure or even mission failure. The demand for rapid, efficient development of high-reliability aerospace FPGA products has therefore become increasingly prominent. This paper proposes a Flash-based design method for high-reliability aerospace FPGAs that can pre-distort the transmitted broadband signal in orbit and perform two-out-of-three judgment and maintenance on the DSP code in Flash. Besides, it can dynamically modify the storage, timing table, BAQ code table, temperature-phase curve table, and other parameters in Flash according to the characteristics of the in-orbit environment. The method not only shortens the development cycle of FPGA products but is also simple to implement, highly reliable, convenient and quick to test, and highly practical. It can withstand the interference of complex space environments and can be widely used in aerospace FPGA products.
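The two-out-of-three judgment described above is the classic triple modular redundancy (TMR) vote: each word is stored in three copies, and a bitwise majority masks a bit flipped by a single-event upset in one copy. A minimal sketch (in Python for illustration; on the device this is FPGA logic):

```python
# Bitwise two-out-of-three majority vote (triple modular redundancy).
# Each output bit is 1 iff at least two of the three copies have that bit set.

def tmr_vote(a, b, c):
    return (a & b) | (a & c) | (b & c)

# one copy corrupted by an SEU still yields the correct word
word = tmr_vote(0b1010, 0b1010, 0b0010)  # -> 0b1010
```

Maintenance then rewrites the corrupted copy with the voted value so a second upset cannot accumulate on top of the first.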
The canard rudder correction mechanism of 2D trajectory correction projectiles has important applications in modern high-precision munitions owing to its low cost and high accuracy. To study the aerodynamic characteristics of this correction mechanism, a simulation model of the two-dimensional correction projectile was established in the computational fluid dynamics software Fluent, and aerodynamic simulations were carried out at different correction-mechanism rotation speeds and different projectile body speeds; the drag and lift coefficients of the projectile and their fitted curves were obtained. The curves show how the aerodynamic parameters of the 2D correction projectile vary with the speed of the correction mechanism and the speed of the projectile body. The analysis shows that as the projectile body slows, the change in the deceleration force of the correction mechanism grows larger, and that the deceleration of the projectile body affects the overall aerodynamic values less than the correction fuze does.
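The curve-fitting step can be sketched as a least-squares fit of a low-order polynomial to drag-coefficient samples versus speed. The quadratic model and the sample points below are illustrative assumptions, not the paper's simulation data.

```python
# Least-squares quadratic fit y ≈ c0 + c1*x + c2*x^2 via the 3x3 normal
# equations, solved by Gaussian elimination. Sample data is illustrative.

def polyfit2(xs, ys):
    s = [sum(x ** k for x in xs) for k in range(5)]                 # power sums
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]  # rhs
    A = [[s[i + j] for j in range(3)] for i in range(3)]             # normal matrix
    for col in range(3):                      # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * 3                        # back substitution
    for r in (2, 1, 0):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, 3))) / A[r][r]
    return coeffs

mach = [0.8, 1.0, 1.2, 1.5, 2.0]
cd   = [0.30, 0.45, 0.42, 0.38, 0.33]  # illustrative drag coefficients
c0, c1, c2 = polyfit2(mach, cd)
```

The same fit applied to the lift-coefficient samples yields the second family of fitted curves described in the abstract.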
With the rapid development of information technology and the rapid growth of data, the era of big data has arrived. Making rational use of big data technology to improve the quality of contributed network teaching resources and to achieve comprehensive sharing of teaching resources will be the trend of future development. The rapid development of big data technology has also created good opportunities for raising the level of network informatization in Party construction at colleges and universities, where innovation in the big data era is both necessary and possible. However, traditional resource-sharing technology cannot accurately identify heterogeneous resources, resulting in a poor sharing effect. Therefore, based on big data analysis, this paper studies sharing technology for the Party history educational resources used by Party members in colleges and universities. A resource-sharing model is constructed to perform semantic recognition of heterogeneous educational resources, and semantic relations between concepts are obtained by reasoning. Based on these semantics, a big data analysis algorithm computes semantic similarity, and other resources associated with a given resource are retrieved according to that similarity to realize resource sharing. In the experiments, the semantic similarity between different concepts is calculated by both the proposed method and the traditional method. The results show that the similarity computed by the big-data-based sharing technology is closer to the expected value, better identifies heterogeneous resources, and achieves a better sharing effect. The learning resource-sharing technology based on a big data analysis algorithm is feasible, can lay a foundation for resource-sharing technology, and helps users make rational use of more educational resources.
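One common way to score semantic similarity between resources is cosine similarity over concept vectors; the tiny term-count vectors below are illustrative, and the paper's actual similarity measure may differ.

```python
# Cosine similarity between two concept/term vectors: dot product over the
# product of norms. Vectors are illustrative term counts.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

doc_a = [1, 2, 0, 1]  # e.g., counts of shared concept terms in resource A
doc_b = [1, 1, 0, 1]  # counts of the same terms in resource B
sim = cosine_similarity(doc_a, doc_b)
```

Resources whose similarity to the query resource exceeds a threshold would then be retrieved as associated resources for sharing.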