Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317101 (2024) https://doi.org/10.1117/12.3034226
This PDF file contains the front matter associated with SPIE Proceedings Volume 13171, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317102 (2024) https://doi.org/10.1117/12.3031938
Recent advances in machine learning have made forged video and audio more convincing. This poses a threat to the security of individuals, societies, and nations. To address this threat, the ASVspoof initiative was conceived to spearhead research on anti-spoofing for Automatic Speaker Verification (ASV). To date, most ASVspoof research has focused on detecting whether speech has been tampered with; little attention has been paid to recognizing which forgery algorithm produced it. Moreover, in the real world new forgery algorithms keep emerging, making it difficult to adapt recognition models trained under closed-set conditions to realistic open-set scenarios. We therefore propose a method based on prototype learning and adaptive thresholding for recognizing speech forgery algorithms in open-set conditions. The method uses manifold mixup and dummy prototypes to simulate and recognize unknown forgery algorithms. Prototype classification improves the ability to distinguish highly similar forgery algorithms; it also helps prevent catastrophic forgetting and facilitates subsequent incremental training on samples of newly recognized algorithms, thereby increasing the number of recognized categories. Experimental results show that our method is effective. The code is available at https://github.com/multimedia-security/open-set-recognization.
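As a rough sketch of the prototype-plus-adaptive-threshold idea, the following toy example classifies an embedding to its nearest class prototype and rejects it as an unknown algorithm when the distance exceeds a per-class threshold. The 2-D embeddings and the mean-plus-three-sigma threshold rule are assumptions for illustration, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings: 3 known forgery algorithms, 20 samples each.
prototypes = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
train = np.concatenate([p + 0.3 * rng.standard_normal((20, 2)) for p in prototypes])
labels = np.repeat(np.arange(3), 20)

# Adaptive threshold per class: mean + 3*std of training distances to the
# class prototype (an assumed rule; the paper's exact rule may differ).
def class_threshold(k):
    d = np.linalg.norm(train[labels == k] - prototypes[k], axis=1)
    return d.mean() + 3 * d.std()

thresholds = np.array([class_threshold(k) for k in range(3)])

def classify(x):
    d = np.linalg.norm(prototypes - x, axis=1)
    k = int(np.argmin(d))
    return k if d[k] <= thresholds[k] else -1  # -1 = unknown algorithm

print(classify(np.array([0.1, -0.2])))   # near prototype 0 -> 0
print(classify(np.array([10.0, 10.0])))  # far from all prototypes -> -1
```

Samples far from every prototype fall outside all thresholds and are flagged as a previously unseen forgery algorithm, which is what enables later incremental training.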
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317103 (2024) https://doi.org/10.1117/12.3031910
In this paper we focus on the online 3D bin packing problem, a classic strongly NP-hard problem with many applications in industrial automation: each item is unknown until it arrives and must be packed immediately. We propose a greedy algorithm based on multi-indicator fusion: a series of evaluation indicators is defined during packing, the fusion weights of these indicators are determined using support vector regression (SVR) and quasi-Newton methods, and the placement with the highest fused score is selected. Experimental results show that this method solves the online 3D bin packing problem, is competitive with other algorithms in space utilization and number of bins used, and runs fast enough to meet online bin packing requirements.
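The fused-score selection step can be sketched as follows. The indicator names and fixed weights here are illustrative assumptions; in the paper the weights are learned with SVR and quasi-Newton optimization:

```python
# Toy placement candidates for one incoming item, each described by three
# illustrative indicators (names are assumptions, not the paper's set):
# support ratio, resulting fill rate, and height increase (lower is better).
candidates = {
    "floor-left": {"support": 1.0, "fill": 0.42, "height_inc": 0.0},
    "on-box-A":   {"support": 0.7, "fill": 0.46, "height_inc": 0.2},
    "on-box-B":   {"support": 0.9, "fill": 0.44, "height_inc": 0.1},
}

# Fixed fusion weights for illustration; negative weight penalizes height.
weights = {"support": 0.5, "fill": 0.4, "height_inc": -0.3}

def score(indicators):
    return sum(weights[k] * indicators[k] for k in weights)

# Greedy rule: place the item at the highest-scoring candidate.
best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # floor-left
```

The greedy loop repeats this scoring for every arriving item, which is what keeps the method fast enough for the online setting.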
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317104 (2024) https://doi.org/10.1117/12.3032126
By simulating and analyzing the spatial spillover effect between regions, the spatial correlation of carbon emission efficiency can be studied. Understanding how carbon emission behaviors interact and propagate across regions helps develop more comprehensive and accurate carbon emission management strategies. This study therefore proposes a new evolutionary simulation method for the spatial spillover effect of carbon emission efficiency. An improved PSO-PFCM clustering algorithm is used to detect spillovers of carbon emission efficiency. The main characteristics of the multidimensional spatial spillover effect are selected, a multidimensional spatial feature mapping model is constructed, and the level of the spillover effect is judged to complete the analysis of its evolution. Experimental results show that the proposed method detects anomalies in carbon emission spillover data in less time and traces the evolution of the spatial spillover effect with higher accuracy.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317105 (2024) https://doi.org/10.1117/12.3031924
Rail transit systems are an important part of public transportation in large cities. However, unforeseen emergencies such as floods, equipment failures, or large events can cause serious consequences such as congestion and stranded passengers, disrupting normal operation. To cope with such emergencies, this paper proposes a new algorithm that queries the latest reachable time under time constraints. A rail network model is developed, and Dijkstra's algorithm is optimized for emergency situations using a new data structure. The study analyzes the algorithm's time complexity and spatio-temporal accessibility. Finally, the model and algorithm are validated with data from the Beijing Metro. The proposed shortest-path emergency planning strategy and algorithm are aimed mainly at the rail transit command-center level and solve practical problems.
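The core query, a Dijkstra variant over a timetable in which an edge is usable only if its departure is no earlier than the current arrival time, might look like the sketch below. The network and times are toy values, not Beijing Metro data:

```python
import heapq

# Toy timetable: station -> list of (next_station, depart_time, arrive_time).
timetable = {
    "A": [("B", 0, 4), ("C", 1, 7)],
    "B": [("C", 5, 8), ("D", 6, 12)],
    "C": [("D", 9, 11)],
    "D": [],
}

def earliest_arrival(src, dst, start):
    """Dijkstra variant: labels are arrival times, and an edge is usable
    only if its departure time is no earlier than our arrival at its tail."""
    best = {src: start}
    pq = [(start, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, dep, arr in timetable[u]:
            if dep >= t and arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return None  # unreachable under the time constraints

print(earliest_arrival("A", "D", 0))  # 11, via C rather than B
```

Here the direct-looking route through B arrives at 12, while waiting for the later C connection arrives at 11, which is exactly the kind of time-constrained trade-off the query must resolve.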
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317106 (2024) https://doi.org/10.1117/12.3031959
MPI (Message Passing Interface) plays a crucial role in parallel computing. The Allreduce algorithms in the OpenMPI communication library handle communication poorly when the number of processes is not a power of two: the two existing algorithms exclude some processes to reach a power-of-two count, but the exclusion criteria are too simplistic, producing an imbalanced distribution of participating processes across nodes and greatly hurting communication efficiency. To address this problem, the layout of processes on nodes is taken into account and the range of excluded processes is redefined. Both algorithms receive generic load-balancing optimizations and adaptations for domestic architectures, improving load balance. Experimental results show that, at a communication scale of 16 nodes, the recursive_doubling algorithm achieves performance improvements of up to 30% and the reduce_scatter_allgather algorithm up to 21%.
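For context, the standard power-of-two setup step can be sketched as below. The exclusion choice shown (even ranks among the first 2r positions, as in MPICH-style implementations; exact OpenMPI details may differ) is precisely the layout-oblivious behavior the paper's optimization replaces:

```python
# For p processes, recursive-doubling allreduce first removes
# r = p - 2^floor(log2(p)) processes so that a power of two remains.
# The vanilla, layout-oblivious choice (MPICH-style; OpenMPI specifics
# may differ) drops the even ranks among the first 2r, each handing its
# data to the odd neighbour -- ignoring which node each rank lives on,
# which is the node imbalance the paper addresses.
def power_of_two_setup(p):
    pof2 = 1
    while pof2 * 2 <= p:
        pof2 *= 2
    r = p - pof2
    excluded = list(range(0, 2 * r, 2))   # even ranks < 2r
    return pof2, excluded

print(power_of_two_setup(6))    # (4, [0, 2])
print(power_of_two_setup(16))   # (16, [])
```

With 16 nodes and a power-of-two process count no ranks are excluded, which is why the non-power-of-two case is where the redefinition of the excluded range pays off.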
Cong Tian, Hongyu Chu, Taiqi He, Yanhua Shao, Haode Shi
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317107 (2024) https://doi.org/10.1117/12.3032063
UAV platforms have limited computing resources, and tracking algorithms need a better speed-accuracy trade-off. We therefore propose SiamBAN-T, a lightweight siamese-network target tracking algorithm based on SiamBAN. First, to reduce the number of network parameters, MobileNetV3 is used as the backbone to extract siamese features. Second, we introduce coordinate attention (CA) into the feature fusion module to enhance perception of target spatial-position information. Third, multi-branch cross-correlation is incorporated into the network head to strengthen boundary and scale information, improving the tracker's anti-interference capability. Finally, a feature enhancement module is designed to improve classification and regression. Experimental results on the UAV123 dataset show that, compared with the original algorithm, our improved algorithm increases success rate by 0.8% and accuracy by 0.8%, while running 7.6 times faster on PC devices and 18.5 times faster on airborne mobile terminals. These findings indicate that SiamBAN-T significantly enhances tracking speed while maintaining high precision.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317108 (2024) https://doi.org/10.1117/12.3032027
To improve the global path planning capability of mobile robots and achieve real-time obstacle avoidance, a robot path planning algorithm that improves the traditional A* algorithm is proposed. The A* algorithm is used for global path planning, with weight coefficients incorporated into the heuristic function to boost search efficiency. Path smoothing is performed with an improved Floyd algorithm to reduce inflection points and increase smoothness. For local path planning, the artificial potential field method is adopted to address the A* algorithm's lack of real-time obstacle avoidance; local corrections are applied to mitigate the potential field method's local-minimum issues, and the turning angle is fine-tuned to navigate around obstacles. Simulation results validate that the improved A* algorithm constructs reasonable paths with a better search mechanism and greater flexibility, and that the improved artificial potential field algorithm achieves real-time obstacle avoidance while escaping local optima.
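A minimal weighted-A* sketch of the global stage (f = g + w·h with a Manhattan heuristic; the grid and weight are illustrative, and the Floyd smoothing and potential-field stages are omitted):

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """A* on a 4-connected grid with a weight w on the heuristic
    (f = g + w*h).  w > 1 trades optimality for a faster, more goal-
    directed search, as with the weighted heuristic described above."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    pq = [(w * h(start), 0, start)]
    g = {start: 0}
    while pq:
        _, cost, cur = heapq.heappop(pq)
        if cur == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                heapq.heappush(pq, (cost + 1 + w * h(nxt), cost + 1, nxt))
    return None

grid = [[0, 0, 0],   # 0 = free, 1 = obstacle
        [1, 1, 0],
        [2 - 2, 0, 0]][:3]
grid[2] = [0, 0, 0]
print(weighted_astar(grid, (0, 0), (2, 0)))  # 6: only path is around the wall
```

With w = 1 this is plain A* and the result is optimal; larger w expands fewer nodes at the cost of possibly longer paths, which is the efficiency trade-off the weight coefficient controls.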
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317109 (2024) https://doi.org/10.1117/12.3031948
With the continuous national promotion of energy conservation and emission reduction, the deepening application of information technology has gradually triggered profound changes in how countries, cities, and industries develop. Starting from the status quo of national building energy consumption and related energy-saving and green development plans, this paper reviews the state of research on building energy management and examines three machine learning models that have become popular in recent years: a random forest regression model, an XGBoost model, and a stacking multi-algorithm fusion model. Using 321 days of raw measured energy consumption data for an office building recorded in the CITIC Design Digital Intelligent Building System, each model is trained to predict building operation and maintenance energy consumption. The prediction performance of the three algorithms is compared and analyzed, the stacking multi-algorithm fusion model is recommended for predicting operation and maintenance energy consumption, and an early-warning operation and maintenance mode for energy consumption control based on the prediction model is proposed.
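The stacking idea can be sketched with stand-in base learners: train several base models, then fit a meta-learner on their predictions. The data here is synthetic (not the CITIC dataset), the base learners are deliberately trivial stand-ins for Random Forest/XGBoost, and a rigorous implementation would use out-of-fold base predictions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily data: consumption driven by outdoor temperature + noise.
temp = rng.uniform(10, 35, 321)
energy = 50 + 2.5 * temp + rng.normal(0, 5, 321)
X_train, y_train = temp[:260], energy[:260]
X_test, y_test = temp[260:], energy[260:]

# Two toy base learners standing in for Random Forest / XGBoost.
def fit_mean(X, y):
    m = y.mean()
    return lambda X: np.full(len(X), m)

def fit_linear(X, y):
    a, b = np.polyfit(X, y, 1)
    return lambda X: a * X + b

bases = [fit_mean(X_train, y_train), fit_linear(X_train, y_train)]

# Stacking: a linear meta-learner over the base predictions.
Z = np.column_stack([m(X_train) for m in bases] + [np.ones(len(X_train))])
coef, *_ = np.linalg.lstsq(Z, y_train, rcond=None)

def stack_predict(X):
    Zt = np.column_stack([m(X) for m in bases] + [np.ones(len(X))])
    return Zt @ coef

rmse = np.sqrt(np.mean((stack_predict(X_test) - y_test) ** 2))
print(round(float(rmse), 1))
```

The meta-learner learns to lean on whichever base model generalizes best, which is the mechanism behind the fusion model's recommended accuracy advantage.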
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710A (2024) https://doi.org/10.1117/12.3032089
Low-earth-orbit satellite communication offers global coverage and low latency, and satellite networks are developing toward mega-constellations, low orbital altitudes, inclined orbits, and inter-satellite link networking, which poses new challenges for routing algorithm design. Targeting the characteristics of mega-constellation networks (MCNs), this paper proposes a local flooding-based survivable routing algorithm (LFSA) that obtains local link-state information by limiting the flooding range, reducing the network's flooding and computation overheads. Based on the topological characteristics of MCNs, a next-hop selection mechanism based on minimum Manhattan distance is proposed. Simulation results show that LFSA improves path-finding capability and reduces end-to-end delay under random link failures and regional satellite failures.
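The minimum-Manhattan-distance next-hop rule can be sketched on a (plane, slot) torus, since a Walker-style constellation can be indexed as a grid with wrap-around links. The grid size and failure set below are illustrative assumptions:

```python
# Satellites indexed as (plane, slot) on a torus with wrap-around links.
P, S = 12, 20  # orbital planes, satellites per plane (illustrative)

def torus_dist(a, b, n):
    d = abs(a - b) % n
    return min(d, n - d)          # shortest way around the ring

def manhattan(u, v):
    return torus_dist(u[0], v[0], P) + torus_dist(u[1], v[1], S)

def next_hop(cur, dst, failed=frozenset()):
    """Pick the live neighbour closest (Manhattan) to the destination."""
    p, s = cur
    neighbours = [((p + 1) % P, s), ((p - 1) % P, s),
                  (p, (s + 1) % S), (p, (s - 1) % S)]
    live = [n for n in neighbours if n not in failed]
    return min(live, key=lambda n: manhattan(n, dst)) if live else None

print(next_hop((0, 0), (3, 5)))                   # (1, 0): toward plane 3
print(next_hop((0, 0), (3, 5), failed={(1, 0)}))  # (0, 1): around failure
```

Because the rule only needs neighbour state, it combines naturally with the locally flooded link-state information: a failed neighbour is simply dropped from the candidate list.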
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710B (2024) https://doi.org/10.1117/12.3032004
To address morning and evening peak congestion caused by commuting, an HOV lane route layout method based on a heuristic algorithm was studied. The travel characteristics of commuters are summarized; travel time, travel comfort, and travel cost are taken as optimization objectives; and a heuristic algorithm plans the HOV lane layout. Choosing appropriate transfer stations effectively improves traffic efficiency. Experimental results verify that the HOV lane layout effectively improves the traffic efficiency of road sections, reduces commuting costs, and improves commuter satisfaction: the change in commuting mode reduced the average commuting cost from 69.838 yuan per trip to 61.381 yuan per trip, a 12.1% reduction, demonstrating that the proposed transportation method effectively saves commuting costs.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710C (2024) https://doi.org/10.1117/12.3031951
With advances in satellite communication and space launch technology, low earth orbit (LEO) satellites have become the best choice for overcoming geographical limitations and achieving global communication. Efficient performance in a large-scale LEO constellation network requires a routing algorithm adapted to the satellite network. An on-demand routing algorithm is proposed to address the high network overhead and routing difficulty of large-scale LEO constellations. A "maximum routing restriction area" is defined based on the satellite network structure to reduce routing overhead and improve network performance. Simulation results show that, in large-scale constellation networks, this algorithm achieves better network performance than other on-demand routing algorithms.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710D (2024) https://doi.org/10.1117/12.3032105
In cloud computing, long-sequence workload prediction plays a pivotal role in optimizing resource allocation and enhancing system performance. However, current research on long-sequence workload forecasting faces a series of challenges, mainly the high randomness and instability of long workload sequences, which make it difficult for traditional machine learning methods to provide accurate results. We therefore designed a novel approach for long-sequence forecasting that thoroughly considers the latent characteristics of cloud workload sequences. Initially, we employ convolution kernels of varying sizes to perform multiscale sequence decomposition, better capturing contextual information and periodic features in long sequences. Furthermore, through the fast Fourier transform, we convert one-dimensional sequences into a two-dimensional space and leverage dilated convolutions to extract effective features from intra-period and inter-period variations. Ultimately, we introduce an attention mechanism that effectively integrates the intra-period and inter-period variation features into the proposed model. Our method has been comprehensively evaluated on publicly available datasets from Google, Alibaba, and Microsoft. Experimental results demonstrate superior accuracy and robustness across various workload types, showcasing excellent adaptability to dynamic and complex workload scenarios.
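The FFT-based step, finding a dominant period and folding the 1-D series into a 2-D array so that intra-period variation runs along one axis and inter-period variation along the other, can be sketched as follows on a synthetic workload (the paper's model then applies dilated convolutions and attention on top):

```python
import numpy as np

# Synthetic workload with a dominant period of 24 steps (assumed data).
t = np.arange(240)
workload = (10 + 3 * np.sin(2 * np.pi * t / 24)
            + 0.2 * np.random.default_rng(2).standard_normal(240))

# 1) Find the dominant period from the amplitude spectrum (skip DC).
spec = np.abs(np.fft.rfft(workload))
freq = int(np.argmax(spec[1:]) + 1)      # index of the strongest frequency
period = len(workload) // freq

# 2) Fold the series into (n_periods, period): rows are whole periods,
#    so columns align the same phase across periods.  2-D convolutions
#    can then see intra-period and inter-period variation jointly.
n = (len(workload) // period) * period
grid = workload[:n].reshape(-1, period)

print(period, grid.shape)  # 24 (10, 24)
```

In practice several top frequencies are kept, one folded view per period, and the attention mechanism weights their contributions.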
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710E (2024) https://doi.org/10.1117/12.3032086
To estimate the direction of arrival (DOA) of coherent sound sources with a microphone uniform circular array (UCA) in an indoor reverberation environment, an improved MUSIC algorithm for the UCA is proposed. Preprocessing uses a mode-space transformation to convert the array into several virtual uniform linear arrays. A maximum-eigenvector matrix is constructed by decomposing the covariance matrix of the snapshot data; this uses the information of all sound sources and restores the covariance matrix to diagonal form, greatly reducing the error introduced by preprocessing. The spatial spectral function is then obtained and searched to find the N azimuth and pitch angles corresponding to the N signals. MATLAB is used to simulate the indoor-reverberation UCA model and the improved MUSIC algorithm. Simulation results show that the algorithm estimates indoor sound sources accurately under ideal conditions, offers higher resolution, and remains effective at lower signal-to-noise ratios.
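A baseline narrowband MUSIC sketch at the virtual-linear-array stage is shown below (synthetic uncorrelated sources; the paper's mode-space preprocessing and decorrelation steps for coherent sources are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# 8-element uniform linear array, half-wavelength spacing, 2 sources.
M, d, snapshots = 8, 0.5, 400
angles_true = np.deg2rad([-20.0, 30.0])

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(angles_true)                     # M x 2 steering matrix
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + 0.1 * noise                       # array snapshots

R = X @ X.conj().T / snapshots                # sample covariance
eigval, eigvec = np.linalg.eigh(R)            # eigenvalues ascending
En = eigvec[:, :M - 2]                        # noise subspace (M - #sources)

scan = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
spectrum = 1.0 / np.linalg.norm(En.conj().T @ steering(scan), axis=0) ** 2

# Pick the two largest local maxima of the pseudo-spectrum.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
idx = np.where(is_peak)[0] + 1
top2 = idx[np.argsort(spectrum[idx])[-2:]]
est = sorted(np.rad2deg(scan[top2]))
print([round(a, 1) for a in est])  # [-20.0, 30.0]
```

For coherent sources (as under reverberation) this plain version fails because the source covariance becomes rank-deficient, which is exactly what the paper's preprocessing is designed to repair.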
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710F (2024) https://doi.org/10.1117/12.3031904
In response to challenges such as the large number of parameters and high computational demands of vehicle appearance damage detection models, which hinder deployment on mobile devices, this paper studies lightweight, high-precision optimization of the YOLOv5s target detection algorithm. Specifically, we introduce a lightweight network into the YOLOv5s architecture to create a more efficient network, integrate an attention mechanism to enhance feature extraction, and employ knowledge distillation to improve accuracy. Experimental results show that the optimized YOLOv5 algorithm achieves significant improvements in both speed and accuracy on the car damage dataset.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710G (2024) https://doi.org/10.1117/12.3032048
Carbon dioxide emissions are an important cause of dramatic climate change, so controlling the carbon emission rights of relevant entities is a necessary prerequisite for mitigating it. However, existing carbon trading and auditing frameworks do not yet control the relevant entities while preserving privacy, and adversaries may infer private information about emitters from carbon emission data. In response, this paper proposes a novel carbon data collection, trading, and auditing method based on blockchain and the Internet of Things. The method achieves trusted storage of carbon data through blockchain technology, while carbon data is exchanged under homomorphic encryption (HE) to protect user privacy. Security analysis shows that our method prevents user privacy leakage, and experimental analysis shows that it maintains good availability alongside high security.
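The HE-based interaction can be illustrated with a toy additively homomorphic (Paillier-style) scheme: an auditor sums encrypted per-site readings without decrypting any individual one. The tiny primes make this insecure and purely illustrative, and the paper does not specify which HE scheme it uses:

```python
from math import gcd

# Toy Paillier cryptosystem (tiny primes, NOT secure).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                             # valid because g = n + 1

def encrypt(m, r):
    # r must be coprime to n; fixed here for reproducibility.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu modulo n.
    return ((pow(c, lam, n2) - 1) // n * mu) % n

readings = [120, 75, 310]                        # toy per-site CO2 readings
cts = [encrypt(m, r) for m, r in zip(readings, [17, 23, 31])]

total_ct = 1
for c in cts:
    total_ct = (total_ct * c) % n2               # ciphertext product = plaintext sum
print(decrypt(total_ct))  # 505 = 120 + 75 + 310
```

Multiplying Paillier ciphertexts adds the underlying plaintexts, so the auditor learns only the aggregate, matching the privacy goal of hiding individual emission data.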
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710H (2024) https://doi.org/10.1117/12.3031987
At present, applications of deep learning to modulation type identification mostly focus on identifying single digital modulation types and rarely address mixed digital and analog modulation types. Existing identification networks also use a single signal feature, and analog signals lack common identification features such as cyclic spectra and constellation diagrams, so existing feature-construction methods are unsuitable for mixed digital-analog signal sets. To solve these problems, a TCSE-ResNet50 mixed-signal recognition algorithm is proposed that fuses the signal spectrum with its fourth-power spectrum to form a feature map with wider applicability. The attention module in the proposed TCSE-ResNet50 network makes the model focus on discrete spectral lines and reduces, as far as possible, the interference of background regions and random noise on signal recognition. At the same time, cross-entropy and triplet loss functions are combined: cross-entropy widens the feature distance between different signal classes with similar frequency-domain expressions, while the triplet loss narrows the feature distance between same-class signals spread apart by random baseband symbols or random additive noise, completing identification of the {FM, AM, 2ASK, BPSK, 2FSK, 16QAM, 16APSK} digital-analog mixed signal set. At a signal-to-noise ratio of -2 dB, the algorithm's average recognition rate exceeds 93%, outperforming single-feature input and traditional convolutional network recognition models.
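The fourth-power spectrum feature can be illustrated as follows: a BPSK signal shows only a broad lobe around the carrier in the ordinary spectrum, but raising it to the fourth power collapses the ±1 symbols and produces clean discrete lines at carrier harmonics (toy parameters, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 8000, 4000
t = np.arange(n) / fs

# BPSK at an 800 Hz carrier, 500 baud (16 samples per symbol).
symbols = rng.choice([-1.0, 1.0], size=n // 16).repeat(16)
fc = 800.0
s = symbols * np.cos(2 * np.pi * fc * t)

def strongest_line(x):
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                          # drop DC
    return np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

# Raw spectrum: broad modulated lobe near fc, no stable discrete line.
print(strongest_line(s))
# Fourth power: symbols^4 = 1, leaving cos^4 with a line at 2*fc.
print(strongest_line(s ** 4))  # 1600.0
```

Such discrete spectral lines are exactly what the attention module is meant to lock onto while suppressing noisy background regions of the feature map.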
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710I (2024) https://doi.org/10.1117/12.3032033
Aiming at punctuality, stopping accuracy, energy saving, and comfort in the automatic driving of urban rail trains, this paper proposes an algorithm for generating a planned speed profile based on an improved genetic algorithm. The improved genetic algorithm achieves multi-objective optimization of punctuality, accurate stopping, energy saving, and comfort while improving the optimization efficiency of traditional genetic algorithms. Simulation results show that the proposed algorithm satisfies the basic constraints of safe, punctual, and accurate train stopping, while reducing operating energy consumption and improving ride comfort.
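A minimal genetic-algorithm sketch with a weighted two-objective fitness, a stand-in for the paper's four objectives (all numbers, the single-gene encoding, and the energy proxy are illustrative assumptions):

```python
import random

random.seed(5)

# Toy setting: choose a cruise speed (m/s) for a 2 km inter-station run
# with a 120 s scheduled time; fitness trades punctuality against energy.
distance, target_time = 2000.0, 120.0

def fitness(v):
    run_time = distance / v
    punctuality = abs(run_time - target_time)    # seconds off schedule
    energy = 0.01 * v ** 2                       # toy energy proxy
    return 1.0 * punctuality + 1.0 * energy      # lower is better

pop = [random.uniform(5, 30) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]                           # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2                      # arithmetic crossover
        child += random.gauss(0, 0.5)            # mutation
        children.append(min(max(child, 5), 30))  # keep within speed limits
    pop = parents + children

best = min(pop, key=fitness)
print(round(best, 1))
```

With these weights the optimum is the exactly on-time speed 2000/120 ≈ 16.7 m/s; a real profile would encode a full accelerate-cruise-coast-brake curve per section rather than a single gene.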
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710J (2024) https://doi.org/10.1117/12.3032040
Point clouds are a critically important geometric data structure, and since PointNet's pioneering work researchers have increasingly focused on point cloud processing and achieved promising results. However, most previous methods represent the shape of a point cloud only through coordinates or normal vectors, neglecting the intrinsic geometric and topological properties of the data. In this paper, we present an effective point cloud analysis approach that uses topological information (TPA). Using a simplified version of PointNet++ (the SSG version), we conduct benchmark experiments on the ModelNet40 dataset to evaluate TPA's performance on the classification task. Our improved method can still process point clouds directly, since the topological invariants preserve permutation invariance over the input points. Results show that the topological approach based on persistent homology effectively provides topological structural features and improves model accuracy.
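The simplest persistent-homology feature, 0-dimensional persistence, can be computed with Kruskal's algorithm plus union-find: in the Vietoris-Rips filtration, each connected-component "death" scale is a minimum-spanning-tree edge length. The toy points below are an assumption, and the paper presumably also uses higher-dimensional invariants:

```python
import math
from itertools import combinations

# Two well-separated clusters -> four short-lived merges plus one
# long-lived bar whose death equals the inter-cluster gap.
points = [(0, 0), (0.2, 0.1), (0.1, 0.3),      # cluster A
          (5, 5), (5.1, 4.8), (4.9, 5.2)]      # cluster B

parent = list(range(len(points)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]          # path compression
        i = parent[i]
    return i

edges = sorted((math.dist(points[i], points[j]), i, j)
               for i, j in combinations(range(len(points)), 2))
deaths = []                                    # component merge scales
for w, i, j in edges:                          # Kruskal over all pairs
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        deaths.append(w)

print([round(d, 2) for d in deaths])  # last value = inter-cluster gap (~6.7)
```

Because the merge scales depend only on pairwise distances, the resulting features are invariant to any permutation of the input points, which is the property that lets them feed a PointNet-style network directly.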
Xiaodong Wang, Xian Shi, Hu Duan, Laishan Zhou, Hexiang Wu
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710K (2024) https://doi.org/10.1117/12.3032111
Against the background of preventing accidents caused by CO in daily life, this article introduces a method of CO concentration detection with parameter modulation and system denoising. The system adopts a hybrid detection technique combining tunable diode laser absorption spectroscopy (TDLAS) and wavelength modulation spectroscopy (WMS), achieving effective suppression of background gases in the environment. The system is combined with a BP neural network to invert the CO concentration in the gas. Finally, system noise is processed through a wavelet transform to complete and optimize the smoke detection algorithm. This has high practical value and guiding significance for combining a TDLAS CO detection simulation system with a WOA-BP neural network hardware system.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710L (2024) https://doi.org/10.1117/12.3031897
Image denoising algorithms based on deep learning generally use a convolutional sparse autoencoder network as the main framework of the denoising network. However, although a convolutional sparse autoencoder can effectively suppress the noise in an image, it tends to lose certain image details after denoising. To address this defect, on the basis of the convolutional sparse autoencoder network, a self-attention mechanism extracts the detail information of each layer's feature map from the output of each encoder layer, and this detail information is injected into the input of the corresponding decoder layer through residual connections. Experimental results show that, compared with the traditional convolutional autoencoder denoising network, the proposed self-attention-residual convolutional autoencoder effectively improves the denoising level of the network. At the same time, compared with mainstream denoising networks, the proposed algorithm also achieves a better denoising effect.
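The self-attention-plus-residual idea described above can be sketched in NumPy: scaled dot-product attention over the flattened feature map produces a detail refinement, which a skip connection adds back for the matching decoder layer. This is an illustrative sketch of the mechanism, not the paper's trained network (`attention_residual` is a hypothetical name):

```python
import numpy as np

def self_attention(feat):
    """Scaled dot-product self-attention over a flattened feature map.
    feat: (n, d) array of n spatial positions with d channels."""
    n, d = feat.shape
    scores = feat @ feat.T / np.sqrt(d)            # (n, n) similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax rows
    return attn @ feat                             # weighted aggregation

def attention_residual(encoder_out):
    """Encoder output plus its self-attention refinement, fused by a
    residual (skip) connection for the corresponding decoder layer."""
    return encoder_out + self_attention(encoder_out)
```

Each output row of the attention is a convex combination of input rows, so the refinement stays within the range of the original features; the residual connection guarantees the original detail is never discarded.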
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710M (2024) https://doi.org/10.1117/12.3032167
The relationships between multi-source heterogeneous data and elements in the field of artificial intelligence security, including attack information, data information, and other security data, are integrated and analyzed in this paper. Targeting the complex associated entity concepts that exist in the construction of an artificial intelligence security knowledge graph, the ontology structure is divided into a theory layer, a problem layer, and a measure layer, making the artificial intelligence security ontology more diverse and extensible. The addition of the measure layer provides more accurate security decision-making reasoning for the subsequent knowledge inference stage.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710N (2024) https://doi.org/10.1117/12.3031937
Aiming at the problems of the traditional A* algorithm, namely many inflection points, long paths, and low efficiency, this paper proposes a multi-domain A* algorithm. First, it improves the obstacle search method and the node-passability discrimination method, and optimizes the algorithm's path generation process. Then, the effect of different neighborhood search matrices on the algorithm's path nodes and path length is studied, and the optimal neighborhood search matrix is selected. Because the improved A* algorithm has a larger neighborhood and more choices than the traditional A* algorithm, the path length is reduced by 7.2%, the number of path nodes by 47.4%, and the search time by 82.6%; in addition, the path of the improved A* algorithm is smoother.
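The role of the neighborhood search matrix can be seen in a minimal grid A*: passing a larger offset set lets the search take diagonal (or longer) steps, cutting path nodes and inflection points. This is a generic sketch under the usual 0/1 occupancy-grid assumptions, not the paper's improved obstacle-search or passability logic:

```python
import heapq
import math

def astar(grid, start, goal, neighborhood):
    """A* on a 0/1 occupancy grid (0 = free). `neighborhood` is the list
    of step offsets (the search matrix); larger sets give straighter paths.
    Note: diagonal corner-cutting past obstacles is not checked here."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.dist(p, goal)               # admissible heuristic
    open_heap = [(h(start), 0.0, start, None)]
    came, cost = {}, {start: 0.0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came:
            continue                               # already expanded
        came[node] = parent
        if node == goal:
            path = []
            while node:
                path.append(node)
                node = came[node]
            return path[::-1]
        for dr, dc in neighborhood:
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]]:
                continue                           # blocked cell
            ng = g + math.hypot(dr, dc)            # Euclidean step cost
            if ng < cost.get(nxt, float("inf")):
                cost[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, node))
    return None

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # 4-neighborhood matrix
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # 8-neighborhood matrix
```

On an empty 5x5 grid, the 4-neighborhood diagonal route needs 9 nodes while the 8-neighborhood needs only 5, mirroring the node-count reduction the abstract reports for richer neighborhoods.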
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710O (2024) https://doi.org/10.1117/12.3031908
A set S⊆V(G) of a graph G is a [j, k]-set if every vertex v∈V(G)∖S satisfies j≤∣N(v)∩S∣≤k, where j and k are nonnegative integers. In this paper, we focus on the case j=1, k=2, that is, the [1,2]-set of a graph G. We mainly study γ(T)-sets and γ[1,2](T)-sets in trees, and analyse a particular tree called a spider. We then discuss two different algorithms for calculating the [1,2]-domination number γ[1,2](T) of a tree, and finally compare and analyze the calculation results.
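The [1,2]-set condition above is easy to state in code, and for small trees γ[1,2](T) can be found by exhaustive search. A brute-force sketch (a baseline for checking results, not either of the paper's algorithms):

```python
from itertools import combinations

def is_12_set(adj, S):
    """Check the [1,2]-set condition: every vertex outside S has at
    least 1 and at most 2 neighbours in S. adj: {v: set(neighbours)}."""
    return all(1 <= len(adj[v] & S) <= 2 for v in adj if v not in S)

def gamma_12(adj):
    """[1,2]-domination number by exhaustive search (small graphs only)."""
    verts = list(adj)
    for size in range(1, len(verts) + 1):
        for S in combinations(verts, size):
            if is_12_set(adj, set(S)):
                return size          # smallest size found first
    return len(verts)
```

For a star (a spider with legs of length 1), the centre alone is a [1,2]-set, so γ[1,2] = 1; for the path P4 two vertices are needed.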
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710P (2024) https://doi.org/10.1117/12.3031930
By utilizing geometric and astronomical knowledge, a model relating the length of a solar shadow to geographical location and object height is established, and the variation of shadow length with the various parameters is analyzed. The model incorporates geographical latitude, longitude, day of the year, and time of day to calculate the solar altitude angle and, in conjunction with the object height, establishes a model for the projected shadow length. Finally, using the data provided in the appendix, the curve of solar shadow length variation at a given time is obtained.
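The chain of calculations described above can be sketched directly: solar declination from the day of the year (Cooper's approximation), hour angle from longitude-corrected local solar time, the standard altitude-angle formula, and finally shadow length = height / tan(altitude). The time-zone meridian of 120° E is an assumption for Beijing time; the paper's exact conventions may differ:

```python
import math

def shadow_length(height_m, latitude_deg, longitude_deg, day_of_year,
                  clock_time_h, tz_meridian_deg=120.0):
    """Shadow length of a vertical object of given height."""
    # Solar declination (Cooper's approximation), in radians.
    decl = math.radians(23.45) * math.sin(
        2 * math.pi * (284 + day_of_year) / 365)
    # Local solar time: 4 minutes per degree of longitude offset.
    solar_time = clock_time_h + (longitude_deg - tz_meridian_deg) * 4 / 60
    hour_angle = math.radians(15 * (solar_time - 12))
    lat = math.radians(latitude_deg)
    # sin(altitude) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour_angle)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    alt = math.asin(sin_alt)
    if alt <= 0:
        return float("inf")  # sun below horizon: no shadow
    return height_m / math.tan(alt)
```

At the equinox (declination near zero) and solar noon, a 2 m pole at latitude 45° casts a 2 m shadow, since the altitude angle is 45°.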
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710Q (2024) https://doi.org/10.1117/12.3031969
Centering on the main line of the intelligent development of civil aviation, developing applications and products of airport digital twin technology is a key link in promoting the integrated development of the digital economy and the real economy. In airport flight scheduling, under normal circumstances the planned flight times (arrival time, departure time) are fixed, without the need for AOC adjustment. However, when special circumstances occur, the AOC selects some flights in the next day's schedule and, according to demand, moves their arrival or departure times to other slots. Manual retiming is time-consuming and requires considerable experience, so an automatic retiming algorithm that meets practical needs is urgently required to improve the efficiency of the AOC.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710R (2024) https://doi.org/10.1117/12.3031972
The emergence of the industrial Internet has brought exponential growth in data, and agents may collide during operation. This paper proposes and tests a path planning algorithm based on a three-dimensional time-window structure, which can resolve both avoidable and unavoidable conflicts, effectively identify and resolve agent conflicts in time, reduce model complexity, and improve computational efficiency. We hope this work aids the rational planning and efficiency improvement of production processes.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710S (2024) https://doi.org/10.1117/12.3032067
With the advancement of mobile cloud computing technology, the demand for collaborative operation and maintenance technology is constantly increasing. This article therefore proposes a model that combines a multi-feature collaborative knowledge graph with blockchain technology to achieve secure and trustworthy collaborative operation and maintenance computing in cross-domain environments. The model focuses on addressing data privacy and security issues and on improving the accuracy of collaborative operations. By introducing a multi-feature collaborative knowledge graph, secure fusion of multi-source feature data can be achieved. Meanwhile, a blockchain-based trust verification mechanism is designed to ensure the traceability of anonymous data sources, prevent data tampering, and guarantee data authenticity. In addition, an adaptive recommendation algorithm based on MKGCN is proposed, which utilizes the multi-feature collaborative knowledge graph data to achieve secure and accurate collaborative computing. The experimental results show that this method improves the accuracy of recommendation calculation while ensuring privacy and security, promoting the development and practical application of cross-domain operation and maintenance computing technology.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710T (2024) https://doi.org/10.1117/12.3031966
This paper proposes a hybrid of particle swarm optimization and the genetic algorithm, named PSO-GA, which combines the population diversity and stochastic global search of the genetic algorithm with the memory and fast convergence of particle swarm optimization. The hybrid algorithm is then used to build an automatic replenishment model, framed as a 0-1 knapsack problem, to support replenishment decisions. Using the sales data of a supermarket, we verify the feasibility and accuracy of the model, and the proposed algorithm solves this practical problem well.
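A PSO-GA hybrid on the 0-1 knapsack formulation can be sketched as follows: a global best guides every particle (the PSO "memory" and social attraction), while GA-style uniform crossover and bit-flip mutation maintain diversity. This is a generic illustration under simplified assumptions, not the paper's exact operator design or parameters:

```python
import random

def pso_ga_knapsack(values, weights, capacity, n_particles=30,
                    iters=200, seed=1):
    """Hybrid binary PSO-GA sketch for the 0-1 knapsack problem.
    Infeasible solutions (over capacity) score zero fitness."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    best = max(pop, key=fitness)
    for _ in range(iters):
        new_pop = []
        for x in pop:
            # Uniform crossover with the global best: PSO social attraction.
            child = [xi if rng.random() < 0.5 else bi
                     for xi, bi in zip(x, best)]
            # Bit-flip mutation: GA-style stochastic exploration.
            child = [1 - c if rng.random() < 1.0 / n else c for c in child]
            new_pop.append(child)
        pop = new_pop
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand          # global best only ever improves
    return best, fitness(best)
```

On the classic toy instance (values 60/100/120, weights 10/20/30, capacity 50) the hybrid finds the optimum of 220 by selecting the second and third items.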
Electronic Instrumentation Research and Target Detection
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710U (2024) https://doi.org/10.1117/12.3032061
Pediatric diseases are challenging to diagnose due to their complex and diverse presentations. To assist doctors in diagnosis and help them make informed decisions, this paper proposes a Knowledge graph and Large language model Knowledge-Enhanced (KLKE) intelligent diagnosis model. The intelligent diagnosis task is treated as a text classification task: the original Electronic Medical Records are fed into the MacBERT encoder to obtain contextual representations after key-information enhancement and KG-prompted LLM enhancement, respectively, and the final text representation is obtained by concatenating and merging the enhanced representations. A Graph Convolutional Network provides the knowledge representation, and the two representations are fused using an interactive attention mechanism. Experiments are conducted on PeEMR and compared with models that fuse only triples and graph structures; KLKE achieves increases of 9.15% and 2.28% in F1_micro, respectively.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710V (2024) https://doi.org/10.1117/12.3031892
Silicon modulators, which play a crucial role in silicon photonics systems, are currently trending toward lower bias voltages and higher bandwidth. The plasma dispersion effect in silicon modulators makes the carrier concentration central to improving performance. Common doping profiles have been optimized for high efficiency but may suffer from increased loss. Our horizontal S-shaped modulator achieves an excellent VπL of 0.77 V·cm and a low loss of 10.9 dB/cm, with small resistance and capacitance pushing the bandwidth beyond 27 GHz. This design is suitable for high-speed, low-voltage, power-saving applications.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710W (2024) https://doi.org/10.1117/12.3032078
In a Lidar system, time interval measurement technology is key to achieving high-precision time-frequency transfer and ranging. By measuring the signal duration, arrival time, and pulse width of the communication equipment, more accurate time interval data can be obtained. However, traditional time interval measurement methods suffer from low accuracy, difficulty in sustaining long operation, and complicated design. Based on an analysis of the advantages and disadvantages of various time interval measurement techniques, this paper presents a delay-line interpolation time interval measurement technique suitable for Lidar ranging. The method offers small measurement error, high precision, stability, efficiency, and a simple design. Error analysis shows that the final test error of the delay-line interpolation technique in the Lidar ranging system is 90 ps, the resolution is 50 ps, and the corresponding minimum range resolution is 15 mm.
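The general principle of delay-line interpolation can be sketched numerically: a coarse counter counts whole clock periods, and a tapped delay line resolves each edge's position within a period at the tap resolution. The 50 ps tap matches the resolution quoted in the abstract; the clock period and interface here are illustrative assumptions, not the paper's hardware design:

```python
def measure_interval(start_ps, stop_ps, clk_period_ps=10_000, tap_ps=50):
    """Coarse-counter + delay-line time interval measurement sketch.
    Each edge time (in ps) is quantized to a whole number of clock
    periods plus a delay-line tap index; the interval is the difference."""
    def quantize(t):
        coarse = t // clk_period_ps              # whole periods elapsed
        fine = (t % clk_period_ps) // tap_ps     # delay-line tap index
        return coarse * clk_period_ps + fine * tap_ps
    return quantize(stop_ps) - quantize(start_ps)
```

The quantization error of each edge is bounded by one tap (50 ps), which for light at roughly 0.15 mm/ps of round-trip range corresponds to the millimetre-scale range resolution the abstract reports.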
Chao Wang, Xinfeng Hu, Xiaodong Yang, Wuliang Chang, Guangze Cao
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710X (2024) https://doi.org/10.1117/12.3032020
Microwave radar is widely applied in vital sign detection, but detecting multiple people in complex environments remains challenging. To address this problem, this article proposes a multi-target vital sign detection system based on the fusion of radar and optical images. The system employs a fusion algorithm to enhance the speed and precision of detection. Comparison of the measured data with a reference signal shows that the measurements are accurate: the mean absolute error of heart rate (HR) detection was 4 beats per minute (BPM), while respiratory rate (RR) detection exhibited an error of 0.75 respirations per minute (RPM). Heart rate detection accuracy reached 91.61%, and respiratory rate detection accuracy reached 99.01%.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710Y (2024) https://doi.org/10.1117/12.3031932
In Intelligent Transportation System applications, there is an urgent need for high-precision positioning services. Global Navigation Satellite Systems can currently provide centimetre-level services in open areas. However, in underground, tunnel, indoor, and similar environments, the user receiver cannot receive the satellite navigation signal, which degrades positioning performance. Building on the development of ground-based navigation systems for long, narrow indoor environments such as tunnels, this paper proposes a positioning method that combines single-difference carrier phase measurements with sparse ranging measurements. The method effectively improves positioning accuracy under restricted base-station layouts: the sparse ranging measurements mitigate the ill-conditioning caused by nonlinearity in the system calculation, and the positioning result is optimized through a combined weighted nonlinear least squares algorithm. The proposed method is validated experimentally using actual carrier phase data collected in a 4.6 km tunnel together with simulated sparse ranging measurements. The results indicate that combining sparse ranging measurements with GH-LPS offers low cost, low complexity, and high precision: with a ranging error of 0.5 m, the terminal positioning accuracy is approximately 55 cm, and when the ranging error decreases to 10 cm, it improves to approximately 25 cm.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131710Z (2024) https://doi.org/10.1117/12.3031896
A low-complexity FMCW-SAR moving-target imaging scheme is proposed, consisting of an FMCW-SAR system and a moving-object detection method. The FMCW-SAR system uses an equivalent virtual array to increase the number of transceiver antenna pairs, thereby improving radar azimuth resolution, and uses a PLL structure to improve the signal linearity of the FMCW radar. This hardware design improves radar imaging performance while reducing complexity. The moving-target detection method controls the virtual array elements to monitor moving targets in real time. In the subsequent signal processing, signal interpolation fills in the signal used for imaging. Experiments show that this method can effectively and accurately image the detected targets with good utilization of time resources.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317110 (2024) https://doi.org/10.1117/12.3031921
Underwater sensor networks (USNs) differ in character from terrestrial wireless sensor networks (WSNs), making traditional WSN protocols unsuitable for USNs. In addition, energy issues directly affect the lifespan of the entire sensor network. The goal of this study is to transmit data to sink nodes in a timely and efficient manner when node resources are limited. Therefore, a reliable and scalable routing protocol for underwater sensor networks, EA-VBF, is proposed. Its novelty lies in using the location information and remaining energy of intermediate nodes to make data-forwarding decisions, and in additionally treating the number of times a node relays packets within a cycle as a factor in those decisions. By introducing energy warning values and improved forwarding factors, the EA-VBF protocol reduces the energy cost of the network and balances its overall energy consumption at the cost of a slightly lower packet delivery rate.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317111 (2024) https://doi.org/10.1117/12.3032069
To address the insufficient computing power and high power consumption of deep learning hardware, the use of deep learning in hardware design is thoroughly investigated, focusing on the design and verification of a hardware accelerator for convolutional neural networks (CNNs) for target detection. The accelerator, with high computational parallelism, is implemented in the Verilog HDL language, and the completeness of the design is ensured by functional testing with the Universal Verification Methodology (UVM), through both module-level and system-level verification. The experiments confirm the effectiveness of the hardware accelerator in improving the computational efficiency of the target detection algorithm, contributing valuable insights to research in deep learning and chip design.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317112 (2024) https://doi.org/10.1117/12.3032000
In this work, a method combining Model Predictive Control (MPC) and an Improved Whale Optimization Algorithm (IWOA) is proposed for multiple unmanned aerial vehicles (UAVs) tracking a moving target in an urban environment. The problem models are established, including the UAV model, target model, environment model, and cost function. MPC is adopted as the control framework for UAV target tracking, with WOA as the MPC solver. To further improve optimization efficiency, four strategies are introduced: bootstrap initialization, double-difference variation, adaptive weighting, and elite selection. Comparative experiments show that the proposed control method has better tracking performance and is a reliable technique for UAV tracking of a moving target.
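In the MPC framework above, the solver's job is to minimize the cost function over the control horizon at each step. A minimal sketch of the baseline Whale Optimization Algorithm (without the paper's four improvement strategies) shows the role it plays; the parameters and the test cost function are illustrative assumptions:

```python
import math
import random

def woa_minimize(cost, dim, bounds, n_whales=20, iters=100, seed=0):
    """Minimal Whale Optimization Algorithm: whales encircle the current
    best, attack along a logarithmic spiral, or explore around a random
    whale; the coefficient `a` decays from 2 to 0 over the iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=cost)[:]
    for t in range(iters):
        a = 2 * (1 - t / iters)
        for i, x in enumerate(X):
            A = [a * (2 * rng.random() - 1) for _ in range(dim)]
            if rng.random() < 0.5:
                # Encircle the best (|A| small) or a random whale (|A| large).
                ref = (best if all(abs(c) < 1 for c in A)
                       else X[rng.randrange(n_whales)])
                x = [clip(ref[d] - A[d] * abs(2 * rng.random() * ref[d] - x[d]))
                     for d in range(dim)]
            else:
                l = rng.uniform(-1, 1)  # spiral (bubble-net) update
                x = [clip(abs(best[d] - x[d]) * math.exp(l)
                          * math.cos(2 * math.pi * l) + best[d])
                     for d in range(dim)]
            X[i] = x
        cand = min(X, key=cost)
        if cost(cand) < cost(best):
            best = cand[:]
    return best, cost(best)
```

In an MPC loop, `cost` would evaluate the tracking cost of a candidate control sequence over the prediction horizon; here a sphere function stands in for it.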
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317113 (2024) https://doi.org/10.1117/12.3032014
With the increasing complexity and scale of power systems, the challenges of data management and anomaly detection are becoming increasingly prominent, and existing methods often struggle with accuracy and efficiency on large-scale, high-dimensional data. To detect abnormal power data accurately and efficiently, this paper proposes an automatic detection method for abnormal power data based on chaotic sequences. Encrypting the original power data with a chaotic sequence increases the randomness and uncertainty of the data and improves its security. The encrypted data are then processed and clustered to extract the abnormal features of the power data. Through cluster analysis, similar abnormal data patterns are grouped, and the similarity between abnormal and normal data is calculated, realizing automatic detection of abnormal data. The experimental results are consistent with the actual situation: the encryption effect is good, and accuracy, precision, and recall are all high, demonstrating that the method is effective for the automatic detection of abnormal power data in power systems.
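The chaotic-sequence encryption idea can be illustrated with the logistic map: its sensitivity to the initial value x0 (which acts as the key) yields a pseudo-random keystream, and XOR-masking with that keystream is its own inverse. A generic sketch, not the paper's specific chaotic system or parameters:

```python
def logistic_keystream(n, x0=0.3141, r=3.99):
    """Chaotic keystream from the logistic map x <- r*x*(1-x); the
    orbit's sensitivity to x0 provides the randomness used for masking."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)  # quantize orbit value to a byte
    return ks

def chaos_xor(data, x0=0.3141):
    """XOR bytes with the chaotic keystream; applying it twice with the
    same x0 (the key) restores the original data."""
    return bytes(b ^ k for b, k in
                 zip(data, logistic_keystream(len(data), x0)))
```

Decryption requires the exact x0: even a tiny key difference produces a divergent orbit and hence garbage output, which is the security property the abstract leans on.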
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317114 (2024) https://doi.org/10.1117/12.3032001
Pythagorean fuzzy sets (PFS), as a generalization of fuzzy sets, have a larger representation space for handling uncertain information and are applied in many fields. A distance between PFS measures their degree of difference or discrepancy. Intuitively, the distance between (1,0) and (0,1) should differ from that between (0,0) and (0,1); however, some distance measures violate this requirement. To address this problem, the paper proposes a new distance measure based on geometric compression. In a PFS, the squares of the membership, non-membership, and hesitancy degrees sum to 1. In the new method, the membership, non-membership, and hesitancy information are taken as the x-, y-, and z-axes of a rectangular spatial coordinate system and, starting from the unit sphere, are compressed to obtain a deformed ellipsoid. From the viewpoint of Dempster-Shafer evidence theory, the hesitancy information can be regarded as containing both membership and non-membership information. Moreover, the new distance measure not only satisfies the axiomatic definition of a distance measure but also has nonlinear characteristics. The advantages of the new method are demonstrated by comparison with other distance measure methods, and finally the paper applies the new method to the multi-attribute decision making problem, providing a promising solution for decision-making problems.
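The inconsistency the paper targets is easy to reproduce with the standard 3-D Euclidean distance on (membership, non-membership, hesitancy) coordinates; this sketch is NOT the paper's geometric-compression measure, it only demonstrates the problem:

```python
import math

def pfs_euclidean(a, b):
    """Standard 3-D Euclidean distance between Pythagorean fuzzy numbers
    (mu, nu), with hesitancy pi = sqrt(1 - mu^2 - nu^2) as the third axis,
    normalized to [0, 1] by dividing by sqrt(2)."""
    def coords(p):
        mu, nu = p
        return mu, nu, math.sqrt(max(0.0, 1 - mu * mu - nu * nu))
    return math.dist(coords(a), coords(b)) / math.sqrt(2)
```

Full support versus full opposition, d((1,0),(0,1)), comes out exactly equal to total ignorance versus full opposition, d((0,0),(0,1)), which is the counter-intuitive behaviour the proposed geometric-compression measure is designed to avoid.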
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317115 (2024) https://doi.org/10.1117/12.3032076
To address the inability of convenient household formaldehyde gas detectors to perform simultaneous multipoint detection, over-threshold warning, and remote monitoring, an indoor formaldehyde gas detection system with wireless ad hoc networking capability was designed. With TI's Zigbee module CC2530 as the core, it uses Zigbee's ad hoc networking capability to collect air quality in multiple rooms simultaneously in real time and uploads the detected data to the Aliyun Internet of Things platform for remote monitoring. The experimental results show that the system can simultaneously measure the formaldehyde concentration at multiple test points. When the concentration exceeds the preset threshold, the system raises an audible and visual alarm and publishes the data to the Internet of Things platform, where the stored formaldehyde data can be accessed and checked from PC and mobile terminals. The system meets the requirements of indoor formaldehyde concentration detection.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317116 (2024) https://doi.org/10.1117/12.3032098
This paper addresses wavelength selection and isolation for automatic tracking of beacon light signals in underwater full-duplex laser communication. A method for four-wavelength reception and transmission isolation in the blue-green band is proposed, which uses watt-level semiconductor lasers to generate beacon light with a divergence angle orders of magnitude larger than that of the signal light, aiding rapid acquisition and alignment; the underwater transmission distance of the beacon light is close to that of the small-divergence signal light. Additionally, the hemispherical dome used for waterproof sealing of the underwater laser communication ATP device has a significant impact on the axis alignment consistency of the beacon and signal light. This paper simulates the non-parallelism of the underwater optical axes using ray tracing and corrects it by presetting a reverse bias angle, addressing the feasibility of independent beacon light in underwater ATP devices and improving the performance of underwater laser communication.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317117 (2024) https://doi.org/10.1117/12.3032092
This article takes the National College Student Smart Car Competition as its background and presents the design and construction of an electromagnetic tracking car based on the STC32 chip. Within the competition rules, and to improve the car's running stability and speed, the hardware uses a DRV8701E driver board to drive the motor and a four-inductor arrangement to collect the inductance signals; the tracking algorithm uses a ratio algorithm, and a fuzzy PID controller processes the electromagnetic values. The experimental results show that the car tracks well, has strong anti-interference ability, and can successfully complete the various requirements of smart car competitions.
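The ratio algorithm mentioned above is commonly the normalized difference of paired inductor readings; the sketch below is an illustrative Python version (the function name and the two-inductor simplification are assumptions, not the paper's code):

```python
def lateral_error(left, right):
    """Ratio (normalized-difference) tracking: estimate the car's lateral
    offset from the guide wire using a pair of horizontal inductors.
    Returns a value in [-1, 1]; 0 means the car is centred on the track."""
    total = left + right
    if total == 0:            # no signal at all: report centred to avoid /0
        return 0.0
    return (left - right) / total
```

The error then feeds the PID/fuzzy-PID steering loop; normalizing by the sum makes the estimate largely insensitive to overall signal strength.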
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317118 (2024) https://doi.org/10.1117/12.3032064
An information routing planning model based on cloud computing and hybrid particle swarm optimisation is proposed to address the problems of traditional routing planning for smart grids. The model constructs a smart grid routing and transmission channel model in an optical communication network in a cloud computing environment and uses hybrid particle swarm optimisation to find the optimal routing distribution. Dynamic neighbourhood decision making and adaptive updating of node positions improve smart grid management, reduce the bit error rate (BER), and increase the information transmission load.
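The paper's hybrid variant is not detailed in the abstract; as a point of reference, a minimal canonical PSO loop (the parameter values and the sphere objective are illustrative assumptions) looks like this:

```python
import numpy as np

def pso_minimize(f, dim=4, n_particles=20, iters=150, seed=0):
    """Minimal canonical particle swarm optimisation (not the paper's
    hybrid variant): each particle's velocity is pulled toward its own
    best-seen position and the swarm-wide best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

best, best_val = pso_minimize(lambda p: float(np.sum(p * p)))
```

Hybrid PSO variants typically modify the velocity update (e.g., with neighbourhood topologies or local-search steps), which is where the paper's dynamic neighbourhood decision making would plug in.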
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317119 (2024) https://doi.org/10.1117/12.3031964
Due to the continuous impact of hazy weather, Xianyang's air quality has ranked in the bottom three of the province for three consecutive years, creating an urgent need for improvement. Haze pollution prediction is therefore of great practical significance: with timely and accurate predictions, the government and relevant institutions can take the measures needed to improve air quality and protect the ecosystem. Although traditional RNN and LSTM models can capture the temporal information in historical haze data, accurate prediction remains difficult because of the complexity of the problem. In this study, 8,769 heterogeneous records were collected using multi-source big data acquisition technology, and a series of pre-processing operations, including data conversion and dimensionality reduction, were performed on AQI, PM2.5, PM10, SO2, NO2, CO, and O3 data. Big data fusion and deep learning are used to integrate the haze data and discover hidden patterns and trends. Finally, a prediction model based on the FEDformer model and STL time-series decomposition was established, achieving significant improvement on both short- and long-term time-series prediction problems.
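STL implementations are available in libraries such as statsmodels; the numpy-only sketch below shows the trend/seasonal/residual split the study builds on (the moving-average trend and per-phase seasonal means are a simplification, not the actual STL loess fit):

```python
import numpy as np

def decompose(series, period):
    """Naive additive decomposition in the spirit of STL: a moving-average
    trend, per-phase seasonal means, and whatever is left as residual."""
    series = np.asarray(series, dtype=float)
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")   # edge values are rough
    detrended = series - trend
    # average each phase of the cycle to get one seasonal value per phase
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(series) // period + 1)[: len(series)]
    resid = series - trend - seasonal
    return trend, seasonal, resid
```

A forecaster such as FEDformer can then be trained on the components separately, which is the usual motivation for pairing it with a decomposition step.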
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711A (2024) https://doi.org/10.1117/12.3032046
To improve the radar signal detection accuracy of traditional methods under low SNR, a detection method based on a stacked auto-encoder (SAE) and time-frequency domain features is proposed. The time-domain, frequency-domain, and joint time-frequency-domain features of the signal are extracted by the SAE to obtain representative features of the radar signal. The extracted features are input into support vector data description (SVDD) for open-set judgment to distinguish radar signals from background signals. Simulation results show that integrating time-domain and frequency-domain information about the target background into detection decisions improves the accuracy and robustness of detection in complex environments. The method has practical significance for improving radar signal detection accuracy under low SNR.
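SVDD encloses the known-signal training features in a minimal hypersphere and treats anything outside it as background, which gives the open-set behaviour described above. A deliberately crude centroid-based stand-in (the real SVDD solves a kernelized optimization; the function names here are illustrative) is:

```python
import numpy as np

def fit_sphere(features, quantile=0.95):
    """Crude SVDD-style one-class model: enclose the training features in a
    hypersphere (centre = mean, radius = a distance quantile). Real SVDD
    finds the minimal enclosing sphere via a kernelized QP instead."""
    features = np.asarray(features, dtype=float)
    centre = features.mean(axis=0)
    dists = np.linalg.norm(features - centre, axis=1)
    return centre, float(np.quantile(dists, quantile))

def is_signal(x, centre, radius):
    """Points inside the learned sphere are accepted as radar signal;
    points outside are rejected as background (the open-set decision)."""
    return bool(np.linalg.norm(np.asarray(x, dtype=float) - centre) <= radius)
```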
Zeya Li, Yang Liu, Longqing Gong, Danni Xu, Jinfeng Tang
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711B (2024) https://doi.org/10.1117/12.3031997
In response to the growing demands of real-time application scenarios, TSN has emerged as a crucial solution. However, the implementation of a credit-based shaper to ensure low-priority traffic transmission may unintentionally result in burst traffic generation. This could lead to storage overflow and potential data loss, posing a significant challenge to the efficient operation of TSN networks. To address this issue, we propose the ICBS algorithm, an enhancement of the original CBS mechanism, which preserves the fundamental principles of preventing low-priority starvation while mitigating the risk of burst traffic generation. The ICBS algorithm demonstrates enhanced fine-grained operation of data frames by refining the credit calculation method, effectively minimizing the occurrence of bursts in AVB traffic. Furthermore, we have devised an ICBS adaptation evaluation algorithm to assess the rationality of pre-scheduling results for AVB, ensuring optimal resource allocation. The simulation results demonstrate that the proposed ICBS algorithm effectively achieves its objective of low-priority traffic transmission with extra worst-case delay cost, making it a highly suitable substitute for existing TSN shaper solutions. The ICBS algorithm not only enhances the efficiency and reliability of TSN networks but also paves the way for future advancements in real-time application scenarios.
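The ICBS credit refinement itself is not reproduced in the abstract, but the baseline CBS behaviour it modifies can be sketched: credit accrues at idleSlope while a queued AVB frame waits, drains at sendSlope while one transmits, and transmission may start only with non-negative credit (the slope values in the test are illustrative):

```python
def cbs_credit(credit, dt, transmitting, idle_slope, send_slope):
    """One time-step of the standard Credit-Based Shaper (IEEE 802.1Qav)
    credit evolution: gain credit while waiting, spend it while sending."""
    slope = send_slope if transmitting else idle_slope
    return credit + slope * dt

def can_transmit(credit):
    """An AVB queue is gated until its credit is non-negative."""
    return credit >= 0.0
```

ICBS's contribution, per the abstract, is a finer-grained credit calculation over individual data frames so that accumulated credit cannot release a burst of AVB frames at once.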
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711C (2024) https://doi.org/10.1117/12.3031998
Currently, the Advanced Encryption Standard (AES) is the most widely used symmetric cryptographic algorithm, and developing higher-performance AES implementations can further expand its vast range of applications. An encryption algorithm may leak information during operation that attackers can exploit in side-channel attacks (SCA). A coarse-grained reconfigurable architecture (CGRA) allows hardware resources to be reconfigured for different tasks, which reduces the impact of SCA during encryption and decryption. To improve the security of the AES algorithm, this paper introduces an encryption and decryption framework based on an open-source CGRA compiler that enables domain experts to easily accelerate processing of plaintexts on reconfigurable processors. First, we propose an improved hardware-friendly AES algorithm that allows the processing elements (PEs) of the CGRA to access data in a vectorized fashion. Second, a new set of CGRA instructions based on the proposed algorithm improves performance by up to 19 times compared to the standard AES algorithm. Finally, we evaluate the appropriate CGRA size to balance performance and area; our experiments show that the best compromise is an 8x8 CGRA for classic AES-128.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711D (2024) https://doi.org/10.1117/12.3031909
In the complex electromagnetic environment of the 230-270 MHz ultra-short-wave band, traditional energy detection methods suffer from missed detections and high false alarm rates for broadband satellite signals. This paper proposes a broadband ultra-short-wave signal detection method based on the Short Cut Swin Transformer YOLOv5s (SST-YOLOv5s) network with spectrum superposition, which addresses both the difficulty of detecting broadband satellite channels at low signal-to-noise ratio and the elevated false alarm rates caused by interference anomalies, problems often encountered with traditional methods. First, by overlaying spectra, the discrimination between ultra-short-wave signals and the noise floor is heightened and the influence of short burst interference is suppressed, strengthening the target signal characteristics at low signal-to-noise ratio. At the same time, a feature extraction backbone for ultra-short-wave signals, SST-Backbone, is proposed, consisting of a four-layer SC (shortcut)-ST (Swin Transformer) structure cascaded with multi-layer convolutions. In the backbone, the SC-ST module combines the Transformer's global attention over global features with residual multi-layer convolution modules that focus on local features, increasing the depth and receptive field of the network and making the model more accurate in the reconnaissance and detection of broadband ultra-short-wave signals in the target band. The network efficiently suppresses noise-floor features and reduces attention to abnormal signal features, improving the detection accuracy of broadband ultra-short-wave target signals in complex environments and reducing false alarm rates.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711E (2024) https://doi.org/10.1117/12.3032039
With the increasing commercialization of deep neural networks (DNNs), there is a growing need to run multiple neural networks simultaneously on one accelerator, opening a new space for exploring the allocation of computing resources and the order of computation. However, most current research in multi-DNN scheduling relies on newly developed accelerators or employs heuristic methods aimed primarily at reducing DRAM traffic, increasing throughput, and improving Service Level Agreement (SLA) satisfaction. These approaches often lead to poor portability, incompatibility with other optimization methods, and markedly high energy consumption. In this paper, we introduce a novel scheduling framework, M-LAB, which schedules all data at the layer level instead of the network level; this makes the framework compatible with inter-layer scheduling research while significantly improving energy consumption and speed. To facilitate layer-level scheduling, M-LAB eliminates the conventional network boundaries, transforming inter-network dependencies into a layer-to-layer format. M-LAB then explores the scheduling space by combining inter-layer and intra-layer scheduling, allowing a more nuanced and efficient scheduling strategy tailored to the specific needs of multiple neural networks. Compared with current works, M-LAB achieves a 2.06x-4.85x speed-up and a 2.27x-4.12x cost reduction.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711F (2024) https://doi.org/10.1117/12.3032028
Aiming at the scheduling problem of underground mining equipment in shot mining, this paper proposes an improved memetic algorithm (MA). The global search applies a genetic algorithm with adjustments to its crossover and mutation operations; the local search uses simulated annealing. Considering that the algorithm needs a certain probability of escaping a local optimum, the Gaussian mutation function of the original algorithm is replaced by a Cauchy function. The algorithm is applied to the 5S15J scenario for simulation experiments. Compared with the results of the plain genetic algorithm, the improved MA is clearly better in total time and total interval time, and can obtain high-quality solutions and an ideal cooperative scheduling strategy.
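The Gaussian-to-Cauchy substitution can be illustrated as follows: Cauchy noise has much heavier tails than Gaussian noise, so the perturbation occasionally makes a long jump that helps the local search escape a local optimum (the function and parameter names are assumptions for illustration, not the paper's code):

```python
import numpy as np

def mutate(solution, scale=0.1, use_cauchy=True, rng=None):
    """Perturb a real-valued solution vector. With use_cauchy=True the
    noise is heavy-tailed (standard Cauchy), giving rare long jumps;
    with use_cauchy=False it is the usual light-tailed Gaussian."""
    rng = rng or np.random.default_rng(0)
    n = len(solution)
    noise = rng.standard_cauchy(n) if use_cauchy else rng.standard_normal(n)
    return np.asarray(solution, dtype=float) + scale * noise
```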
Jingsi Yang, Rong Yi, Yao Fu, Zhaojing Wang, Tongqing Li, Junxi Wang
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711G (2024) https://doi.org/10.1117/12.3032011
Power failures in the distribution network affect the quality of people's daily electricity consumption, so quickly locating faults and restoring supply is of great practical significance for improving the reliability and safety of the power supply. This paper therefore proposes an optimization method for rapid emergency recovery from distribution network power failures based on a multi-agent algorithm. In single-power-supply mode, a switching function is designed, and the failure location in the distribution network is determined by analyzing the power generation efficiency function of the equipment. Automatic data is used to optimize rapid emergency recovery, with the multi-agent algorithm incorporated to realize the optimization method. The experimental results show that the precision, recall, and F1 score of the proposed method are all above 95%, the power generation effect of the equipment is high, and the normal operation of the distribution network can be restored more quickly and effectively.
Nan Lin, Yongpeng Niu, Kaipeng Tang, Hao Duan, Ying Han
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711H (2024) https://doi.org/10.1117/12.3031916
Targeting the substantial labeling expense of ECG data, which contributes to the present dearth of labeled ECG datasets, and the subpar segmentation precision of contemporary models, this paper proposes an ECG segmentation model, NGA-Net. The model is based on RRU-Net with the addition of the ASPNL module and an improved Ghost module. The improved Ghost module generates more feature maps from fewer parameters, boosting computational efficiency, while the ASPNL module captures ECG signal features at multiple scales to enhance feature extraction. Experiments on the publicly available LUDB dataset indicate that NGA-Net outperforms other methods, demonstrating its effectiveness. We also adopt a semi-supervised learning strategy, leveraging data augmentation and consistency training, to train NGA-Net in small-sample scenarios; the experimental findings corroborate the effectiveness of semi-supervised learning in improving the performance of deep learning models.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711I (2024) https://doi.org/10.1117/12.3031958
With the continuous development of semiconductor technology, monolithic integration faces problems such as high design costs and long development cycles. Chiplet integration effectively improves yield and shortens the development cycle by splitting a single die into multiple dies with different functions and integrating them with advanced packaging. However, compared to monolithic integration, inter-die communication is limited by pin density and physical distance, and die interconnects bring higher latency. At the same time, each die has an independent structure, so accessing the same address space causes system-level cache coherence issues. We therefore design a system-level cache based on a directory-based hybrid coherence protocol, using optimization strategies such as shared bits and added inter-die interconnection channels to improve the efficiency of inter-core coherence maintenance. Using GEM5 with the SPLASH-2 benchmarks, we compare against an unoptimized directory-based hybrid coherence protocol: program running speed increases by 19.3%, average memory access time is reduced by 23.3%, and coherence protocol traffic is reduced by 37.8%.
Internet of Things System Design and Network Optimization Method
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711J (2024) https://doi.org/10.1117/12.3032136
Starting from practical needs, this article combines network, control, and radar technology with the latest developments in communication technology to design a distributed remote-control shipborne radar system, focusing on achieving unmanned, intelligent remote control of shipborne radar. The system is divided into three parts: the shore remote control center, the shipborne radar end, and the wireless network. The operator obtains real-time information such as radar intelligence, radar working status, and the surrounding situation from the human-machine interface of the shore remote control center, and controls the shipborne radar equipment through the wireless network. The system meets the needs of unmanned, intelligent shipborne radar, improves its reliability and safety, and provides a reference for future intelligent unmanned applications of shipborne radar.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711K (2024) https://doi.org/10.1117/12.3031928
Clone detection of source code is one of the most fundamental software engineering techniques. Although intensive research has been conducted in the past few years, it has more often addressed syntactic code clones, and a number of problems remain in detecting semantic code clones. In this paper, we propose an approach that fine-tunes the BERT pre-trained model on C/C++ code so that it better understands the syntactic and semantic features of C/C++, enabling better source code similarity evaluation. We evaluated our approach on a large C/C++ code clone dataset, and the results show that it achieves excellent semantic code clone detection.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711L (2024) https://doi.org/10.1117/12.3031955
Most current network devices have multiple network interfaces, and multipath transport protocols can use multiple network paths (e.g., WiFi and cellular) to improve the performance and reliability of network transmission. The scheduler of a multipath transport protocol determines the path on which each packet is transmitted and is a key module affecting multipath transmission. However, current multipath schedulers cannot adapt well to varied user usage scenarios. In this paper, we propose DRLMS, a deep reinforcement learning based multipath scheduler. DRLMS uses deep reinforcement learning to train a neural network that generates packet scheduling policies, optimizing the scheduling strategy through a reward function based on the current user scenario and QoS. We implement DRLMS in the MPQUIC protocol and compare it with current multipath schedulers; the results show that DRLMS adapts to user usage scenarios significantly better than the other schedulers.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711M (2024) https://doi.org/10.1117/12.3032051
WebShell is a backdoor program based on web services. Attackers can use a WebShell to gain administrative privileges on web services, thereby penetrating and controlling web applications. With the gradual development of traffic encryption, traditional detection methods that match text content features and network traffic features find it increasingly difficult to prevent complex WebShell attacks in production environments, especially variant samples, adversarial samples, and 0-day vulnerability samples, and their detection performance is not ideal. This article constructs a network collection environment and collects malicious WebShell traffic samples produced with different platforms, languages, and tools. A WebShell encrypted traffic recognition method based on ReliefF feature extraction is proposed, which assigns weights to multiple features with the ReliefF algorithm and selects feature groups with strong classification ability according to those weights. Finally, the LightGBM classification algorithm is used to distinguish normal encrypted traffic from WebShell encrypted traffic and to identify the management tool to which WebShell password traffic belongs. The experimental results indicate that this method effectively distinguishes normal encrypted traffic from malicious WebShell traffic; the recognition accuracy and recall for WebShell management tools are both above 92%.
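The ReliefF weighting idea can be sketched with the simpler binary Relief variant (one nearest hit and miss per sample instead of ReliefF's k neighbours; this is an illustration, not the paper's implementation): a feature gets a higher weight when an instance differs on it more from its nearest other-class neighbour than from its nearest same-class neighbour.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Simplified binary Relief: for each sampled instance, compare its
    per-feature distance to the nearest hit (same class) and the nearest
    miss (other class); discriminative features accumulate weight."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for i in rng.integers(0, len(X), n_iter):
        same, diff = y == y[i], y != y[i]
        same[i] = False                          # exclude the instance itself
        d = np.abs(X - X[i]).sum(axis=1)         # L1 distances to all rows
        hit = X[same][np.argmin(d[same])]
        miss = X[diff][np.argmin(d[diff])]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_iter
```

High-weight feature groups would then be fed to the LightGBM classifier, as in the pipeline described above.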
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711N (2024) https://doi.org/10.1117/12.3031940
To address slow face detection and the low accuracy of single-feature driver fatigue detection, we first introduce a lightweight RetinaFace network, replacing the RetinaFace backbone with GhostNet, which accelerates face detection while improving accuracy; we then locate key facial features. Next, an SSD network identifies the state of the driver's eyes and mouth. By combining the MAR (mouth aspect ratio) and EAR (eye aspect ratio) values with fatigue detection thresholds, we ultimately determine the driver's condition. The experimental findings show that the enhanced RetinaFace algorithm surpasses the original, with an average accuracy improvement of 2.64%; the final multi-feature fatigue detection achieves an average correctness rate of over 90%.
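The EAR used for the eye-state decision is the standard six-landmark ratio (the point layout follows the common EAR definition; the fatigue thresholds themselves are application-specific and not given in the abstract):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (outer corner, two upper-lid
    points, inner corner, two lower-lid points):
        EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    The value drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)
```

MAR is computed analogously from mouth landmarks; a frame typically counts toward "drowsy" only when the ratio stays past its threshold for several consecutive frames.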
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711O (2024) https://doi.org/10.1117/12.3032034
With the development of electronic information technology in the 21st century, testing and acquisition techniques have made breakthrough progress. However, with the rapid development of the commercial economy, traditional acquisition methods no longer meet current requirements, especially in the measurement industry. LabVIEW, with its intuitive visual programming interface, rich functional modules, and compatibility with other software, has become one of the preferred tools for engineers performing data acquisition, analysis, and control. Users can build a data acquisition system suited to their application in LabVIEW, monitor and record data from various sensors and instruments in real time, and then analyze and process the collected data. In this paper, a weak-current acquisition device is designed using the USB6001 data acquisition card as an example.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711P (2024) https://doi.org/10.1117/12.3031967
The integration of satellite and ground networks (SGIN) represents a significant advancement in the evolution of 6G mobile technologies. The rapid movement of satellites leads to swift changes in network topology; by adapting the Service Function Chain (SFC) to these frequent changes, SFC delay can be reduced and the user experience enhanced. This study addresses the challenge of SFC migration within the dynamic SGIN framework. We first develop a mathematical formulation of the VNF migration and SFC reconfiguration task, with the objective of maximizing total system benefit, and introduce an enhanced genetic algorithm that strikes an optimal compromise between SFC delay and migration expense. Compared with previous algorithms, our method substantially reduces network migration costs, enhances the overall profitability of services, and achieves a more favorable balance between SFC latency and migration cost.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711Q (2024) https://doi.org/10.1117/12.3031956
With the widespread application of deep learning frameworks, large-scale computing and GPU programming are receiving increased attention. For upper-layer applications that use GPUs for computation and communication, such as TensorFlow and PyTorch, improving the efficiency of the underlying communication library is of paramount importance to the overall performance of the frameworks. The RCCL (ROCm Collective Communication Library), provided by the ROCm (Radeon Open Compute) platform, supports various collective communication operations and point-to-point operations. Through analysis, we identified a problem in the initialization and use of the ring channel network in the RCCL library on multi-NIC systems: certain network cards are never used for communication, wasting system resources. To address this problem, we optimize at the code level, introducing data structures and algorithms that control network card invocation. The goal is to adjust the usage strategy of multiple network cards in the ring channel network without modifying the original design of RCCL. After optimization, extensive evaluations were conducted on a large-scale GPU cluster, and the optimized RCCL library achieved significant improvements in communication performance: at a scale of 16 compute nodes and 64 GPUs, the peak bandwidth increased from 5.28 GB/s to 7.78 GB/s, and in inter-node collective communication tests the improvement reached up to 60%. The improved RCCL library provides better low-level communication performance for upper-layer applications on the ROCm platform.
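The fix described, spreading ring channels across all network cards so none sits idle, can be sketched as a simple round-robin assignment. This is an illustrative Python model only; the class name, NIC names, and API are hypothetical and do not reflect RCCL's actual internals:

```python
from itertools import cycle

class NicScheduler:
    """Hypothetical sketch: rotate ring channels across all NICs so that
    no network card is left unused (illustrative, not RCCL's code)."""

    def __init__(self, nic_ids):
        self._cycle = cycle(list(nic_ids))

    def assign(self, num_channels):
        # Round-robin: channel i goes to NIC i mod len(nic_ids)
        return [next(self._cycle) for _ in range(num_channels)]

sched = NicScheduler(["mlx5_0", "mlx5_1", "mlx5_2", "mlx5_3"])
assignment = sched.assign(8)   # each of the 4 NICs serves 2 of the 8 channels
```

With eight ring channels over four cards, every card carries exactly two channels instead of some cards being skipped entirely.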
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711R (2024) https://doi.org/10.1117/12.3031950
Recently, the combination of the service function chain (SFC) with network function virtualization (NFV) and software-defined networking (SDN) has provided customers with flexible and efficient services. The emergence of multi-access edge computing (MEC) further enhances the level of service customization. However, achieving joint optimization of virtual network function (VNF) deployment and flow allocation in resource-constrained scenarios while meeting the diverse requirements of 5G verticals is challenging. Current research rarely addresses dedicated service provisioning for edge servers or considers the additional instantiation overhead introduced by adjusting cloud server parameters; in fact, this is a non-negligible issue during SFC deployment in 5G-MEC scenarios. Based on these considerations, this paper formulates a joint SFC deployment problem for edge-cloud networks with the goal of maximizing network utility. We first propose a univariate modeling method based on meta-links that effectively avoids the variable coupling problem of traditional multivariate modeling approaches and reduces the problem size by at least half. Subsequently, to solve the NP-hard integer nonlinear problem (INLP), we propose a distributed computing architecture named SP-ADMM, which improves the speed and quality of SFC deployment in large-scale scenarios via convex combinatorial formulations and a Viterbi-based heuristic algorithm (PAC-GREP). Finally, we experimentally verify the convergence and approximation of the algorithms. Our solution demonstrates advantages in terms of network utility and convergence speed under the same network resources, increasing service capacity by at least 39%.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711S (2024) https://doi.org/10.1117/12.3032042
The traditional Modbus communication architecture usually consists of a single master station and multiple slave stations, which can reduce communication efficiency in certain application scenarios. As the IIoT (Industrial Internet of Things) continues to progress, there is growing demand for sophisticated new applications that retrieve data from various industrial settings. As a result, multi-master technology has been developed, enabling the retrieval of on-site data without disrupting the data collection of the primary master station. However, most such solutions require modifications to the original bus and suspension of the original data collection during installation. To preserve the integrity of the original bus system, this study introduces a Modbus multi-master technology in which the additional master receives messages through a non-invasive listening approach. It identifies request and response messages based on the protocol's function codes and message byte counts, parses the information, and shares the acquired data with IIoT applications. The technology was tested in an IIoT application in which a WirelessHART network node was converted into such a master, uploading the acquired data wirelessly to the IIoT application. The findings indicate that the new master identified messages and exchanged data accurately.
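The length-and-function-code discrimination can be illustrated for Modbus RTU function code 0x03 (Read Holding Registers), whose frame layouts are fixed by the protocol. The sketch below is a minimal passive classifier; it ignores CRC validation (which a real listener would perform), and the example CRC bytes are incidental:

```python
def classify_frame(frame: bytes) -> str:
    """Classify a sniffed Modbus RTU frame for function code 0x03.
    Request:  addr, 0x03, start_hi, start_lo, qty_hi, qty_lo, crc_lo, crc_hi (8 bytes)
    Response: addr, 0x03, byte_count, <byte_count data bytes>, crc_lo, crc_hi
    Register data comes in 2-byte words, so byte_count is even and a
    response frame always has an odd length; it can never be 8 bytes.
    """
    if len(frame) < 5 or frame[1] != 0x03:
        return "unknown"
    if len(frame) == 8:
        return "request"
    if len(frame) == 5 + frame[2]:      # 3-byte header + data + 2-byte CRC
        return "response"
    return "unknown"

# Read 2 registers starting at 0x0000 from slave 1, then its response
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02, 0xC4, 0x0B])
response = bytes([0x01, 0x03, 0x04, 0x00, 0x01, 0x00, 0x02, 0x7A, 0x30])
```

The even-byte-count observation is what makes the pure length test unambiguous for this function code.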
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711T (2024) https://doi.org/10.1117/12.3031999
Remote conference systems play a crucial role in modern society. To design a high-quality system that meets user expectations, accurately analyzing and selecting user needs to derive a functional list for the system is particularly important. This study is based on the KANO model and the Better-Worse index model, using a combination of qualitative and quantitative analysis methods. By collecting and analyzing user needs, it aims to gain a deep understanding of the true requirements for remote conference systems and to identify corresponding features that guide system design and development. The contribution of this study lies in providing a systematic approach to requirement analysis for remote conference systems.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711U (2024) https://doi.org/10.1117/12.3032084
In the field of underwater target recognition, forward-looking sonar images are widely applied in underwater rescue operations. The emergence of object detection technologies powered by deep learning has significantly enhanced the ability to recognize underwater targets. In object detection, the neck network, serving as a critical intermediary component, plays a vital role. However, traditional Feature Pyramid Networks (FPN) have two main problems: 1) during feature fusion, FPN does not adjust the importance of features across levels, resulting in imbalanced features at different scales and loss of scale information; 2) there is a lack of effective information transmission between features of different scales. In this article, we propose a novel neck network architecture, Multi Scale Selective Fusion with Dense Connectivity Network (MSSF-DCNet), comprising two components that tackle these challenges. The first is the Multi Scale Selection Module, which balances features across levels during fusion by computing and applying per-scale weights, better preserving scale information. The second is the Cross Scale Dense Connection module, which exchanges information between feature levels so that the model can capture global context at every layer, thereby improving the detection capability of the neck network. By replacing the FPN with MSSF-DCNet in the Faster R-CNN framework, our model increases Average Precision (AP) by 1.2, 4.0, and 2.6 points using MobileNet-v2, ResNet50, and Swin Transformer backbones, respectively. Furthermore, when employing ResNet50 as the backbone, MSSF-DCNet improves RetinaNet by 3.4 AP and ATSS by 4.1 AP. We also compared different neck networks with MSSF-DCNet on the Faster R-CNN baseline, and MSSF-DCNet achieved the best performance on all metrics.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711V (2024) https://doi.org/10.1117/12.3031933
Deep Q-learning is a crucial method of deep reinforcement learning and has achieved remarkable success in multiple applications. However, deep Q-learning suffers from low sample efficiency. To overcome this limitation, we introduce a novel algorithm, the adaptive prediction sample network (APSN), to improve sample efficiency. APSN is designed to predict the importance of each sample to policy updates, enabling efficient sample selection. We introduce a new metric to evaluate the importance of samples and use it to train the APSN network. In the experiments, we evaluate our algorithm on four Atari games in OpenAI Gym and compare APSN with SDQN. The results show that APSN performs better in terms of sample efficiency and provides an effective solution for improving the sample efficiency of deep reinforcement learning. This result is expected to advance the performance of deep reinforcement learning algorithms in practical applications.
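The selection step that importance prediction enables can be sketched as follows. The importance scores are given directly here, where APSN would produce them with a learned network; the function and data names are illustrative, not the paper's implementation:

```python
def select_batch(replay, importance, k):
    """Sketch of importance-driven batch selection: rank stored
    transitions by a predicted importance score and train on the top k
    (APSN would learn the scores; here they are supplied directly)."""
    ranked = sorted(range(len(replay)), key=lambda i: importance[i], reverse=True)
    return [replay[i] for i in ranked[:k]]

replay = ["t0", "t1", "t2", "t3"]       # stored transitions (placeholders)
scores = [0.1, 0.9, 0.4, 0.7]           # hypothetical predicted importance
batch = select_batch(replay, scores, k=2)
```

Uniform sampling would pick any two transitions with equal probability; ranking by predicted usefulness concentrates updates on the transitions expected to change the policy most.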
Chang Wang, Zhiqiong Liu, Jin Liu, Wang Li, Junxin Chen
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711W (2024) https://doi.org/10.1117/12.3032003
FaaS enables users to focus on developing function code rather than managing complex infrastructure, as the serverless computing platform takes responsibility for resource management and dynamically scales computing resources for serverless functions. While serverless computing platforms provide efficient hardware resource management and provisioning, they suffer from weaker computing performance due to the latency associated with serverless function startup. Startup latency refers to the time required to prepare execution environments for user functions. To alleviate this latency, this paper proposes a container scheduling policy that reduces startup latency by lowering the likelihood of container cold starts, achieved by unifying language runtime images, creating pre-warmed container pools, and reusing warm containers. We formulate the startup latency problem and implement the scheduling policy in a serverless computing platform. Simulation results demonstrate that the proposed scheduling policy effectively reduces overall startup latency while ensuring optimal computing performance for user functions.
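The pre-warm pool idea can be sketched in a few lines. This is a minimal illustrative model of warm reuse versus cold starts, with hypothetical names, not the paper's scheduler:

```python
from collections import deque

class ContainerPool:
    """Minimal sketch of a pre-warm pool for one language runtime: a cold
    start happens only when no warm container is idle (names illustrative)."""

    def __init__(self, runtime, prewarm=2):
        self.runtime = runtime
        self.idle = deque(f"{runtime}-warm-{i}" for i in range(prewarm))
        self.cold_starts = 0

    def acquire(self):
        if self.idle:
            return self.idle.popleft()     # warm start: environment is ready
        self.cold_starts += 1              # cold start: pay full startup latency
        return f"{self.runtime}-cold-{self.cold_starts}"

    def release(self, container):
        self.idle.append(container)        # keep warm for the next invocation

pool = ContainerPool("python3.11", prewarm=1)
first = pool.acquire()       # served from the pre-warm pool
second = pool.acquire()      # pool empty -> one cold start
pool.release(first)
third = pool.acquire()       # released container is reused warm
```

Sizing the pool per unified runtime image is what trades idle memory for fewer cold starts.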
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711X (2024) https://doi.org/10.1117/12.3032070
As a critical stage in modern Very-Large-Scale Integration (VLSI) design, placement positions numerous circuit modules of varying sizes on a 2D chip canvas to achieve optimal performance. In recent years, applying machine learning to placement has emerged as a promising way to significantly enhance efficiency and achieve superior results. However, machine learning-driven methods are still in their early stages, facing challenges such as exploration and convergence difficulties; in addition, it is challenging to integrate netlist data with the placement information. This paper proposes a novel approach, GoPlace, that leverages deep reinforcement learning to address these challenges. First, a multi-layer chip canvas state representation is proposed to tackle the challenges of storing and using placement information, and a graph neural network assists in generating placement information. Second, this paper proposes a semi-shared policy and value network and, to accommodate the scale and complexity of chip placement, a residual-style neural network. Third, extensive experiments on eight circuits from public benchmarks show that GoPlace achieves a 10% to 25% wirelength reduction compared with other reinforcement learning-based methods, the lowest congestion, and zero overlap.
Hang Zhang, Liqi Zhuang, Dong Wei, Weiqing Huang, Jing Li
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711Y (2024) https://doi.org/10.1117/12.3031911
Traffic identification is a vital technology in network security. Currently, the identification of mobile network traffic is based on the downlink data of the air interface, because it is difficult to synchronize with the uplink and obtain uplink traffic data in real-world environments. We propose to utilize mobile network sideband resource occupancy for traffic identification. The method captures uplink IQ data and draws a time-frequency resource map. To reduce computational complexity, we use only the sideband portion of the time-frequency resource map for identification. Based on the different colors that different users' uplink transmit power produces on the time-frequency resource map, we distinguish the number of users by color and separate the data of different users. The results show that the accuracy of user number identification is up to 95%. Finally, we use ResNet18 to identify the service of the separated pictures; the F1 score of the ResNet18 network reaches 88%.
Yichao Wu, Yafei Xiang, Shuning Huo, Yulu Gong, Penghao Liang
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 131711Z (2024) https://doi.org/10.1117/12.3032013
In addressing the computational and memory demands of fine-tuning Large Language Models (LLMs), we propose LoRA-SP (Streamlined Partial Parameter Adaptation), a novel approach utilizing randomized half-selective parameter freezing within the Low-Rank Adaptation (LoRA) framework. This method efficiently balances pre-trained knowledge retention and adaptability for task-specific optimizations. Through a randomized mechanism, LoRA-SP determines which parameters to update or freeze, significantly reducing computational and memory requirements without compromising model performance. We evaluated LoRA-SP across several benchmark NLP tasks, demonstrating its ability to achieve competitive performance with substantially lower resource consumption compared to traditional full-parameter fine-tuning and other parameter-efficient techniques. LoRA-SP's innovative approach not only facilitates the deployment of advanced NLP models in resource-limited settings but also opens new research avenues into effective and efficient model adaptation strategies.
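The randomized half-selective freezing can be sketched as a seeded coin flip over the adapter parameters. The parameter names, granularity, and 50% ratio below are illustrative assumptions; the paper's actual selection criterion may differ:

```python
import random

def choose_trainable(param_names, freeze_ratio=0.5, seed=0):
    """Sketch of randomized half-selective freezing: given the names of the
    LoRA adapter parameters, mark roughly half as frozen and leave the rest
    trainable. Returns {name: True} for parameters that will be updated."""
    rng = random.Random(seed)            # seeded for reproducibility
    names = sorted(param_names)
    frozen = set(rng.sample(names, int(len(names) * freeze_ratio)))
    return {name: name not in frozen for name in names}

# Hypothetical adapter parameter names for an 8-layer model
mask = choose_trainable([f"layer{i}.lora_A" for i in range(8)])
```

Only the parameters mapped to True would receive gradients, halving optimizer state and gradient memory for the adapters.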
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317120 (2024) https://doi.org/10.1117/12.3032021
In marine ship ad-hoc networks, the distance between ships is usually several kilometers. The IEEE 802.11 standard, designed for indoor networks, is not applicable, so directional antennas are used to extend the communication range. In this case, a node cannot listen to data transmission activity outside its beam, and the asymmetric gain of the antenna can lead to hidden terminal problems. This paper proposes a directional MAC protocol for ship ad-hoc networks (DMSA) that uses the Automatic Identification System (AIS) to obtain ship node position information. Before sending data, the transmitting node sends a polling RTS (P-RTS) to the surrounding nodes; neighboring nodes that hear the P-RTS frame create a P-NAV table to record the direction of the data transmission. After completing the RTS/CTS handshake, power control is used to solve the hidden terminal problem caused by antenna gain asymmetry.
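The P-NAV bookkeeping a neighbour performs on overhearing a P-RTS can be sketched as a per-sector reservation table. Sector identifiers and the time unit are illustrative assumptions, not the protocol's actual encoding:

```python
class PNavTable:
    """Sketch of directional NAV bookkeeping: a neighbour overhearing a
    P-RTS records until when the announced beam sector is busy, and defers
    transmissions into that sector (illustrative, not the DMSA frame format)."""

    def __init__(self):
        self.reserved_until = {}            # sector id -> reservation end time

    def on_prts(self, sector, now, duration):
        end = now + duration
        if end > self.reserved_until.get(sector, 0):
            self.reserved_until[sector] = end

    def can_transmit(self, sector, now):
        return now >= self.reserved_until.get(sector, 0)

nav = PNavTable()
nav.on_prts(sector=3, now=0, duration=5)    # overheard P-RTS toward sector 3
```

Unlike an omnidirectional NAV, only the reserved sector is blocked, so the node may still transmit in other directions during the reservation.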
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317121 (2024) https://doi.org/10.1117/12.3032060
Mobile Edge Computing (MEC) has emerged as a pivotal technology for meeting the increasing demands of mobile applications. However, in highly dynamic MEC environments, load balancing and performance optimization among servers remain challenging. This paper focuses on server load balancing during task offloading in the MEC environment. It constructs a framework for ultra-dense network environments and formulates the problem of computation offloading and resource allocation as a Markov Decision Process (MDP). Subsequently, a learning algorithm based on Proximal Policy Optimization (PPO) is proposed to reduce the load standard deviation, achieve load balancing, and simultaneously minimize the system's total delay and energy consumption, thereby enhancing the efficiency of the MEC system. Simulation results demonstrate that, compared with random offloading strategies, all-offloading strategies, and the Deep Deterministic Policy Gradient algorithm, the proposed algorithm consistently achieves superior load balancing across varying numbers of users and task sizes.
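A reward matching the stated objectives, penalizing load imbalance plus delay-energy cost, can be sketched as below. The weights and the simple additive form are illustrative assumptions, not the paper's exact reward function:

```python
from statistics import pstdev

def reward(server_loads, total_delay, total_energy, w_bal=1.0, w_cost=0.1):
    """Sketch of reward shaping for the MDP described above: penalise the
    population standard deviation of server loads (imbalance) together with
    a weighted delay-energy cost (weights are illustrative)."""
    return -(w_bal * pstdev(server_loads) + w_cost * (total_delay + total_energy))

balanced = reward([5, 5, 5], total_delay=1.0, total_energy=1.0)
skewed = reward([9, 1, 5], total_delay=1.0, total_energy=1.0)
```

A PPO agent maximizing this signal is pushed toward offloading decisions that equalize server loads while keeping the delay-energy term low.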
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317122 (2024) https://doi.org/10.1117/12.3032019
The development of 6G requires communication systems to deliver low latency, high throughput, and stable connectivity. To achieve these goals, the Reconfigurable Intelligent Surface (RIS) has emerged. However, for an RIS with massive elements, the overhead of channel estimation and feedback is not negligible. In this article, we address this problem using an irregular RIS, which irregularly allocates a specified number of reflective elements on the RIS surface, providing additional spatial degrees of freedom to achieve performance gains. To this end, we formulate the joint optimization of topology and precoding matrix and solve it with a tabu-based adaptive large neighborhood search for channel capacity maximization.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317123 (2024) https://doi.org/10.1117/12.3031962
Accurate identification of Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions can enhance the precision of indoor positioning. This paper proposes a method for identifying LOS and NLOS channel states in millimeter-wave indoor wireless positioning based on machine learning. In this approach, we introduce angular and frequency domain features for the first time and combine them with traditional channel characteristics to improve the accuracy of millimeter-wave indoor LOS/NLOS scene classification. The method utilizes an artificial neural network to analyze five distinct channel indicators extracted from the spatial, temporal, and frequency domains: the angular difference of the strongest path, maximum received power, average excess delay, root mean square delay spread, and the kurtosis of the frequency domain transfer function. Simulation results show that this method achieves an accuracy rate of 97.58%.
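Assembling the five-dimensional feature vector named above can be sketched from first principles, using the standard power-weighted definitions of mean excess delay and RMS delay spread and the usual kurtosis formula. The input names are illustrative; the paper feeds such a vector to an artificial neural network:

```python
import math

def channel_features(powers_dbm, delays_ns, strongest_angle, boresight, freq_mags):
    """Sketch of the five-dimensional LOS/NLOS feature vector (illustrative)."""
    lin = [10 ** (p / 10) for p in powers_dbm]        # per-path linear power
    total = sum(lin)
    mean_delay = sum(d * p for d, p in zip(delays_ns, lin)) / total
    rms_spread = math.sqrt(
        sum((d - mean_delay) ** 2 * p for d, p in zip(delays_ns, lin)) / total)
    mu = sum(freq_mags) / len(freq_mags)              # kurtosis of |H(f)|
    var = sum((m - mu) ** 2 for m in freq_mags) / len(freq_mags)
    kurt = (sum((m - mu) ** 4 for m in freq_mags) / len(freq_mags)) / var ** 2 if var else 0.0
    return [abs(strongest_angle - boresight),         # angular difference of strongest path
            max(powers_dbm),                          # maximum received power
            mean_delay,                               # average excess delay
            rms_spread,                               # RMS delay spread
            kurt]                                     # frequency-domain kurtosis

feats = channel_features([-60, -70], [10.0, 40.0], strongest_angle=12.0,
                         boresight=10.0, freq_mags=[1.0, 0.8, 1.2, 0.9])
```

Intuitively, LOS channels tend to show a small angular difference, high peak power, and low delay spread, which is what lets a classifier separate the two conditions.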
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317124 (2024) https://doi.org/10.1117/12.3032110
With the rapid popularity of social media and the Internet, network security issues are becoming increasingly prominent. More and more people are accustomed to expressing their emotions and opinions online, and the expression of netizens' emotions is becoming increasingly diverse, making accurate analysis of those emotions particularly important. Traditional emotion recognition methods are mainly based on text analysis, but with the diversification of network media, text analysis alone can no longer meet practical needs. Continuously exploring the application of multimodal deep learning to netizen emotion recognition has therefore become an inevitable choice for public security organs. This paper explores the application of multimodal deep learning to netizen emotion recognition. Using multimodal datasets of text and images, this study constructs BERT and fine-tuned VGG-16 models to extract emotional features from the text and image modalities, respectively. By introducing a multi-head attention mechanism, the two modalities are combined into a fusion model, and we explore how to combine them to improve classification performance. The final accuracy of the text modality is 0.70, the accuracy of the image modality is 0.58, and the accuracy of the multimodal fusion model is 0.73, which is 0.03 and 0.15 higher than the text and image modalities, respectively, demonstrating the effectiveness of the multimodal fusion model. The approach can provide new ideas and methods for the analysis and early-warning work of public security organs, as well as reference and inspiration for research in other fields.
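The intuition for why fusion can beat either modality alone can be shown with a greatly simplified sketch. The paper fuses with multi-head attention over learned features; here a fixed weighted sum of per-class scores (toy numbers, hypothetical weights) merely illustrates how combining modalities can override a unimodal mistake:

```python
def fuse_scores(text_scores, image_scores, w_text=0.6, w_image=0.4):
    """Toy late-fusion sketch: weighted sum of per-class scores from two
    modalities (the paper's actual fusion uses multi-head attention)."""
    return [w_text * t + w_image * i for t, i in zip(text_scores, image_scores)]

# Scores over (negative, neutral, positive) -- toy numbers
text = [0.1, 0.3, 0.6]        # text leans positive
image = [0.5, 0.3, 0.2]       # image leans negative
fused = fuse_scores(text, image)
label = max(range(3), key=lambda c: fused[c])   # fused verdict: positive
```

Even this crude combination lets the stronger modality's evidence dominate while the weaker modality still moderates the final score.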
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317125 (2024) https://doi.org/10.1117/12.3032025
In today's era of rapid development in information technology, short-text data has surged on various social networking platforms. Quickly and accurately analyzing people's emotional tendencies from these vast and complex data is a highly challenging task in the field of short-text data analysis. This paper proposes a short-text sentiment analysis framework that integrates a sentiment lexicon and graph convolutional neural networks (GCN). The framework utilizes the sentiment lexicon to enhance sentiment recognition and employs GCN to process complex data structures, learning the emotional features of short texts and ultimately achieving short-text sentiment classification. To verify the effectiveness of the model, we conducted validation on public datasets. The experimental results show that this model significantly improves classification accuracy and recall compared to traditional single models.
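The lexicon side of such a framework can be sketched as a polarity lookup over tokens. The tiny lexicon below is a toy stand-in (real lexicons score thousands of entries), and how the resulting prior is combined with the GCN features is the paper's contribution, not shown here:

```python
# Toy polarity lexicon -- real sentiment lexicons carry thousands of scored entries.
LEXICON = {"great": 1.0, "love": 0.8, "fine": 0.3, "bad": -0.7, "terrible": -1.0}

def lexicon_polarity(text):
    """Sketch of lexicon-based scoring: sum the polarities of known tokens
    to get a sentiment prior for a short text (unknown tokens score 0)."""
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

pos = lexicon_polarity("Great phone love it")   # 1.0 + 0.8 from known tokens
```

Such a score gives the classifier an explicit, interpretable sentiment signal alongside the learned graph features.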
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317126 (2024) https://doi.org/10.1117/12.3031934
One of the primary challenges in integrating large-scale data sources is entity resolution, which involves linking records that refer to the same entity. In recent years, deep learning has emerged as a proposed solution for entity resolution. However, insufficient feature extraction and inadequate feature integration during the entity resolution process have led to sub-optimal results. In this paper, Multi-Channel BERT for Entity Resolution (MCBER) is proposed, a method that first translates the target data into different languages and uses data augmentation to expand the labeled data. These data are then fed into a multi-channel BERT model for feature extraction, followed by deeper feature extraction using LSTM. Finally, abstract features are induced from the hidden layers. Our method is compared with state-of-the-art entity resolution methods on publicly available datasets, and the experimental results demonstrate that our approach achieves higher F1 scores and exhibits good stability.
Ying Ling, Fuchuan Tang, Xin Li, Dongmei Bin, Chunyan Yang
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317127 (2024) https://doi.org/10.1117/12.3031979
This paper proposes a theoretical model of a highly adversarial botnet from the attacker's point of view. The model is based on a terminal-aware strategy that improves the network's anti-analysis, anti-pollution, and anti-infiltration capabilities, and it further enhances the network's robustness and resistance to destruction through a self-organization and reconstruction mechanism. Discussing possible defense strategies and proposing effective defense measures ahead of attackers is of great practical significance for this kind of potential new highly adversarial botnet.
Proceedings Volume Third International Conference on Algorithms, Microchips, and Network Applications (AMNA 2024), 1317128 (2024) https://doi.org/10.1117/12.3031986
Recent advancements in graph neural networks (GNNs) have prompted diverse research endeavors focused on utilizing GNNs for anomaly detection. The fundamental concept is to harness the expressive capabilities of GNNs to acquire meaningful node representations that distinguish anomalous from normal nodes in the embedding space. However, prior methods have often employed simple readout modules (such as sum, mean, or max functions) for subgraph aggregation, failing to fully exploit subgraph information. In response to this limitation, we propose an anomaly detection algorithm, Graph Contrastive Learning Network with Adaptive Readouts (GNAR), tailored specifically for Graph Anomaly Detection (GAD) tasks. Through extensive experiments on three well-known public datasets, we consistently observe that GNAR outperforms baseline methods.