Real-time image processing is a key area of focus, but it is computationally intensive. Neural networks effectively address classification tasks, but they are not always a viable option, particularly in environments where high power consumption or computational requirements are limiting factors. Hardware devices such as Field-Programmable Gate Arrays (FPGAs) offer significant parallelization capabilities that can be fully exploited when the implemented circuit is composed solely of logic gates. FPGAs are also attractive alternatives to traditional GPU-based implementations in terms of power consumption and reconfiguration capabilities, and they can serve as a demonstration platform to validate a hardware design that can later be manufactured as a final Application-Specific Integrated Circuit (ASIC). This paper introduces a practical FPGA-based demonstration platform that showcases the capabilities of logic neural networks, a type of neural network constructed exclusively with logic gates.
By harnessing FPGA parallelization and logic gates, we have achieved a balance between computational power and real-time performance. This approach ensures that image classification occurs at speeds on the order of nanoseconds. This ultra-fast processing is well-suited for real-time image analysis applications across various domains. Industries that rely on quality control, such as manufacturing, will benefit from rapid and precise assessments. In the field of medical image processing, where quick diagnoses are crucial, this technology promises transformative advancements. The demonstration platform developed serves as a proof of concept for logic neural networks, offering a solution to the challenge of real-time image processing and representing the first step towards the implementation of future architectures of logic networks in hardware.
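To make the idea of a network built exclusively from logic gates concrete, the following is a minimal sketch of how such a network can be evaluated in software before being mapped to FPGA lookup tables. The gate set, wiring, and parity example are illustrative assumptions, not the authors' actual design; the point is that inference reduces to pure Boolean operations with no arithmetic units.

```python
# Minimal sketch (assumed structure) of a logic neural network: every node is
# a two-input Boolean gate, so the classifier maps directly onto FPGA LUTs.
from typing import Callable, List, Tuple

# A gate is (Boolean function, index of input wire a, index of input wire b).
Gate = Tuple[Callable[[int, int], int], int, int]

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def eval_layer(bits: List[int], layer: List[Gate]) -> List[int]:
    """Evaluate one layer: each gate reads two wires from the previous layer."""
    return [fn(bits[i], bits[j]) for fn, i, j in layer]

def eval_network(bits: List[int], layers: List[List[Gate]]) -> List[int]:
    """Feed the input bit vector through every layer of gates in sequence."""
    for layer in layers:
        bits = eval_layer(bits, layer)
    return bits

# Hypothetical 4-input network computing parity with two XOR layers.
net = [
    [(GATES["XOR"], 0, 1), (GATES["XOR"], 2, 3)],
    [(GATES["XOR"], 0, 1)],
]

print(eval_network([1, 0, 1, 1], net))  # odd number of ones -> [1]
```

In hardware, every gate in a layer evaluates simultaneously, which is what makes the nanosecond-scale classification latencies described above attainable.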
In this work, we present a performance study of our preliminary Automatic Parking Space Detection (APSD) system. The purpose of the APSD prototype is to enrich an information system with automatically located parking spaces. It uses images captured from a vehicle to suggest available parking spaces in urban environments. To carry out this performance evaluation, we tested three different platforms: a desktop computer with an NVIDIA RTX 2070 GPU as an upper-bound performance system, and two embedded solutions, an NVIDIA Jetson Xavier NX module and an NVIDIA Jetson TX1 module. We analyze the effect of different modifications to the system, including the use of different state-of-the-art networks on the different architectures, and conduct an ablation study to verify the effect of using lower-resolution images and of optimizing the detection network with TensorRT. The evaluation results demonstrate the effectiveness of the proposed APSD system in meeting the requirement of real-time processing. This study highlights the importance of the choice of neural network architectures used in the system, as well as the limitations of the hardware devices used in the evaluation.
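The resolution ablation described above can be illustrated with a small timing harness. The detector below is a deliberate stand-in (a separable box filter), not the actual APSD network, and the resolutions are illustrative; the sketch only shows the shape of the measurement, averaging latency over repeated runs at full and reduced input sizes.

```python
# Hedged sketch of a resolution-ablation harness: time a stand-in detector
# on full- and reduced-resolution inputs. The model is a placeholder, not
# the APSD detection network.
import time
import numpy as np

def stand_in_detector(img: np.ndarray) -> float:
    """Placeholder detector: integral-image smoothing plus a scalar score."""
    c = np.cumsum(np.cumsum(img, axis=0), axis=1)  # 2-D cumulative sum
    return float(c[::5, ::5].mean())               # coarse subsample + reduce

def time_at_resolution(h: int, w: int, runs: int = 5) -> float:
    """Average wall-clock latency of the detector at a given input size."""
    img = np.random.default_rng(0).random((h, w)).astype(np.float32)
    start = time.perf_counter()
    for _ in range(runs):
        stand_in_detector(img)
    return (time.perf_counter() - start) / runs

full = time_at_resolution(720, 1280)   # illustrative "full" resolution
low = time_at_resolution(360, 640)     # quarter-pixel-count variant
print(f"full-res {full * 1e3:.2f} ms, reduced-res {low * 1e3:.2f} ms")
```

A real evaluation would replace the stand-in with the deployed detection network (optionally after TensorRT optimization) and report accuracy alongside latency, since lowering resolution trades detection quality for speed.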