We present the first image sensor that, equipped with a 64×64 SPAD array and a 4-layer distributed artificial neural network (ANN), can be "reprogrammed" at run-time (by loading its weights) to perform any required task without redesigning the complete hardware. SPAD sensors are typically designed for a single type of application, because multi-purpose systems must face the performance trade-offs imposed by each of the targeted applications. The next logical step is to design an application-independent image sensor. Leveraging the well-known high-level processing capabilities of ANNs, the system was designed to deliver qualitative information within microseconds, such as "car in front", "the vehicle is off the road", or "sign ahead". The system was synthesized for a high-speed clock of 100 MHz. The total processing time of a single neuron is 320 ns. Since the network has three processing layers, the total time needed to generate its outputs is 960 ns. However, the network is pipelined, so its throughput matches that of a single neuron (one result every 320 ns) with a latency of 960 ns. Such a prompt reaction could be used, for example, for driving assistance. To prove this concept, we show post-layout simulation results for two different applications using the same hardware model: Optical Character Recognition (OCR) and signal digitization. The recognition rate achieved is 96.47 %, dropping only to 85 % at a Signal-to-Noise Ratio (SNR) of 2.
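The latency and throughput figures above follow directly from standard pipeline arithmetic. A minimal sketch, using the values stated in the abstract (100 MHz clock, 320 ns per neuron, three processing layers); the variable names are illustrative, not from the design itself:

```python
# Assumed values taken from the abstract.
CLOCK_MHZ = 100          # synthesis target clock
NEURON_TIME_NS = 320     # processing time of one neuron (one pipeline stage)
PROCESSING_LAYERS = 3    # layers that actually compute

# In a fully pipelined network, latency is the sum of all stage delays,
# while throughput is limited only by the slowest single stage.
latency_ns = NEURON_TIME_NS * PROCESSING_LAYERS   # end-to-end delay
throughput_period_ns = NEURON_TIME_NS             # interval between results

cycles_per_neuron = NEURON_TIME_NS * CLOCK_MHZ // 1000  # clock cycles per stage

print(latency_ns, throughput_period_ns, cycles_per_neuron)
```

With these numbers the pipeline emits one classification every 320 ns (32 clock cycles) while any individual input takes 960 ns to traverse all three layers.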