As neural networks (NNs) become more capable, their computational resource requirements also increase exponentially. Optical systems can provide alternatives with higher parallelizability and lower energy consumption. However, the conventional training method, error backpropagation, is challenging to implement on these analog systems since it requires characterization of the hardware. In contrast, the Forward-Forward Algorithm defines a local loss function for each layer and trains the layers sequentially, without tracking the error gradient across layers. In this study, we experimentally demonstrate the suitability of this approach for optical NNs by using multimode nonlinear propagation inside an optical fiber as a building block of the NN. Compared to the all-digital implementation, the optical NN achieves significantly higher classification accuracy while using the optical system for only one epoch per layer.
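Since the abstract only sketches the layer-local training idea, the following is a minimal sketch of Forward-Forward-style layer-wise training in PyTorch, assuming the standard "goodness" objective from Hinton's original formulation; the optical experiment's actual local loss, data encoding, and fiber-based layer are not reproduced here, and the data below is a placeholder.

```python
import torch
import torch.nn as nn

def goodness(h):
    # Goodness of a layer's activations: sum of squared activities per sample.
    return (h ** 2).sum(dim=1)

def train_layer(layer, x_pos, x_neg, epochs=1, lr=1e-3, theta=2.0):
    """Train a single layer with a local loss; no gradients flow between layers."""
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(epochs):
        g_pos = goodness(torch.relu(layer(x_pos)))
        g_neg = goodness(torch.relu(layer(x_neg)))
        # Push positive-sample goodness above the threshold theta and
        # negative-sample goodness below it (softplus margin loss).
        loss = torch.log1p(torch.exp(torch.cat([theta - g_pos, g_neg - theta]))).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Detach outputs so the next layer is trained on fixed inputs (sequential training).
    with torch.no_grad():
        return torch.relu(layer(x_pos)), torch.relu(layer(x_neg))

# Layers are trained one after another, each against its own local objective.
layers = [nn.Linear(784, 256), nn.Linear(256, 256)]
x_pos, x_neg = torch.randn(64, 784), torch.randn(64, 784)  # placeholder positive/negative data
for layer in layers:
    x_pos, x_neg = train_layer(layer, x_pos, x_neg)
```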
In video codecs, CNN-based models have shown huge promise in two related tasks: in-loop restoration and frame super-resolution. In our previous work, we presented a framework that uses a common CNN architecture with downloadable model parameters for both of these tasks, along with a preliminary performance study in which encoder-side selection of the scale factor was left as future work. The advantage of a common architecture with switchable parameters is that a single hardware inference engine can be used in all cases of same-resolution and super-resolution restoration, thereby limiting implementation costs. In this paper, we fully integrate this framework into the under-development AV2 video codec from the Alliance for Open Media (AOM). We also implement an algorithm for encoder-side selection of the super-resolution scale factor. With this implementation, we achieve a combined compression improvement of up to −3.5% (AI) and −3.9% (RA) in BD-rate PSNR-Y and up to −7.8% (AI) and −7.9% (RA) in BD-rate VMAF, with an inference cost as low as 1500 MACs/pixel.
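As an illustration of what encoder-side scale-factor selection can look like, here is a hypothetical rate-distortion selection loop in Python. The encode_decode stand-in, candidate scales, and lambda weight are all assumptions made only to show the selection logic; they are not the AV2/AOM implementation or the paper's algorithm.

```python
import numpy as np

def encode_decode(frame, scale):
    """Toy stand-in: crude downscaling, a pretend bit cost, and nearest-neighbor upscaling."""
    h, w = frame.shape
    step = int(round(1 / scale))
    low = frame[::step, ::step]                          # crude downscaling
    bits = low.size * 8                                  # pretend bit cost R
    recon = np.kron(low, np.ones((step, step)))[:h, :w]  # crude upscaling ("restoration")
    return bits, recon

def select_scale_factor(frame, candidate_scales=(1.0, 0.5), lam=0.01):
    """Pick the scale factor minimizing the RD cost J = D + lambda * R."""
    best_scale, best_cost = None, np.inf
    for s in candidate_scales:
        bits, restored = encode_decode(frame, s)
        dist = np.mean((frame - restored) ** 2)          # distortion D
        cost = dist + lam * bits
        if cost < best_cost:
            best_scale, best_cost = s, cost
    return best_scale

frame = np.random.rand(64, 64)  # placeholder frame
print(select_scale_factor(frame))
```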
Today's video transcoding pipelines choose transcoding parameters based on rate-distortion curves, which mainly focus on the relative quality difference between the original and transcoded videos. By investigating the recently released YouTube UGC dataset, we found that human subjects were more tolerant of changes in low quality videos than in high quality ones, which suggests that current transcoding frameworks can be further optimized by considering the perceptual quality of the input. In this paper, an efficient machine learning metric is proposed to detect low quality inputs, whose bitrate can be further reduced without sacrificing perceptual quality. To evaluate the impact of our method on perceptual quality, we conducted a crowd-sourced subjective experiment and provide a methodology to evaluate statistical significance among different treatments. The results show that the proposed quality-guided transcoding framework is able to reduce the average bitrate by up to 5% with insignificant perceptual quality degradation.
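To make the quality-guided decision concrete, the sketch below shows one way a lightweight classifier could gate a bitrate reduction for low quality inputs. The features, LogisticRegression model, toy training data, and 5% reduction factor are illustrative assumptions, not the paper's actual metric or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: per-video features (e.g., blur / noise / blockiness statistics)
# with labels 1 = low perceptual quality, 0 = high perceptual quality.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

def target_bitrate(features, nominal_kbps, reduction=0.95):
    """Reduce the target bitrate only when the input is predicted to be low quality."""
    is_low_quality = clf.predict(features.reshape(1, -1))[0] == 1
    return nominal_kbps * reduction if is_low_quality else nominal_kbps

print(target_bitrate(rng.normal(size=3), nominal_kbps=2000))
```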