The cost of processing imagery that presents a large data burden can be reduced by compressive processing, which simulates an image-domain operation using an analogous operation over a given compressed image format. The output of the analogous operation, when decompressed, equals or approximates the output of the corresponding image operation. In previous research, we have shown that compressive processing generally yields sequential computational efficiencies that approach the compression ratio. This effect is due to the presence of fewer data in the compressed image, as well as to the occasional occurrence of an analogous operation whose cost per pixel is less than that of the corresponding image operation. A further advantage of compressive processing arises in parallel computing paradigms, where the required processor count, and hence the degree of parallelism relative to image-domain computation, may be reduced by a factor that approaches the compression ratio. This reduction is especially significant when the compressive operation also requires less computing time than the corresponding image operation; in that case, a reduction in fundamental complexity may occur, which facilitates computation in nearly constant time given sufficient parallelism. In this paper, we discuss fundamental theory that unifies compressive processing at a high level, and we present and evaluate general formulations of the block truncation coding (BTC) and visual pattern image coding (VPIC) compression transforms. Our analyses emphasize the effects of information loss and computational error inherent in BTC and VPIC, as well as computational efficiency. Our algorithms are expressed in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Since image algebra has been implemented on numerous sequential and parallel computers, our algorithms are both feasible and widely portable.
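The defining relation can be sketched as follows in our own notation (not the paper's image-algebra formulation): given an image a, a compression transform T, and an image-domain operation O, the compressed-domain analogue O' is chosen so that

    T^{-1}( O'( T(a) ) ) \approx O(a),

and the sequential efficiency of computing O' over T(a) instead of O over a approaches the compression ratio CR = |a| / |T(a)| whenever the per-datum costs of O' and O are comparable.

As one concrete, hedged illustration of this idea, the Python sketch below applies a simple pointwise operation (adding a scalar) directly to BTC-compressed data. The 4x4 block size, the function names, and the scalar-addition example are assumptions made for illustration only; they are not the general formulations developed in the paper.

    # Minimal sketch of compressive processing over BTC-compressed data.
    # Block size, function names, and the scalar-addition example are
    # illustrative assumptions, not the paper's image-algebra formulation.
    import numpy as np

    BLOCK = 4  # assumed 4x4 BTC blocks

    def btc_compress(image):
        """Encode each BLOCK x BLOCK tile as (low level, high level, bitmap)."""
        h, w = image.shape
        blocks = []
        for i in range(0, h, BLOCK):
            for j in range(0, w, BLOCK):
                tile = image[i:i+BLOCK, j:j+BLOCK].astype(float)
                mu, sigma = tile.mean(), tile.std()
                bitmap = tile >= mu
                q, m = bitmap.sum(), tile.size
                if q in (0, m):          # constant tile: both levels equal the mean
                    lo = hi = mu
                else:
                    lo = mu - sigma * np.sqrt(q / (m - q))
                    hi = mu + sigma * np.sqrt((m - q) / q)
                blocks.append((lo, hi, bitmap))
        return blocks, image.shape

    def btc_decompress(code):
        blocks, (h, w) = code
        out = np.empty((h, w))
        k = 0
        for i in range(0, h, BLOCK):
            for j in range(0, w, BLOCK):
                lo, hi, bitmap = blocks[k]; k += 1
                out[i:i+BLOCK, j:j+BLOCK] = np.where(bitmap, hi, lo)
        return out

    def add_scalar_compressed(code, c):
        """Analogous operation: image + c acts only on the two levels per block,
        i.e., 2 additions per block instead of BLOCK*BLOCK additions."""
        blocks, shape = code
        return [(lo + c, hi + c, bitmap) for lo, hi, bitmap in blocks], shape

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(16, 16)).astype(float)
        code = btc_compress(img)
        # The compressed-domain result, once decompressed, matches the
        # image-domain operation applied to the decompressed image.
        lhs = btc_decompress(add_scalar_compressed(code, 10.0))
        rhs = btc_decompress(code) + 10.0
        assert np.allclose(lhs, rhs)

In this toy setting the analogous operation touches two levels per 16-pixel block, so its per-block work is reduced by roughly the same factor by which BTC reduces the data, consistent with the efficiency claim above.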