Open Access Paper
Compressive sensing recovery algorithm based on improve smoothed L0 norm (Withdrawal Notice)
Junjie Feng, Qianmei Zhao
Proceedings Volume 12642, Second International Conference on Electronic Information Engineering, Big Data, and Computer Technology (EIBDCT 2023); 126421P (2023) https://doi.org/10.1117/12.2674725
Event: Second International Conference on Electronic Information Engineering, Big Data and Computer Technology (EIBDCT 2023), 2023, Xishuangbanna, China
Abstract
Publisher's Note: This paper, originally published on 2 May 2023, was withdrawn on 9 May 2023 per organizer and author request.

1. INTRODUCTION

Compressed sensing (CS) theory is a new theory in signal processing that has attracted extensive attention [1-4]. Unlike traditional Nyquist sampling theory, CS theory can, for sparse or compressible signals, reconstruct the original signal with high probability from a small number of observations by solving an optimization problem, which greatly saves processing time. Since its appearance it has become a research hotspot, and current application research on CS covers many fields, such as image processing [5-6], voice and video signal processing, and radar signal processing [7-8].

Compressed sensing theory mainly includes three steps: sparse representation, incoherent observation, and sparse reconstruction. Sparse representation means that the signal is sparse in some transform domain; one looks for a set of basis vectors under which the signal is as sparse as possible. The measurement process projects the original high-dimensional signal onto a low-dimensional space, and this compression must not lose the valid information contained in the original signal. Sparse recovery uses a sparse signal reconstruction algorithm to reconstruct the original signal from the observed values while guaranteeing reconstruction accuracy.

Sparse signal reconstruction is an important step of CS theory. At present, research on reconstruction algorithms can be classified into three categories. The first is the greedy class, which iteratively selects local optimal solutions to gradually approximate the original signal, such as the matching pursuit (MP) and orthogonal matching pursuit (OMP) algorithms. The second is the convex relaxation method, which replaces the L0 norm with the L1 norm, transforming the nonconvex problem into a convex one. The third is the non-convex optimization class, represented by Bayesian statistical methods.
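As an illustration of the greedy class, here is a minimal OMP sketch in Python/NumPy (the function name `omp` and its parameters are illustrative, not taken from the original paper):

```python
import numpy as np

def omp(Phi, y, max_iter, tol=1e-6):
    """Greedy OMP sketch: repeatedly pick the column of Phi most
    correlated with the residual, then least-squares fit on that support."""
    M, N = Phi.shape
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(max_iter):
        # locally optimal choice: column most correlated with the residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit the coefficients by least squares on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat
```

In the noiseless case, once the support contains the true nonzero positions the least-squares re-fit recovers the coefficients exactly, which is why the greedy approach works well at low sparsity ratios.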

Mohimani et al. proposed approximating the minimum L0 norm of a sparse signal with a smoothed sequence of Gaussian functions, using the steepest descent method and the gradient projection principle [9]; solving the minimum L0 norm problem is thereby transformed into an extreme value problem of a smoothed function, known as the smoothed L0 norm (SL0) reconstruction algorithm. At the same accuracy, its reconstruction speed is 2-3 times faster than that of basis pursuit (BP).

The steepest descent method should reduce the cost function at every step; in the actual search process, however, it does not always move along a descent direction. Therefore, in this paper a check of whether the cost function descends is added to each iteration of the above algorithm. If it does not, the midpoint of the previous point and the current point is taken as a correction, ensuring that the search follows the steepest descent direction. This paper discusses the performance of the improved smoothed L0 norm recovery algorithm, using an order-one negative exponential function sequence as the smoothed function to measure sparsity.

The paper is organized as follows. In Section 2, the compressive sensing signal model is introduced. Section 3 presents the proposed recovery algorithm. Simulation results are given in Section 4. Finally, conclusions are drawn in Section 5.

2. COMPRESSIVE SENSING AND SPARSE SIGNAL MODEL

Assume the signal x ∈ RN is K-sparse, that is, the number of non-zero coefficients is K with K ≪ N. Compressing the signal yields a low-dimensional observation y:

y = Φx + n

where Φ ∈ RM×N, M < N, is the observation matrix and n is the noise vector. Since the signal x is a sparse vector, i.e. it has only finitely many non-zero values, its sparse solution can be found under certain restrictions. Transforming the solution of the above system of equations into an optimization problem gives the solution model:

min ‖x‖0  subject to  ‖Φx − y‖2 ≤ ε

where ε is a small constant whose value is related to the noise variance.
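The measurement model above can be instantiated numerically; a short NumPy sketch (the dimensions and noise level are illustrative, chosen to match the simulation section later in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 160, 80, 10                # ambient dim, measurements, sparsity
# Gaussian measurement matrix with unit-norm columns
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)
# K-sparse signal with +/-1 nonzeros on a random support
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.choice([-1.0, 1.0], K)
# noisy low-dimensional observation y = Phi x + n
noise = 0.01 * rng.standard_normal(M)
y = Phi @ x + noise
```

The recovery problem is then to find the K-sparse x from the M-dimensional y, with ε set according to the noise level.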

3. IMPROVED SL0 SPARSE SIGNAL RECOVERY ALGORITHM

The sparse signal recovery algorithm aims to recover a sparse signal by solving for the optimal solution of an underdetermined linear system. The SL0 algorithm adopts the steepest descent method and gradient projection, using a Gaussian smoothed function to gradually approximate the L0 norm: the smoothed function Fσ(x) = N − ∑i exp(−xi²/2σ²) is used to approach the L0 norm in [9]. The algorithm comprises two loops. In the outer loop, the control value σ decreases from large to small, driving the smoothed function toward the L0 norm solution. In the inner loop, for each σ, the gradient of the smoothed function is obtained, and a small step is taken along the negative gradient direction to reach the minimum of the smoothed function. To further improve the approximation performance, this paper uses the order-one negative exponential function Gσ(x) = N − ∑i exp(−|xi|/σ). The difference between Fσ(x) and Gσ(x) is that, as σ varies from infinity to zero, Fσ(x) interpolates between the L2 norm and the L0 norm, whereas Gσ(x) interpolates between the L1 norm and the L0 norm. The smoothed function selected in this paper obtains the optimal solution with higher probability than the Gaussian function. The improved smoothed L0 norm (ISL0) sparse signal recovery algorithm can be expressed as:
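A quick numerical check, assuming the definitions Fσ(x) = N − ∑i exp(−xi²/2σ²) and Gσ(x) = N − ∑i exp(−|xi|/σ) as reconstructed above, shows that both surrogates approach the L0 norm as σ → 0:

```python
import numpy as np

def F_sigma(x, sigma):
    # Gaussian surrogate: N - sum_i exp(-x_i^2 / (2 sigma^2))
    return x.size - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

def G_sigma(x, sigma):
    # order-one negative exponential surrogate: N - sum_i exp(-|x_i| / sigma)
    return x.size - np.sum(np.exp(-np.abs(x) / sigma))

x = np.array([0.0, 0.0, 1.0, -2.0, 0.0, 0.5])   # ||x||_0 = 3
for sigma in (1.0, 0.1, 0.01):
    # both values tend to ||x||_0 = 3 as sigma shrinks
    print(sigma, F_sigma(x, sigma), G_sigma(x, sigma))
```

For a fixed small σ, each zero entry contributes 1 to the sum and each sufficiently large nonzero entry contributes ≈ 0, so both functions count the nonzeros.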

  • Initialization:

    • 1) Let x̂0 be the minimum L2 norm solution of y = Φx, obtained as x̂0 = Φᵀ(ΦΦᵀ)⁻¹y. L0 and L1 are two parameters which control the search step.

    • 2) Choose a suitable decreasing sequence for σ, [σ1,…, σJ].

  • For j = 1,…,J (loop J times):

    • 1) Let σ = σj.

    • 2) Minimize the function Gσ(x) on the feasible set {x : ‖Φx − y‖2 ≤ ε} using the steepest descent algorithm (followed by projection onto the feasible set):

      Initialization: x = x̂j−1.

      • For l = 1,…,L (loop L times):

        • a) Let δ be the gradient of Gσ(x), with δi = sign(xi)·exp(−|xi|/σ)/σ.

        • b) For every element of x, let xi ← xi − μδi, where μ = min(max(|x|)/L0, σ/L1).

        • c) If ‖Φx − y‖2 > ε, project x back onto the feasible set: x ← x − Φᵀ(ΦΦᵀ)⁻¹(Φx − y).

        • d) Compare step: if Gσ(x) has not decreased, replace x with the midpoint of the current point and the previous point.

    • 3) Set x̂j = x.

  • The final answer is x̂ = x̂J.

The steepest descent method should reduce the cost function at every step, but it cannot guarantee a descent direction in the actual solution process. In this paper, a step is therefore added that checks whether the cost function decreases in each iteration. If the direction is not a descent direction, the midpoint of the current point and the previous point is used instead.

4. SIMULATION RESULTS

In order to verify the reconstruction performance of the algorithm, it is tested on the MATLAB platform. The sparse signal model is y = Φx + n, with a measurement matrix Φ of dimension 80 × 160 drawn from a normalized Gaussian distribution. The signal x is sparse, with non-zero element values of ±1.

Mean squared error (MSE) is defined as MSE = ‖x̂ − x‖2²/‖x‖2², where x is the original signal and x̂ is the reconstructed signal.
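A helper for this metric, assuming the usual normalized definition MSE = ‖x̂ − x‖2²/‖x‖2² (the original equation is an image in the source, so this normalization is an assumption):

```python
import numpy as np

def mse(x, x_hat):
    # normalized mean squared error between original and reconstruction
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

x = np.array([0.0, 1.0, 0.0, -1.0])
x_hat = np.array([0.0, 0.9, 0.0, -1.1])
print(mse(x, x_hat))   # approximately 0.01
```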

The ISL0 algorithm of this paper is compared with the OMP algorithm [10], the SL0 algorithm [9], and the Laplace algorithm [11]. Figures 1 and 2 show how the reconstruction probability and mean squared error (MSE) of the several algorithms vary with sparsity. The recovery performance of all algorithms gradually improves as the SNR increases, while the probability of signal reconstruction gradually decreases as the sparsity ratio grows. However, at a sparsity ratio of 0.35, the correct rates of ISL0 are still 78% and 90% at SNRs of 20 dB and 25 dB respectively, much higher than those of the other algorithms. The performance of the ISL0 algorithm is thus competitive with the other algorithms.

Figure 1. Correct position estimation of different algorithms for SNR = 20 dB, 25 dB.

Figure 2. MSE of different algorithms for SNR = 20 dB, 25 dB.

5. CONCLUSIONS

An improved smoothed L0 norm compressive sensing reconstruction algorithm is proposed in this paper. A comparison step is added in each iteration to ensure that the search for the optimal value proceeds along the direction of fastest descent. The simulation results show that, under the same test conditions, the proposed algorithm does not require prior knowledge of the sparsity and outperforms existing sparse signal reconstruction algorithms in terms of reconstruction probability and reconstruction accuracy.

ACKNOWLEDGEMENTS

This work is partially supported by the University-level First-class Undergraduate Professional Construction Point Project of Liupanshui Normal University (No. LPSSYylzy2202), the Youth Science and Technology Talent Growth Project of Guizhou Province Ordinary Colleges and Universities (No. KY2022047), and the Scientific Research and Training Program for College Students of Liupanshui Normal University (No. 2022DK197).

REFERENCES

[1] Lu, J., Verma, N. and Jha, N. K., "Compressed signal processing on Nyquist-sampled signals," IEEE Transactions on Computers, 65, 3293–3303 (2016). https://doi.org/10.1109/TC.2016.2532861

[2] Wimalajeewa, T., Chen, H. and Varshney, P. K., "Performance limits of compressive sensing based signal classification," IEEE Transactions on Signal Processing, 60, 2758–2770 (2012). https://doi.org/10.1109/TSP.2012.2189859

[3] Needell, D. and Vershynin, R., "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, 9, 317–334 (2009). https://doi.org/10.1007/s10208-008-9031-3

[4] Baraniuk, R. G., Cevher, V., Duarte, M. F. and Hegde, C., "Model-based compressive sensing," IEEE Transactions on Information Theory, 56, 1982–2001 (2010). https://doi.org/10.1109/TIT.2010.2040894

[5] Eldar, Y. C., Kuppinger, P. and Bolcskei, H., "Block-sparse signals: uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing, 58, 3042–3054 (2010). https://doi.org/10.1109/TSP.2010.2044837

[6] Van Den Berg, E. and Friedlander, M. P., "Sparse optimization with least-squares constraints," SIAM Journal on Optimization, 21, 1201–1229 (2011). https://doi.org/10.1137/100785028

[7] Zhang, L., Qiao, Z. and Xing, M., "High resolution ISAR imaging by exploiting sparse apertures," IEEE Transactions on Antennas and Propagation, 60, 997–1008 (2012). https://doi.org/10.1109/TAP.2011.2173130

[8] Chen, Y. J., Zhang, Q., Yuan, N. and Luo, Y., "An adaptive ISAR imaging considered task scheduling algorithm for multi-function phased array radars," IEEE Transactions on Signal Processing, 63, 5096–5110 (2015). https://doi.org/10.1109/TSP.2015.2449251

[9] Mohimani, H., Babaie-Zadeh, M. and Jutten, C., "A fast approach for overcomplete sparse decomposition based on smoothed L0 norm," IEEE Transactions on Signal Processing, 57, 289–301 (2009). https://doi.org/10.1109/TSP.2008.2007606

[10] Determe, J. F., Louveaux, J., Jacques, L. and Horlin, F., "Improving the correlation lower bound for simultaneous orthogonal matching pursuit," IEEE Signal Processing Letters, 23(11), 1642–1646 (2016). https://doi.org/10.1109/LSP.2016.2612759

[11] Babacan, S. D., Molina, R. and Katsaggelos, A. K., "Bayesian compressive sensing using Laplace priors," IEEE Transactions on Image Processing, 19(1), 53–63 (2010). https://doi.org/10.1109/TIP.2009.2032894