**5. Discussion**

Multiple attenuation provides a sound foundation for subsequent data processing and interpretation. The traditional parabolic Radon transform cannot separate primaries and multiples without distortion, and the sparse parabolic Radon transform based on *L*<sup>1</sup> regularization also fails to suppress multiples adequately. We therefore introduced a nonconvex *L*<sub>*q*1</sub> − *L*<sub>*q*2</sub> (0 < *q*<sub>1</sub>, *q*<sub>2</sub> < 1) mixed-regularization sparse inversion method. This method clearly improves the accuracy of the inversion, suppresses multiples, and yields cleaner seismic data. Moreover, the reconstructed primaries have higher accuracy, which was verified on experimental data.
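The mixed-regularization inversion described above can be written schematically as follows. The symbols here are assumed for illustration and may differ from the paper's own notation: **d** denotes the recorded data, **L** the parabolic Radon operator, **m**<sub>p</sub> and **m**<sub>m</sub> the Radon-domain coefficients of primaries and multiples, and λ<sub>1</sub>, λ<sub>2</sub> the regularization weights:

```latex
\min_{\mathbf{m}_p,\,\mathbf{m}_m}\;
\left\| \mathbf{d} - \mathbf{L}\left(\mathbf{m}_p + \mathbf{m}_m\right) \right\|_2^2
+ \lambda_1 \left\| \mathbf{m}_p \right\|_{q_1}^{q_1}
+ \lambda_2 \left\| \mathbf{m}_m \right\|_{q_2}^{q_2},
\qquad 0 < q_1,\, q_2 < 1,
```

where ‖**x**‖<sub>*q*</sub><sup>*q*</sup> = Σ<sub>*i*</sub>|*x*<sub>*i*</sub>|<sup>*q*</sup> is the nonconvex *L*<sub>*q*</sub> quasi-norm penalty that promotes sparser solutions than the *L*<sup>1</sup> norm.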

In this paper, *L*<sup>1/2</sup> regularization is used to constrain the primaries and multiples because it possesses sparsity, unbiasedness, and the oracle property [25]. It is also more stable and yields sparser solutions than *L*<sup>1</sup> regularization, while being easier to solve than *L*<sup>0</sup> regularization. In other practical applications, the values of *q*<sub>1</sub> and *q*<sub>2</sub> can be chosen to suit the problem at hand; for example, if the coefficients are not strictly sparse, a moderate to large value of *q* can yield better results [34].
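To illustrate why the *L*<sup>1/2</sup> penalty is less biased than the *L*<sup>1</sup> penalty, the sketch below compares soft thresholding (the proximal operator of the *L*<sup>1</sup> norm) with the closed-form half-thresholding operator of Xu et al. for the *L*<sup>1/2</sup> penalty. The function names and the threshold constant (54<sup>1/3</sup>/4 · λ<sup>2/3</sup>) follow that general formulation; this is not code from the paper itself:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: every surviving value is
    shrunk toward zero by the full amount lam (a constant bias)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def half_threshold(x, lam):
    """Half-thresholding operator for the L1/2 penalty (Xu et al.'s closed form).

    Entries with |x| below (54**(1/3)/4) * lam**(2/3) are set to zero;
    surviving entries are shrunk far less than soft thresholding would
    shrink them, which reflects the weaker bias of the L1/2 penalty.
    """
    x = np.asarray(x, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(x)
    keep = np.abs(x) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(x[keep]) / 3.0) ** (-1.5))
    out[keep] = (2.0 / 3.0) * x[keep] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0)
    )
    return out

coeffs = np.array([5.0, 0.1])          # one strong, one weak coefficient
print(soft_threshold(coeffs, 1.0))     # [4. 0.]   -- strong value biased down by lam
print(half_threshold(coeffs, 1.0))     # ~[4.89 0.] -- strong value nearly unbiased
```

Both operators zero out the weak coefficient, but the half-thresholding operator returns the strong coefficient almost unchanged, whereas soft thresholding always subtracts the full regularization weight; this is the unbiasedness advantage mentioned above, traded against the extra cost of the nonconvex iteration.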

Under complex geological conditions, weak effective signals in the wavefield are easily masked by noise. The parameters therefore need to be set more carefully; otherwise, effective signals may be lost along with the suppressed noise.

Since the proposed method introduces the *L*<sup>1/2</sup> norm, the improved sparsity of the seismic signals comes at the cost of increased computation time. The computational efficiency of this method is therefore no better than that of the *L*<sup>1</sup>-norm sparse inversion method.

In future research, we propose three research directions that can be improved:


4. In addition, with the rapid development of deep learning, fields such as mechanics, medicine, and geophysics [43–45] have been actively combined with deep learning, opening up new possibilities. In future studies, we will therefore also combine multiple suppression with deep learning to address problems such as computational efficiency.
