Computer-Vision-Oriented Adaptive Sampling in Compressive Sensing
Abstract
1. Introduction
- Adaptive sampling for enhanced sensor data utilization: By adopting an adaptive sampling strategy driven by saliency detection, our proposal improves the quality and relevance of the data collected by sensors. It preserves the essential information that downstream CV tasks depend on, leading to more accurate and efficient processing in those tasks.
- Wide versatility for different CV tasks: To comprehensively evaluate the effectiveness of our proposal, we extend its application from image classification to the more intricate task of object detection. The experimental results substantiate the superiority of our proposal over existing adaptive sampling techniques, demonstrating its versatility and broad applicability.
- Improvement of CV task accuracy at low sampling rates: Unlike traditional CS techniques that prioritize visual quality, our technique improves the accuracy of CV tasks even at low sampling rates, making it a robust solution for sensor data analysis in real-world scenarios. This offers a promising way to maintain accuracy while reducing the amount of sampled data.
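The contributions above all rest on one mechanism: blocks judged salient are compressively sampled at a high CS rate, and the remaining blocks at a low one. The following sketch illustrates that idea only; the block size, saliency threshold, rate pair, and Gaussian sensing matrix are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def adaptive_block_cs(image, saliency, block=16, r_s=0.5, r_n=0.05, thresh=0.5, seed=0):
    """Compressively sample each block at a saliency-dependent rate.

    image, saliency: 2-D float arrays of the same shape, with H and W
    divisible by `block`. Returns (row, col, rate, measurements) per block.
    """
    rng = np.random.default_rng(seed)
    n = block * block
    samples = []
    H, W = image.shape
    for i in range(0, H, block):
        for j in range(0, W, block):
            salient = saliency[i:i + block, j:j + block].mean() >= thresh
            rate = r_s if salient else r_n            # high rate only where it matters
            m = max(1, int(round(rate * n)))          # measurements kept for this block
            phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
            x = image[i:i + block, j:j + block].reshape(n)
            samples.append((i, j, rate, phi @ x))
    return samples
```

On a 32 × 32 image whose left half is marked salient, the two left blocks keep 128 of 256 measurements each and the two right blocks only 13, mirroring the saliency/non-saliency rate split reported in the result tables below.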
2. Related Work
3. Proposed CV-Oriented Adaptive Sampling
3.1. Saliency Detection
3.2. Adaptive Sampling
4. Experiments
4.1. Classification
4.1.1. Setup
4.1.2. Results
4.2. Object Detection
4.2.1. Setup
4.2.2. Results
5. Conclusions and Future Remarks
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Layer | Input | Operator | Output Channel | Stride |
---|---|---|---|---|
1 | H × W × 3 | conv2d | 16 | 2 |
2 | (H × W)/4 × 16 | bottleneck, 3 × 3 | 16 | 1 |
3 | (H × W)/4 × 16 | bottleneck, 3 × 3 | 24 | 2 |
4 | (H × W)/16 × 24 | bottleneck, 3 × 3 | 24 | 1 |
5 | (H × W)/16 × 24 | bottleneck, 5 × 5 | 40 | 2 |
6 | (H × W)/64 × 40 | bottleneck, 5 × 5 | 40 | 1 |
7 | (H × W)/64 × 40 | bottleneck, 5 × 5 | 40 | 1 |
8 | (H × W)/64 × 40 | bottleneck, 3 × 3 | 80 | 2 |
9 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
10 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
11 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
12 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 112 | 1 |
13 | (H × W)/256 × 112 | bottleneck, 3 × 3 | 112 | 1 |
14 | (H × W)/256 × 112 | bottleneck, 5 × 5 | 160 | 2 |
15 | (H × W)/1024 × 160 | bottleneck, 5 × 5 | 160 | 1 |
16 | (H × W)/1024 × 160 | bottleneck, 5 × 5 | 160 | 1 |
17 | (H × W)/1024 × 160 | DUpsampling | 1 | 1 |
Output | H × W × 1 | - | - | - |
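As a sanity check on the architecture table, the (H × W)/k input sizes follow directly from the per-layer strides: each stride-2 layer divides the spatial area by 4, and the final DUpsampling layer restores the full H × W resolution. A minimal script reproducing the input-area divisors:

```python
# Strides of the 17 layers in the saliency-detection network above.
strides = [2, 1, 2, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1]

inputs, d = [], 1
for s in strides:
    inputs.append(d)       # area divisor of this layer's input, i.e. (H x W)/d
    d *= s * s             # a stride-2 layer shrinks the H x W area by 4

print(inputs)  # divisors 1, 4, 16, 64, 256, 1024 appear exactly as in the table
```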
| Average CS Rate | Sampling Technique | CS Rate (Saliency) | CS Rate (Non-Saliency) | PSNR [dB] | SSIM | Xception [%] | ResNet [%] | DenseNet [%] | Average [%] | Difference [%] |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | 1.00 | - | - | 86.32 | 80.76 | 76.71 | 81.26 | - |
| 0.21 | BCS | 0.21 | 0.21 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −14.88 |
| | BCS-PCT | 0.21 | 0.21 | 25.58 | 0.81 | 62.42 | 62.17 | 57.97 | 60.85 | −7.73 |
| | BCS-asymmetry | 0.21 | 0.21 | 26.20 | 0.83 | 66.90 | 65.13 | 60.80 | 64.28 | −4.30 |
| | Ours | 0.50 | 0.05 | 26.40 | 0.85 | 71.91 | 69.33 | 64.50 | 68.58 | ±0.00 |
| 0.18 | BCS | 0.18 | 0.18 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −6.20 |
| | BCS-PCT | 0.18 | 0.18 | 25.42 | 0.80 | 61.20 | 61.53 | 57.08 | 59.94 | +0.03 |
| | BCS-asymmetry | 0.18 | 0.18 | 26.03 | 0.83 | 65.91 | 64.55 | 60.11 | 63.52 | +3.62 |
| | Ours | 0.50 | 0.01 | 22.97 | 0.73 | 63.42 | 59.23 | 57.06 | 59.90 | ±0.00 |
| 0.17 | BCS | 0.17 | 0.17 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −14.42 |
| | BCS-PCT | 0.17 | 0.17 | 25.37 | 0.80 | 60.78 | 61.31 | 56.90 | 59.66 | −8.46 |
| | BCS-asymmetry | 0.17 | 0.17 | 25.97 | 0.82 | 65.35 | 64.23 | 59.91 | 63.16 | −4.96 |
| | Ours | 0.40 | 0.05 | 26.27 | 0.85 | 71.16 | 69.05 | 64.16 | 68.12 | ±0.00 |
| 0.15 | BCS | 0.15 | 0.15 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −13.52 |
| | BCS-PCT | 0.15 | 0.15 | 24.45 | 0.76 | 55.87 | 55.32 | 52.45 | 54.55 | −4.70 |
| | BCS-asymmetry | 0.15 | 0.15 | 25.04 | 0.79 | 61.21 | 59.52 | 56.31 | 59.01 | −0.23 |
| | Ours | 0.40 | 0.01 | 22.89 | 0.72 | 62.51 | 58.62 | 56.60 | 59.24 | ±0.00 |
| 0.14 | BCS | 0.14 | 0.14 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −21.11 |
| | BCS-PCT | 0.14 | 0.14 | 24.38 | 0.76 | 55.25 | 54.93 | 52.02 | 54.07 | −12.77 |
| | BCS-asymmetry | 0.14 | 0.14 | 24.97 | 0.79 | 60.62 | 59.02 | 56.01 | 58.55 | −8.29 |
| | Ours | 0.30 | 0.05 | 26.06 | 0.84 | 69.41 | 68.05 | 63.05 | 66.84 | ±0.00 |
| 0.11 | BCS | 0.11 | 0.11 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −11.71 |
| | BCS-PCT | 0.11 | 0.11 | 24.16 | 0.75 | 53.57 | 53.12 | 50.56 | 52.42 | −5.02 |
| | BCS-asymmetry | 0.11 | 0.11 | 24.72 | 0.77 | 58.48 | 57.15 | 54.22 | 56.62 | −0.82 |
| | Ours | 0.30 | 0.01 | 22.77 | 0.71 | 60.05 | 57.16 | 55.11 | 57.44 | ±0.00 |
| 0.10 | BCS | 0.10 | 0.10 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −26.23 |
| | BCS-PCT | 0.10 | 0.10 | 23.11 | 0.70 | 48.37 | 46.11 | 46.78 | 47.09 | −16.43 |
| | BCS-asymmetry | 0.10 | 0.10 | 23.68 | 0.73 | 54.28 | 51.88 | 51.35 | 52.50 | −11.02 |
| | Ours | 0.20 | 0.05 | 25.69 | 0.82 | 64.57 | 65.68 | 60.31 | 63.52 | ±0.00 |
| 0.08 | BCS | 0.08 | 0.08 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −15.80 |
| | BCS-PCT | 0.08 | 0.08 | 22.92 | 0.69 | 46.43 | 43.72 | 44.72 | 44.96 | −8.13 |
| | BCS-asymmetry | 0.08 | 0.08 | 23.46 | 0.72 | 52.10 | 49.37 | 49.31 | 50.26 | −2.83 |
| | Ours | 0.20 | 0.01 | 22.56 | 0.69 | 54.30 | 54.02 | 50.95 | 53.09 | ±0.00 |
| 0.07 | BCS | 0.07 | 0.07 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −20.15 |
| | BCS-PCT | 0.07 | 0.07 | 22.81 | 0.68 | 44.71 | 42.43 | 43.65 | 43.60 | −13.84 |
| | BCS-asymmetry | 0.07 | 0.07 | 23.33 | 0.71 | 50.60 | 48.17 | 48.40 | 49.06 | −8.38 |
| | Ours | 0.10 | 0.05 | 24.99 | 0.79 | 56.90 | 60.07 | 55.33 | 57.43 | ±0.00 |
| 0.04 | BCS | 0.04 | 0.04 | 19.99 | 0.50 | 28.88 | 22.06 | 27.66 | 26.20 | −18.35 |
| | BCS-PCT | 0.04 | 0.04 | 20.58 | 0.54 | 32.63 | 26.75 | 31.47 | 30.28 | −14.26 |
| | BCS-asymmetry | 0.04 | 0.04 | 21.08 | 0.58 | 37.63 | 32.82 | 36.07 | 35.51 | −9.04 |
| | Ours | 0.10 | 0.01 | 22.10 | 0.66 | 44.93 | 45.20 | 43.51 | 44.55 | ±0.00 |
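In the table above, the Difference column is the gap between each technique's three-classifier average and the "Ours" average within the same average-CS-rate block. For the first block (average CS rate 0.21), the values can be reproduced as:

```python
# Per-classifier accuracies [%] (Xception, ResNet, DenseNet) for the 0.21 block.
rows = {
    "BCS":           [54.48, 55.17, 51.45],
    "BCS-PCT":       [62.42, 62.17, 57.97],
    "BCS-asymmetry": [66.90, 65.13, 60.80],
    "Ours":          [71.91, 69.33, 64.50],
}
avg = {k: round(sum(v) / 3, 2) for k, v in rows.items()}
diff = {k: round(avg[k] - avg["Ours"], 2) for k in rows}   # Difference column
```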
| Average CS Rate | Sampling Technique | CS Rate (Saliency) | CS Rate (Non-Saliency) | PSNR [dB] | SSIM | Xception [%] | ResNet [%] | DenseNet [%] | Average [%] | Difference [%] |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | 1.00 | - | - | 88.56 | 87.13 | 68.66 | 81.45 | - |
| 0.22 | BCS | 0.22 | 0.22 | 24.39 | 0.73 | 53.20 | 48.96 | 67.53 | 56.56 | −8.50 |
| | BCS-PCT | 0.22 | 0.22 | 25.20 | 0.77 | 56.06 | 53.86 | 67.53 | 59.15 | −5.91 |
| | BCS-asymmetry | 0.22 | 0.22 | 25.70 | 0.79 | 58.26 | 56.03 | 67.50 | 60.60 | −4.47 |
| | Ours | 0.50 | 0.05 | 26.18 | 0.82 | 64.56 | 62.73 | 67.90 | 65.06 | ±0.00 |
| 0.19 | BCS | 0.19 | 0.19 | 23.87 | 0.70 | 50.56 | 46.46 | 67.53 | 54.85 | −9.49 |
| | BCS-PCT | 0.19 | 0.19 | 24.70 | 0.74 | 54.20 | 51.86 | 67.53 | 57.86 | −6.48 |
| | BCS-asymmetry | 0.19 | 0.19 | 25.17 | 0.76 | 56.66 | 54.53 | 67.66 | 59.62 | −4.73 |
| | Ours | 0.50 | 0.01 | 24.10 | 0.72 | 62.53 | 63.70 | 66.80 | 64.34 | ±0.00 |
| 0.18 | BCS | 0.18 | 0.18 | 23.87 | 0.70 | 50.56 | 46.46 | 67.53 | 54.85 | −9.54 |
| | BCS-PCT | 0.18 | 0.18 | 24.66 | 0.74 | 54.06 | 51.63 | 67.46 | 57.72 | −6.68 |
| | BCS-asymmetry | 0.18 | 0.18 | 25.12 | 0.76 | 56.76 | 54.30 | 67.66 | 59.57 | −4.82 |
| | Ours | 0.40 | 0.05 | 25.98 | 0.81 | 63.66 | 61.66 | 67.86 | 64.39 | ±0.00 |
| 0.15 | BCS | 0.15 | 0.15 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −11.69 |
| | BCS-PCT | 0.15 | 0.15 | 23.99 | 0.70 | 50.86 | 47.76 | 67.16 | 55.26 | −8.40 |
| | BCS-asymmetry | 0.15 | 0.15 | 24.43 | 0.73 | 52.80 | 51.50 | 67.16 | 57.15 | −6.51 |
| | Ours | 0.40 | 0.01 | 23.96 | 0.71 | 61.93 | 62.40 | 66.66 | 63.66 | ±0.00 |
| 0.14 | BCS | 0.14 | 0.14 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −11.02 |
| | BCS-PCT | 0.14 | 0.14 | 23.94 | 0.70 | 50.66 | 47.73 | 67.20 | 55.20 | −7.80 |
| | BCS-asymmetry | 0.14 | 0.14 | 24.38 | 0.72 | 53.03 | 51.06 | 67.20 | 57.10 | −5.90 |
| | Ours | 0.30 | 0.05 | 25.69 | 0.80 | 61.90 | 59.26 | 67.83 | 63.00 | ±0.00 |
| 0.12 | BCS | 0.12 | 0.12 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −10.33 |
| | BCS-PCT | 0.12 | 0.12 | 23.84 | 0.69 | 50.30 | 46.60 | 67.13 | 54.68 | −7.62 |
| | BCS-asymmetry | 0.12 | 0.12 | 24.26 | 0.72 | 52.00 | 50.13 | 67.33 | 56.49 | −5.81 |
| | Ours | 0.30 | 0.01 | 23.76 | 0.70 | 59.70 | 60.60 | 66.60 | 62.30 | ±0.00 |
| 0.11 | BCS | 0.11 | 0.11 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −8.08 |
| | BCS-PCT | 0.11 | 0.11 | 23.79 | 0.69 | 50.20 | 46.23 | 67.30 | 54.58 | −5.47 |
| | BCS-asymmetry | 0.11 | 0.11 | 24.19 | 0.71 | 51.70 | 49.40 | 67.33 | 56.14 | −3.91 |
| | Ours | 0.20 | 0.05 | 25.23 | 0.77 | 57.86 | 54.56 | 67.73 | 60.05 | ±0.00 |
| 0.08 | BCS | 0.08 | 0.08 | 22.40 | 0.59 | 42.56 | 37.90 | 67.06 | 49.17 | −9.81 |
| | BCS-PCT | 0.08 | 0.08 | 22.96 | 0.63 | 46.03 | 43.10 | 67.13 | 52.09 | −6.90 |
| | BCS-asymmetry | 0.08 | 0.08 | 23.35 | 0.66 | 47.50 | 46.46 | 67.43 | 53.80 | −5.19 |
| | Ours | 0.20 | 0.01 | 23.43 | 0.67 | 55.06 | 55.16 | 66.73 | 58.98 | ±0.00 |
| 0.07 | BCS | 0.07 | 0.07 | 22.40 | 0.59 | 42.56 | 37.90 | 67.06 | 49.17 | −6.94 |
| | BCS-PCT | 0.07 | 0.07 | 22.88 | 0.63 | 45.40 | 42.63 | 67.30 | 51.78 | −4.34 |
| | BCS-asymmetry | 0.07 | 0.07 | 23.26 | 0.65 | 47.53 | 45.66 | 67.30 | 53.50 | −2.62 |
| | Ours | 0.10 | 0.05 | 24.50 | 0.74 | 51.96 | 48.56 | 67.83 | 56.12 | ±0.00 |
| 0.04 | BCS | 0.04 | 0.04 | 21.03 | 0.48 | 33.50 | 26.30 | 66.53 | 42.11 | −10.73 |
| | BCS-PCT | 0.04 | 0.04 | 21.49 | 0.52 | 37.66 | 30.60 | 66.90 | 45.05 | −7.79 |
| | BCS-asymmetry | 0.04 | 0.04 | 21.86 | 0.55 | 40.53 | 34.90 | 67.16 | 47.53 | −5.31 |
| | Ours | 0.10 | 0.01 | 22.88 | 0.64 | 46.93 | 44.66 | 66.93 | 52.84 | ±0.00 |
| Average CS Rate | Sampling Technique | CS Rate (Saliency) | CS Rate (Non-Saliency) | PSNR [dB] | SSIM | Xception [%] | ResNet [%] | DenseNet [%] | Average [%] | Difference [%] |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | 1.00 | - | - | 90.03 | 85.40 | 84.86 | 86.76 | - |
| 0.20 | BCS | 0.20 | 0.20 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | −0.88 |
| | BCS-PCT | 0.20 | 0.20 | 30.79 | 0.88 | 88.98 | 83.94 | 84.51 | 85.81 | −0.31 |
| | BCS-asymmetry | 0.20 | 0.20 | 31.37 | 0.89 | 89.01 | 84.23 | 84.60 | 85.95 | −0.17 |
| | Ours | 0.50 | 0.05 | 33.50 | 0.91 | 89.32 | 84.49 | 84.54 | 86.12 | ±0.00 |
| 0.18 | BCS | 0.18 | 0.18 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | +1.74 |
| | BCS-PCT | 0.18 | 0.18 | 30.69 | 0.87 | 89.01 | 83.86 | 84.51 | 85.79 | +2.29 |
| | BCS-asymmetry | 0.18 | 0.18 | 31.27 | 0.88 | 89.12 | 84.12 | 84.66 | 85.97 | +2.47 |
| | Ours | 0.50 | 0.01 | 29.92 | 0.84 | 88.50 | 80.76 | 81.24 | 83.50 | ±0.00 |
| 0.17 | BCS | 0.17 | 0.17 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | −0.92 |
| | BCS-PCT | 0.17 | 0.17 | 30.63 | 0.87 | 88.98 | 83.86 | 84.54 | 85.79 | −0.36 |
| | BCS-asymmetry | 0.17 | 0.17 | 31.21 | 0.88 | 89.07 | 84.09 | 84.71 | 85.96 | −0.20 |
| | Ours | 0.40 | 0.05 | 33.21 | 0.91 | 89.44 | 84.51 | 84.51 | 86.15 | ±0.00 |
| 0.14 | BCS | 0.14 | 0.14 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | +0.18 |
| | BCS-PCT | 0.14 | 0.14 | 29.58 | 0.85 | 88.10 | 82.66 | 83.97 | 84.91 | +1.37 |
| | BCS-asymmetry | 0.14 | 0.14 | 30.15 | 0.86 | 88.21 | 83.29 | 84.09 | 85.20 | +1.66 |
| | Ours | 0.40 | 0.01 | 29.76 | 0.84 | 88.44 | 80.76 | 81.41 | 83.54 | ±0.00 |
| 0.13 | BCS | 0.13 | 0.13 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | −2.38 |
| | BCS-PCT | 0.13 | 0.13 | 29.51 | 0.85 | 87.93 | 82.81 | 84.03 | 84.92 | −1.18 |
| | BCS-asymmetry | 0.13 | 0.13 | 30.07 | 0.86 | 88.21 | 83.15 | 84.12 | 85.16 | −0.94 |
| | Ours | 0.30 | 0.05 | 32.80 | 0.90 | 89.35 | 84.46 | 84.49 | 86.10 | ±0.00 |
| 0.11 | BCS | 0.11 | 0.11 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | +0.19 |
| | BCS-PCT | 0.11 | 0.11 | 29.35 | 0.84 | 87.84 | 82.52 | 83.94 | 84.77 | +1.24 |
| | BCS-asymmetry | 0.11 | 0.11 | 29.91 | 0.85 | 88.19 | 83.01 | 84.12 | 85.11 | +1.58 |
| | Ours | 0.30 | 0.01 | 29.53 | 0.83 | 88.44 | 80.67 | 81.47 | 83.53 | ±0.00 |
| 0.10 | BCS | 0.10 | 0.10 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −5.99 |
| | BCS-PCT | 0.10 | 0.10 | 28.17 | 0.81 | 86.39 | 79.68 | 82.15 | 82.74 | −3.34 |
| | BCS-asymmetry | 0.10 | 0.10 | 28.72 | 0.82 | 86.71 | 80.36 | 82.69 | 83.25 | −2.83 |
| | Ours | 0.20 | 0.05 | 32.08 | 0.89 | 89.27 | 84.46 | 84.51 | 86.08 | ±0.00 |
| 0.07 | BCS | 0.07 | 0.07 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −3.28 |
| | BCS-PCT | 0.07 | 0.07 | 27.85 | 0.80 | 86.14 | 78.45 | 81.78 | 82.12 | −1.25 |
| | BCS-asymmetry | 0.07 | 0.07 | 28.38 | 0.81 | 86.62 | 79.93 | 82.15 | 82.90 | −0.47 |
| | Ours | 0.20 | 0.01 | 29.11 | 0.82 | 88.21 | 80.53 | 81.38 | 83.37 | ±0.00 |
| 0.07 | BCS | 0.07 | 0.07 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −5.92 |
| | BCS-PCT | 0.07 | 0.07 | 27.85 | 0.80 | 86.14 | 78.45 | 81.78 | 82.12 | −3.89 |
| | BCS-asymmetry | 0.07 | 0.07 | 28.38 | 0.81 | 86.62 | 79.93 | 82.15 | 82.90 | −3.11 |
| | Ours | 0.10 | 0.05 | 30.76 | 0.87 | 89.21 | 84.23 | 84.60 | 86.01 | ±0.00 |
| 0.04 | BCS | 0.04 | 0.04 | 24.75 | 0.68 | 74.61 | 59.27 | 59.33 | 64.40 | −18.25 |
| | BCS-PCT | 0.04 | 0.04 | 25.53 | 0.72 | 78.77 | 65.59 | 68.41 | 70.92 | −11.73 |
| | BCS-asymmetry | 0.04 | 0.04 | 26.05 | 0.74 | 80.73 | 68.83 | 71.99 | 73.85 | −8.80 |
| | Ours | 0.10 | 0.01 | 28.26 | 0.80 | 87.47 | 79.53 | 80.96 | 82.65 | ±0.00 |
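Assuming the average CS rate is the area-weighted mean of the saliency and non-saliency rates (an assumption; the exact weighting is not stated in this excerpt), each row's implied salient-block fraction can be recovered by inverting that mean. For the 0.20 row above, with rates 0.50 and 0.05:

```python
def average_rate(p_salient, r_s, r_n):
    # Area-weighted mean of the per-region sampling rates.
    return p_salient * r_s + (1.0 - p_salient) * r_n

def implied_salient_fraction(avg, r_s, r_n):
    # Invert the weighted mean to recover the salient-block area fraction.
    return (avg - r_n) / (r_s - r_n)

p = implied_salient_fraction(0.20, 0.50, 0.05)   # roughly one third of the blocks
```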
| Dataset | Block Size | Average CS Rate | Average Accuracy of BCS [%] | Average Accuracy of Ours [%] | Improvement in Accuracy of Ours Compared with BCS [%] |
|---|---|---|---|---|---|
| STL10 (96 × 96) | 32 × 32 | 0.15 | 44.75 | 54.44 | +9.69 |
| | 16 × 16 | 0.14 | 43.64 | 56.16 | +12.52 |
| | 8 × 8 | 0.13 | 43.63 | 59.87 | +16.24 |
| Intel (150 × 150) | 32 × 32 | 0.14 | 49.61 | 59.21 | +9.60 |
| | 16 × 16 | 0.14 | 50.68 | 60.41 | +9.72 |
| | 8 × 8 | 0.13 | 50.89 | 61.08 | +10.18 |
| Imagenette (512 × 512) | 32 × 32 | 0.13 | 79.79 | 83.45 | +3.66 |
| | 16 × 16 | 0.13 | 81.20 | 84.26 | +3.07 |
| | 8 × 8 | 0.12 | 81.15 | 84.71 | +3.55 |
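The improvement column is simply the "Ours" average minus the BCS average; for STL10, the gain grows as the block size shrinks, since a finer grid lets the sampling-rate map follow the saliency map more tightly. The STL10 rows can be reproduced as:

```python
# STL10 rows of the block-size study: (block size, BCS avg accuracy, Ours avg accuracy).
stl10 = [(32, 44.75, 54.44), (16, 43.64, 56.16), (8, 43.63, 59.87)]
gains = [round(ours - bcs, 2) for _, bcs, ours in stl10]   # improvement column
```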
| Average CS Rate | Sampling Technique | CS Rate (Saliency) | CS Rate (Non-Saliency) | PSNR [dB] | SSIM | Precision | Recall | F1-Score | mAP |
|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | 1.00 | - | - | 0.752 | 0.879 | 0.796 | 0.804 |
| 0.20 | BCS | 0.20 | 0.20 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.20 | 0.20 | 31.02 | 0.91 | 0.745 | 0.861 | 0.792 | 0.784 |
| | BCS-asymmetry | 0.20 | 0.20 | 31.32 | 0.91 | 0.746 | 0.860 | 0.792 | 0.784 |
| | Ours | 0.50 | 0.05 | 31.51 | 0.91 | 0.737 | 0.871 | 0.790 | 0.794 |
| 0.17 | BCS | 0.17 | 0.17 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.17 | 0.17 | 30.93 | 0.91 | 0.746 | 0.861 | 0.793 | 0.784 |
| | BCS-asymmetry | 0.17 | 0.17 | 31.26 | 0.91 | 0.746 | 0.860 | 0.792 | 0.783 |
| | Ours | 0.50 | 0.01 | 27.60 | 0.83 | 0.723 | 0.834 | 0.767 | 0.751 |
| 0.16 | BCS | 0.16 | 0.16 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.16 | 0.16 | 30.89 | 0.91 | 0.743 | 0.859 | 0.790 | 0.781 |
| | BCS-asymmetry | 0.16 | 0.16 | 31.24 | 0.91 | 0.744 | 0.859 | 0.790 | 0.781 |
| | Ours | 0.40 | 0.05 | 31.26 | 0.91 | 0.736 | 0.870 | 0.790 | 0.796 |
| 0.14 | BCS | 0.14 | 0.14 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.14 | 0.14 | 29.85 | 0.88 | 0.731 | 0.843 | 0.777 | 0.761 |
| | BCS-asymmetry | 0.14 | 0.14 | 30.22 | 0.89 | 0.737 | 0.846 | 0.781 | 0.765 |
| | Ours | 0.40 | 0.01 | 27.47 | 0.83 | 0.727 | 0.832 | 0.769 | 0.753 |
| 0.13 | BCS | 0.13 | 0.13 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.13 | 0.13 | 29.80 | 0.88 | 0.731 | 0.842 | 0.776 | 0.761 |
| | BCS-asymmetry | 0.13 | 0.13 | 30.19 | 0.89 | 0.736 | 0.845 | 0.780 | 0.763 |
| | Ours | 0.30 | 0.05 | 30.90 | 0.91 | 0.737 | 0.874 | 0.792 | 0.796 |
| 0.11 | BCS | 0.11 | 0.11 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.11 | 0.11 | 29.67 | 0.88 | 0.731 | 0.841 | 0.775 | 0.760 |
| | BCS-asymmetry | 0.11 | 0.11 | 30.11 | 0.89 | 0.736 | 0.846 | 0.780 | 0.765 |
| | Ours | 0.30 | 0.01 | 27.27 | 0.82 | 0.724 | 0.830 | 0.767 | 0.750 |
| 0.10 | BCS | 0.10 | 0.10 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.10 | 0.10 | 28.36 | 0.85 | 0.733 | 0.788 | 0.752 | 0.715 |
| | BCS-asymmetry | 0.10 | 0.10 | 28.78 | 0.86 | 0.731 | 0.794 | 0.755 | 0.719 |
| | Ours | 0.20 | 0.05 | 30.22 | 0.90 | 0.741 | 0.871 | 0.793 | 0.792 |
| 0.07 | BCS | 0.07 | 0.07 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.07 | 0.07 | 28.07 | 0.84 | 0.730 | 0.784 | 0.748 | 0.705 |
| | BCS-asymmetry | 0.07 | 0.07 | 28.59 | 0.85 | 0.733 | 0.786 | 0.751 | 0.714 |
| | Ours | 0.20 | 0.01 | 26.90 | 0.81 | 0.721 | 0.824 | 0.762 | 0.743 |
| 0.06 | BCS | 0.06 | 0.06 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.06 | 0.06 | 27.92 | 0.84 | 0.728 | 0.782 | 0.747 | 0.704 |
| | BCS-asymmetry | 0.06 | 0.06 | 28.49 | 0.85 | 0.732 | 0.788 | 0.752 | 0.716 |
| | Ours | 0.10 | 0.05 | 28.81 | 0.88 | 0.738 | 0.868 | 0.790 | 0.792 |
| 0.04 | BCS | 0.04 | 0.04 | 22.15 | 0.66 | 0.658 | 0.458 | 0.525 | 0.387 |
| | BCS-PCT | 0.04 | 0.04 | 25.48 | 0.77 | 0.692 | 0.606 | 0.633 | 0.531 |
| | BCS-asymmetry | 0.04 | 0.04 | 26.07 | 0.79 | 0.699 | 0.627 | 0.650 | 0.550 |
| | Ours | 0.10 | 0.01 | 26.04 | 0.79 | 0.723 | 0.802 | 0.753 | 0.723 |
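For reference, the detection metrics in the table follow the standard definitions computed from IoU-matched detection counts; since the tabulated values are typically averaged per class, the listed F1 need not equal 2PR/(P + R) of the listed precision and recall averages. A generic sketch with illustrative counts (the counts are not from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from IoU-matched detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```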
Share and Cite
Liu, L.; Nishikawa, H.; Zhou, J.; Taniguchi, I.; Onoye, T. Computer-Vision-Oriented Adaptive Sampling in Compressive Sensing. Sensors 2024, 24, 4348. https://doi.org/10.3390/s24134348