Article

Computer-Vision-Oriented Adaptive Sampling in Compressive Sensing †

by Luyang Liu 1, Hiroki Nishikawa 2, Jinjia Zhou 1, Ittetsu Taniguchi 1,* and Takao Onoye 1

1 Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan
2 Graduate School of Science and Engineering, Hosei University, Tokyo 184-8584, Japan
* Author to whom correspondence should be addressed.
† This manuscript is an extended version of the conference paper: Liu, L.; Nishikawa, H.; Zhou, J.; Taniguchi, I.; Onoye, T. Adaptive Sampling for Computer Vision-Oriented Compressive Sensing. In Proceedings of the 5th ACM International Conference on Multimedia in Asia (MMAsia 2023), Tainan, Taiwan, 6–8 December 2023.
Sensors 2024, 24(13), 4348; https://doi.org/10.3390/s24134348
Submission received: 12 June 2024 / Revised: 30 June 2024 / Accepted: 2 July 2024 / Published: 4 July 2024

Abstract
Compressive sensing (CS) is recognized for its ability to compress signals efficiently, making it a pivotal technology for sensor data acquisition. With the proliferation of image data in Internet of Things (IoT) systems, CS is expected to reduce the transmission cost of signals captured by various sensor devices. However, the quality of CS-reconstructed signals inevitably degrades as the sampling rate decreases, which poses a challenge for the inference accuracy of downstream computer vision (CV) tasks. This limitation is an obstacle to the real-world application of existing CS techniques, especially for reducing transmission costs in sensor-rich environments. In response to this challenge, this paper contributes to the field of sensing technology a CV-oriented adaptive CS framework based on saliency detection that enables sensor systems to intelligently prioritize and transmit the most relevant data. Unlike existing CS techniques, the proposal prioritizes the accuracy of reconstructed images for CV purposes, not only their visual quality. The primary objective is to preserve the information critical for CV tasks while optimizing the utilization of sensor data. This work conducts experiments on various realistic scenario datasets collected by real sensor devices. Experimental results demonstrate superior performance compared to existing CS sampling techniques across the STL10, Intel, and Imagenette datasets for classification and KITTI for object detection. Compared with the baseline uniform sampling technique, the average classification accuracy improves by up to 26.23%, 11.69%, and 18.25%, respectively, at specific sampling rates. In addition, even at very low sampling rates, the proposal is demonstrated to be robust in terms of classification and detection as compared to state-of-the-art CS techniques. This ensures that essential information for CV tasks is retained, improving the efficacy of sensor-based data acquisition systems.

1. Introduction

With the rise of the Internet of Things (IoT), there has been a trend to acquire and process image data on edge devices for computer vision (CV) tasks. IoT systems often employ a variety of sensors, including cameras, which generate large volumes of image data that require efficient processing and transmission. For example, drones equipped with cameras can be used to explore hazardous areas and outsource their images to other computers [1]. However, the large amounts of raw data generated by these sensor-equipped IoT devices significantly increase the transmission requirements. There is a significant need for efficient techniques that can reduce the transmission cost, enabling effective processing of sensor data such as images in low-bandwidth scenarios [2].
One potential solution is compressive sensing (CS) [3], which compresses data efficiently by requiring far fewer sampling measurements than traditional Nyquist theory [4], thereby reducing the transmission cost. The fundamental concept of CS is illustrated in Figure 1. A sensor captures a signal $x \in \mathbb{R}^N$ of a real-world scene; the encoding process then produces a compressed measurement $y \in \mathbb{R}^M$ by signal sampling and compression. Here, $M$ and $N$ represent the dimensions. Since the signal is sampled and compressed, $M < N$, and the sampling rate is defined as $r = M/N$. The sampling rate determines the size of the compressed measurement and, thus, the transmission cost. These compressed measurements are used at the data-receiving end to reconstruct the image, and the reconstructed image can be used for downstream CV tasks. Relevant research has demonstrated that CS-based coding and decoding is faster, has lower complexity, and provides better reconstruction quality than traditional methods based on the JPEG, H.264/AVC, or H.265/HEVC standards [5], making it highly beneficial for sensor data processing in IoT systems. By combining CS with CV, it is possible to reduce the amount of data that needs to be transmitted from the sensor data acquisition end to the data processing end.
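As a concrete illustration of the relation above, the sketch below compresses a flattened signal with a random Gaussian matrix. The matrix, function name, and seed are illustrative assumptions only; the actual system uses the learned sampling matrices described later in Section 3.2.

```python
# A minimal sketch of compressive sampling, assuming a random Gaussian sensing
# matrix purely for illustration; it only demonstrates y = Phi x and r = M / N.
import numpy as np

def compressive_sample(x: np.ndarray, r: float, seed: int = 0) -> np.ndarray:
    """Sample a flattened signal x of length N at sampling rate r = M / N."""
    N = x.size
    M = max(1, int(round(r * N)))                     # number of measurements
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # M x N sensing matrix
    return Phi @ x                                    # compressed measurement y

# Example: a 32 x 32 block sampled at r = 0.25 keeps 256 of 1024 values.
y = compressive_sample(np.random.rand(32 * 32), r=0.25)
print(y.size)   # 256
```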
In recent years, there have been precedents for applying CS technology in the CV field, such as separating sensitive regions of the face from compressed measurements for privacy preservation [6] and proposing backpropagation rules to efficiently localize cells in compressed measurements [7]. However, the existing CS literature has acknowledged that these methods still suffer from limited applicability in CV tasks, since image quality always degrades with a decreasing sampling rate. In the work [8], a block-based CS technique divides the original large-size image into several equal-sized blocks and samples each block at the same sampling rate. This approach greatly reduces the complexity of sampling and became a baseline technique that inspired later research. However, since the whole image is sampled at a uniform sampling rate, the overall quality of the reconstructed image inevitably degrades when the sampling rate is reduced, which in turn degrades CV inference. Although some adaptive sampling CS methods have been proposed recently, their allocatable sampling rates are still tied to the overall base sampling rate, with the maximum and minimum sampling rates fixed within certain ranges [9,10,11]. Figure 2 depicts an example with different sampling rates and image qualities. If the image is reconstructed at a uniformly low sampling rate, its visual quality is degraded, as shown in Figure 2 (left). In addition, the accuracy of inference, such as classification, will also degrade when using uniformly low-quality reconstructed images. Here, we start from the idea that CV tasks may require less information than is necessary to reconstruct an image. By adaptively allocating sampling rates to blocks, we can achieve high accuracy even at a partially low sampling rate, as in the example in Figure 2 (right), wherein only the region of interest (e.g., the human face) is sampled while the other regions are ignored.
In this context, the proposed technique contributes to the field of sensing technology and CS by introducing a highly adaptable CV-oriented CS framework that empowers sensor systems to selectively capture and transmit the most relevant data. This paper aims to achieve high inference accuracy in CV tasks such as image classification and object detection. While existing CS methodologies excel at advancing the quality of image reconstruction, their primary focus is the visual effect of the final reconstructed image rather than its impact on downstream CV tasks. The main contribution of this paper is a specialized adaptive sampling rate allocation strategy that focuses on specific areas. Past adaptive methods have struggled with the visual incoherence that arises when different parts of an image are sampled at different rates. This research moves beyond that constraint and proposes a CS method that serves the computer rather than the human eye. The proposed method compresses just the information needed for the CV task at a higher sampling rate while discarding non-essential information at a lower sampling rate. This strategy may degrade the visual effect of the whole image, but a reconstructed image that can be recognized correctly can be obtained with fewer sampling resources overall. In the proposed technique, by employing saliency detection and adaptive sampling, sensors can dynamically assess which data points are most relevant for the task at hand. This intelligent acquisition reduces data overload and ensures that critical information is always given precedence, thereby optimizing the overall data acquisition process. This dynamic acquisition is particularly crucial for computer vision applications, where the quality and relevance of data significantly impact performance and decision-making accuracy. Specifically, the proposed technique first performs saliency detection and then adaptively samples the salient and non-salient blocks at high and low sampling rates, respectively. This saliency feature information plays an important role in subsequent CV tasks, allowing the target to be recognized and, thus, improving inference accuracy. Our technique guarantees that crucial information for CV tasks is preserved, thereby enhancing the effectiveness and efficiency of data acquisition in sensor-based systems. In addition, our technique enables more sophisticated analytics and machine learning models to be applied, supporting smarter applications that use sensors. The goal of our proposal is to find an allocation of sampling rates such that the inference accuracy of the CV task is improved.
In addition, traditional sensor systems often struggle with the transmission of large volumes of data, which can lead to bottlenecks and inefficiencies. Our technique mitigates this issue by compressing data in a way that retains essential information while discarding non-critical elements. By ensuring that only the most relevant data are prioritized, our technique maintains high data accuracy and quality even at lower sampling rates. The ability to transmit only the most relevant data minimizes the need for extensive data processing and storage infrastructure. This not only lowers operational costs but also extends the lifespan of sensor systems by reducing the computational load and energy consumption required for data processing and transmission. The reduction in data volume translates to lower costs associated with data storage and management. This is particularly beneficial for large-scale sensor networks that generate massive amounts of data. Our technique enables organizations to minimize infrastructure costs while maximizing the utility and effectiveness of their data. This leads to faster data transmission rates, lower bandwidth consumption, and improved real-time data processing capabilities, making it ideal for applications in remote monitoring and IoT networks. The proposed adaptive sampling technique marks an improvement in sensor technology and introduces a paradigm shift in how sensor systems acquire, process, and transmit data. This technique addresses several challenges faced by contemporary sensor networks and enhances their efficiency, effectiveness, and versatility. It paves the way for smarter, more responsive sensor technologies that are better equipped to meet the growing demands of modern applications.
In our previous work [12], we examined the effectiveness of the proposed CS sampling technique in classification tasks. In this paper, to assess the impact of block size on our proposed block-based sampling technique tailored for CV, we compare three different block sizes across multiple classification datasets to discern the resulting variations in accuracy. Furthermore, to investigate the broader implications of the proposed technique for diverse CV tasks, this paper applies our proposal to the detection task, thereby expanding the applicability of the CS technique. To verify the impact of the proposal on the effective utilization of the data collected by sensors, this paper conducts experiments on datasets collected by a variety of realistic sensors, such as STL10, Imagenette (personal cameras), and Intel (surveillance cameras) for the classification task and KITTI (vehicle camera) for the detection task. Additionally, we provide a comparative analysis with other state-of-the-art CS techniques that are oriented towards image quality.
The proposed technique leverages sensor data more effectively, making it particularly valuable for IoT applications, where data transmission and processing efficiency are crucial. The contributions of this paper are as follows:
  • Adaptive sampling for enhanced sensor data utilization: By implementing an adaptive sampling strategy based on saliency detection, our proposal improves the quality and relevance of data collected by sensors. Our proposal effectively preserves essential information that is crucial for CV tasks, leading to more accurate and efficient processing in downstream CV tasks.
  • Wide versatility for different CV tasks: To comprehensively evaluate the effectiveness of our proposal, we extend the application from image classification to the more intricate task of object detection. The experimental results substantiate the superiority of our proposal over existing adaptive sampling techniques. This shows the versatility and broad applicability of our proposal.
  • Improvement of CV task accuracy at low sampling rates: Unlike traditional CS techniques that focus on visual quality, our technique enhances the accuracy of CV tasks even at low sampling rates, making it a robust solution for sensor data analysis in real-world scenarios. This highlights a promising solution for maintaining accuracy at a reduced cost of sampled data.
The rest of the paper is organized as follows. Section 2 summarizes recent CS techniques. Section 3 describes the proposed adaptive sampling with saliency detection. Section 4 presents the experiments. Section 5 concludes the paper with remarks on future work.

2. Related Work

The main tasks of CS are to efficiently sample data from the original signal and to accurately reconstruct the signal from the sampled data. Research on CS is thus divided into two aspects, sampling and reconstruction, and most recent work focuses on the sampling part. For example, in [5], the author introduced a low-cost, accurate rate control algorithm based on packet dropping that achieves faster coding and more accurate compression, especially for signal sequences with low and medium motion levels. When the sampling rate is reduced, the amount of data that can be captured and retained from the original signal is also reduced, which inevitably causes a loss of information and, thus, blurring of the reconstructed image. To correct image degradation due to low sampling rates, the mainstream approach is to develop dynamic sampling matrices that enhance the sampling capability. With the development of deep learning, neural-network-based CS techniques have been proposed to learn feature representations, and these methods have proven to be effective [13]. Zhang [14] proposed a deep learning system for attention-guided dual-layer image compression to form a compact sampling matrix. Fan [15] proposed a global sensing module that collects all-level features of an original image in order to reuse measurements multiple times at multiple scales. Among many lines of research, the block-based method, which splits the original image into multiple blocks and samples them simultaneously, improves efficiency and has proven effective at processing high-dimensional images [8]. The block-based method has become a mainstream idea in CS and has inspired subsequent research.
In the work [16], the authors noted that in block-based-sampling CS, different blocks contain different amounts of information: some blocks are richer in texture and detail than others. Therefore, it is beneficial to adaptively allocate sampling rates to blocks based on information richness. Yu [9] normalized the image to obtain the distribution probabilities of saliency features and then allocated different sampling rates adaptively. Zhou [10] divided the image into asymmetric blocks for fine-grained allocation of sampling rates based on the similarity of the feature values of each part of the image. Converting the original image into a feature distribution map and then allocating different sampling rates according to the feature differences between blocks has become the basic process of adaptive CS, and it is the approach adopted in this paper. You [17] proposed a framework that solves the problem of the non-uniform size of the compressed measurements produced when each block is sampled at a different sampling rate. That work provides the basis for this paper to process multiple sampling rates with a single model. Chen [18] and Yang [11] applied a content-aware and a moving-area-aware scalable network, respectively, to achieve higher-quality reconstruction of detailed textures compared to uniform-sampling CS.
However, sampling various blocks of an image at different sampling rates results in the blocking artifact [19], i.e., visible discontinuities between neighboring blocks. To mitigate the deterioration of image quality caused by the blocking artifact, the aforementioned adaptive CS techniques adopt a conservative sampling rate allocation strategy. The works [9,10] limited the allocated maximum and minimum sampling rates to a certain ratio. The authors of [18] fused reconstructed images using double sampling to minimize the dissimilarity between blocks, but this increased the cost of sampling and transmission. Since existing CS techniques take the improvement of visual quality as their only goal, sampling rate allocation always has to target the visual effect of the whole image. In contrast, our proposed technique takes improving the classification accuracy of the reconstructed images as its goal. The allocation of the sampling rate focuses only on the saliency target, that is, the region that may be of interest to the CV task. By allocating high and low sampling rates separately, the overall average sampling rate is reduced while the information needed for classification is preserved.

3. Proposed CV-Oriented Adaptive Sampling

In this section, we propose a saliency-based block sampling technique in which the inference accuracy of the reconstructed image in CV tasks is improved. Block-based CS has been proven to be effective at handling high-dimensional images by decomposing the original image into a number of equal-sized blocks for simultaneous sampling of each part [20,21,22]. We note that the information in an image is not always uniformly distributed. Therefore, it is necessary to allocate different sampling rates to different blocks depending on the richness of the information. This strategy is called adaptive sampling.
Our proposed adaptive sampling technique seeks to optimize image sampling to specifically enhance the accuracy of downstream CV tasks. Specifically, we determine the distribution of the information by obtaining a feature map of the image. In recent years, neural-network-based saliency detection techniques have been demonstrated to extract global features better than traditional filter transformation methods [23]. According to the definition of saliency detection, in general, locations with low spatial correlation with their surroundings are salient [24]. Based on that, we can localize the salient and non-salient blocks in the image and allocate different sampling rates to them. By combining block-based CS with saliency detection, we have implemented an adaptive sampling technique for CV tasks.
The concept of the proposed sampling technique is illustrated in Figure 3. Here, the yellow part represents block sampling, while the green part represents the saliency-based sampling rate allocation. Based on block sampling, saliency detection is carried out for input signals. For saliency detection, we perform extraction on the input with a modified MobileNetV3 [25] for obtaining a feature distribution map. Based on the differences in feature weights between each block, we can discriminate between salient and non-salient blocks, and different sampling rates are allocated to them. Finally, each block of the original image signal (green dashed lines) is sampled at the sampling rate (red solid lines) of the corresponding block in the sampling rate distribution map, and the sampling results are combined into the compression measurement. More details about saliency detection are introduced below.

3.1. Saliency Detection

Considering the efficiency of processing on edge devices, we need to control the computational cost of the saliency detection part. In this work, we utilize the lightweight MobileNetV3 [25] as the saliency feature extractor. MobileNetV3 is constructed based on depthwise separable convolution, and its feature extraction backbone contains only 0.47 M parameters and has only about 10 ms latency on edge devices [26]; these figures are much smaller than those of other current mainstream CNNs. MobileNetV3 fully meets the requirements of low cost and real-time operation, so there is no need to be concerned about a complexity increase associated with its introduction. The original MobileNetV3 uses a stepwise upsampling operation in the decoder to recover the feature map dimensions. Note that for the saliency detection part of this research, we only use the MobileNetV3 backbone to obtain the feature map, not for subsequent predictions such as classification and segmentation. Therefore, the computational expense in the decoder is completely unnecessary, and we simplified the structure of MobileNetV3 starting from the dimensional recovery phase.
The structure of the modified MobileNetV3 is shown in Table 1. Each bottleneck contains a 3 × 3 depthwise convolution and an SE attention layer. Specifically, in layer 17, we use one DUpsampling layer to replace the original decoder part in order to achieve fast, one-step recovery of the feature map to the same size as the original input. DUpsampling was originally proposed for fine-grained recovery of target edges in semantic segmentation tasks, but it has also shown effectiveness in cross-dimensional feature map size recovery [27]. Compared to traditional bilinear upsampling, DUpsampling applies only a 1 × 1 convolution in the spatial dimension and rearranges channel vectors based on the correlation between pixels. This allows DUpsampling to recover from low-level dimensions to high-level dimensions in one step.
Meanwhile, since the backbone structure of MobileNetV3 is unchanged, it can still utilize the pre-trained weights from the ImageNet dataset [28]. Compared to simply stacking several convolutional layers, using the pre-trained feature extractor is more effective for determining saliency information in complex backgrounds [29]. As shown in Formula (1), for an Input whose height, width, and number of channels are H, W, and three, respectively, an output saliency feature map S with the same size as the Input but with one channel is obtained after convolution processing and dimension recovery by the CNN (modified MobileNetV3). Here, $H \times W \times 3$ and $H \times W \times 1$ represent the dimensions of the Input and S, respectively:
$S_{(H \times W \times 1)} = \mathrm{CNN}(\mathrm{Input}_{(H \times W \times 3)})$. (1)
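As a rough sketch of this extractor, the snippet below combines the torchvision MobileNetV3-Small backbone with a single 1 × 1 convolution and a pixel rearrangement that recovers the input resolution in one step, in the spirit of DUpsampling. The layer choices, channel count, and sigmoid output are assumptions and do not reproduce the exact modified architecture of Table 1.

```python
# A hedged sketch of the saliency extractor: MobileNetV3 backbone + one-step
# recovery to the input size (DUpsampling-like). H and W are assumed to be
# divisible by 32 in this simplified version.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_small

class SaliencyExtractor(nn.Module):
    def __init__(self, upscale: int = 32):
        super().__init__()
        self.backbone = mobilenet_v3_small(weights="IMAGENET1K_V1").features
        # the last backbone feature map has 576 channels at 1/32 resolution
        self.proj = nn.Conv2d(576, upscale * upscale, kernel_size=1)
        self.upscale = upscale

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, 3, H, W)
        f = self.backbone(x)                                  # (B, 576, H/32, W/32)
        s = F.pixel_shuffle(self.proj(f), self.upscale)       # (B, 1, H, W)
        return torch.sigmoid(s)                               # saliency map S

# Example: SaliencyExtractor()(torch.rand(1, 3, 96, 96)) -> shape (1, 1, 96, 96)
```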

3.2. Adaptive Sampling

In some previous works on discrete cosine transform (DCT)-based feature extraction, the researchers divided the original image into blocks in order to calculate the DCT coefficient weights of each block and verified that the feature energy of the image is mainly concentrated in the blocks whose DCT coefficient weights are higher than average [30,31]. With reference to this fact, and to correspond to subsequent block sampling, we divide the feature map S obtained in the previous section into blocks and calculate the feature weight of each block to generate the block weight distribution map W. We determine the salient parts based on W. Assuming that the block size in block-based sampling is set to $b \times b$, we divide the pixels at every $b \times b$ position in the original feature map S into a block $S[b,b]$ with kernel size $b \times b$. Sum-pooling accumulates the values inside the scanning kernel [32]. While smaller block sizes allow finer-grained sampling rate allocation, the computational complexity increases as the number of blocks grows. Let $W_{i,j}$ denote the feature value of the (i, j) block, and let $S[b_i, b_j]$ represent the $b \times b$-sized region of the original feature map S corresponding to $W_{i,j}$. We use sum-pooling to accumulate the feature values of the pixels in the $S[b_i, b_j]$ block and obtain the feature weight of the block, $W_{i,j}$, given by:
$W_{i,j} = \sum_{i=1}^{W/b} \sum_{j=1}^{H/b} S[b_i, b_j]$.
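A minimal sketch of this sum-pooling step is given below; the tensor shapes and function name are assumptions.

```python
# Sum-pooling of the saliency map S over non-overlapping b x b blocks to obtain
# the block weight map W (avg_pool2d scaled by b*b equals a sum over each block).
import torch
import torch.nn.functional as F

def block_weights(S: torch.Tensor, b: int) -> torch.Tensor:
    """S: (1, 1, H, W) saliency map; returns W of shape (H // b, W // b)."""
    W = F.avg_pool2d(S, kernel_size=b, stride=b) * (b * b)
    return W[0, 0]

# Example: a 96 x 96 map with b = 32 yields a 3 x 3 block weight map.
# W = block_weights(torch.rand(1, 1, 96, 96), b=32)
```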
The proposed technique allocates the sampling rate for each block based on the feature differences. Blocks that contain potential interest for the CV task (e.g., semantic targets) are often accompanied by rich textures or distinct edges. Such blocks have high feature weights and can be considered salient blocks. A drawback of this proposal is that saliency detection could fail if textures and edges are unclear. Hence, pre-processing approaches, which are outside the scope of this paper, are sometimes required in advance.
Following [30,31], our scheme calculates the average of the block weight distribution map W and uses it as the criterion for determining salient blocks. The average feature value can be derived from the number of blocks and the cumulative $W_{i,j}$ over all blocks; it is represented as the threshold t in the following formula:
$t = \dfrac{b^{2} \sum_{i=1}^{W/b} \sum_{j=1}^{H/b} W_{i,j}}{H \times W}$.
When a feature value at (i, j) is larger than the threshold t, the block at the current position is discriminated as a salient block and given a high sampling rate $r_{high}$; otherwise, a low sampling rate $r_{low}$ is given for the non-salient block. The threshold was pre-determined by the authors based on previous research. The formula is given below:
$R_{i,j} = \begin{cases} r_{high}, & \text{if } W_{i,j} > t \\ r_{low}, & \text{otherwise} \end{cases}$
where $R_{i,j}$ represents the sampling rate at the position of the i-th row and j-th column. In this way, the sampling rate distribution is generated. Finally, according to the sampling rate distribution map, each block is sampled at its allocated sampling rate. With the aforementioned technique, we can discriminate between the salient and non-salient blocks of an image for sampling. As with other CS techniques, users can set the sampling rate according to their needs. In order to improve the inference accuracy for CV at low sampling rates, we want to retain as much information as possible that is useful for CV during the sampling process. Therefore, $r_{high}$ and $r_{low}$ have extremely different values, and the weight of useless information in the compressed measurements is reduced by setting a very low $r_{low}$. We tested a variety of combinations of $r_{high}$ and $r_{low}$; see Section 4 for a detailed exploration of sampling rates.
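The threshold and allocation rule above can be sketched as follows. Since the number of blocks is HW/b², the threshold t reduces to the mean of the block weight map; the default rates here are just example values used later in this paper, and all names are assumptions.

```python
# A hedged sketch of the sampling rate allocation: blocks whose weight exceeds
# the mean block weight (the threshold t) receive r_high, the rest r_low.
import torch

def allocate_rates(W: torch.Tensor, r_high: float = 0.50, r_low: float = 0.01) -> torch.Tensor:
    """W: (H // b, W // b) block weight map; returns the rate map R."""
    t = W.mean()                                   # average block weight
    return torch.where(W > t,
                       torch.full_like(W, r_high),
                       torch.full_like(W, r_low))

# Example: R = allocate_rates(block_weights(S, b=32), r_high=0.50, r_low=0.01)
```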
We illustrate the transformation from the input to the sampling rate distribution map based on the proposed allocation scheme. For example, as shown in Figure 4, an input image of size 96 × 96 is processed by the CNN mentioned in Section 3 to derive the saliency feature map S. Then, based on the pre-set block size of 32 × 32, the original image is divided into nine equal-sized blocks, and the feature weights of each block are obtained. The average block weight is derived to be 0.091. Corresponding to the original image, it can be seen that the block in the middle row, where the truck is located, has a higher feature weight than the average weight and is allocated a high sampling rate (0.50). The remaining background parts, i.e., the sky and the road, have feature weights lower than the average weight and are allocated a low sampling rate (0.01). Based on the above process, the transformation of the original input image into the sampling rate distribution map R is implemented. Finally, each block of the original image is sampled according to the corresponding sampling rate in R.
Regarding block sampling, we adopt the learned sampling matrix from the work of [33], which does not need to be transferred from the encoder to the decoder, thus eliminating extra transmission cost. The sampling rate determines the size of the sampling matrix $\Phi \in \mathbb{R}^{M \times N}$. Here, N is the number of columns of the sampling matrix, which corresponds to the dimension of the original input signal $x \in \mathbb{R}^N$. M is the number of rows of the sampling matrix, which corresponds to the number of sampled measurements; therefore, M is proportional to the sampling rate. For the original input, we unfold it into blocks of the same $b \times b$ size. The term k denotes the index of the current block after unfolding, with the first block corresponding to the upper-left block in the original input. Each block $x_k$ is sampled by its corresponding sampling matrix $\Phi_{r_k}$. The block compressed sampling can be expressed as:
$y_k = \Phi_{r_k} x_k$
where $y_k$ is the resulting compressed measurement and $r_k$ is the sampling rate allocated to block k in R.
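A simplified sketch of this per-block sampling is shown below for a single-channel input. Random matrices stand in for the learned sampling matrices of [33], and all names are assumptions.

```python
# A hedged sketch of block sampling: each b x b block x_k is flattened and
# multiplied by the matrix Phi_{r_k} matching its allocated sampling rate,
# so the length of y_k is proportional to r_k.
import torch

def sample_blocks(image: torch.Tensor, R: torch.Tensor, b: int):
    """image: (1, H, W) single-channel input; R: (H // b, W // b) rate map."""
    N = b * b
    # one sensing matrix per distinct rate (learned matrices would be loaded here)
    mats = {float(r): torch.randn(max(1, int(round(float(r) * N))), N) / N ** 0.5
            for r in torch.unique(R)}
    measurements = []
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            x_k = image[0, i * b:(i + 1) * b, j * b:(j + 1) * b].reshape(N)
            measurements.append(mats[float(R[i, j])] @ x_k)   # y_k = Phi_{r_k} x_k
    return measurements
```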

4. Experiments

In this section, we conducted experiments to demonstrate the effectiveness of our proposed technique. The experiments tested and compared a variety of sampling techniques, consisting of baseline sampling techniques combined with a fixed reconstruction technique (Table 2, Table 3 and Table 4, with an example of a reconstructed image in Figure 5) and state-of-the-art techniques (Figure 6, Figure 7 and Figure 8), and the accuracy of CV tasks was compared.

4.1. Classification

4.1.1. Setup

To demonstrate the effectiveness of the proposal that improves the accuracy of the classification task, we conducted the following experiments. We first compared our proposed adaptive sampling technique with baseline techniques, namely BCS [8], BCS-PCT [9], and BCS-asymmetry [10]. BCS uses block-based uniform sampling, and BCS-PCT and BCS-asymmetry use adaptive sampling. BCS-PCT implements saliency detection and sampling rate allocation based on the pulsed cosine transform (PCT), while BCS-asymmetry considers the similarity between blocks to achieve asymmetric block segmentation and fine-grained sampling rate allocation. The above three techniques and the proposed technique are all based on block sampling implementation, and here, we compare sampling principles to confirm the effectiveness of the adaptive technique. Subsequently, the proposal will also be compared with state-of-the-art CS techniques proposed in recent years.
Regarding the reconstruction part of CS, we adopt a U-Net-based method from [37]. Much research on image restoration has shown that deep convolutional neural networks can effectively solve inverse problems by learning image priors. In this work, we make the reconstruction network learn the mapping between the compressed measurements and the images in order to recover visualized results. Specifically, the structure of the reconstruction network is the U-Net backbone, which contains four scales. Each scale has a skip connection between upsampling and downsampling, and each upsampling and downsampling operation contains four residual blocks. Note that this work is concerned with improving the sampling phase; image recovery is not the focus here. The experiments fixed the reconstruction part in order to compare the sampling methods fairly. Each sampling technique was combined with the reconstruction model to form a complete CS network. For fair comparison and to verify the generalizability of the proposed technique, all CS networks were trained on the Berkeley Segmentation Dataset (BSD) [38], which contains 400 images cropped to 128 × 128 patches. Each network was implemented with PyTorch, trained for 200 epochs on an NVIDIA RTX 3070 GPU with the Adam optimizer, and used a learning rate of 0.0001. For the proposed technique, we adopted a pre-trained MobileNetV3 for saliency detection. The testing scenario was as follows: we were given the STL10 dataset [34], the Intel image classification dataset [35], and the Imagenette dataset [36], with image sizes of 96 × 96, 150 × 150, and 512 × 512, respectively, as input.
We prepared ten scenarios using different sampling rate combinations, varying from 0.05 to 0.01 for non-salient blocks (hereafter called $r_{low}$) and from 0.50 to 0.10 for salient blocks (hereafter called $r_{high}$). We employed average sampling rates in order to fairly compare against the other sampling techniques, since our proposal allocates different sampling rates to blocks, and the block size was set to 8 × 8. In addition, we also evaluated classification accuracy for different block sizes. Our proposed technique was compared to BCS with block sizes of 32 × 32, 16 × 16, and 8 × 8. The sampling rate allocation was consistent with the above, for a total of ten scenarios. All CS techniques were evaluated in terms of the average CS rate (sampling rate), reconstructed image quality, and classification accuracy, and this paper specifically focuses on classification accuracy. As a reference, we also give the classification accuracies of the uncompressed original datasets at the top of Table 2, Table 3 and Table 4. The quality of the reconstructed images from each CS technique was evaluated in terms of the PSNR (peak signal-to-noise ratio) and SSIM (structural similarity), which indicate the similarity of a reconstructed image to the original image. Classification accuracy was evaluated with three popular classification networks: Xception [39], ResNet152 (hereafter called ResNet) [40], and DenseNet201 (hereafter called DenseNet) [41]. Each network was implemented with PyTorch and was trained for 200 epochs on an NVIDIA RTX 3070 GPU at a learning rate of 0.1 on the STL10, Intel, or Imagenette dataset.
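Because each image mixes $r_{high}$ and $r_{low}$ blocks, the average CS rate reported in the tables is the mean of the per-block rates. A hedged illustration of that bookkeeping is below; the salient-block fraction used is purely hypothetical, not a measured value.

```python
# Illustrative only: the average sampling rate of an adaptively sampled image,
# assuming a fraction p_salient of blocks receives r_high and the rest r_low.
def average_cs_rate(p_salient: float, r_high: float, r_low: float) -> float:
    return p_salient * r_high + (1.0 - p_salient) * r_low

# e.g. if roughly a third of the blocks were salient, r_high = 0.50 and
# r_low = 0.05 would give an average rate of about 0.21 (hypothetical figures).
print(round(average_cs_rate(0.36, 0.50, 0.05), 2))   # 0.21
```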
Next, we compared the proposed technique with state-of-the-art CS techniques: MR-CCSNet, which collects global information through multiple measurements and uses it for high-quality image reconstruction [15]; AMP-Net, which constructs a deep network based on an iterative denoising process to remove blurring from reconstructed images [42]; and FSOINet, which trains and strengthens the sampling matrix by learning the mapping of the original signal from the pixel space to the feature space [43]. The rest of the experimental setup was similar to the aforementioned content: we trained MR-CCSNet, AMP-Net, and FSOINet for 200 epochs on the BSD500 dataset [44], which is commonly used for CS training. For the comparison, we used the proposed sampling technique with 0.05 and 0.01 for non-salient blocks as $r_{low}$ and 0.50 to 0.10 for salient blocks as $r_{high}$. In the experiments, we employed different sampling rates for each of the three CS techniques (MR-CCSNet, with sampling rates of 0.03125, 0.06250, 0.12500, and 0.25000; AMP-Net, with sampling rates of 0.04000, 0.10000, and 0.25000; and FSOINet, with sampling rates of 0.04000, 0.10000, 0.15000, 0.20000, and 0.25000). These state-of-the-art CS techniques are constrained to specific sampling matrix parameters and adopt distinct reconstruction strategies at different sampling rates. A perfectly fair comparison is therefore hardly possible, since these techniques can only perform sensing at specific sampling rates; nevertheless, it is still feasible to evaluate the effectiveness of each CS technique on image classification tasks by observing the height and trend of the curves in the graphs. The results are represented by polylines because inference accuracy generally shows a positive correlation with the sampling rate. We expect this work to address the problem of reduced accuracy of reconstructed images at low sampling rates, with the objective of decreasing the amount of transmitted data. Consistent with the previous experiments, all CS networks were trained on BSD with the same parameter settings. The trained networks were given the STL10, Intel image classification, and Imagenette datasets, and the reconstructed images obtained by the state-of-the-art CS techniques were classified by the same networks as in the previous experiment: Xception [39], ResNet [40], and DenseNet [41], trained with the same parameter settings.

4.1.2. Results

Table 2, Table 3 and Table 4 display the experimental results for the STL10, Intel, and Imagenette datasets, respectively, and compare four CS sampling techniques at ten different average sampling rates. The results include two metrics: image quality, evaluated by the averages of the PSNR and SSIM, and classification accuracy, which is compared at ten sampling rates using four classification models. For example, in Table 2, at the average CS rate of 0.21, the proposed technique (Ours) employed two sampling rates, with $r_{low}$ set to 0.05 and $r_{high}$ to 0.50, resulting in an overall average CS rate of 0.21. The other CS techniques are also fixed at 0.21 for fair comparison (i.e., the same average sampling rate). The subsequent columns show the reconstructed image quality and classification accuracy of each technique. Here, “Difference” shows the difference in average classification accuracy between the proposed technique and the other techniques, and “-” means that the proposed technique is better than the other techniques.
Table 2 and Table 3 demonstrate that our proposal generally outperforms the other techniques in terms of classification accuracy, except for the case using an average sampling rate of 0.18 in Table 2. The results indicate that Ours is generally superior to the others in terms of classification accuracy because blocks identified as salient are allocated a higher sampling rate. Compared to BCS, Ours achieves higher classification accuracy by up to 26.23% in the 0.10 case in Table 2 and up to 11.69% in the 0.15 case in Table 3. Although BCS-PCT and BCS-asymmetry are superior to BCS, they do not exceed Ours except in the 0.18 case in Table 2. Especially as the average sampling rate decreases, Ours tends to achieve higher classification accuracy. In terms of PSNR and SSIM, there are also quite a few cases in which Ours has higher values than the other techniques, as shown for sampling rates of 0.21, 0.17, 0.14, 0.10, 0.07, and 0.04 in Table 2 and 0.22, 0.18, 0.14, 0.11, 0.08, 0.07, and 0.04 in Table 3, respectively. Although the compared techniques aim for high image quality, Ours still shows an advantage in image quality over the other techniques due to partly allocating high sampling rates to salient blocks; however, image quality does not always synchronize with classification accuracy, since the image quality of Ours largely depends on the size of the object being classified. The larger the target, the higher the proportion of salient blocks in the whole image. There are cases where Ours shows lower PSNRs and SSIMs than the others, as shown for sampling rates of 0.18, 0.15, 0.11, and 0.08 in Table 2 and 0.19, 0.15, and 0.12 in Table 3, respectively, but we achieve higher accuracy than the others since Ours is oriented not to image quality but to classification accuracy. Figure 5 exemplifies the visual differences between BCS and Ours at the 0.04 sampling rate. The original image is labeled “church”. In the reconstructed image from uniform CS (BCS) on the left, the target church becomes blurred at the low sampling rate (0.04), leading to a classification error. In contrast, the reconstructed image from our proposed technique on the right shows the target church with comparative clarity, leading to correct classification by focusing on the salient target using a high sampling rate (0.10). By sampling the remaining non-salient blocks (sky and greenfield) at a lower sampling rate (0.01), we reduce the weighting of non-essential information, thus decreasing the overall sampling cost. Despite having lower image quality, Ours achieved higher classification accuracy in most cases, except for the 0.18 case. This implies that overall image quality does not always contribute to high classification accuracy. However, in the 0.18 case, there is a possibility that feature extraction in Ours failed, leading to higher sampling rates being allocated to non-salient blocks. This case leaves a future challenge, which is to detect true saliency maps. In addition, we still need to note that on both the STL10 and Intel datasets, there is a significant loss of accuracy for the compressed-sampled images compared to the original images due to the unavoidable loss of information at very low sampling rates. How to reasonably set the sampling rates allocated to the salient and non-salient blocks in order to maximize accuracy preservation remains a major topic for future work.
Table 4 presents the results for the Imagenette dataset, which contains larger, 512 × 512 images compared to the STL10 and Intel datasets. While Ours demonstrates advantages in classification accuracy at sampling rates lower than 0.10, in other cases, it appears to be inferior to the other techniques. One reason behind these results is that the classification task for the Imagenette dataset might be too easy to show significant differences among the techniques, given the rich information present in the original images. As seen from the table, the classification accuracies are consistently high for all techniques and are quite close to each other, making it challenging to compare them effectively. Even when the rich information is significantly reduced by compression, the classification accuracy remains high for all four classification models. However, Ours maintains relatively high accuracy at 82.65%, close to that of the original image and an improvement of up to 18.25% over the other techniques, at a sampling rate of 0.04. This suggests that our proposed technique could be particularly useful for sampling images with originally limited information captured by ordinary cameras mounted on edge devices with limited resources.
Table 5 explores the results using different block sizes for sampling. In the headers, note that the average CS rate represents the average of the average CS rates in Table 2, Table 3 and Table 4 for Ours and BCS using three different block sizes. Overall, with a block size of 8 × 8, Ours shows the highest classification accuracy, since it can allocate sampling rates to finer-grained blocks than with the other block sizes. Unlike Ours, BCS has almost the same accuracy for different block sizes due to its uniform sampling. Compared to BCS, Ours is better by 16.24% for STL10 and 10.18% for Intel. However, there is only a marginal difference for the Imagenette dataset, where both techniques show similar accuracy regardless of block size.
As described in Section 4.1.1, some state-of-the-art techniques can only be implemented at specific sampling rates. This inherent limitation makes it difficult to compare them at the same sampling rate. Therefore, curves of classification accuracy versus sampling rate are used here instead of tables to visually show the performance differences between the CS techniques. Figure 6, Figure 7 and Figure 8 display the experimental results for the STL10, Intel, and Imagenette datasets, respectively, comparing our proposal with three state-of-the-art CS techniques at different sampling rates. Overall, the results show a positive correlation between inference accuracy and sampling rate.
The results in Figure 6, Figure 7 and Figure 8 show that the polylines of Ours are higher than those of MR-CCSNet and AMP-Net when the sampling rate is less than 0.20, which demonstrates that the proposal performs better than MR-CCSNet and AMP-Net at low sampling intervals for the image classification task. Although FSOINet initially performed well due to its superior image reconstruction capabilities, it inevitably experienced a loss in accuracy as the sampling rate decreased and the amount of data available for reconstruction was reduced. The performance curve of FSOINet shows a sharp decline and is eventually overtaken by our proposed technique. Specifically, Figure 8 illustrates that in scenarios with large-sized images and complex backgrounds, while the performance curves of other CS techniques drop sharply, the curve of our proposal remains stable. Even at very low sampling rates, it maintains a high level of precision, comparable to the original image. This demonstrates the effectiveness of our proposed scheme for reducing the amount of data required for downstream classification tasks, offering a viable solution to lower the transmission costs for edge devices.
Furthermore, we note that for our proposal, when the sampling rate ($r_{low}$) of the non-salient blocks is set to 0.01, it consistently exhibits inferior classification accuracy compared to instances where the sampling rate is set to 0.05. This discrepancy suggests that non-salient blocks, despite lacking information about predefined classification targets, may contribute to target boundary determination and subsequently impact final classification results. Therefore, future work is to find the best combination of non-salient and salient block sampling rates to achieve the highest accuracy.

4.2. Object Detection

Image classification requires classifying an entire image into predefined categories or classes, such as identifying whether an image contains a cat or a dog [45,46]. Object detection requires identifying and locating objects of interest within an image, such as detecting and locating multiple faces in a crowd [47,48]. Compared to image classification, the object detection task is more complex and difficult; the network has to understand both global and local features of the image to localize the distribution of objects [49]. Therefore, the amount of information required to implement an image classification task is usually less than the amount of information required to implement an object detection task [50]. To verify the effectiveness of the proposal at improving the detection accuracy, we conducted the following experiments.

4.2.1. Setup

The experimental scenario is as follows: we are given the KITTI 2D object detection dataset [51] as input. The KITTI dataset is currently the main road object detection dataset for autonomous driving and has an input size of 1224 × 370. Using the KITTI dataset for experimental testing can simulate the scenario of the proposed technique in real IoT and embedded CV applications. The experimental details here are consistent with those in Section 4.1.1.
We first compared the four sampling techniques: BCS, BCS-PCT, BCS-asymmetry, and Ours. The settings for training, sampling rate combinations, and block sizes were the same as those listed in Section 4.1.1. The compressed measurements obtained by sampling compression were reconstructed by the same method and fed into the object detection network for accuracy testing. For detection, we utilized YOLOv3 [52], which has been well established by other works, such as [53,54,55,56], for inference on the KITTI dataset.
YOLOv3 was implemented with PyTorch and was trained for 100 epochs on an NVIDIA RTX 3070 GPU; we employed the Adam optimizer with a learning rate of 0.01. All CS sampling techniques were evaluated in terms of the average CS rate (sampling rate), reconstructed image quality, and detection accuracy, and this part specifically focused on detection accuracy performance. The evaluation metrics for detection accuracy include precision, recall, F1-score, and mAP.
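For reference, the sketch below shows how precision, recall, and the F1-score relate to detection counts at a fixed IoU threshold; the counts are hypothetical, and mAP additionally averages precision over recall levels and object classes.

```python
# A minimal sketch of the detection metrics used here, computed from counts of
# true positives (tp), false positives (fp), and false negatives (fn).
def detection_metrics(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example with hypothetical counts: 80 correct boxes, 10 spurious, 20 missed.
print(detection_metrics(80, 10, 20))   # (0.888..., 0.8, 0.842...)
```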
Similar to Section 4.1, we evaluated the effectiveness of our proposal compared to state-of-the-art CS techniques (MR-CCSNet, AMP-Net, and FSOINet) within the context of object detection tasks. The experimental setup and model training closely mirrored those outlined in Section 4.1.1, with the trained models tested on the KITTI dataset. Detection accuracy was evaluated through object detection using YOLOv3 on the reconstructed images generated by each CS technique, with a specific emphasis on the mAP.

4.2.2. Results

Table 6 presents the results on the KITTI dataset. The results are discussed in two parts according to the sampling rate interval.
For the high sampling rate interval from 0.20 to 0.11, Ours outperforms the other CS sampling techniques in terms of reconstructed image quality and detection accuracy when the sampling rate allocated to the non-salient blocks, $r_{low}$, is 0.05. When $r_{low}$ is 0.01, Ours generally outperforms BCS. There are some cases that are slightly worse than the other two adaptive CS sampling techniques in terms of image quality and detection accuracy (usually by only 0.1), but the differences seem negligible.
For the low sampling rate interval from 0.10 to 0.04, Ours outperforms the other CS sampling techniques in terms of detection accuracy. In particular, when the average sampling rates are 0.07 and 0.04, Ours achieves higher detection accuracy even though its reconstructed image quality is lower than that of BCS-asymmetry, which is specifically designed to enhance visualization. This also shows that high inference accuracy for CV tasks does not depend entirely on high image quality. When the average sampling rate is 0.04, the mAP of BCS drops to 0.387, a little more than half of that of the original image, and the two existing adaptive CS sampling techniques, BCS-PCT and BCS-asymmetry, also show significant decreases in detection accuracy, but Ours still achieves a higher mAP of 0.723, maintaining a level of accuracy similar to that of the original image. Figure 9 shows the object detection results for the original image and the reconstructed images of BCS, BCS-asymmetry, and Ours (at an average sampling rate of 0.04). It can be seen that at lower sampling rates, there is significant mis-detection in the reconstructed images of BCS and BCS-asymmetry. In contrast, at the same sampling rate, the reconstructed images of Ours still correctly portray predefined objects that cannot be recognized in the images from BCS and BCS-asymmetry, such as the cyclist in Sample 1 and the car in Sample 2 in Figure 9. Moreover, the results of Ours show that each predefined object is detected with confidence very close to that of the original image. Although the pixel size of a predefined object is very small relative to the whole image, its features are transmitted through our proposed sampling technique. This is attributed to the fact that by allocating a higher sampling rate to the salient parts of the original image, the information required for the object detection task is preserved. This also proves that the proposed technique can still achieve high accuracy with a lower sampling rate and a higher degree of data compression for the object detection task.
To comprehensively verify the effectiveness of our proposal, we compared it with the most advanced CS techniques. Figure 10 presents a graphical representation of curves depicting the relationship between the detection mAP and sampling rate for various CS techniques on the KITTI dataset. An analysis of Figure 10 reveals distinctive performance characteristics. Our proposal consistently outperforms MR-CCSNet and AMP-Net in terms of detection accuracy, showcasing superior performance across varying sampling rates. Notably, our proposal demonstrates detection accuracy comparable to that of FSOINet when the sampling rate exceeds 0.10 (when $r_{low}$ of Ours is 0.05). As the sampling rate decreases, FSOINet exhibits a decline in detection accuracy: this is particularly noticeable when the sampling rate is lower than 0.10. In contrast, our proposal maintains a high level of detection accuracy even under conditions of very low sampling rates. This result underscores the robustness of our proposal and its versatility for effectively addressing a broad spectrum of CV tasks.

5. Conclusions and Future Remarks

This paper presents an adaptive sampling technique in CS with the aim of improving the accuracy of CV tasks even at low sampling rates. Our contribution to the field of sensing technology lies in providing a framework that enables sensor systems to intelligently prioritize and transmit the most relevant data. Our technique ensures that essential information required for CV tasks is retained, thereby improving the overall efficacy of sensor-based data acquisition systems. The experimental results validate that our technique yields superior classification and object detection accuracy on various datasets collected by real sensor devices. This highlights the potential of our technique to maintain high inference performance while significantly reducing data transmission costs in sensor-rich but bandwidth-constrained environments. Future work will be to find the best combination of non-salient and salient block sampling rates to achieve the highest accuracy and to seek better salient block detection.

Author Contributions

Conceptualization, J.Z.; methodology, L.L.; validation, L.L. and H.N.; writing—original draft preparation, L.L.; writing—review and editing, H.N.; supervision, I.T.; project administration, T.O.; funding acquisition, J.Z. and H.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by JSPS KAKENHI grant numbers 22K12101 and 23K16858.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Choi, J.; Lee, W. Drone SAR Image Compression Based on Block Adaptive Compressive Sensing. Remote Sens. 2021, 13, 3947.
  2. Djelouat, H.; Amira, A.; Bensaali, F. Compressive sensing-based IoT applications: A review. J. Sens. Actuator Netw. 2018, 7, 45.
  3. Li, L.; Fang, Y.; Liu, L.; Peng, H.; Kurths, J.; Yang, Y. Overview of compressed sensing: Sensing model, reconstruction algorithm, and its applications. Appl. Sci. 2020, 10, 5909.
  4. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  5. Belyaev, E. An efficient compressive sensed video codec with inter-frame decoding and low-complexity intra-frame encoding. Sensors 2023, 23, 1368.
  6. Nguyen Canh, T.; Nagahara, H. Deep compressive sensing for visual privacy protection in flatcam imaging. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; pp. 3978–3986.
  7. Xue, Y.; Bigras, G.; Hugh, J.; Ray, N. Training convolutional neural networks and compressed sensing end-to-end for microscopy cell detection. IEEE Trans. Med. Imaging 2019, 38, 2632–2641.
  8. Adler, A.; Boublil, D.; Elad, M.; Zibulevsky, M. A deep learning approach to block-based compressed sensing of images. arXiv 2016, arXiv:1606.01519.
  9. Yu, Y.; Wang, B.; Zhang, L. Saliency-based compressive sampling for image signals. IEEE Signal Process. Lett. 2010, 17, 973–976.
  10. Zhou, S.; Xiang, S.; Liu, X.; Li, H. Asymmetric block based compressive sensing for image signals. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018; pp. 1–6.
  11. Yang, J.; Wang, H.; Fan, Y.; Zhou, J. VCSL: Video Compressive Sensing with Low-complexity ROI Detection in Compressed Domain. In Proceedings of the 2023 Data Compression Conference (DCC), Snowbird, UT, USA, 21–24 March 2023.
  12. Liu, L.; Nishikawa, H.; Zhou, J.; Taniguchi, I.; Onoye, T. Adaptive Sampling for Computer Vision-Oriented Compressive Sensing. In Proceedings of the ACM Multimedia Asia 2023, Tainan, Taiwan, 6–8 December 2023; pp. 1–5.
  13. Cui, W.; Liu, S.; Jiang, F.; Zhao, D. Image compressed sensing using non-local neural network. IEEE Trans. Multimed. 2021, 25, 816–830.
  14. Zhang, X.; Wu, X. Attention-guided image compression by deep reconstruction of compressive sensed saliency skeleton. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13354–13364.
  15. Fan, Z.E.; Lian, F.; Quan, J.N. Global Sensing and Measurements Reuse for Image Compressed Sensing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8954–8963.
  16. Akbari, A.; Mandache, D.; Trocan, M.; Granado, B. Adaptive saliency-based compressive sensing image reconstruction. In Proceedings of the 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, USA, 11–15 July 2016; pp. 1–6.
  17. You, D.; Xie, J.; Zhang, J. ISTA-Net++: Flexible deep unfolding network for compressive sensing. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6.
  18. Chen, B.; Zhang, J. Content-aware scalable deep compressed sensing. IEEE Trans. Image Process. 2022, 31, 5412–5426.
  19. Singh, S.; Kumar, V.; Verma, H. Reduction of blocking artifacts in JPEG compressed images. Digit. Signal Process. 2007, 17, 225–243.
  20. Chun, I.Y.; Adcock, B. Compressed sensing and parallel acquisition. IEEE Trans. Inf. Theory 2017, 63, 4860–4882.
  21. Gan, L. Block compressed sensing of natural images. In Proceedings of the 2007 15th International Conference on Digital Signal Processing, Cardiff, Wales, UK, 1–4 July 2007; pp. 403–406.
  22. Chun, I.Y.; Adcock, B. Uniform recovery from subgaussian multi-sensor measurements. Appl. Comput. Harmon. Anal. 2020, 48, 731–765.
  23. Cong, R.; Lei, J.; Fu, H.; Cheng, M.M.; Lin, W.; Huang, Q. Review of visual saliency detection with comprehensive information. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 2941–2959.
  24. Jian, M.; Wang, J.; Yu, H.; Wang, G.; Meng, X.; Yang, L.; Dong, J.; Yin, Y. Visual saliency detection by integrating spatial position prior of object with background cues. Expert Syst. Appl. 2021, 168, 114219.
  25. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324.
  26. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  27. Tian, Z.; He, T.; Shen, C.; Yan, Y. Decoders matter for semantic segmentation: Data-dependent decoding enables flexible feature aggregation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3126–3135. [Google Scholar]
  28. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  29. Iglovikov, V.; Shvets, A. TernausNet: U-net with VGG11 encoder pre-trained on ImageNet for image segmentation. arXiv 2018, arXiv:1801.05746. [Google Scholar]
  30. Fang, Y.; Chen, Z.; Lin, W.; Lin, C.W. Saliency detection in the compressed domain for adaptive image retargeting. IEEE Trans. Image Process. 2012, 21, 3888–3901. [Google Scholar] [CrossRef]
  31. Fang, Y.; Lin, W.; Chen, Z.; Tsai, C.M.; Lin, C.W. A video saliency detection model in compressed domain. IEEE Trans. Circuits Syst. Video Technol. 2013, 24, 27–38. [Google Scholar] [CrossRef]
  32. Aich, S.; Stavness, I. Global sum pooling: A generalization trick for object counting with small datasets of large images. arXiv 2018, arXiv:1805.11123. [Google Scholar]
  33. Shi, W.; Jiang, F.; Liu, S.; Zhao, D. Image compressed sensing using convolutional neural network. IEEE Trans. Image Process. 2019, 29, 375–388. [Google Scholar] [CrossRef]
  34. Wang, D.; Tan, X. Unsupervised feature learning with C-SVDDNet. Pattern Recognit. 2016, 60, 473–485. [Google Scholar] [CrossRef]
  35. Rahimzadeh, M.; Parvin, S.; Safi, E.; Mohammadi, M.R. Wise-srnet: A novel architecture for enhancing image classification by learning spatial resolution of feature maps. arXiv 2021, arXiv:2104.12294. [Google Scholar] [CrossRef]
  36. Lelekas, I.; Tomen, N.; Pintea, S.L.; van Gemert, J.C. Top-Down Networks: A coarse-to-fine reimagination of CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 752–753. [Google Scholar]
  37. Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; Timofte, R. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6360–6376. [Google Scholar] [CrossRef] [PubMed]
  38. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  39. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  42. Zhang, Z.; Liu, Y.; Liu, J.; Wen, F.; Zhu, C. AMP-Net: Denoising-based deep unfolding for compressive image sensing. IEEE Trans. Image Process. 2020, 30, 1487–1500. [Google Scholar] [CrossRef]
  43. Chen, W.; Yang, C.; Yang, X. FSOINET: Feature-space optimization-inspired network for image compressive sensing. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Virtual, 7–13 May 2022; pp. 2460–2464. [Google Scholar]
  44. Yang, J.; Price, B.; Cohen, S.; Lee, H.; Yang, M.H. Object contour detection with a fully convolutional encoder-decoder network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 193–202. [Google Scholar]
  45. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  46. Rawat, W.; Wang, Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef]
  47. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  48. Zhao, Z.Q.; Zheng, P.; Xu, S.t.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  49. Padilla, R.; Netto, S.L.; Da Silva, E.A. A survey on performance metrics for object-detection algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020; pp. 237–242. [Google Scholar]
  50. Zhou, S.; Deng, X.; Li, C.; Liu, Y.; Jiang, H. Recognition-oriented image compressive sensing with deep learning. IEEE Trans. Multimed. 2022, 25, 2022–2032. [Google Scholar] [CrossRef]
  51. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  52. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  53. Kumar, C.; Punitha, R. Yolov3 and yolov4: Multiple object detection for surveillance applications. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 1316–1321. [Google Scholar]
  54. Choi, J.; Chun, D.; Kim, H.; Lee, H.J. Gaussian yolov3: An accurate and fast object detector using localization uncertainty for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 502–511. [Google Scholar]
  55. Tian, D.; Lin, C.; Zhou, J.; Duan, X.; Cao, Y.; Zhao, D.; Cao, D. SA-YOLOv3: An efficient and accurate object detector using self-attention mechanism for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2020, 23, 4099–4110. [Google Scholar] [CrossRef]
  56. Koksal, A.; Ince, K.G.; Alatan, A. Effect of annotation errors on drone detection with YOLOv3. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 1030–1031. [Google Scholar]
Figure 1. A fundamental concept of CS.
Figure 2. A potential motivation of CV-oriented latent CS [12]. Reproduced with permission from Luyang Liu, Proceedings of the 5th ACM International Conference on Multimedia in Asia; published by ACM, 2023.
Figure 3. Structure of CV-oriented adaptive sampling [12]. Reproduced with permission from Luyang Liu, Proceedings of the 5th ACM International Conference on Multimedia in Asia; published by ACM, 2023.
Figure 4. The process of adaptive sampling rate allocation: b represents the block size, r_high and r_low denote the high (red) and low (black) sampling rates, respectively, and W_{i,j} and R_{i,j} represent the values at the position of the i-th row and j-th column [12]. Reproduced with permission from Luyang Liu, Proceedings of the 5th ACM International Conference on Multimedia in Asia; published by ACM, 2023.
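To make the allocation in Figure 4 concrete, the sketch below assigns a per-block rate from a saliency map and then samples each block with a Gaussian measurement matrix. It is a minimal NumPy illustration under stated assumptions: W_{i,j} is read here as the summed saliency of block (i, j), the salient/non-salient split is approximated by a simple top-k ranking with an illustrative `salient_ratio` parameter (our framework instead derives this decision from the saliency detection network), and the measurement matrix is i.i.d. Gaussian.

```python
import numpy as np

def allocate_block_rates(saliency, b, r_high, r_low, salient_ratio=0.25):
    """Assign r_high to the most salient b x b blocks and r_low to the rest.
    `saliency` is an H x W map; the top-k split via `salient_ratio` is only an
    illustrative stand-in for the saliency-detection decision."""
    rows, cols = saliency.shape[0] // b, saliency.shape[1] // b
    # W_{i,j}: summed saliency of each b x b block
    weights = saliency[:rows * b, :cols * b].reshape(rows, b, cols, b).sum(axis=(1, 3))
    k = max(1, int(round(salient_ratio * rows * cols)))
    threshold = np.sort(weights.ravel())[-k]
    return np.where(weights >= threshold, r_high, r_low)   # R_{i,j}

def sample_blocks(image, rates, b, seed=0):
    """Block-based CS: block (i, j) is measured with a Gaussian matrix whose
    row count is round(R_{i,j} * b * b)."""
    rng = np.random.default_rng(seed)
    measurements = {}
    for i in range(rates.shape[0]):
        for j in range(rates.shape[1]):
            block = image[i * b:(i + 1) * b, j * b:(j + 1) * b].reshape(-1)
            m = max(1, int(round(rates[i, j] * b * b)))
            phi = rng.standard_normal((m, b * b)) / np.sqrt(m)
            measurements[(i, j)] = phi @ block
    return measurements

# The average CS rate reported in the tables is, we assume, the total number of
# measurements divided by the number of sampled pixels:
# avg_rate = sum(len(y) for y in measurements.values()) / (rates.size * b * b)
```

The r_high/r_low pairs listed under "Ours" in Tables 2–4 and 6 correspond to the two rates passed to `allocate_block_rates` in this sketch.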
Figure 5. Classification error due to information loss in the reconstructed image. Left: the image reconstructed after uniform sampling (BCS), where missing features of the original signal cause a classification error. Right: the image reconstructed after our adaptive sampling, where the feature information of the target is preserved and the object remains recognizable.
Figure 6. Comparison of results for state-of-the-art CS techniques on STL10 dataset [34].
Figure 7. Comparison of results for state-of-the-art CS techniques on Intel dataset [35].
Figure 8. Comparison of results for state-of-the-art CS techniques on Imagenette dataset [36].
Figure 9. Object detection results of reconstructed images with different sampling techniques.
Figure 10. Comparison of results for state-of-the-art CS techniques on KITTI dataset [51].
Table 1. Network layer structure of one-step recovery MobileNetV3.

| Layer | Input | Operator | Output Channel | Stride |
|---|---|---|---|---|
| 1 | H × W × 3 | conv2d | 16 | 2 |
| 2 | (H × W)/4 × 16 | bottleneck, 3 × 3 | 16 | 1 |
| 3 | (H × W)/4 × 16 | bottleneck, 3 × 3 | 24 | 2 |
| 4 | (H × W)/16 × 24 | bottleneck, 3 × 3 | 24 | 1 |
| 5 | (H × W)/16 × 24 | bottleneck, 5 × 5 | 40 | 2 |
| 6 | (H × W)/64 × 40 | bottleneck, 5 × 5 | 40 | 1 |
| 7 | (H × W)/64 × 40 | bottleneck, 5 × 5 | 40 | 1 |
| 8 | (H × W)/64 × 40 | bottleneck, 3 × 3 | 80 | 2 |
| 9 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
| 10 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
| 11 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 80 | 1 |
| 12 | (H × W)/256 × 80 | bottleneck, 3 × 3 | 112 | 1 |
| 13 | (H × W)/256 × 112 | bottleneck, 3 × 3 | 112 | 1 |
| 14 | (H × W)/256 × 112 | bottleneck, 5 × 5 | 160 | 2 |
| 15 | (H × W)/1024 × 160 | bottleneck, 5 × 5 | 160 | 1 |
| 16 | (H × W)/1024 × 160 | bottleneck, 5 × 5 | 160 | 1 |
| 17 | (H × W)/1024 × 160 | DUpsampling | 1 | 1 |
| Output | H × W × 1 | - | - | - |
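As a reading aid for Table 1, the sketch below assembles the same layer plan in PyTorch. It is only an approximation under stated assumptions: the bottlenecks are plain inverted-residual blocks (the squeeze-and-excitation and h-swish components of MobileNetV3 [25] are omitted), bilinear upsampling with a 1 × 1 head stands in for DUpsampling [27], and the class and parameter names are ours rather than the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """Simplified inverted-residual block: expand (1x1) -> depthwise -> project (1x1)."""
    def __init__(self, in_ch, out_ch, kernel, stride, expand=4):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel, stride, kernel // 2, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y if self.use_res else y

class OneStepSaliencyNet(nn.Module):
    """Follows the layer plan of Table 1: a stride-2 stem plus 15 bottlenecks that reach
    1/32 resolution, then one step back to full resolution (bilinear stand-in for DUpsampling)."""
    def __init__(self):
        super().__init__()
        cfg = [  # (in, out, kernel, stride) for layers 2-16 of Table 1
            (16, 16, 3, 1), (16, 24, 3, 2), (24, 24, 3, 1),
            (24, 40, 5, 2), (40, 40, 5, 1), (40, 40, 5, 1),
            (40, 80, 3, 2), (80, 80, 3, 1), (80, 80, 3, 1), (80, 80, 3, 1),
            (80, 112, 3, 1), (112, 112, 3, 1),
            (112, 160, 5, 2), (160, 160, 5, 1), (160, 160, 5, 1),
        ]
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1, bias=False),
                                  nn.BatchNorm2d(16), nn.ReLU(inplace=True))   # layer 1
        self.blocks = nn.Sequential(*[Bottleneck(*c) for c in cfg])            # layers 2-16
        self.head = nn.Conv2d(160, 1, 1)                                       # layer 17 (stand-in)

    def forward(self, x):
        h, w = x.shape[-2:]
        y = self.head(self.blocks(self.stem(x)))               # (H x W)/1024 resolution
        y = F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
        return torch.sigmoid(y)                                 # H x W x 1 saliency map

# OneStepSaliencyNet()(torch.randn(1, 3, 96, 96)).shape  ->  torch.Size([1, 1, 96, 96])
```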
Table 2. Comparison of results [12] for BCS [8], BCS-PCT [9], and BCS-asymmetry [10] on STL10 dataset [34]. Reproduced with permission from Luyang Liu, Proceedings of the 5th ACM International Conference on Multimedia in Asia; published by ACM, 2023. PSNR and SSIM describe the reconstructed image quality; Xception, ResNet, DenseNet, and Average are classification accuracies [%]. For BCS, BCS-PCT, and BCS-asymmetry the sampling rate is uniform, whereas for Ours the saliency (r_high) and non-saliency (r_low) rates are listed. Difference is each technique's average accuracy minus that of Ours at the same average CS rate.

| Average CS Rate | Sampling Technique | CS Rate (r_high / r_low) | PSNR [dB] | SSIM | Xception | ResNet | DenseNet | Average | Difference |
|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | - | - | 86.32 | 80.76 | 76.71 | 81.26 | - |
| 0.21 | BCS | 0.21 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −14.88 |
| | BCS-PCT | 0.21 | 25.58 | 0.81 | 62.42 | 62.17 | 57.97 | 60.85 | −7.73 |
| | BCS-asymmetry | 0.21 | 26.20 | 0.83 | 66.90 | 65.13 | 60.80 | 64.28 | −4.30 |
| | Ours | 0.50 / 0.05 | 26.40 | 0.85 | 71.91 | 69.33 | 64.50 | 68.58 | ±0.00 |
| 0.18 | BCS | 0.18 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −6.20 |
| | BCS-PCT | 0.18 | 25.42 | 0.80 | 61.20 | 61.53 | 57.08 | 59.94 | +0.03 |
| | BCS-asymmetry | 0.18 | 26.03 | 0.83 | 65.91 | 64.55 | 60.11 | 63.52 | +3.62 |
| | Ours | 0.50 / 0.01 | 22.97 | 0.73 | 63.42 | 59.23 | 57.06 | 59.90 | ±0.00 |
| 0.17 | BCS | 0.17 | 24.33 | 0.76 | 54.48 | 55.17 | 51.45 | 53.70 | −14.42 |
| | BCS-PCT | 0.17 | 25.37 | 0.80 | 60.78 | 61.31 | 56.90 | 59.66 | −8.46 |
| | BCS-asymmetry | 0.17 | 25.97 | 0.82 | 65.35 | 64.23 | 59.91 | 63.16 | −4.96 |
| | Ours | 0.40 / 0.05 | 26.27 | 0.85 | 71.16 | 69.05 | 64.16 | 68.12 | ±0.00 |
| 0.15 | BCS | 0.15 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −13.52 |
| | BCS-PCT | 0.15 | 24.45 | 0.76 | 55.87 | 55.32 | 52.45 | 54.55 | −4.70 |
| | BCS-asymmetry | 0.15 | 25.04 | 0.79 | 61.21 | 59.52 | 56.31 | 59.01 | −0.23 |
| | Ours | 0.40 / 0.01 | 22.89 | 0.72 | 62.51 | 58.62 | 56.60 | 59.24 | ±0.00 |
| 0.14 | BCS | 0.14 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −21.11 |
| | BCS-PCT | 0.14 | 24.38 | 0.76 | 55.25 | 54.93 | 52.02 | 54.07 | −12.77 |
| | BCS-asymmetry | 0.14 | 24.97 | 0.79 | 60.62 | 59.02 | 56.01 | 58.55 | −8.29 |
| | Ours | 0.30 / 0.05 | 26.06 | 0.84 | 69.41 | 68.05 | 63.05 | 66.84 | ±0.00 |
| 0.11 | BCS | 0.11 | 23.35 | 0.71 | 46.60 | 45.67 | 44.91 | 45.73 | −11.71 |
| | BCS-PCT | 0.11 | 24.16 | 0.75 | 53.57 | 53.12 | 50.56 | 52.42 | −5.02 |
| | BCS-asymmetry | 0.11 | 24.72 | 0.77 | 58.48 | 57.15 | 54.22 | 56.62 | −0.82 |
| | Ours | 0.30 / 0.01 | 22.77 | 0.71 | 60.05 | 57.16 | 55.11 | 57.44 | ±0.00 |
| 0.10 | BCS | 0.10 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −26.23 |
| | BCS-PCT | 0.10 | 23.11 | 0.70 | 48.37 | 46.11 | 46.78 | 47.09 | −16.43 |
| | BCS-asymmetry | 0.10 | 23.68 | 0.73 | 54.28 | 51.88 | 51.35 | 52.50 | −11.02 |
| | Ours | 0.20 / 0.05 | 25.69 | 0.82 | 64.57 | 65.68 | 60.31 | 63.52 | ±0.00 |
| 0.08 | BCS | 0.08 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −15.80 |
| | BCS-PCT | 0.08 | 22.92 | 0.69 | 46.43 | 43.72 | 44.72 | 44.96 | −8.13 |
| | BCS-asymmetry | 0.08 | 23.46 | 0.72 | 52.10 | 49.37 | 49.31 | 50.26 | −2.83 |
| | Ours | 0.20 / 0.01 | 22.56 | 0.69 | 54.30 | 54.02 | 50.95 | 53.09 | ±0.00 |
| 0.07 | BCS | 0.07 | 22.14 | 0.64 | 38.78 | 35.27 | 37.81 | 37.29 | −20.15 |
| | BCS-PCT | 0.07 | 22.81 | 0.68 | 44.71 | 42.43 | 43.65 | 43.60 | −13.84 |
| | BCS-asymmetry | 0.07 | 23.33 | 0.71 | 50.60 | 48.17 | 48.40 | 49.06 | −8.38 |
| | Ours | 0.10 / 0.05 | 24.99 | 0.79 | 56.90 | 60.07 | 55.33 | 57.43 | ±0.00 |
| 0.04 | BCS | 0.04 | 19.99 | 0.50 | 28.88 | 22.06 | 27.66 | 26.20 | −18.35 |
| | BCS-PCT | 0.04 | 20.58 | 0.54 | 32.63 | 26.75 | 31.47 | 30.28 | −14.26 |
| | BCS-asymmetry | 0.04 | 21.08 | 0.58 | 37.63 | 32.82 | 36.07 | 35.51 | −9.04 |
| | Ours | 0.10 / 0.01 | 22.10 | 0.66 | 44.93 | 45.20 | 43.51 | 44.55 | ±0.00 |
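The reconstructed image quality columns in Tables 2–4 and 6 report PSNR and SSIM. A minimal sketch of these metrics follows, assuming 8-bit images and the scikit-image SSIM implementation; the exact implementation used for the tables is not specified here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original and reconstructed image."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# SSIM on a grayscale pair; for RGB images pass channel_axis=-1.
# ssim_value = structural_similarity(ref, rec, data_range=255)
```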
Table 3. Comparison of results for BCS [8], BCS-PCT [9], and BCS-asymmetry [10] on Intel dataset [35]. Columns are defined as in Table 2.

| Average CS Rate | Sampling Technique | CS Rate (r_high / r_low) | PSNR [dB] | SSIM | Xception | ResNet | DenseNet | Average | Difference |
|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | - | - | 88.56 | 87.13 | 68.66 | 81.45 | - |
| 0.22 | BCS | 0.22 | 24.39 | 0.73 | 53.20 | 48.96 | 67.53 | 56.56 | −8.50 |
| | BCS-PCT | 0.22 | 25.20 | 0.77 | 56.06 | 53.86 | 67.53 | 59.15 | −5.91 |
| | BCS-asymmetry | 0.22 | 25.70 | 0.79 | 58.26 | 56.03 | 67.50 | 60.60 | −4.47 |
| | Ours | 0.50 / 0.05 | 26.18 | 0.82 | 64.56 | 62.73 | 67.90 | 65.06 | ±0.00 |
| 0.19 | BCS | 0.19 | 23.87 | 0.70 | 50.56 | 46.46 | 67.53 | 54.85 | −9.49 |
| | BCS-PCT | 0.19 | 24.70 | 0.74 | 54.20 | 51.86 | 67.53 | 57.86 | −6.48 |
| | BCS-asymmetry | 0.19 | 25.17 | 0.76 | 56.66 | 54.53 | 67.66 | 59.62 | −4.73 |
| | Ours | 0.50 / 0.01 | 24.10 | 0.72 | 62.53 | 63.70 | 66.80 | 64.34 | ±0.00 |
| 0.18 | BCS | 0.18 | 23.87 | 0.70 | 50.56 | 46.46 | 67.53 | 54.85 | −9.54 |
| | BCS-PCT | 0.18 | 24.66 | 0.74 | 54.06 | 51.63 | 67.46 | 57.72 | −6.68 |
| | BCS-asymmetry | 0.18 | 25.12 | 0.76 | 56.76 | 54.30 | 67.66 | 59.57 | −4.82 |
| | Ours | 0.40 / 0.05 | 25.98 | 0.81 | 63.66 | 61.66 | 67.86 | 64.39 | ±0.00 |
| 0.15 | BCS | 0.15 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −11.69 |
| | BCS-PCT | 0.15 | 23.99 | 0.70 | 50.86 | 47.76 | 67.16 | 55.26 | −8.40 |
| | BCS-asymmetry | 0.15 | 24.43 | 0.73 | 52.80 | 51.50 | 67.16 | 57.15 | −6.51 |
| | Ours | 0.40 / 0.01 | 23.96 | 0.71 | 61.93 | 62.40 | 66.66 | 63.66 | ±0.00 |
| 0.14 | BCS | 0.14 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −11.02 |
| | BCS-PCT | 0.14 | 23.94 | 0.70 | 50.66 | 47.73 | 67.20 | 55.20 | −7.80 |
| | BCS-asymmetry | 0.14 | 24.38 | 0.72 | 53.03 | 51.06 | 67.20 | 57.10 | −5.90 |
| | Ours | 0.30 / 0.05 | 25.69 | 0.80 | 61.90 | 59.26 | 67.83 | 63.00 | ±0.00 |
| 0.12 | BCS | 0.12 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −10.33 |
| | BCS-PCT | 0.12 | 23.84 | 0.69 | 50.30 | 46.60 | 67.13 | 54.68 | −7.62 |
| | BCS-asymmetry | 0.12 | 24.26 | 0.72 | 52.00 | 50.13 | 67.33 | 56.49 | −5.81 |
| | Ours | 0.30 / 0.01 | 23.76 | 0.70 | 59.70 | 60.60 | 66.60 | 62.30 | ±0.00 |
| 0.11 | BCS | 0.11 | 23.21 | 0.65 | 46.76 | 42.13 | 67.03 | 51.97 | −8.08 |
| | BCS-PCT | 0.11 | 23.79 | 0.69 | 50.20 | 46.23 | 67.30 | 54.58 | −5.47 |
| | BCS-asymmetry | 0.11 | 24.19 | 0.71 | 51.70 | 49.40 | 67.33 | 56.14 | −3.91 |
| | Ours | 0.20 / 0.05 | 25.23 | 0.77 | 57.86 | 54.56 | 67.73 | 60.05 | ±0.00 |
| 0.08 | BCS | 0.08 | 22.40 | 0.59 | 42.56 | 37.90 | 67.06 | 49.17 | −9.81 |
| | BCS-PCT | 0.08 | 22.96 | 0.63 | 46.03 | 43.10 | 67.13 | 52.09 | −6.90 |
| | BCS-asymmetry | 0.08 | 23.35 | 0.66 | 47.50 | 46.46 | 67.43 | 53.80 | −5.19 |
| | Ours | 0.20 / 0.01 | 23.43 | 0.67 | 55.06 | 55.16 | 66.73 | 58.98 | ±0.00 |
| 0.07 | BCS | 0.07 | 22.40 | 0.59 | 42.56 | 37.90 | 67.06 | 49.17 | −6.94 |
| | BCS-PCT | 0.07 | 22.88 | 0.63 | 45.40 | 42.63 | 67.30 | 51.78 | −4.34 |
| | BCS-asymmetry | 0.07 | 23.26 | 0.65 | 47.53 | 45.66 | 67.30 | 53.50 | −2.62 |
| | Ours | 0.10 / 0.05 | 24.50 | 0.74 | 51.96 | 48.56 | 67.83 | 56.12 | ±0.00 |
| 0.04 | BCS | 0.04 | 21.03 | 0.48 | 33.50 | 26.30 | 66.53 | 42.11 | −10.73 |
| | BCS-PCT | 0.04 | 21.49 | 0.52 | 37.66 | 30.60 | 66.90 | 45.05 | −7.79 |
| | BCS-asymmetry | 0.04 | 21.86 | 0.55 | 40.53 | 34.90 | 67.16 | 47.53 | −5.31 |
| | Ours | 0.10 / 0.01 | 22.88 | 0.64 | 46.93 | 44.66 | 66.93 | 52.84 | ±0.00 |
Table 4. Comparison of results [12] for BCS [8], BCS-PCT [9], and BCS-asymmetry [10] on Imagenette dataset [36]. Reproduced with permission from Luyang Liu, Proceedings of the 5th ACM International Conference on Multimedia in Asia; published by ACM, 2023. Columns are defined as in Table 2.

| Average CS Rate | Sampling Technique | CS Rate (r_high / r_low) | PSNR [dB] | SSIM | Xception | ResNet | DenseNet | Average | Difference |
|---|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | - | - | 90.03 | 85.40 | 84.86 | 86.76 | - |
| 0.20 | BCS | 0.20 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | −0.88 |
| | BCS-PCT | 0.20 | 30.79 | 0.88 | 88.98 | 83.94 | 84.51 | 85.81 | −0.31 |
| | BCS-asymmetry | 0.20 | 31.37 | 0.89 | 89.01 | 84.23 | 84.60 | 85.95 | −0.17 |
| | Ours | 0.50 / 0.05 | 33.50 | 0.91 | 89.32 | 84.49 | 84.54 | 86.12 | ±0.00 |
| 0.18 | BCS | 0.18 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | +1.74 |
| | BCS-PCT | 0.18 | 30.69 | 0.87 | 89.01 | 83.86 | 84.51 | 85.79 | +2.29 |
| | BCS-asymmetry | 0.18 | 31.27 | 0.88 | 89.12 | 84.12 | 84.66 | 85.97 | +2.47 |
| | Ours | 0.50 / 0.01 | 29.92 | 0.84 | 88.50 | 80.76 | 81.24 | 83.50 | ±0.00 |
| 0.17 | BCS | 0.17 | 29.43 | 0.85 | 88.64 | 83.01 | 84.06 | 85.24 | −0.92 |
| | BCS-PCT | 0.17 | 30.63 | 0.87 | 88.98 | 83.86 | 84.54 | 85.79 | −0.36 |
| | BCS-asymmetry | 0.17 | 31.21 | 0.88 | 89.07 | 84.09 | 84.71 | 85.96 | −0.20 |
| | Ours | 0.40 / 0.05 | 33.21 | 0.91 | 89.44 | 84.51 | 84.51 | 86.15 | ±0.00 |
| 0.14 | BCS | 0.14 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | +0.18 |
| | BCS-PCT | 0.14 | 29.58 | 0.85 | 88.10 | 82.66 | 83.97 | 84.91 | +1.37 |
| | BCS-asymmetry | 0.14 | 30.15 | 0.86 | 88.21 | 83.29 | 84.09 | 85.20 | +1.66 |
| | Ours | 0.40 / 0.01 | 29.76 | 0.84 | 88.44 | 80.76 | 81.41 | 83.54 | ±0.00 |
| 0.13 | BCS | 0.13 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | −2.38 |
| | BCS-PCT | 0.13 | 29.51 | 0.85 | 87.93 | 82.81 | 84.03 | 84.92 | −1.18 |
| | BCS-asymmetry | 0.13 | 30.07 | 0.86 | 88.21 | 83.15 | 84.12 | 85.16 | −0.94 |
| | Ours | 0.30 / 0.05 | 32.80 | 0.90 | 89.35 | 84.46 | 84.49 | 86.10 | ±0.00 |
| 0.11 | BCS | 0.11 | 28.38 | 0.82 | 87.10 | 80.90 | 83.15 | 83.72 | +0.19 |
| | BCS-PCT | 0.11 | 29.35 | 0.84 | 87.84 | 82.52 | 83.94 | 84.77 | +1.24 |
| | BCS-asymmetry | 0.11 | 29.91 | 0.85 | 88.19 | 83.01 | 84.12 | 85.11 | +1.58 |
| | Ours | 0.30 / 0.01 | 29.53 | 0.83 | 88.44 | 80.67 | 81.47 | 83.53 | ±0.00 |
| 0.10 | BCS | 0.10 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −5.99 |
| | BCS-PCT | 0.10 | 28.17 | 0.81 | 86.39 | 79.68 | 82.15 | 82.74 | −3.34 |
| | BCS-asymmetry | 0.10 | 28.72 | 0.82 | 86.71 | 80.36 | 82.69 | 83.25 | −2.83 |
| | Ours | 0.20 / 0.05 | 32.08 | 0.89 | 89.27 | 84.46 | 84.51 | 86.08 | ±0.00 |
| 0.07 | BCS | 0.07 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −3.28 |
| | BCS-PCT | 0.07 | 27.85 | 0.80 | 86.14 | 78.45 | 81.78 | 82.12 | −1.25 |
| | BCS-asymmetry | 0.07 | 28.38 | 0.81 | 86.62 | 79.93 | 82.15 | 82.90 | −0.47 |
| | Ours | 0.20 / 0.01 | 29.11 | 0.82 | 88.21 | 80.53 | 81.38 | 83.37 | ±0.00 |
| 0.07 | BCS | 0.07 | 27.02 | 0.77 | 84.51 | 75.61 | 80.16 | 80.09 | −5.92 |
| | BCS-PCT | 0.07 | 27.85 | 0.80 | 86.14 | 78.45 | 81.78 | 82.12 | −3.89 |
| | BCS-asymmetry | 0.07 | 28.38 | 0.81 | 86.62 | 79.93 | 82.15 | 82.90 | −3.11 |
| | Ours | 0.10 / 0.05 | 30.76 | 0.87 | 89.21 | 84.23 | 84.60 | 86.01 | ±0.00 |
| 0.04 | BCS | 0.04 | 24.75 | 0.68 | 74.61 | 59.27 | 59.33 | 64.40 | −18.25 |
| | BCS-PCT | 0.04 | 25.53 | 0.72 | 78.77 | 65.59 | 68.41 | 70.92 | −11.73 |
| | BCS-asymmetry | 0.04 | 26.05 | 0.74 | 80.73 | 68.83 | 71.99 | 73.85 | −8.80 |
| | Ours | 0.10 / 0.01 | 28.26 | 0.80 | 87.47 | 79.53 | 80.96 | 82.65 | ±0.00 |
Table 5. Comparison using three different block sizes.

| Dataset | Block Size | Average CS Rate | Average Accuracy of BCS [%] | Average Accuracy of Ours [%] | Improvement in Accuracy of Ours Compared with BCS [%] |
|---|---|---|---|---|---|
| STL10 (96 × 96) | 32 × 32 | 0.15 | 44.75 | 54.44 | +9.69 |
| | 16 × 16 | 0.14 | 43.64 | 56.16 | +12.52 |
| | 8 × 8 | 0.13 | 43.63 | 59.87 | +16.24 |
| Intel (150 × 150) | 32 × 32 | 0.14 | 49.61 | 59.21 | +9.60 |
| | 16 × 16 | 0.14 | 50.68 | 60.41 | +9.72 |
| | 8 × 8 | 0.13 | 50.89 | 61.08 | +10.18 |
| Imagenette (512 × 512) | 32 × 32 | 0.13 | 79.79 | 83.45 | +3.66 |
| | 16 × 16 | 0.13 | 81.20 | 84.26 | +3.07 |
| | 8 × 8 | 0.12 | 81.15 | 84.71 | +3.55 |
Table 6. Comparison of results for BCS [8], BCS-PCT [9], and BCS-asymmetry [10] on KITTI [51]. PSNR and SSIM describe the reconstructed image quality; Precision, Recall, F1-Score, and mAP are object detection metrics. The CS rate column is defined as in Table 2.

| Average CS Rate | Sampling Technique | CS Rate (r_high / r_low) | PSNR [dB] | SSIM | Precision | Recall | F1-Score | mAP |
|---|---|---|---|---|---|---|---|---|
| 1.00 | Original image | 1.00 | - | - | 0.752 | 0.879 | 0.796 | 0.804 |
| 0.20 | BCS | 0.20 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.20 | 31.02 | 0.91 | 0.745 | 0.861 | 0.792 | 0.784 |
| | BCS-asymmetry | 0.20 | 31.32 | 0.91 | 0.746 | 0.860 | 0.792 | 0.784 |
| | Ours | 0.50 / 0.05 | 31.51 | 0.91 | 0.737 | 0.871 | 0.790 | 0.794 |
| 0.17 | BCS | 0.17 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.17 | 30.93 | 0.91 | 0.746 | 0.861 | 0.793 | 0.784 |
| | BCS-asymmetry | 0.17 | 31.26 | 0.91 | 0.746 | 0.860 | 0.792 | 0.783 |
| | Ours | 0.50 / 0.01 | 27.60 | 0.83 | 0.723 | 0.834 | 0.767 | 0.751 |
| 0.16 | BCS | 0.16 | 27.41 | 0.85 | 0.740 | 0.849 | 0.783 | 0.768 |
| | BCS-PCT | 0.16 | 30.89 | 0.91 | 0.743 | 0.859 | 0.790 | 0.781 |
| | BCS-asymmetry | 0.16 | 31.24 | 0.91 | 0.744 | 0.859 | 0.790 | 0.781 |
| | Ours | 0.40 / 0.05 | 31.26 | 0.91 | 0.736 | 0.870 | 0.790 | 0.796 |
| 0.14 | BCS | 0.14 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.14 | 29.85 | 0.88 | 0.731 | 0.843 | 0.777 | 0.761 |
| | BCS-asymmetry | 0.14 | 30.22 | 0.89 | 0.737 | 0.846 | 0.781 | 0.765 |
| | Ours | 0.40 / 0.01 | 27.47 | 0.83 | 0.727 | 0.832 | 0.769 | 0.753 |
| 0.13 | BCS | 0.13 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.13 | 29.80 | 0.88 | 0.731 | 0.842 | 0.776 | 0.761 |
| | BCS-asymmetry | 0.13 | 30.19 | 0.89 | 0.736 | 0.845 | 0.780 | 0.763 |
| | Ours | 0.30 / 0.05 | 30.90 | 0.91 | 0.737 | 0.874 | 0.792 | 0.796 |
| 0.11 | BCS | 0.11 | 26.29 | 0.82 | 0.724 | 0.813 | 0.757 | 0.728 |
| | BCS-PCT | 0.11 | 29.67 | 0.88 | 0.731 | 0.841 | 0.775 | 0.760 |
| | BCS-asymmetry | 0.11 | 30.11 | 0.89 | 0.736 | 0.846 | 0.780 | 0.765 |
| | Ours | 0.30 / 0.01 | 27.27 | 0.82 | 0.724 | 0.830 | 0.767 | 0.750 |
| 0.10 | BCS | 0.10 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.10 | 28.36 | 0.85 | 0.733 | 0.788 | 0.752 | 0.715 |
| | BCS-asymmetry | 0.10 | 28.78 | 0.86 | 0.731 | 0.794 | 0.755 | 0.719 |
| | Ours | 0.20 / 0.05 | 30.22 | 0.90 | 0.741 | 0.871 | 0.793 | 0.792 |
| 0.07 | BCS | 0.07 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.07 | 28.07 | 0.84 | 0.730 | 0.784 | 0.748 | 0.705 |
| | BCS-asymmetry | 0.07 | 28.59 | 0.85 | 0.733 | 0.786 | 0.751 | 0.714 |
| | Ours | 0.20 / 0.01 | 26.90 | 0.81 | 0.721 | 0.824 | 0.762 | 0.743 |
| 0.06 | BCS | 0.06 | 24.76 | 0.76 | 0.711 | 0.718 | 0.706 | 0.637 |
| | BCS-PCT | 0.06 | 27.92 | 0.84 | 0.728 | 0.782 | 0.747 | 0.704 |
| | BCS-asymmetry | 0.06 | 28.49 | 0.85 | 0.732 | 0.788 | 0.752 | 0.716 |
| | Ours | 0.10 / 0.05 | 28.81 | 0.88 | 0.738 | 0.868 | 0.790 | 0.792 |
| 0.04 | BCS | 0.04 | 22.15 | 0.66 | 0.658 | 0.458 | 0.525 | 0.387 |
| | BCS-PCT | 0.04 | 25.48 | 0.77 | 0.692 | 0.606 | 0.633 | 0.531 |
| | BCS-asymmetry | 0.04 | 26.07 | 0.79 | 0.699 | 0.627 | 0.650 | 0.550 |
| | Ours | 0.10 / 0.01 | 26.04 | 0.79 | 0.723 | 0.802 | 0.753 | 0.723 |