Article

A New Dataset for Source Identification of High Dynamic Range Images

Omar Al Shaya, Pengpeng Yang, Rongrong Ni, Yao Zhao and Alessandro Piva

1 Department of Information Engineering, University of Florence, Via di S. Marta, 3, 50139 Florence, Italy
2 Department of Electronic Media, Saudi Electronic University, Abi Bakr As Sadiq Rd, Riyadh 11673, Saudi Arabia
3 Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing Jiaotong University, Beijing 100044, China
4 Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
5 FORLAB, Multimedia Forensics Laboratory, PIN Scrl, Piazza G. Ciardi, 25, 59100 Prato, Italy
* Authors to whom correspondence should be addressed.
Sensors 2018, 18(11), 3801; https://doi.org/10.3390/s18113801
Submission received: 14 September 2018 / Revised: 29 October 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
(This article belongs to the Special Issue Camera Identification on Mobile Devices)

Abstract
Digital source identification is one of the most important problems in the field of multimedia forensics. While Standard Dynamic Range (SDR) images are commonly analyzed, High Dynamic Range (HDR) images are a less common research subject, which leaves space for further analysis. In this paper, we present a novel database of HDR and SDR images captured in different conditions, including various capturing motions, scenes and devices. As a possible application of this dataset, the performance of the well-known reference pattern noise-based source identification algorithm was tested on both kinds of images. Results have shown difficulties in source identification conducted on HDR images, due to their complexity and wider dynamic range. It is concluded that capturing conditions and devices themselves can have an impact on source identification, thus leaving space for more research in this field.

1. Introduction

Digital media have become a crucial source of information worldwide. The growing popularity of smartphones, camcorders, cameras and other digital media devices has brought many advantages, but it has also endangered security: acquired data can easily be transferred and edited in ways that alter their meaning altogether. This phenomenon makes it difficult to authenticate the information shared in the form of multimedia content. Multimedia forensics is the branch of forensic sciences that deals with this problem. Its role ranges from the investigation of operational problems to the recovery of intentionally or unintentionally damaged original information [1,2]. Cornerstones of this branch are the collection and retrieval of data regarding criminal activities [3,4], the detection of shared tampered data and content manipulation [5]. The main problem is that alterations and manipulations can be performed at such a high level that it becomes very difficult to distinguish the original content from content that has been tampered with or is fake. Authenticating the information involves tracing specific codes, links, logos, ambiance, lighting or any other clue that was present when the original content was made. Identifying the source of the information can therefore be of high importance.
Numerous algorithms have been developed to perform digital image source identification. The process can be conducted using approaches such as artifact detection [6,7], detection of pixel defects [8] and supervised learning [9]. Source identification based on detection of the reference pattern noise [10,11], better known as Photo-Response Non-Uniformity (PRNU) noise, has proven to be very successful. It was first introduced in [12] as a general solution for reliable source identification. This approach exploits the fact that pixels have different sensitivities to illumination, which makes it possible to recognize the source device even if the manufacturer did not imprint an invisible watermark on the images.
Despite the significant success of the PRNU method, there is still space for further research. Image capturing devices have developed rapidly in the past decade, providing a wide range of options, such as image stitching [13] enabled by multi-lens mobile devices [14], and various techniques of image composition [15,16] and fusion. Smartphones commonly provide not only image stitching, but also a wide range of image post-processing options that the user can apply without knowing which processes are performed to obtain the final result. High Dynamic Range (HDR) imaging is a very popular option: it can represent a wider luminance range than conventional Standard Dynamic Range (SDR) images and generate much more realistic visual content [17], as shown in Figure 1. While the SDR profile does not allow large luminosity adjustments and is therefore sensitive to bad lighting conditions and shots facing the light source, the HDR profile copes with these problems and simulates the way the human visual system adapts to such lighting changes, as can be noticed by comparing the images in Figure 1. HDR images are expected to become important multimedia files in the near future.
As their possibilities are still an active research topic, new standards have been created for still HDR images [17], and a number of researchers in the field of multimedia security have studied steganography [18] and watermarking [19] for this image type. However, as far as we know, no research or dataset focusing on source camera identification from HDR images captured by smartphone devices is available to date. This fact gave us a strong motivation to build a novel image dataset.
In contrast to SDR images, their HDR counterparts are characterized by a high irradiance dynamic range and localized contrast [20,21]. Currently, there are three major tools for generating HDR images: Computer Graphics (CG), HDR cameras and SDR cameras. As smartphone devices commonly acquire HDR images through an 'HDR mode', they deserve particular attention. In general, the acquisition [22,23,24,25] includes several important stages: multi-exposure image capturing [23], image alignment [26], image fusion [22,23,24,27] and tone mapping [28,29]. During the image alignment stage, geometric transformations may be applied to the misaligned images; furthermore, image fusion and tone mapping can lead to a non-linear transformation of pixel values. It follows that the PRNU-based method, an effective way to identify the source camera of SDR images, faces new challenges in the case of HDR images. Therefore, it is of high importance to test the performance of the PRNU-based method on HDR images.
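To make these stages concrete, the following is a minimal sketch of the generic multi-exposure pipeline, written with OpenCV's exposure-fusion API. The file names and exposure times are illustrative assumptions, and actual smartphone 'HDR mode' implementations are proprietary and may differ substantially from this sketch.

```python
# Minimal sketch of the generic SDR-to-HDR pipeline (alignment, fusion,
# tone mapping) using OpenCV. File names and exposure times are assumed,
# illustrative values, not part of the proposed dataset.
import cv2
import numpy as np

# Load a bracketed multi-exposure stack (hypothetical file names).
exposures = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]
times = np.array([1 / 160, 1 / 40, 1 / 10], dtype=np.float32)  # assumed

# 1. Image alignment: compensate small camera motion between the shots.
cv2.createAlignMTB().process(exposures, exposures)

# 2. Image fusion: merge the aligned stack into a linear radiance map.
hdr = cv2.createMergeDebevec().process(exposures, times)

# 3. Tone mapping: compress the radiance map to a displayable 8-bit range.
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
cv2.imwrite("hdr_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

Each of these steps (alignment, fusion, tone mapping) can disturb the sensor noise pattern that PRNU-based identification relies on, which is precisely the concern raised above.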
This paper presents a forensic analysis of HDR images based on a novel dataset acquired through mobile camera applications. It supports forensic authentication techniques such as source identification and forgery detection. This will be useful for tracing the rightful owner of the content and for determining the authenticity of the information. Moreover, forgery detection can reveal the extent of tampering or manipulation applied to the original information, as well as the techniques or processes through which it was performed.
The remainder of the paper is organized as follows: Section 2 describes the dataset; Section 3 briefly introduces the principles of PRNU detection; Section 4 and Section 5 present the performed experiments and their results, respectively; finally, Section 6 draws the conclusions of this research.

2. The Dataset

Following the procedure adopted to build the VISION dataset [30], a novel dataset of compressed HDR images was created. Strictly speaking, such images could be described by the term CDR (Compressed Dynamic Range), but this term is not commonly used in the literature, where compressed HDR images are simply referred to as HDR; we therefore adopt the term HDR in the remainder of this paper. Twenty-three mobile devices were used to capture a total of 5415 images in different scenarios. All the images described in this paper will be available at https://lesc.dinfo.unifi.it/en/datasets. This approach enabled the analysis of the differences between HDR and SDR images and of their application and usability in source identification.
The brands of the employed devices included Huawei, Samsung, Xiaomi, Gionee, OnePlus, Asus and Apple. Among them, there were seven different models of Huawei, four of Samsung, three of Xiaomi, six of Apple and one each of Asus, Gionee and OnePlus. Seventeen of the employed devices used the Android operating system, while six used iOS. Further information about the devices, the image resolutions they provide and the number of images taken is given in Table 1. Devices were named in accordance with their operating system: "A" stands for a device that uses Android, while "I" indicates iOS. Images were further named in the format "device_category_movement_number", where "device" is the abbreviated name of the device model, as explained above, "category" refers to HDR or SDR, "movement" defines the camera movement during acquisition, which can be TRIPOD, HAND or SHAKING, and "number" is the ordinal number of the captured image; a parsing example is sketched below. All the selected mobile devices were configured to capture photos with the default camera settings of the software system. Photos were taken without flash, in different settings, including both indoor and outdoor scenes. As the analysis requires both HDR and standard SDR images of the same scene, two photos of each scene were captured.
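As an illustration of this naming convention, the small helper below decodes a file name into its components; the concrete file name used here is a hypothetical example following the convention, not a file guaranteed to exist in the dataset.

```python
# Illustrative parser for the "device_category_movement_number" naming scheme.
from typing import NamedTuple

class ImageInfo(NamedTuple):
    device: str    # e.g., "A01" (Android) or "I02" (iOS)
    category: str  # "HDR" or "SDR"
    movement: str  # "TRIPOD", "HAND" or "SHAKING"
    number: int    # ordinal number of the capture

def parse_name(filename: str) -> ImageInfo:
    stem = filename.rsplit(".", 1)[0]          # drop the file extension
    device, category, movement, number = stem.split("_")
    return ImageInfo(device, category, movement, int(number))

# Hypothetical example file name following the convention:
print(parse_name("A01_HDR_TRIPOD_0001.jpg"))
# -> ImageInfo(device='A01', category='HDR', movement='TRIPOD', number=1)
```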
For source identification purposes, the images were divided into two categories: FLAT and NAT. FLAT images depict approximately uniform surfaces, which are nearly flat in terms of texture and allow computing a cleaner PRNU reference than images of natural scenes; thus, FLAT images are devoted to sensor-noise-based source identification. Specifically, images of walls and skies fall into this category. NAT images, on the other hand, are suitable for any image forensic application. While FLAT images are homogeneous, NAT images can contain a large number of details and colors; the NAT category therefore includes generic images covering a large span of scenes. Depending on the way they were created, NAT images were further divided into three categories:
-
images taken from the tripod (TRIPOD),
-
images taken by hand (HAND),
-
images taken by a shaky hand (SHAKING).
The stability of the image highly depends on the steadiness of the camera, which is the reason for capturing the images with three different motions. A tripod keeps the camera as still as possible. Capturing an image with the device held in a steady hand is the most common way of taking photographs, and it usually causes small pixel artifacts that are barely noticeable to the human eye. Finally, images taken by a shaky hand can be blurred because of pixel shifting caused by the camera shake. As HDR images are usually obtained from multiple SDR images, it is expected that the motion could have an impact on the source identification results for HDR images. The structure of the dataset described above is shown in Figure 2, and a sample of images from the created dataset is given in Figure 3.

3. PRNU-Based Source Identification

PRNU can be regarded as a unique stochastic fingerprint of an imaging sensor; it is estimated from a set of N images taken by the same device using the Maximum Likelihood Estimator (MLE) [11]. It has been shown that the best estimation performance is achieved if the number of images N is sufficiently large and the images are uniformly white, but not fully saturated [11]. In this paper, the improved PRNU estimator presented in [11] is employed. The improvement primarily consists of the reduced number of images (a minimum of 30 instead of 50) required for PRNU estimation, while retaining the basic concepts of the original method.
The MLE is derived from the simplified sensor output model [11] in Equation (1), which applies to each pixel of the image. Here, $I$ denotes the luminance value of the analyzed pixel, $Y$ is the illumination, $g$ is the channel color gain, $\gamma$ is the correction factor, $\Theta_q$ is the quantization noise, and $\Lambda$ combines the other noise sources [31]. Finally, $K$ is the PRNU factor, a noise-like signal responsible for the fingerprint [11], which is estimated from $N$ images taken by the camera.

$$ I = g^{\gamma} \left[ (1 + K)\, Y + \Lambda \right]^{\gamma} + \Theta_q \qquad (1) $$
The fingerprint is obtained as an approximation of the Photo-Response Non-Uniformity (PRNU) noise [12]. The framework of the PRNU-based algorithm is shown in Figure 4. The $N$ images of the set are first denoised using a wavelet-based denoising filter, and the noise residuals $W$ are then combined to compute the fingerprint. In particular, the maximum likelihood estimate $\hat{K}$ is obtained by setting the partial derivative of the log-likelihood $L(K)$ of the ratio $W/I$ to zero and solving for $K$ [11], as shown in Equation (2), where $\sigma^2$ denotes the variance of White Gaussian Noise (WGN). WGN is accepted as a simplified model of the noise term, without significant impact on the results of the procedure.

$$ \frac{\partial L(K)}{\partial K} = \sum_{k=1}^{N} \frac{W_k / I_k - K}{\sigma^2 / (I_k)^2} = 0 \;\;\Rightarrow\;\; \hat{K} = \frac{\sum_{k=1}^{N} W_k I_k}{\sum_{k=1}^{N} (I_k)^2} \qquad (2) $$
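A minimal sketch of this estimator is given below, assuming grayscale images normalized to [0, 1]; the specific wavelet-based denoiser of [11] is replaced here by scikit-image's generic denoise_wavelet as a stand-in.

```python
# Sketch of the MLE fingerprint estimator of Equation (2). The denoiser of
# [11] is approximated by skimage's generic wavelet denoiser.
import numpy as np
from skimage.restoration import denoise_wavelet

def estimate_fingerprint(images):
    """images: list of N same-sized grayscale arrays, values in [0, 1]."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for I in images:
        W = I - denoise_wavelet(I)  # noise residual W_k = I_k - F(I_k)
        num += W * I                # accumulate W_k * I_k
        den += I * I                # accumulate (I_k)^2
    return num / (den + 1e-12)      # K_hat of Equation (2)
```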
In order to perform source identification, the noise is extracted from the image under analysis and then correlated with the previously estimated camera reference pattern noise (fingerprint). The maximum of the normalized correlation $\rho$ is considered a good approximation of the generalized likelihood ratio test [32] and is therefore computed, in accordance with the statistical signal theory relation for correlation computation. Finally, the Neyman–Pearson approach can be employed for thresholding the correlation and making the final source identification decision.
Since the correlation value depends on the image size, it is not a suitable parameter for further analysis of the results. The Peak to Correlation Energy ratio (PCE) is a better comparison factor [33]; it is defined by Equation (3), where $s_{\text{peak}}$ denotes the coordinates of the correlation peak, $m$ and $n$ are the image dimensions, and $\mathcal{M}$ is a small neighborhood around the peak [33].

$$ \mathrm{PCE} = \frac{\rho(s_{\text{peak}}; X, Y)^2}{\frac{1}{mn - |\mathcal{M}|} \sum_{s \notin \mathcal{M}} \rho(s; X, Y)^2} \qquad (3) $$
The PCE accounts for possible spatial shifts between the fingerprint and the noise extracted from the image, caused, for example, by cropping. The correlation is computed for each shift, and if a correlation peak is found, the corresponding shift is considered to give the correct output.
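The sketch below illustrates this computation under simplifying assumptions: the correlation is evaluated for every cyclic shift via the FFT, and the size of the neighborhood excluded around the peak is an assumed value rather than the one used in [33].

```python
# Sketch of the PCE of Equation (3), searching all cyclic spatial shifts.
import numpy as np

def pce(W, KI, peak_radius=5):  # peak_radius: assumed size of neighborhood M
    """W: probe noise residual; KI: fingerprint times image, same size."""
    a, b = W - W.mean(), KI - KI.mean()
    # Normalized circular cross-correlation over every spatial shift (FFT).
    xc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    xc /= np.linalg.norm(a) * np.linalg.norm(b)
    py, px = np.unravel_index(np.argmax(xc), xc.shape)
    # Exclude the small neighborhood M around the peak from the energy term.
    mask = np.ones(xc.shape, dtype=bool)
    mask[max(0, py - peak_radius):py + peak_radius + 1,
         max(0, px - peak_radius):px + peak_radius + 1] = False
    energy = np.mean(xc[mask] ** 2)  # (1/(mn - |M|)) * sum of rho(s)^2
    return xc[py, px] ** 2 / energy  # Equation (3)
```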

4. Experiments

The experiment was conducted by computing a camera fingerprint over three different sets of flat images, for each employed device, namely:
-
HDR, which contained a set of 50–87 flat HDR images per device,
-
SDR, which contained a set of 50–59 flat SDR images per device,
-
MIX, which contained a set of 100–137 images, including both flat HDR and flat SDR images per device.
Each fingerprint was used for further computation of the correlation with the noise extracted from each image belonging to one of the natural datasets.
After performing the PCE computation for all the images taken by the device of interest, plots of the PCE values of single images were generated for each of the three analyzed categories. These results served as a starting point for the analysis of digital source identification reliability: if the PCE value of an image was above the threshold, the image was considered reliably paired with the digital source that captured it. The threshold value was set to 45, in accordance with the conclusions drawn in [11].
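Putting the pieces together, the following illustrative driver reproduces the structure of this experiment, reusing the estimate_fingerprint and pce sketches from Section 3; load_flat and load_natural are hypothetical loaders for the FLAT and NAT portions of the dataset, not functions shipped with it.

```python
# Illustrative experiment loop; load_flat and load_natural are hypothetical.
from skimage.restoration import denoise_wavelet

THRESHOLD = 45  # PCE threshold adopted from [11]

for device in ("A01", "A02", "I01"):             # a subset, for brevity
    for fp_set in ("SDR", "HDR", "MIX"):         # the three flat sets
        K = estimate_fingerprint(load_flat(device, fp_set))
        for name, I in load_natural(device):     # NAT images of the device
            W = I - denoise_wavelet(I)           # probe noise residual
            score = pce(W, K * I)                # Equation (3)
            print(device, fp_set, name, score, score > THRESHOLD)
```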

5. Results

The analysis was first conducted by comparison of the PCE values of single SDR and HDR images when the noise extracted from them was correlated with fingerprints created from flat SDR, HDR and MIX sets of images. The impact of image and fingerprint types, as well as the impact of motions that occurred during image capturing were observed. The reliability of source identification was examined afterwards.
It is worth noting that the PCE was computed for all the analyzed images captured by a given camera model and then averaged. The results showed that the averaged PCE was less prone to variation and that camera movements had less impact on the results when this parameter was used.

5.1. Analysis in Terms of Image Type: SDR vs. HDR

First of all, we analyzed the correlation of images with the flat SDR-based fingerprint. As expected, generally higher PCE values were obtained using SDR images with the flat SDR-based fingerprint, since they are of the same type; the results can be seen in Figure 5. Devices A01–A06 showed the biggest difference between PCE values when the captures were taken using a tripod: while SDRs were characterized by high PCE values in that case, the PCEs of most HDRs were low, sometimes even below the threshold. The higher PCE values of SDRs in comparison to HDRs were noticeable for captures taken by a steady hand as well. In the case of captures made by a shaky hand, no analogy to the previous two cases can be drawn: while Devices A02, A04 and A05 showed similar PCE values for both SDR and HDR images, the other half of the devices showed higher PCE values when SDR images were employed in combination with the SDR-based fingerprint.
Similar results were obtained with Devices A07–A17. The differences between the PCE values of SDR and HDR images were less pronounced than for the previously analyzed set of devices, being rather similar in the case of Devices A07–A10; on the other hand, A11–A17 followed the same behavior as A01–A06. Images captured by a shaky hand did not follow any pattern: while the PCEs were similar for Devices A08–A11 and A15, they were distinctly higher for SDR than for HDR images captured by the other devices of this set.
Finally, the analysis of Devices I01–I06 showed similar PCE values for both image types, regardless of the camera motion. While the PCE values obtained from SDR images of Devices I02–I06 were slightly higher than those of the corresponding HDR images, I01 showed unexpected results: with this device, the PCE values of images taken by a steady and by a shaky hand were higher for HDR images correlated with the SDR-based fingerprint.
The analysis was then repeated by correlating the noise of SDR and HDR images with the HDR-based fingerprint. The results are shown in Figure 6. All the devices showed behavior analogous to that described previously.
It is worth noting that the majority of SDR images combined with SDR-based fingerprints, as well as HDR images combined with HDR-based fingerprints, produced PCE values higher than the threshold. On the other hand, the combination of different types of images and fingerprints resulted in varying PCE values, depending on the employed camera device. An example of the results obtained by a single device is shown in Figure 7. It is noticeable that the PCE values of the HDR images captured by the I02 device model were above the threshold for all the tested images when they were correlated with the fingerprint of the SDR image set, regardless of camera movements. Similar results were obtained using SDR images captured by a completely different device model, A07, correlated with the fingerprint of HDR images; the obtained result is shown in Figure 8. These results led us to the conclusion that some devices can be identified more easily than others, and that their correct identification can be achieved regardless of the image type (HDR or SDR). On the other hand, most devices showed significantly different PCE values depending on the image type.

5.2. Analysis in Terms of Fingerprint Type

Since, in most cases, combining one type of image with the other type of fingerprint gave lower PCE values than employing only SDRs or only HDRs, it is useful to analyze the impact of different fingerprints on the same sets of images. Comparing Figure 5 and Figure 6, it is noticeable that SDR images had higher PCE values when the HDR-based fingerprint was employed in the case of A01, A05, I01 and I03, for each of the motion scenarios; for these devices, no difference in PCE values between scenarios can be seen. On the other hand, images taken by Devices A11, A15 and A17 had significantly higher PCE values when the correlation of SDRs was computed with the SDR-based fingerprint. Images captured by all the other devices showed similar PCE values for both fingerprints.
The analysis of the fingerprint impact on PCE values was performed for HDR images as well. Images from Devices A11 and A15 showed higher PCE values when correlated with the SDR-based fingerprint. While A11 showed no difference in the amount of PCE improvement across motion scenarios, images from A15 had a significantly higher PCE in the case of capture by a shaky hand; improvements were noticeable for images captured using a tripod, but there were no differences for images taken by a steady hand. In contrast to the previously mentioned devices, A01, A17, I01 and I03 performed better when their HDR images were correlated with the HDR-based fingerprint; differences across camera motions were not noticeable in these cases.
The results obtained for Devices A13 and A14 were especially interesting, because they showed a distinctive difference between the SDR and HDR images taken by those devices. SDR images exhibited high PCE values regardless of whether the correlation was performed with the flat SDR- or HDR-based fingerprint, while HDR images obtained low PCE values, except for the images captured while shaking the camera. In this case, therefore, the fingerprint type did not have a big influence on the results, but the image type did.

5.3. MIX Category Results’ Analysis

The analysis described above showed that sources were very often identifiable when the correlation was computed between HDR image noise and the SDR-based fingerprint, and vice versa. Considering this, it is natural that fingerprints computed from the MIX set of flat images also provided good results; they are shown in Figure 9. As the MIX category includes both HDR and SDR images, it was considered the most relevant for the analysis.
The averaged PCE of the images captured by most of the devices was above the threshold when the MIX category was used as a reference, as can be seen from Figure 9. SDR images from Devices A01–A06 performed better than their HDR counterparts when captured by hand or with a tripod. Images captured by a shaking hand using Devices A02, A04 and A05 showed similar PCE values, regardless of the image type, when correlated with the MIX flat fingerprint; the other half of the devices of this set showed better results for SDR images in the case of shaking motion. Captures made by a shaky hand led to bigger variations in the results compared to more steadily captured images. This observation was expected and can be explained by the fact that the camera movement shifted the fingerprint matrices, producing different offsets for the analyzed images; the offset depended on the velocity of the camera, which was not measured during the formation of the dataset.
The difference between SDRs and HDRs in terms of PCE value was not significant for Devices A07–A10 when the MIX flat fingerprint was employed. SDRs performed better for Devices A11 and A12, with the exception of images taken by a shaky hand using A11, for which the PCEs were comparable for both SDRs and HDRs. Similar conclusions can be drawn from the results obtained for Devices A13–A17, where only captures in shaking motion taken by Devices A15 and A17 had similar PCE values for both SDR and HDR images, while the rest of the devices and scenarios showed the advantage of SDR images in the source identification process using the PRNU method.
A deviation from the previous results occurred in the analysis of iPhone devices. Images taken by I01–I06 with different motions showed comparable PCE values for both SDR and HDR images. All the values were above the threshold, with the exception of some of the images taken by I04 using the tripod. These results led us to the conclusion that iPhone devices are more easily identifiable image sources than the other devices included in this research, regardless of the type of analyzed image. This conclusion corresponds to the one drawn after analyzing the impact of using different types of images with the same SDR- or HDR-based fingerprint for the PRNU computation.

5.4. Reliability of Source Identification

The threshold value was chosen to be equal to 45. The results showed that both SDR and HDR image sources can be detected using this value, with the exception of HDR images taken by Devices A12, A14 and, partially, A06 and A17. Considering this fact, it is clear that the PRNU method cannot be applied blindly, because the devices themselves can introduce device-specific processing into the images they produce or otherwise affect the procedure.
The most reliable source identification was achieved for Devices A07, A09, A10, I01, I02, I03, I05 and I06. Camera movements and the choice of flat images used for the fingerprint were shown to have a minimal effect on the PCE values of these devices. Images taken by iPhone devices had PCE values above the threshold and therefore allowed digital source identification using the PRNU method. On the other hand, Devices A06, A12 and A16 gave better PCE values for SDR than for HDR images; furthermore, source identification from SDR images was less sensitive to camera movements for those devices. Taking the previous statements into account, it can be concluded that the complexity of HDR images introduces difficulties into digital source identification for some devices. This phenomenon requires further analysis of the HDR image creation procedure of the devices of interest.

5.5. Analysis of Low PCE Values

The result obtained for A01 is provided in Figure 10. Twenty groups of images were captured with different motions and modes; each group shared the same image content, which served as the controlled variable. Considering the different acquisition processes of SDR and HDR images, it can be concluded, by comparing the PCE values among the three motions, that image alignment has a serious impact on the performance of the PRNU-based method. As shown in Figure 10, the PCE values of SDR images were higher than those of HDR images captured with the hand motion; however, the situation is the opposite for the tripod motion. The reason could be that, with the hand motion, the image alignment operation changed the positions of pixels, which led to a mismatch between the noise extracted from the HDR image and the reference PRNU (R-PRNU). In the case of the tripod motion, multiple images with perfect alignment were used to extract the noise. It is well known that the more images are employed, the more precisely the PRNU is calculated; therefore, higher PCE values would be obtained for HDR images in this case. In the case of the shaking motion, the outcome depends on the algorithms used in each device: on the one hand, the shift among the images may be too big for them to be aligned, which improves the PCE value of the HDR images; on the other hand, image alignment may be executed, reducing the PCE value of the HDR images.
In order to further explore the reason behind the change in PCE value between SDR and HDR images, the PRNU method was applied on pixel patches. First, the images and the R-PRNU were cropped into non-overlapping patches of 128 × 128 pixels. Then, the PCE values of each patch were calculated and, for each image pair (SDR and HDR), mapped onto the same scale with the log function to obtain a PCE map. The PCE maps of the SDR and HDR images of Figure 1, captured with the hand motion, are shown in Figure 11. An interesting phenomenon occurred in smooth image regions with low luminosity, such as the ground with low brightness, where the PCE values of the HDR images were higher than those of their SDR counterparts; the same was observed for both over- and under-exposed image regions. On the contrary, the PCE values decreased for smooth patches with high luminance, such as the blue sky. The reason could be that HDR images keep a balance between dark and bright areas, while the PRNU-based method performs better on images with smooth and high luminance. According to the above analysis, we can conclude that, for smooth pixel patches with high luminance but no saturation, both HDR and SDR images have high PCE values, whereas over-/under-exposed image regions usually lead to a low PCE value. In addition, images captured with a strong amount of noise, such as the night scene shown in the last column of Figure 11, also have a low PCE value.
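A minimal sketch of this patch-wise analysis is given below, reusing the pce function sketched in Section 3; the log base, the floor applied before taking the logarithm and the per-patch peak neighborhood are assumptions made for illustration.

```python
# Sketch of the patch-wise PCE map: tile the residual and the reference
# PRNU into non-overlapping 128 x 128 patches, compute the PCE per patch
# and log-scale the result, as in Figure 11. Reuses pce() from Section 3.
import numpy as np

PATCH = 128

def pce_map(W, KI):
    """W: full-size noise residual; KI: fingerprint times image, same size."""
    rows, cols = W.shape[0] // PATCH, W.shape[1] // PATCH
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = (slice(r * PATCH, (r + 1) * PATCH),
                   slice(c * PATCH, (c + 1) * PATCH))
            out[r, c] = pce(W[win], KI[win], peak_radius=2)
    # Same log scale for the SDR/HDR pair, so the two maps are comparable.
    return np.log10(np.maximum(out, 1e-6))
```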
The analysis presented above is specific rather than general, since each device has its own characteristics that directly influence the PCE values obtained from the images it acquires. Therefore, further analysis in terms of image acquisition [34] and sensor pattern noise characteristics [35] is required. The proposed dataset enables this and broader research, such as the estimation of displacement fields from pairs of digital images [36] and the characterization of the dynamic behavior of a mechanical chain tensioner by functional tolerancing [37].

6. Conclusions

With the help of powerful editing software, digital media have become vulnerable to manipulation. One way of coping with this problem is digital source identification. This process is challenging, especially for non-standard images such as HDRs, due to the complexity of their creation and their wider dynamic range.
In this paper, we have presented a novel image dataset composed of both SDR and HDR images captured by several smartphone devices. The dataset was built under controlled acquisition conditions, ensuring that it can be used by the image forensic community for several applications. As an example of its use, this paper presented source identification based on the reference pattern noise, employing PRNU matching. The analysis showed that standard SDR images provide more reliable source identification than HDR images when the PRNU method is applied. Of the seven brands of employed devices, only one showed very small differences between the results for SDR and HDR images, which implies that source identification depends on the characteristics of the devices themselves. This fact can serve as a motivation for analyzing the acquisition processes adopted by each device. Research focusing on this topic can provide valuable results for the digital source identification process, since hardware components leave their marks during the acquisition stage, thus producing a unique camera fingerprint.
The types of single images, as well as the types of images used for fingerprint computation were shown to have an impact on the obtained PCE values used for identification purposes. Moreover, examination of the effect of camera motions at the moment of capturing has shown that motions have a bigger impact on source identification in the case of HDR images, compared to SDR images. Differences in the results were less noticeable in the case of images captured by a steady camera, although they were generally dependent on the device type.
Despite the difficulties in processing HDR images and identifying the source camera, the PRNU algorithm has shown its robustness, enabling correct source identification for a large number of the tested devices. Moreover, the novel dataset introduced in this paper can be employed in research aimed at improving the performance of source identification based on HDR images.

Author Contributions

Conceptualization, A.P.; Data Collection, O.A.S., P.Y.; Writing—Original Draft Preparation, O.A.S., P.Y.; Writing—Review & Editing, A.P., R.N.; Supervision, A.P., R.N., Y.Z.

Funding

This material is based on research partially sponsored by the Air Force Research Laboratory and the Defense Advanced Research Projects Agency under Agreement Number FA8750-16-2-0188. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and the Defense Advanced Research Projects Agency or the U.S. Government. This work was supported in part by the National Key Research and Development of China (No. 2016YFB0800404), the National NSF of China (Nos. 61672090, 61532005, 61332012) and the Fundamental Research Funds for the Central Universities (Nos. 2018JBZ001, 2017YJS054).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Piva, A. An Overview on Image Forensics. ISRN Signal Process. 2013, 2013, 1–22. [Google Scholar] [CrossRef]
  2. De Rosa, A.; Piva, A.; Fontani, M.; Iuliani, M. Investigating multimedia contents. In Proceedings of the 2014 International Carnahan Conference on Security Technology (ICCST), Rome, Italy, 13–16 October 2014; pp. 1–6. [Google Scholar]
  3. Stamm, M.; Wu, M.; Liu, K. Information Forensics: An Overview of the First Decade. IEEE Access 2013, 1, 167–200. [Google Scholar] [CrossRef]
  4. Wen, C.Y.; Yang, K.T. Image authentication for digital image evidence. Forensic Sci. J. 2006, 5, 1–11. [Google Scholar]
  5. Cheddad, A.; Condell, J.; Curran, K.; Mc Kevitt, P. Digital image steganography: Survey and analysis of current methods. Signal Process. 2010, 90, 727–752. [Google Scholar] [CrossRef] [Green Version]
  6. Swaminathan, A.; Wu, M.; Liu, K. Image authentication via intrinsic fingerprints. In Proceedings of the Electronic Imaging, Security, Steganography, and Watermarking of Multimedia Contents IX, San Jose, CA, USA, 28 January–1 February 2007; Volume 6505, pp. 1J–1K. [Google Scholar]
  7. Bayram, S.; Sencar, H.; Memon, N.; Avcibas, I. Source camera identification based on CFA interpolation. In Proceedings of the IEEE International Conference on Image Processing 2005, Genova, Italy, 14 September 2005; Volume 3. [Google Scholar]
  8. Geradts, Z.J.; Bijhold, J.; Kieft, M.; Kurosawa, K.; Kuroki, K.; Saitoh, N. Methods for identification of images acquired with digital cameras. In Proceedings of the Enabling Technologies for Law Enforcement and Security, Boston, MA, USA, 6–8 November 2001; Volume 4232, pp. 505–513. [Google Scholar]
  9. Kharrazi, M.; Sencar, H.T.; Memon, N. Blind source camera identification. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; Volume 1, pp. 709–712. [Google Scholar]
  10. Dirik, A.E.; Sencar, H.T.; Memon, N. Digital single lens reflex camera identification from traces of sensor dust. IEEE Trans. Inf. Forensics Secur. 2008, 3, 539–552. [Google Scholar] [CrossRef]
  11. Chen, M.; Fridrich, J.; Goljan, M.; Lukás, J. Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 2008, 3, 74–90. [Google Scholar] [CrossRef]
  12. Lukas, J.; Fridrich, J.; Goljan, M. Digital camera identification from sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2006, 1, 205–214. [Google Scholar] [CrossRef]
  13. Xiong, Y.; Pulli, K. Fast image stitching and editing for panorama painting on mobile phones. In Proceedings of the CVPR Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 47–52. [Google Scholar]
  14. Kozko, D. Enabling Multiple Field of View Image Capture within a Surround Image Mode for Multi-Lense Mobile Devices. U.S. Patent 9,380,207, 28 June 2016. [Google Scholar]
  15. Bhardwaj, A.; Raman, S. Robust PCA-based solution to image composition using augmented Lagrange multiplier (ALM). Vis. Comput. 2016, 32, 591–600. [Google Scholar] [CrossRef]
  16. Bouwmans, T.; Javed, S.; Zhang, H.; Lin, Z.; Otazo, R. On the Applications of Robust PCA in Image and Video Processing. Proc. IEEE 2018, 106, 1427–1457. [Google Scholar] [CrossRef]
  17. Artusi, A.; Richter, T.; Ebrahimi, T.; Mantiuk, R.K. High Dynamic Range Imaging Technology. IEEE Signal Process. Mag. 2017, 34, 165–172. [Google Scholar] [CrossRef]
  18. Cheng, Y.M.; Wang, C.M. A Novel Approach to Steganography in High-Dynamic-Range Images. IEEE MultiMedia 2009, 16, 70–80. [Google Scholar] [CrossRef]
  19. Nagurammal, A.; Meyyappan, T. Lossless Image Watermarking for HDR Images Using Tone Mapping. Int. J. Comput. Sci. Netw. Secur. IJCSNS 2013, 13, 113–117. [Google Scholar]
  20. Aguerrebere, C.; Delon, J.; Gousseau, Y.; Musé, P. Best algorithms for HDR image generation. A study of performance bounds. SIAM J. Imaging Sci. 2014, 7, 1–34. [Google Scholar] [CrossRef] [Green Version]
  21. Bateman, P.J.; Ho, A.T.; Briffa, J.A. Image forensics of high dynamic range imaging. In International Workshop on Digital Watermarking; Springer: Berlin/Heidelberg, Germany, 2011; pp. 336–348. [Google Scholar]
  22. Adams, A.; Talvala, E.V.; Park, S.H.; Jacobs, D.E.; Ajdin, B.; Gelfand, N.; Dolson, J.; Vaquero, D.; Baek, J.; Tico, M.; et al. The Frankencamera: An experimental platform for computational photography. In Proceedings of the ACM Transactions on Graphics (TOG), Los Angeles, CA, USA, 26–30 July 2010; Volume 29, p. 29. [Google Scholar]
  23. Gelfand, N.; Adams, A.; Park, S.H.; Pulli, K. Multi-exposure imaging on mobile devices. In Proceedings of the 18th ACM international conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 823–826. [Google Scholar]
  24. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2009; pp. 161–171. [Google Scholar]
  25. Mantiuk, R.; Cichowicz, M.; Smyk, M. Implementation of HDR photographic pipeline in mobile devices. In Proceedings of the International Conference Image Analysis and Recognition, Aveiro, Portugal, 25–27 June 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 367–374. [Google Scholar]
  26. Adams, A.; Gelfand, N.; Pulli, K. Viewfinder alignment. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2008; Volume 27, pp. 597–606. [Google Scholar]
  27. Robertson, M.A.; Borman, S.; Stevenson, R.L. Dynamic range improvement through multiple exposures. In Proceedings of the 1999 International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; Volume 3, pp. 159–163. [Google Scholar]
  28. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. ACM Trans. Graph. TOG 2002, 21, 267–276. [Google Scholar]
  29. Guarnieri, G. High Dynamic Range Images: Processing, Display and Perceptual Quality Assessment; University of Trieste: Trieste, Italy, 2009. [Google Scholar]
  30. Shullani, D.; Fontani, M.; Iuliani, M.; Al Shaya, O.; Piva, A. VISION: A video and image dataset for source identification. EURASIP J. Inf. Secur. 2017, 2017, 15. [Google Scholar] [CrossRef]
  31. Julliand, T.; Nozick, V.; Talbot, H. Image noise and digital image forensics. In Proceedings of the International Workshop on Digital Watermarking, Tokyo, Japan, 7–10 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–17. [Google Scholar]
  32. Goljan, M.; Chen, M.; Fridrich, J. Identifying common source digital camera from image pairs. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16 September–19 October 2007; Volume 6, p. VI-125. [Google Scholar]
  33. Goljan, M.; Fridrich, J.; Filler, T. Large scale test of sensor fingerprint camera identification. In Proceedings of the Media Forensics and Security, San Jose, CA, USA, 18–22 January 2009; Volume 7254, p. 72540I. [Google Scholar]
  34. Sencar, H.T.; Memon, N. Digital image forensics: There Is More to a Picture Than Meets the Eye. In Counter-Forensics: Attacking Image Forensics; Springer: New York, NY, USA, 2013; pp. 327–366. [Google Scholar]
  35. Li, C.T. Source camera identification using enhanced sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2010, 5, 280–287. [Google Scholar]
  36. Besnard, G.; Hild, F.; Roux, S. “Finite-element” displacement fields analysis from digital images: Application to Portevin–Le Châtelier bands. Exp. Mech. 2006, 46, 789–803. [Google Scholar] [CrossRef]
  37. Calì, M.; Oliveri, S.M.; Ambu, R.; Fichera, G. An Integrated Approach to Characterize the Dynamic Behaviour of a Mechanical Chain Tensioner by Functional Tolerancing. J. Mech. Eng. 2018, 64, 245–257. [Google Scholar]
Figure 1. Examples of (a) Standard Dynamic Range (SDR) and (b) High Dynamic Range (HDR) images.
Figure 2. The dataset structure.
Figure 3. Sample pictures from the dataset: (a) FLAT SDR, (b) FLAT HDR, (c) Tripod SDR, (d) Tripod HDR, (e) Shaky hand SDR, (f) Shaky hand HDR, (g) Hand SDR and (h) Hand HDR.
Figure 4. The framework of the Photo-Response Non-Uniformity (PRNU)-based algorithm. PCE, Peak to Correlation Energy ratio.
Figure 5. PCE values obtained for SDR and HDR images when compared with a flat SDR-based fingerprint.
Figure 6. PCE values obtained for SDR and HDR images when compared with a flat HDR-based fingerprint.
Figure 7. Example of the result obtained when correlating noise from HDR images captured by the I02 model with the SDR image fingerprint. I, iOS.
Figure 8. Example of the result obtained when correlating noise from SDR images captured by the A07 model with the HDR image fingerprint. A, Android.
Figure 9. PCE values obtained for SDR and HDR images when compared with a flat MIX-based fingerprint.
Figure 10. Example of the result obtained when correlating noise from HDR images captured by A01 with the SDR image fingerprint.
Figure 11. PCE maps for examples of SDR and HDR images.
Table 1. Characteristics of the employed devices and captured images.

| Device Class | Device Name | Brand | Model | OS | Image Resolution | SDR Flat | HDR Flat | SDR Hand | HDR Hand | SDR Shaking | HDR Shaking | SDR Tripod | HDR Tripod |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A12 | Huawei-Honor6plus | Huawei | PE-TL10 | Android 6.0 | 2448 × 3264 | 50 (wall) | 50 (wall) | 20 | 20 | 20 | 20 | 20 | 20 |
| A13 | Huawei-Honor6plus | Huawei | PE-TL20 | Android 4.4.2 | 2448 × 3264 | 50 (wall) | 50 (wall) | 20 | 20 | 20 | 20 | 20 | 20 |
| A02 | Huawei-P8 | Huawei | GRA-L09 | Android 6.0 | 4160 × 3120 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| A06 | Huawei-Y5 | Huawei | CUN-L21 | Android 5.1 | 3264 × 2448 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| A04 | Huawei-P10 | Huawei | VTR-AL00 | Android 7.0 | 3968 × 2976 | 51 (wall) | 50 (wall) | 15 | 15 | 20 | 20 | 26 | 28 |
| A03 | Huawei-Honor9 | Huawei | STF-AL00 | Android 7.0 | 3264 × 1840 | 50 (sky) | 50 (sky) | 20 | 20 | 20 | 20 | 20 | 20 |
| A05 | Huawei-Mate10Pro | Huawei | BLA-L29 | Android 8.0 | 3968 × 2976 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| A09 | Galaxy-Note5 | Samsung | SM-N920C | Android 7.0 | 5312 × 2988 | 50 (sky) | 50 (sky) | 24 | 24 | 24 | 24 | 24 | 24 |
| A07 | Galaxy-S7 | Samsung | SM-G930F | Android 7.0 | 4032 × 3024 | 52 (wall) | 50 (wall) | 21 | 21 | 24 | 24 | 21 | 21 |
| A08 | Galaxy-S7 | Samsung | SM-G930F | Android 7.0 | 4032 × 2268 | 50 (sky) | 50 (sky) | 24 | 24 | 24 | 24 | 24 | 24 |
| A10 | Galaxy-J7 | Samsung | SM-J730F | Android 7.0 | 4128 × 3096 | 50 (sky) | 50 (sky) | 24 | 24 | 24 | 24 | 24 | 24 |
| A15 | Xiaomi-3 | Xiaomi | Redmi Note3 | Android 7.1 | 4608 × 2592 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| A11 | Xiaomi5 | Xiaomi | MI 5 | Android 7.0 | 3456 × 4608 | 50 (wall) | 87 (wall) | 21 | 21 | 21 | 21 | 21 | 21 |
| A14 | Xiaomi-5A | Xiaomi | Note 5A Prime | Android 7.1 | 4160 × 2340 | 50 (sky) | 50 (sky) | 24 | 24 | 24 | 24 | 24 | 24 |
| A01 | GioneeS55 | Gionee | GN9000 | Android 4.4 | 3120 × 4208 | 50 (sky) | 50 (sky) | 20 | 20 | 20 | 20 | 20 | 20 |
| A17 | AsusZenfone-2 | Asus | ASUS_Z00ED | Android 6.1 | 3264 × 1836 | 50 (sky) | 50 (sky) | 24 | 24 | 24 | 24 | 24 | 24 |
| A16 | OnePlus-3t | OnePlus | A3003 | Android 8.0 | 4640 × 3480 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| I06 | iPhone 5S | Apple | 15A372 | iOS 11 | 3264 × 2448 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| I04 | iPad Air | Apple | A1475 | iOS 11.0.1 | 2592 × 1936 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| I05 | iPhone 6 | Apple | A1586 | iOS 11.3 | 2448 × 3264 | 50 (wall) | 50 (wall) | 21 | 21 | 21 | 24 | 21 | 21 |
| I02 | iPhone se | Apple | A1723 | iOS 10.3.3 | 4032 × 3024 | 54 (sky) | 54 (sky) | 19 | 19 | 19 | 19 | 19 | 19 |
| I03 | iPhone 7 | Apple | A1778 | iOS 11.3 | 4032 × 3024 | 50 (wall) | 50 (wall) | 24 | 24 | 24 | 24 | 24 | 24 |
| I01 | iPhone-8 | Apple | A1863 | iOS 11.3 | 3024 × 4032 | 50 (sky) | 50 (sky) | 15 | 15 | 15 | 15 | 15 | 15 |
