Intelligent Media Processing

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Multimedia Systems and Applications".

Deadline for manuscript submissions: closed (31 December 2021)

Special Issue Editors


Prof. Dr. Shoko Imaizumi
Guest Editor
Department of Imaging Sciences, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba-shi, Chiba 263-8522, Japan
Interests: image processing; multimedia security; image coding

Prof. Dr. Keita Hirai
Guest Editor
Department of Imaging Sciences, Chiba University, Chiba, Japan
Interests: color image sensing; analysis; processing; reproduction; evaluation

Special Issue Information

Dear Colleagues,

Thanks to the continuous progress and development of internet services and consumer equipment, multimedia images have become more common and, in fact, necessary in our daily lives. Image processing is applied in many fields, such as broadcasting, printing, and storage, and supports industrial and social activities. Additionally, the integration of different types of media and cross-media strategies has triggered new forms of image distribution. To achieve further advances, mixed approaches combining intelligent processing, cross-reality, soft computing, security, and related techniques are strongly required.

On another front, visual material appearance and affective engineering have also attracted a great deal of attention in human-centered imaging, and they are expected to contribute to high-value-added applications.

This Special Issue on “Intelligent Media Processing” is planned as a venue for presenting leading-edge research articles that may contribute to enhancing the value of digital images and multimedia. The topics of interest include but are not limited to:

  • Intelligent image processing;
  • Computer graphics;
  • Augmented reality/virtual reality/mixed reality;
  • Computer vision;
  • Deep learning;
  • High dynamic range images;
  • Security applications;
  • Image quality criteria;
  • Human-centered imaging;
  • Industrial applications.

Prof. Dr. Shoko Imaizumi
Prof. Dr. Keita Hirai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent image processing
  • Computer graphics
  • Augmented reality/virtual reality/mixed reality
  • Computer vision
  • Deep learning
  • High dynamic range images
  • Security applications
  • Image quality criteria
  • Human-centered imaging
  • Industrial applications

Published Papers (17 papers)


Research

Article (15 pages)
U-Net-Based Segmentation of Microscopic Images of Colorants and Simplification of Labeling in the Learning Process
by Ikumi Hirose, Mari Tsunomura, Masami Shishikura, Toru Ishii, Yuichiro Yoshimura, Keiko Ogawa-Ochiai and Norimichi Tsumura
J. Imaging 2022, 8(7), 177; https://doi.org/10.3390/jimaging8070177 - 23 Jun 2022
Cited by 3
Abstract
Colored product textures correspond to particle size distributions. The microscopic images of colorants must be divided into regions to determine the particle size distribution. The conventional method used for this process involves manually dividing images into areas, which may be inefficient. In this paper, we have overcome this issue by developing two different modified architectures of U-Net convolutional neural networks to automatically determine the particle sizes. To develop these modified architectures, a significant amount of ground truth data must be prepared to train the U-Net, which is difficult for big data as the labeling is performed manually. Therefore, we also aim to reduce this process by using incomplete labeling data. The first objective of this study is to determine the accuracy of our modified U-Net architectures for this type of image. The second objective is to reduce the difficulty of preparing the ground truth data by testing the accuracy of training on incomplete labeling data. The results indicate that efficient segmentation can be realized using our modified U-Net architectures, and the generation of ground truth data can be simplified. This paper presents a preliminary study to improve the efficiency of determining particle size distributions with incomplete labeling data.
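The authors' modified U-Net architectures are not detailed in the abstract; as a rough orientation only, the sketch below shows a minimal two-level U-Net in PyTorch with skip connections, using assumed layer widths and a single-channel input rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal two-level U-Net for binary (particle/background) masks."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)    # grayscale microscope image in
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# Sanity check: the output mask has the same spatial size as the input.
logits = TinyUNet()(torch.randn(1, 1, 64, 64))
assert logits.shape == (1, 1, 64, 64)
```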

Article (14 pages)
High-Capacity Reversible Data Hiding in Encrypted Images with Flexible Restoration
by Eichi Arai and Shoko Imaizumi
J. Imaging 2022, 8(7), 176; https://doi.org/10.3390/jimaging8070176 - 21 Jun 2022
Cited by 5
Abstract
In this paper, we propose a novel reversible data hiding in encrypted images (RDH-EI) method that achieves the highest hiding capacity in the RDH-EI research field and full flexibility in the processing order without restrictions. Previous work in this field includes two representative methods: one provides flexible processing with a high hiding capacity of 2.17 bpp, and the other achieves the highest hiding capacity of 2.46 bpp on the BOWS-2 dataset. The latter method has critical restrictions on the processing order. We focus on the advantage of the former method and introduce two efficient algorithms for maximizing the hiding capacity. With these algorithms, the proposed method can predict each pixel value with higher accuracy and refine the embedding algorithm. Consequently, the hiding capacity is effectively enhanced to 2.50 bpp on the BOWS-2 dataset, and a series of processes can be freely conducted without considering any restrictions on the order between data hiding and encryption. In the same way, there are no restrictions on the processing order in the restoration process. Thus, the proposed method provides flexibility in the privileges requested by users. Experimental results show the effectiveness of the proposed method in terms of hiding capacity and marked-image quality.
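The abstract does not specify the improved predictor; as a hedged illustration of the prediction step that many RDH schemes build on, the sketch below computes the classic median edge detector (MED) prediction, a common baseline that is not necessarily the predictor used by the authors.

```python
import numpy as np

def med_predict(img):
    """Median edge detector (MED) prediction, a common RDH baseline.

    Each interior pixel is predicted from its left (a), top (b), and
    top-left (c) neighbors; embedding schemes then hide bits in the
    prediction errors. Border pixels are left unpredicted here.
    """
    img = img.astype(np.int32)
    pred = img.copy()
    h, w = img.shape
    for y in range(1, h):
        for x in range(1, w):
            a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return pred

img = np.random.randint(0, 256, (64, 64))
errors = img - med_predict(img)  # sharper predictors -> more embeddable errors
```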

Article (16 pages)
Coded DNN Watermark: Robustness against Pruning Models Using Constant Weight Code
by Tatsuya Yasui, Takuro Tanaka, Asad Malik and Minoru Kuribayashi
J. Imaging 2022, 8(6), 152; https://doi.org/10.3390/jimaging8060152 - 26 May 2022
Cited by 3
Abstract
Deep Neural Network (DNN) watermarking techniques are increasingly being used to protect the intellectual property of DNN models. Basically, DNN watermarking is a technique for inserting side information into a DNN model without significantly degrading the performance of its original task. A pruning attack is a threat to DNN watermarking, wherein the less important neurons in the model are pruned to make it faster and more compact; as a result, removing the watermark from the DNN model becomes possible. This study investigates a channel coding approach to protect DNN watermarking against pruning attacks. The channel model differs completely from conventional models involving digital images, and determining suitable encoding methods for DNN watermarking remains an open problem. Herein, we present a novel encoding approach using constant weight codes to protect DNN watermarking against pruning attacks. The experimental results confirm that the robustness against pruning attacks can be controlled by carefully setting two thresholds for the binary symbols in the codeword.
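A constant weight code fixes the number of ones in every codeword, and the paper controls robustness with two thresholds on the extracted binary symbols. The sketch below builds such a codebook and applies an illustrative two-threshold symbol decision; the threshold values and the erasure-style decoding rule are assumptions, not the paper's exact scheme.

```python
from itertools import combinations

def constant_weight_codebook(n, w):
    """All length-n binary codewords containing exactly w ones.

    Every codeword has the same weight, so a decoder can exploit the
    known count of ones when extracted symbols are unreliable.
    """
    book = []
    for ones in combinations(range(n), w):
        cw = [0] * n
        for i in ones:
            cw[i] = 1
        book.append(cw)
    return book

def threshold_decode(values, t0, t1):
    """Two-threshold symbol decision (illustrative, not the paper's decoder).

    Values above t1 read as 1, below t0 as 0; anything in between is an
    erasure ('?') that the constant-weight constraint can help resolve.
    """
    return ['1' if v > t1 else '0' if v < t0 else '?' for v in values]

book = constant_weight_codebook(6, 3)  # C(6,3) = 20 codewords
print(len(book), threshold_decode([0.9, 0.1, 0.5, 0.8, 0.05, 0.7], 0.3, 0.6))
```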

Article (13 pages)
Colored Point Cloud Completion for a Head Using Adversarial Rendered Image Loss
by Yuki Ishida, Yoshitsugu Manabe and Noriko Yata
J. Imaging 2022, 8(5), 125; https://doi.org/10.3390/jimaging8050125 - 26 Apr 2022
Cited by 2
Abstract
Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and to depth measurement failure for dark hair colors such as black. Point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has recently been studied, but such methods learn only shape, not the completion of colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds, based on the Chamfer Distance (CD) or Earth Mover's Distance (EMD) used for point cloud shape evaluation, as a color loss. In addition, an adversarial loss on L*a*b*-Depth images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained using a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves the evaluation in the image domain.
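To make the CD-based color loss concrete, the sketch below computes a symmetric Chamfer distance on XYZ coordinates and measures the L*a*b* difference at the resulting nearest-neighbor matches; the array layout and the brute-force matching are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def chamfer_with_color(p, q):
    """Symmetric Chamfer distance plus a color term (illustrative sketch).

    p, q: (N, 6) and (M, 6) arrays holding XYZ in columns 0:3 and
    L*a*b* color in columns 3:6. Nearest neighbors are found on XYZ
    only; the color loss is the L*a*b* difference at those matches.
    """
    d = np.linalg.norm(p[:, None, :3] - q[None, :, :3], axis=-1)  # (N, M)
    nn_pq, nn_qp = d.argmin(axis=1), d.argmin(axis=0)
    shape_loss = d.min(axis=1).mean() + d.min(axis=0).mean()
    color_loss = (np.linalg.norm(p[:, 3:] - q[nn_pq, 3:], axis=-1).mean()
                  + np.linalg.norm(q[:, 3:] - p[nn_qp, 3:], axis=-1).mean())
    return shape_loss, color_loss

p = np.random.rand(128, 6)
q = np.random.rand(128, 6)
print(chamfer_with_color(p, q))
```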

Article (18 pages)
Surreptitious Adversarial Examples through Functioning QR Code
by Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin and Kazunori Kotani
J. Imaging 2022, 8(5), 122; https://doi.org/10.3390/jimaging8050122 - 22 Apr 2022
Cited by 2
Abstract
Continuous advances in Convolutional Neural Networks (CNNs) and deep learning have been applied to facilitate various tasks of human life. However, security risks to users' information and privacy have been increasing rapidly due to these models' vulnerabilities. We have developed a novel method of adversarial attack that can conceal its intent from human intuition through the use of a modified QR code. The modified QR code can be consistently scanned with a reader while retaining adversarial efficacy against image classification models. The QR adversarial patch was created and embedded into an input image to generate adversarial examples, which were trained against CNN image classification models. Experiments were performed to investigate the trade-offs between different patch shapes and to find the patch's optimal balance of scannability and adversarial efficacy. Furthermore, we investigated whether particular classes of images are more resistant or vulnerable to the adversarial QR attack, and we also investigated the generality of the attack across different image classification models.
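The QR-scannability constraint is the paper's key ingredient and cannot be reproduced in a few lines; the sketch below shows only the generic adversarial-patch update that such attacks build on, with a fixed patch placement and learning rate chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def patch_attack_step(model, image, patch, y_true, lr=0.01):
    """One untargeted adversarial-patch update (generic sketch).

    Pastes the patch into a fixed corner of the image and ascends the
    classification loss so the patched image is misclassified. The
    QR-scannability constraint from the paper is omitted here.
    """
    patch = patch.clone().requires_grad_(True)
    patched = image.clone()
    ph, pw = patch.shape[-2:]
    patched[..., :ph, :pw] = patch               # fixed placement for brevity
    loss = F.cross_entropy(model(patched), y_true)
    loss.backward()
    with torch.no_grad():
        patch = (patch + lr * patch.grad.sign()).clamp(0, 1)
    return patch.detach(), loss.item()
```

Calling this in a loop over training images, while periodically checking that a QR reader still decodes the patch, captures the spirit of the trade-off the paper studies.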

Article (20 pages)
Face Attribute Estimation Using Multi-Task Convolutional Neural Network
by Hiroya Kawai, Koichi Ito and Takafumi Aoki
J. Imaging 2022, 8(4), 105; https://doi.org/10.3390/jimaging8040105 - 10 Apr 2022
Cited by 1
Abstract
Face attribute estimation can be used for improving the accuracy of face recognition, customer analysis in marketing, image retrieval, video surveillance, and criminal investigation. The major methods for face attribute estimation are based on Convolutional Neural Networks (CNNs) that solve face attribute estimation as a set of two-class classification problems. Although one feature extractor should be used for each attribute to maximize estimation accuracy, in most cases one feature extractor is shared to estimate all face attributes for parameter efficiency. This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN), which automatically optimizes CNN structures for solving multiple binary classification problems to improve parameter efficiency and accuracy in face attribute estimation. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNNs. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR exhibits higher efficiency in face attribute estimation, in terms of estimation accuracy and the number of weight parameters, than conventional methods.
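MM-CNN's automatic structure optimization is beyond a short snippet; the sketch below shows the conventional baseline it improves upon, a shared trunk with one binary head per attribute. The layer sizes are assumptions; only the head count (CelebA annotates 40 attributes) comes from the source.

```python
import torch
import torch.nn as nn

class SharedTrunkMultiTask(nn.Module):
    """Conventional multi-task baseline: one shared feature extractor,
    one two-class head per face attribute (MM-CNN merges these
    per-attribute networks automatically)."""
    def __init__(self, n_attributes=40):  # CelebA annotates 40 attributes
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One present/absent classifier per attribute.
        self.heads = nn.ModuleList(nn.Linear(64, 2) for _ in range(n_attributes))

    def forward(self, x):
        f = self.trunk(x)
        return torch.stack([h(f) for h in self.heads], dim=1)  # (B, 40, 2)

out = SharedTrunkMultiTask()(torch.randn(2, 3, 64, 64))
assert out.shape == (2, 40, 2)
```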

Article (16 pages)
Exploring Metrics to Establish an Optimal Model for Image Aesthetic Assessment and Analysis
by Ying Dai
J. Imaging 2022, 8(4), 85; https://doi.org/10.3390/jimaging8040085 - 23 Mar 2022
Cited by 3
Abstract
To establish an optimal model for photo aesthetic assessment, this paper introduces an internal metric called the disentanglement measure (D-measure), which reflects the degree of disentanglement of the final fully connected (FC) layer nodes of a convolutional neural network (CNN). By combining the F-measure with the D-measure to obtain an FD measure, an algorithm for determining the optimal model from the many photo score prediction models generated by CNN-based repetitively self-revised learning (RSRL) is proposed. Furthermore, the aesthetic features of the model regarding the first fixation perspective (FFP) and the assessment interest region (AIR) are defined by means of the feature maps so as to analyze the consistency with human aesthetics. The experimental results show that the proposed method is helpful in improving the efficiency of determining the optimal model. Moreover, extracting the FFP and AIR of the models for an image is useful in understanding the internal properties of these models related to human aesthetics and in validating the external performance of the aesthetic assessment.

Article (14 pages)
Fabrication of a Human Skin Mockup with a Multilayered Concentration Map of Pigment Components Using a UV Printer
by Kazuki Nagasawa, Shoji Yamamoto, Wataru Arai, Kunio Hakkaku, Chawan Koopipat, Keita Hirai and Norimichi Tsumura
J. Imaging 2022, 8(3), 73; https://doi.org/10.3390/jimaging8030073 - 15 Mar 2022
Cited by 2
Abstract
In this paper, we propose a pipeline that reproduces human skin mockups with a UV printer by obtaining the spatial concentration map of pigments from an RGB image of human skin. The pigment concentration distributions were obtained by separating skin pigment components from the skin image using independent component analysis. This method can extract the concentrations of the melanin and hemoglobin components, which are the main pigments that make up skin tone. Based on these concentrations, we developed a procedure to reproduce a skin mockup with a multi-layered structure that is determined by mapping the absorbance of melanin and hemoglobin to CMYK (Cyan, Magenta, Yellow, Black) subtractive color mixing. In our proposed method, the multi-layered structure with different pigments in each layer contributes greatly to the accurate reproduction of skin tones. We use a UV printer because it is capable of layered fabrication using UV-curable inks. As a result, subjective evaluation showed that the artificial skin reproduced by our method has a more skin-like appearance than that produced using conventional printing.
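As a hedged illustration of the separation step, the sketch below unmixes two pigment components from a skin patch with FastICA in optical-density (negative log) space; the preprocessing and the ordering of the recovered components are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def pigment_maps(rgb):
    """Rough sketch of ICA-based melanin/hemoglobin separation.

    Skin color is modeled as a mixture of two pigments in optical
    density (negative log) space; FastICA unmixes two independent
    components per pixel. Which recovered component corresponds to
    melanin and which to hemoglobin must be resolved afterwards,
    e.g., from known absorbance spectra.
    """
    h, w, _ = rgb.shape
    density = -np.log(rgb.reshape(-1, 3).astype(np.float64) + 1e-6)
    ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
    components = ica.fit_transform(density)  # (h*w, 2)
    return components.reshape(h, w, 2)

skin = np.random.rand(32, 32, 3) * 0.8 + 0.1  # stand-in skin patch
maps = pigment_maps(skin)                     # two concentration maps
```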

Article (14 pages)
Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content
by Paula Viana, Maria Teresa Andrade, Pedro Carvalho, Luis Vilaça, Inês N. Teixeira, Tiago Costa and Pieter Jonker
J. Imaging 2022, 8(3), 68; https://doi.org/10.3390/jimaging8030068 - 10 Mar 2022
Cited by 1
Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities is yet to be explored by bringing ML into the multimedia creative process, allowing the knowledge it infers to automatically influence how new multimedia content is created. The work presented in this article contributes toward this goal in three distinct ways: firstly, it proposes a methodology to re-train popular neural network models to identify new thematic concepts in static visual content and attach meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects to a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow that offers the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches that create random movements, by implementing intelligent content- and context-aware video.

Article (15 pages)
Glossiness Index of Objects in Halftone Color Images Based on Structure and Appearance Distortion
by Donghui Li, Midori Tanaka and Takahiko Horiuchi
J. Imaging 2022, 8(3), 59; https://doi.org/10.3390/jimaging8030059 - 27 Feb 2022
Abstract
This paper proposes an objective glossiness index for objects in halftone color images. In the proposed index, we consider the characteristics of the human visual system (HVS) and associate the image's structure distortion with its statistical information. Depending on how many strategies the HVS adopts when judging differences between images, modeling can be divided into single-strategy and multi-strategy approaches. In this study, we advocate multiple strategies to determine glossy or non-glossy quality. We assumed that the HVS uses different visual mechanisms to evaluate glossy and non-glossy objects. For non-glossy images, the image structure dominates, so the HVS tries to use structural information to judge distortion (a strategy based on structural distortion detection). For glossy images, the glossy appearance dominates; thus, the HVS tries to search for the glossiness difference (an appearance-based strategy). Herein, we present an index for glossiness assessment that attempts to explicitly model structural dissimilarity and appearance distortion. We used the contrast sensitivity function to account for the mechanism of halftone images when viewed by the human eye. We estimated the structure distortion for the first strategy by using local luminance and contrast masking; meanwhile, changes in local statistics of the spatial frequency components, namely skewness and standard deviation, were used to estimate the appearance distortion for the second strategy. Experimental results showed that these two mixed distortion measurement strategies performed consistently with subjective ratings of glossiness in halftone color images.
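To illustrate the appearance-based strategy, the sketch below computes skewness and standard deviation over simple difference-of-Gaussians subbands; the paper's actual frequency decomposition and contrast-sensitivity weighting are omitted, so treat the band choices as assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import skew

def appearance_stats(gray, sigmas=(1, 2, 4)):
    """Skewness and standard deviation of band-pass subbands (sketch).

    Image skewness statistics are known to correlate with perceived
    gloss; here the subbands are simple difference-of-Gaussians, not
    the exact decomposition used in the paper.
    """
    stats = []
    for s in sigmas:
        band = (ndimage.gaussian_filter(gray, s)
                - ndimage.gaussian_filter(gray, 2 * s))
        stats.append((skew(band.ravel()), band.std()))
    return stats

gray = np.random.rand(128, 128)
for sk, sd in appearance_stats(gray):
    print(f"skew={sk:+.3f}  std={sd:.3f}")
```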

Article (10 pages)
Texture Management for Glossy Objects Using Tone Mapping
by Ikumi Hirose, Kazuki Nagasawa, Norimichi Tsumura and Shoji Yamamoto
J. Imaging 2022, 8(2), 34; https://doi.org/10.3390/jimaging8020034 - 30 Jan 2022
Abstract
In this paper, we propose a method for matching the color and glossiness of an object between different displays by using tone mapping. Since displays have their own characteristics, such as maximum luminance and gamma characteristics, the color and glossiness of a displayed object differ from one display to another. The color can be corrected by conventional color matching methods, but the glossiness, which greatly changes the impression of an object, also needs to be corrected. Our practical challenge was to use tone mapping to correct the high-luminance part, also referred to as the glossy part, which cannot be fully corrected by color matching. Therefore, we performed color matching and tone mapping using high dynamic range images, which can record a wider range of luminance information, as input. In addition, we varied the parameters of the tone-mapping function and the threshold at which the function was applied to study the effect on the object's appearance. We conducted a subjective evaluation experiment using the series category method on glossiness-corrected images generated by applying various functions on each display. As a result, we found that the differences in glossiness between displays could be corrected by selecting the optimal function for each display.
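As a minimal sketch of applying a tone-mapping function only above a luminance threshold, the snippet below uses a Reinhard-style roll-off; the knee point and curve parameter are illustrative assumptions, not the optimal functions selected in the paper's experiments.

```python
import numpy as np

def map_highlights(luma, threshold=0.8, a=2.0):
    """Compress luminance above a threshold with a Reinhard-style curve.

    Pixels below the threshold (handled by ordinary color matching in
    the paper) are untouched; the knee point and curve parameter `a`
    are illustrative, not the paper's fitted values.
    """
    out = luma.copy()
    hi = luma > threshold
    x = luma[hi] - threshold                 # distance above the knee
    out[hi] = threshold + x / (1.0 + a * x)  # smooth, monotone roll-off
    return out

hdr_luma = np.linspace(0, 2, 9)  # toy HDR luminance ramp
print(np.round(map_highlights(hdr_luma), 3))
```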

Article (14 pages)
An Extension of Reversible Image Enhancement Processing for Saturation and Brightness Contrast
by Yuki Sugimoto and Shoko Imaizumi
J. Imaging 2022, 8(2), 27; https://doi.org/10.3390/jimaging8020027 - 28 Jan 2022
Cited by 6
Abstract
This paper proposes a reversible image processing method for color images that can independently improve saturation and enhance brightness contrast. Image processing techniques have been popularly used to obtain desired images; however, existing techniques generally do not consider reversibility. Recently, reversible image processing methods have been widely researched. Most previous studies have investigated reversible contrast enhancement for grayscale images based on data hiding techniques. When these techniques are simply applied to color images, hue distortion occurs. Several efficient methods have been studied for color images, but they cannot guarantee complete reversibility. We previously proposed a method that reversibly controls not only brightness contrast but also saturation; however, that method cannot fully control them independently. To tackle this issue, we extend our previous work without losing its advantages. The proposed method uses the HSV cone model, whereas our previous method used the HSV cylinder model. The experimental results demonstrate that our method flexibly controls saturation and brightness contrast reversibly and independently.
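The difference between the cylinder and cone models comes down to how saturation is defined; the sketch below contrasts the two definitions. The integer arithmetic that makes the paper's processing fully reversible is omitted, so this is orientation only.

```python
import numpy as np

def saturation_cylinder(rgb):
    """HSV cylinder saturation: S = (max - min) / max."""
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)

def saturation_cone(rgb):
    """HSV cone saturation: S = max - min (cylinder S scaled by V = max).

    In the cone model, scaling S leaves V untouched, which is why
    saturation and brightness contrast can be controlled independently.
    """
    return rgb.max(axis=-1) - rgb.min(axis=-1)

rgb = np.random.rand(4, 4, 3)
print(saturation_cylinder(rgb).mean(), saturation_cone(rgb).mean())
```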

Article (17 pages)
Reliable Estimation of Deterioration Levels via Late Fusion Using Multi-View Distress Images for Practical Inspection
by Keisuke Maeda, Naoki Ogawa, Takahiro Ogawa and Miki Haseyama
J. Imaging 2021, 7(12), 273; https://doi.org/10.3390/jimaging7120273 - 9 Dec 2021
Cited by 1
Abstract
This paper presents reliable estimation of deterioration levels via late fusion using multi-view distress images for practical inspection. The proposed method simultaneously solves the following two problems that are necessary to support practical inspection. First, since maintenance of infrastructure requires a high level of safety and reliability, this paper proposes a neural network that can generate an attention map from distress images and text data acquired during the inspection; thus, deterioration level estimation with high interpretability can be realized. Second, since multi-view distress images are taken of a single distress during an actual inspection, the final result must be estimated from these images. Therefore, the proposed method integrates the estimation results obtained from the multi-view images via late fusion and can derive an appropriate result considering all the images. To the best of our knowledge, no method has been proposed to solve these problems simultaneously, and this is the biggest contribution of this paper. We confirm the effectiveness of the proposed method by conducting experiments using data acquired during an actual inspection.
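A minimal sketch of the late-fusion idea, assuming simple averaging of per-view class probabilities; the paper's fusion rule may be weighted or learned, so this shows only the general mechanism.

```python
import numpy as np

def late_fusion(view_probs):
    """Fuse per-view class probabilities into one decision (sketch).

    view_probs: (n_views, n_classes) softmax outputs, one row per
    distress photo. Simple averaging is shown here; the paper's
    fusion rule may differ (e.g., learned weights).
    """
    fused = np.asarray(view_probs).mean(axis=0)
    return fused, int(fused.argmax())

views = np.array([[0.10, 0.70, 0.20],   # view 1: deterioration levels A/B/C
                  [0.20, 0.50, 0.30],   # view 2
                  [0.05, 0.80, 0.15]])  # view 3
probs, level = late_fusion(views)
print(probs, "-> level index", level)
```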

Article (13 pages)
A Reversible Data Hiding Method in Encrypted Images for Controlling Trade-Off between Hiding Capacity and Compression Efficiency
by Ryota Motomura, Shoko Imaizumi and Hitoshi Kiya
J. Imaging 2021, 7(12), 268; https://doi.org/10.3390/jimaging7120268 - 7 Dec 2021
Cited by 4
Abstract
In this paper, we propose a new framework for reversible data hiding in encrypted images, where both the hiding capacity and lossless compression efficiency are flexibly controlled. There are two main purposes: one is to provide highly efficient lossless compression under a required hiding capacity, while the other is to enable an embedded payload to be extracted from a decrypted image. The proposed method can decrypt marked encrypted images without data extraction and derive marked images. An original image is arbitrarily divided into two regions. Two different methods for reversible data hiding in encrypted images (RDH-EI) are used in our method, one for each region. Consequently, one region can be decrypted without data extraction and also losslessly compressed using image coding standards even after the processing. The other region possesses a significantly high hiding rate, around 1 bpp. Experimental results show the effectiveness of the proposed method in terms of hiding capacity and lossless compression efficiency.

Article (14 pages)
Improved Coefficient Recovery and Its Application for Rewritable Data Embedding
by Alan Sii, Simying Ong and KokSheik Wong
J. Imaging 2021, 7(11), 244; https://doi.org/10.3390/jimaging7110244 - 18 Nov 2021
Abstract
JPEG is the most commonly utilized image coding standard for storage and transmission purposes. It achieves a good rate–distortion trade-off, and it has been adopted by many, if not all, handheld devices. However, information loss often occurs due to transmission error or damage to the storage device. To address this problem, various coefficient recovery methods have been proposed in the past, including a divide-and-conquer approach to speed up the recovery process. However, the segmentation technique considered in the existing method operates under the assumption of a bi-modal distribution for the pixel values, but most images do not satisfy this condition. Therefore, in this work, an adaptive method was employed to perform more accurate segmentation, so that the real potential of the previous coefficient recovery methods can be unleashed. In addition, an improved rewritable adaptive data embedding method is also proposed that exploits the recoverability of coefficients. Discrete cosine transform (DCT) patches and blocks for data hiding are judiciously selected based on the predetermined precision to control the embedding capacity and image distortion. Our results suggest that the adaptive coefficient recovery method improves on the conventional method by up to 27% in terms of CPU time, and it also achieves better image quality for most considered images. Furthermore, the proposed rewritable data embedding method is able to embed 20,146 bits into an image of dimensions 512×512.
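As a generic illustration of DCT-domain embedding (not the paper's precision-controlled, rewritable scheme), the sketch below hides one bit in the parity of a mid-frequency coefficient of an 8x8 block; the coefficient position is an arbitrary choice.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit_in_block(block, bit, pos=(2, 3)):
    """Embed one bit in a mid-frequency DCT coefficient (generic sketch).

    The coefficient at `pos` is rounded so its integer parity equals
    the bit; this illustrates DCT-domain embedding in general, not the
    paper's rewritable scheme.
    """
    coef = dctn(block.astype(np.float64), norm="ortho")
    c = np.round(coef[pos])
    if int(c) % 2 != bit:
        c += 1
    coef[pos] = c
    return idctn(coef, norm="ortho")

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
marked = embed_bit_in_block(block, 1)
extracted = int(np.round(dctn(marked, norm="ortho")[2, 3])) % 2
assert extracted == 1
```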

Article (15 pages)
Three-Color Balancing for Color Constancy Correction
by Teruaki Akazawa, Yuma Kinoshita, Sayaka Shiota and Hitoshi Kiya
J. Imaging 2021, 7(10), 207; https://doi.org/10.3390/jimaging7100207 - 6 Oct 2021
Cited by 4
Abstract
This paper presents a three-color balance adjustment for color constancy correction. White balancing is a typical adjustment for color constancy in an image, but lighting effects remain on colors other than white. Cheng et al. proposed multi-color balancing to improve the performance of white balancing by mapping multiple target colors into corresponding ground truth colors. However, three problems have not been discussed: choosing the number of target colors, selecting the target colors, and the error minimization, which increases computational complexity. In this paper, we first discuss the number of target colors for multi-color balancing. From our observation, when the number of target colors is greater than or equal to three, the best performance of multi-color balancing is almost the same regardless of the number of target colors, and it is superior to that of white balancing. Moreover, if the number of target colors is three, multi-color balancing can be performed without any error minimization. Accordingly, we propose three-color balancing. In addition, the combination of three target colors is discussed to achieve color constancy correction. In an experiment, the proposed method not only outperformed white balancing but also achieved almost the same performance as Cheng's method with 24 target colors.
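The key observation, that three linearly independent target colors determine the 3x3 correction matrix exactly with no error minimization, reduces to solving a linear system, as in the sketch below; the color values are made up for illustration.

```python
import numpy as np

# Columns are the three observed target colors (RGB under the unknown
# illuminant) and their ground-truth counterparts. The values here are
# invented for illustration only.
observed = np.array([[0.9, 0.2, 0.3],
                     [0.4, 0.8, 0.2],
                     [0.3, 0.3, 0.9]]).T
truth = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]]).T

# With exactly three linearly independent targets, M is the unique
# 3x3 matrix with M @ observed = truth -- no error minimization needed.
M = truth @ np.linalg.inv(observed)

pixel = np.array([0.5, 0.4, 0.3])
corrected = M @ pixel              # balance any other pixel with the same M
print(np.round(M @ observed, 6))   # reproduces the ground-truth columns
```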

Article (12 pages)
A Detection Method of Operated Fake-Images Using Robust Hashing
by Miki Tanaka, Sayaka Shiota and Hitoshi Kiya
J. Imaging 2021, 7(8), 134; https://doi.org/10.3390/jimaging7080134 - 5 Aug 2021
Cited by 7
Abstract
SNS providers are known to carry out recompression and resizing of uploaded images, but most conventional methods for detecting fake/tampered images are not robust enough against such operations. In this paper, we propose a novel method for detecting fake images, including those with distortion caused by image operations such as compression and resizing. We select a robust hashing method, which retrieves images similar to a query image, for fake/tampered-image detection, and hash values extracted from both reference and query images are used to robustly detect fake images for the first time. If there is an original hash code from a reference image for comparison, the proposed method can detect fake images more robustly than conventional methods. One of the practical applications of this method is to monitor images, including synthetic ones sold by a company. In experiments, the proposed fake-image detection is demonstrated to outperform state-of-the-art methods on various datasets, including fake images generated with GANs.
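The paper selects an existing robust hashing method rather than defining a new one; as a stand-in, the sketch below uses a simple average hash and Hamming distance to show why such hashes tolerate resizing and recompression. This is not the specific hashing method the authors adopted.

```python
import numpy as np

def average_hash(gray, size=8):
    """Simple perceptual 'average hash' (illustrative, not the paper's).

    Block-averaging down to size x size and thresholding at the mean
    survives mild recompression and resizing, which is the property
    that makes hash-based fake detection robust to SNS processing.
    """
    h, w = gray.shape
    bh, bw = h // size, w // size
    small = gray[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    """Bits differing between two hashes; a small distance means a match."""
    return int(np.count_nonzero(h1 != h2))

x = np.linspace(0, 1, 256)
ref = np.outer(x, x)      # smooth synthetic image stand-in
resized = ref[::2, ::2]   # crude stand-in for resizing
print(hamming(average_hash(ref), average_hash(resized)))  # prints a small count
```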
