
Digital Image Processing: Technologies and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 July 2025 | Viewed by 11829

Special Issue Editors


Guest Editor
Instituto Politécnico Nacional, Av. Luis Enrique Erro S/N, Unidad Profesional Adolfo López Mateos, Zacatenco, Alcaldía Gustavo A. Madero, Ciudad de México 07738, Mexico
Interests: compressive sensing; speech recognition; digital watermarking; data hiding; speech processing; digital image processing

Guest Editor
Instituto Politécnico Nacional, Av. Santa Ana 1000, Coyoacan, Mexico City CP4040, Mexico
Interests: medical images; pattern recognition; digital watermarking; data hiding; deep learning; digital image processing

Special Issue Information

Dear Colleagues,

Digital image processing has been an active research topic for many years and has found application in numerous fields. In recent years, advances in computer technology have spread the use of image processing widely across fields such as information security, biometrics, medicine, image compression, restoration, access control, pattern recognition, image synthesis, and image understanding, among others.

We would like to invite the academic and industrial research community to submit original research as well as review articles to this Special Issue. Topics include:

  • Biometric pattern recognition;
  • Compressive sensing applications;
  • Digital watermarking;
  • Image classification;
  • Image clustering;
  • Image restoration;
  • Image authentication;
  • Image denoising;
  • Image compression;
  • Image encryption;
  • Facial expression recognition;
  • Deep learning-based pattern recognition;
  • 3D image processing;
  • Medical image analysis.

Prof. Dr. Héctor Manuel Pérez-Meana
Prof. Dr. Mariko Nakano-Miyatake
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biometrics
  • pattern recognition
  • enhancement
  • compression
  • authentication
  • encryption
  • watermarking
  • restoration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

28 pages, 9935 KiB  
Article
Optimum Multilevel Thresholding for Medical Brain Images Based on Tsallis Entropy, Incorporating Bayesian Estimation and the Cauchy Distribution
by Xianwen Wang, Yingyuan Yang, Minhang Nan, Guanjun Bao and Guoyuan Liang
Appl. Sci. 2025, 15(5), 2355; https://doi.org/10.3390/app15052355 - 22 Feb 2025
Viewed by 304
Abstract
Entropy-based thresholding is a widely used technique for medical image segmentation. Its principle is to determine the optimal threshold by maximizing or minimizing the image’s entropy, dividing the image into different regions or categories. The intensity distributions of objects and backgrounds often overlap and contain many outliers, making segmentation extremely difficult. In this paper, we introduce a novel thresholding method that incorporates the Cauchy distribution into the Tsallis entropy framework based on Bayesian estimation. By introducing Bayesian prior probability estimation to address the overlap in intensity distributions between the two classes, we enhance the estimation of the probability that a pixel belongs to either class. Additionally, we utilize the Cauchy distribution, known for its heavy-tailed characteristics, to fit grayscale pixel distributions with outliers, enhancing tolerance to extreme values. The optimal threshold is derived through the optimization of an information measure formulated using updated Tsallis entropy. Experimental results demonstrate that the proposed method, called Cauchy-TB, is significantly superior to existing approaches on two public medical brain image datasets.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
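As background for readers, the bilevel Tsallis-entropy thresholding criterion that this paper builds on can be sketched in a few lines of NumPy. This is only the generic maximum-Tsallis-entropy threshold search; the paper's Bayesian prior and Cauchy fitting are not reproduced here, and the function name and default q are illustrative choices of ours.

```python
import numpy as np

def tsallis_threshold(gray, q=0.8):
    """Pick the threshold maximizing the combined Tsallis entropy of
    background and foreground (generic bilevel form, not the paper's
    Cauchy/Bayesian variant)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue  # one class is empty; threshold is invalid
        # Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1) for each class
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)
        # pseudo-additivity rule combines the two class entropies
        s = sa + sb + (1 - q) * sa * sb
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```

On a bimodal image the maximizing threshold falls between the two intensity modes, which is the behavior the entropy criterion is designed to produce.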

19 pages, 8990 KiB  
Article
Optimizing Image Watermarking with Dual-Tree Complex Wavelet Transform and Particle Swarm Intelligence for Secure and High-Quality Protection
by Abed Al Raoof Bsoul and Alaa Bani Ismail
Appl. Sci. 2025, 15(3), 1315; https://doi.org/10.3390/app15031315 - 27 Jan 2025
Viewed by 636
Abstract
Watermarking is a technique used to address issues related to the widespread use of the internet, such as copyright protection, tamper localization, and authentication. However, most watermarking approaches negatively affect the quality of the original image. In this research, we propose an optimized image watermarking approach that utilizes the dual-tree complex wavelet transform and particle swarm optimization algorithm. Our approach focuses on maintaining the highest possible quality of the watermarked image by minimizing any noticeable changes. During the embedding phase, we decompose the original image using the dual-tree complex wavelet transform (DTCWT) and then use particle swarm optimization (PSO) to choose specific coefficients. We embed the bits of a binary logo into the least significant bits of these selected coefficients, creating the watermarked image. To extract the watermark, we reverse the embedding process: both versions of the input image are decomposed using the DTCWT, and the same coefficients are extracted to retrieve the corresponding watermark bits. In our experiments, we used a dataset common in watermarking research and evaluated various watermarked copies with the peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) metrics. The PSNR measures how well the watermarked image maintains its original quality, and the NCC reflects how accurately the watermark can be extracted. Our method gives mean PSNR and NCC values of 80.50% and 92.51%, respectively.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
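For reference, the two quality metrics quoted in this abstract have standard definitions that can be computed directly with NumPy. The sketch below uses those textbook definitions (function names are ours, not the paper's code):

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the watermarked
    image is closer to the original."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ncc(w_ref, w_ext):
    """Normalized cross-correlation between the reference watermark and
    the extracted one; 1.0 means a perfect extraction."""
    a = w_ref.astype(float).ravel()
    b = w_ext.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In a watermarking experiment, PSNR is evaluated between the cover image and its watermarked copy, while NCC compares the embedded binary logo with the logo recovered after (possibly attacked) extraction.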

21 pages, 10322 KiB  
Article
Development of Automated Image Processing for High-Throughput Screening of Potential Anti-Chikungunya Virus Compounds
by Pathaphon Wiriwithya, Siwaporn Boonyasuppayakorn, Pattadon Sawetpiyakul, Duangpron Peypala and Gridsada Phanomchoeng
Appl. Sci. 2025, 15(1), 385; https://doi.org/10.3390/app15010385 - 3 Jan 2025
Viewed by 1126
Abstract
Chikungunya virus, a member of the Alphavirus genus, continues to present a global health challenge due to its widespread occurrence and the absence of specific antiviral therapies. Accurate detection of viral infections, such as chikungunya, is critical for antiviral research, yet traditional methods are time-consuming and prone to error. This study presents the development and validation of an automated image processing algorithm designed to improve the accuracy and speed of high-throughput screening for potential anti-chikungunya virus compounds. Using MVTec Halcon software (Version 22.11), the algorithm was developed to detect and classify infected and uninfected cells in viral assays, and its performance was validated against manual counts conducted by three virology experts, showing a strong correlation, with Pearson correlation coefficients of 0.9807 for cell detection and 0.9886 for virus detection. These values demonstrate that the algorithm's accuracy closely matches expert manual evaluations. Following statistical validation, the algorithm was applied to screen antiviral compounds, demonstrating its effectiveness in enhancing the throughput and accuracy of drug discovery workflows. This technology can be seamlessly integrated into existing virological research pipelines, offering a scalable and efficient tool to accelerate drug discovery and improve diagnostic workflows for vector-borne and emerging viral diseases. By addressing critical bottlenecks in speed and accuracy, it holds promise for tackling global virology challenges and advancing research into other viral infections.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
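The validation step described in this abstract reduces to computing Pearson's correlation coefficient between the algorithm's cell counts and the experts' manual counts. A minimal sketch of that comparison (our own helper, not the study's code):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between automated counts x and manual counts y."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])
```

A coefficient near 1.0 over many wells, as reported in the paper (0.9807 and 0.9886), indicates the automated counts track the manual ones almost linearly.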

21 pages, 2048 KiB  
Article
A New Class of Edge Filter Based on a Cross-correlation-like Equation Derived from Fractional Calculus Principles
by Mario Gonzalez-Lee, Hector Vazquez-Leal, Jose R. Garcia-Martinez, Eli G. Pale-Ramon, Luis J. Morales-Mendoza, Mariko Nakano-Miyatake and Hector Perez-Meana
Appl. Sci. 2024, 14(13), 5428; https://doi.org/10.3390/app14135428 - 22 Jun 2024
Viewed by 920
Abstract
In this paper, we propose a new sliding window edge-oriented filter that computes the output pixels using a cross-correlation-like equation derived from the principles of fractional calculus (FC); thus, we call it the “fractional cross-correlation filter” (FCCF). We assessed the performance of this filter utilizing exclusively edge-preservation-oriented metrics such as the gradient conduction mean square error (GCMSE), the edge-based structural similarity (EBSSIM), and the multi-scale structural similarity (MS-SSIM); we conducted a statistical assessment of the performance of the filter based on those metrics by using the Berkeley segmentation dataset benchmark as a test corpus. Experimental data reveal that our approach achieves higher performance compared to conventional edge filters for all the metrics considered in this study. This is supported by the statistical analysis we carried out; specifically, the FCCF demonstrates a consistent enhancement in edge detection. We also conducted additional experiments for determining the main filter parameters, which we found to be optimal for a broad spectrum of images. The results underscore the FCCF’s potential to make significant contributions to the advancement of image processing techniques since many practical applications such as medical imaging, image enhancement, and computer vision rely heavily on edge detection filters.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)

21 pages, 6212 KiB  
Article
High-Noise Grayscale Image Denoising Using an Improved Median Filter for the Adaptive Selection of a Threshold
by Ning Cao and Yupu Liu
Appl. Sci. 2024, 14(2), 635; https://doi.org/10.3390/app14020635 - 11 Jan 2024
Cited by 19 | Viewed by 3496
Abstract
Grayscale image processing is a key research area in the field of computer vision and image analysis, where image quality and visualization effects may be seriously damaged by high-density salt-and-pepper noise. A traditional median filter may preserve detail poorly under strong noise, and its judgment of different noise characteristics depends strongly on fixed parameters, giving rather weak robustness. In order to reduce the effects of high-density salt-and-pepper noise on image quality when processing high-noise grayscale images, an improved two-dimensional maximum Shannon entropy median filter (TSETMF) is proposed for the adaptive selection of a threshold, enhancing filter performance while stably and effectively retaining image details. The framework of the proposed improved TSETMF algorithm is designed in detail. The noise in images is filtered by automatically partitioning the window size, whose threshold value is adaptively calculated using two-dimensional maximum Shannon entropy. The theoretical model is verified and analyzed through comparative experiments on three classical grayscale images. The experimental results demonstrate that the proposed improved TSETMF algorithm exhibits better processing performance than the traditional filter, with stronger suppression of high-density noise and greater denoising stability. The stronger suppression of high-density noise is demonstrated by a higher peak signal-to-noise ratio (PSNR) of 24.97 dB at a 95% noise density on the classical Lena grayscale image. The better denoising stability, over noise densities from 5% to 95%, is demonstrated by a minor decline in the PSNR of approximately 10.78% relative to a PSNR of 23.10 dB on the classical Cameraman grayscale image. Furthermore, the approach can be extended to achieve stronger noise filtering and stability when processing high-density salt-and-pepper noise in grayscale images.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
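As context, the classic adaptive median filter that TSETMF improves upon can be sketched as follows: grow the window at each pixel until the window median is not an extreme value, then replace the pixel only if it looks like an impulse. This is the textbook baseline in our own simplified form, not the proposed two-dimensional Shannon-entropy variant.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Textbook adaptive median filter for salt-and-pepper noise.
    Grows the window until the median is not an extreme, then replaces
    the pixel only when its value sits at the window extremes."""
    pad = max_win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for k in range(1, pad + 1):
                win = padded[y + pad - k: y + pad + k + 1,
                             x + pad - k: x + pad + k + 1]
                med = np.median(win)
                if win.min() < med < win.max():  # median is reliable
                    if not (win.min() < img[y, x] < win.max()):
                        out[y, x] = med          # pixel looks like an impulse
                    break                        # stop growing the window
    return out
```

The two decisions this baseline makes with fixed min/max comparisons, whether the median is trustworthy and whether the pixel is an impulse, are exactly where an adaptively selected entropy threshold such as TSETMF's can do better.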

22 pages, 11592 KiB  
Article
Genetic Programming to Remove Impulse Noise in Color Images
by Daniel Fajardo-Delgado, Ansel Y. Rodríguez-González, Sergio Sandoval-Pérez, Jesús Ezequiel Molinar-Solís and María Guadalupe Sánchez-Cervantes
Appl. Sci. 2024, 14(1), 126; https://doi.org/10.3390/app14010126 - 22 Dec 2023
Cited by 1 | Viewed by 1515
Abstract
This paper presents a new filter to remove impulse noise in digital color images. The filter is adaptive in the sense that it uses a detection stage to only correct noisy pixels. Detecting noisy pixels is performed by a binary classification model generated via genetic programming, a paradigm of evolutionary computing based on natural biological selection. The classification model training considers three impulse noise models in color images: salt and pepper, uniform, and correlated. This is the first filter generated by genetic programming exploiting the correlation among the color image channels. The correction stage consists of a vector median filter version that modifies color channel values if some are noisy. An experimental study was performed to compare the proposed filter with others in the state-of-the-art related to color image denoising. Their performance was measured objectively through the image quality metrics PSNR, MAE, SSIM, and FSIM. Experimental findings reveal substantial variability among filters based on noise model and image characteristics. The findings also indicate that, on average, the proposed filter consistently exhibited top-tier performance values for the three impulse noise models, surpassed only by a filter employing a deep learning-based approach. Unlike deep learning filters, which are black boxes with internal workings invisible to the user, the proposed filter has a high interpretability with a performance close to an equilibrium point for all images and noise models used in the experiment.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
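The correction stage named in this abstract, a vector median filter, has a compact textbook definition: within a window, output the color vector whose summed Euclidean distance to every other vector in the window is smallest. A minimal sketch under that standard definition (not the paper's modified version):

```python
import numpy as np

def vector_median(window):
    """Vector median of an (h, w, 3) window: the (R, G, B) pixel that
    minimizes the sum of Euclidean distances to all window pixels."""
    v = window.reshape(-1, 3).astype(float)
    # pairwise distance matrix, then total distance per candidate pixel
    dist = np.linalg.norm(v[:, None] - v[None, :], axis=2).sum(axis=1)
    return v[np.argmin(dist)]
```

Because the output is always one of the window's own vectors, the filter never invents colors, which is why vector medians are the usual correction step for impulse noise in color images.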

20 pages, 7517 KiB  
Article
Image Copy-Move Forgery Detection Based on Fused Features and Density Clustering
by Guiwei Fu, Yujin Zhang and Yongqi Wang
Appl. Sci. 2023, 13(13), 7528; https://doi.org/10.3390/app13137528 - 26 Jun 2023
Cited by 7 | Viewed by 2635
Abstract
Image copy-move forgery is a common simple tampering technique. To address issues such as high time complexity in most copy-move forgery detection algorithms and difficulty detecting forgeries in smooth regions, this paper proposes an image copy-move forgery detection algorithm based on fused features and density clustering. Firstly, the algorithm combines two detection methods, speeded up robust features (SURF) and accelerated KAZE (A-KAZE), to extract descriptive features by setting a low contrast threshold. Then, the density-based spatial clustering of applications with noise (DBSCAN) algorithm removes mismatched pairs and reduces false positives. To improve the accuracy of forgery localization, the algorithm uses the original image and the image transformed by the affine matrix to compare similarities in the same position in order to locate the forged region. The proposed method was tested on two datasets (Ardizzone and CoMoFoD). The experimental results show that the method effectively improved the accuracy of forgery detection in smooth regions, reduced computational complexity, and exhibited strong robustness against post-processing operations such as rotation, scaling, and noise addition.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
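The DBSCAN mismatch-removal step works because genuine copy-move matches share roughly one displacement vector, so they form a dense cluster in offset space while mismatches scatter as noise. A minimal, self-contained DBSCAN over match offsets (our own simplified implementation; the parameters and the use of 2-D offsets are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def dbscan(points, eps=3.0, min_pts=4):
    """Minimal DBSCAN: returns one label per point, -1 meaning noise.
    Points within eps of at least min_pts neighbors seed clusters."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                      # expand the cluster
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels
```

Applied to keypoint-match displacement vectors, points labeled -1 are discarded as mismatches, and each surviving cluster corresponds to one consistent copy-move shift.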
