Mathematics for Visual Computing: Acquisition, Processing, Analysis and Rendering of Visual Information

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 688

Special Issue Editor


Guest Editor
School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
Interests: image processing/analysis; machine learning; artificial intelligence and scientific/information visualization

Special Issue Information

Dear Colleagues,

We would like to invite you to publish your valuable work with us. Visual computing is an umbrella term for the computer science disciplines concerned with generating and processing visual information. Its core challenges are the acquisition, processing, analysis and rendering of visual information, in which mathematical theory, mathematical modelling, computational methods and tools are applied to computer graphics, image processing, visualization, computer vision, virtual and augmented reality, video processing, machine learning, and related fields. In recent years, the research community has produced a growing number of significant and novel methods and applications of mathematics for visual computing, including computer graphics and computer animation, image analysis and computer vision, visualization and visual analytics, geometric modelling and 3D printing, image processing and image editing, virtual and augmented reality, and human-computer interaction.

This Special Issue aims to publish recent advances in mathematical modelling, algorithms and applications for visual computing that involve acquisition, processing, analysis and rendering of visual information.

In this Special Issue, original research articles and reviews are welcome. Topics appropriate for this Special Issue include, but are not limited to:

  • mathematical modelling in visual computing;
  • computational methods in machine learning;
  • feature representation and learning of visual information;
  • explainable artificial intelligence based on visualization;
  • information registration and fusion based on visual computing in multi-modal, multi-scale and multi-temporal images;
  • segmentation and classification of visual information;
  • representation and rendering of visual information;
  • virtual and augmented reality in image-guided applications.

We look forward to receiving your contributions.

Dr. Bin Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational methods in visual computing
  • acquisition of vision information
  • processing and analysis of vision information
  • representation and rendering of visual information
  • scientific/information visualization
  • virtual and augmented reality
  • image-guided applications
  • explainable artificial intelligence based on visualization
  • mathematical modelling in visual computing

Published Papers (2 papers)


Research

23 pages, 8740 KiB  
Article
A Rapid Detection Method for Coal Ash Content in Tailings Suspension Based on Absorption Spectra and Deep Feature Extraction
by Wenbo Zhu, Xinghao Zhang, Zhengjun Zhu, Weijie Fu, Neng Liu and Zhengquan Zhang
Mathematics 2024, 12(11), 1685; https://doi.org/10.3390/math12111685 - 29 May 2024
Viewed by 169
Abstract
Traditional visual detection methods that employ image data are often unstable due to environmental influences such as lighting conditions. However, microfiber spectrometers are capable of capturing the specific wavelength characteristics of tail coal suspensions, effectively circumventing the instability caused by lighting variations. Spectral analysis therefore appears promising as a more stable method of indirect ash detection in tail coal. In this context, this paper proposes a rapid detection method for the coal ash content in tailings suspensions based on absorption spectra and deep feature extraction. Initially, a preprocessing method, the inverse time weight function (ITWF), is presented, focusing on the intrinsic connection between the sedimentation phenomena of samples. This enables the model to learn and retain spectral time memory features, thereby enhancing its analytical capabilities. To better capture the spectral characteristics of tail coal suspensions, we designed the DSFN (DeepSpectraFusionNet) model. This model contains an MSCR (multi-scale convolutional residual) module, which addresses conventional models' neglect of the strong correlation between adjacent wavelengths in the spectrum and facilitates the extraction of relative positional information. Additionally, to uncover potential temporal relationships in sedimentation, we propose a CLSM-CS (convolutional long-short memory with candidate states) module, designed to strengthen the capturing of local information and sequential memory. Ultimately, the method employs a fused convolutional deep classifier to integrate and reconstruct both temporal memory and positional features. This results in a model that effectively correlates the ash content of suspensions with their absorption spectral characteristics. Experimental results confirmed that the proposed model achieved an accuracy of 80.65%, an F1-score of 80.45%, a precision of 83.43%, and a recall of 80.65%.
These results outperformed recent coal recognition models and classical temporal models, meeting the high standards required for industrial on-site ash detection tasks. Full article
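The accuracy, precision, recall, and F1-score figures quoted in this abstract are standard classification metrics. As a point of reference, here is a minimal binary-classification sketch of how they relate (illustrative only, not the authors' code, which presumably averages these metrics over multiple ash-content classes):

```python
# Illustrative sketch: accuracy, precision, recall, and F1 from binary labels.
# The example labels below are hypothetical, not data from the paper.

def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, f1) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```

Note that, as in the abstract, precision can exceed accuracy when false positives are relatively rare compared with overall misclassifications.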

19 pages, 10565 KiB  
Article
AMSMC-UGAN: Adaptive Multi-Scale Multi-Color Space Underwater Image Enhancement with GAN-Physics Fusion
by Dong Chao, Zhenming Li, Wenbo Zhu, Haibing Li, Bing Zheng, Zhongbo Zhang and Weijie Fu
Mathematics 2024, 12(10), 1551; https://doi.org/10.3390/math12101551 - 16 May 2024
Viewed by 337
Abstract
Underwater vision technology is crucial for marine exploration, aquaculture, and environmental monitoring. However, the challenging underwater conditions, including light attenuation, color distortion, reduced contrast, and blurring, pose difficulties. Current deep learning models and traditional image enhancement techniques are limited in addressing these challenges, making it difficult to acquire high-quality underwater image signals. To overcome these limitations, this study proposes an approach called adaptive multi-scale multi-color space underwater image enhancement with GAN-physics fusion (AMSMC-UGAN). AMSMC-UGAN leverages multiple color spaces (RGB, HSV, and Lab) for feature extraction, compensating for RGB's limitations in underwater environments and enhancing the use of image information. By integrating a membership degree function to guide deep learning based on physical models, the model's performance is improved across different underwater scenes. In addition, the introduction of a multi-scale feature extraction module deepens the granularity of image information, learns the degradation distributions within the same image content more comprehensively, and provides richer data to guide image enhancement. AMSMC-UGAN achieved maximum scores of 26.04 dB, 0.87, and 3.2004 for the PSNR, SSIM, and UIQM metrics, respectively, on real and synthetic underwater image datasets. Additionally, it obtained gains of at least 6.5%, 6%, and 1% for these metrics. Empirical evaluations on real and artificially distorted underwater image datasets demonstrate that AMSMC-UGAN outperforms existing techniques, showcasing superior performance with enhanced quantitative metrics and strong generalization capabilities. Full article
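The PSNR score quoted in this abstract (26.04 dB) is a standard full-reference fidelity metric defined as PSNR = 10 · log10(MAX² / MSE). As a point of reference, here is a minimal sketch of that definition for 8-bit images (illustrative only, not the authors' evaluation code; the sample pixel values are hypothetical):

```python
# Illustrative sketch: peak signal-to-noise ratio (PSNR) for 8-bit images,
# PSNR = 10 * log10(MAX^2 / MSE). Images are flattened lists of pixel values.
import math

def psnr(reference, distorted, max_val=255.0):
    """Return PSNR in dB between two equal-size images."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10 * math.log10(max_val ** 2 / mse)

ref = [100, 150, 200, 250]   # hypothetical reference pixels
dist = [102, 148, 205, 245]  # hypothetical distorted pixels
value = psnr(ref, dist)
```

Higher values indicate closer agreement with the reference; typical enhanced underwater images fall in roughly the 20-30 dB range, consistent with the score reported above.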
