New Technology of Image & Video Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electronic Multimedia".

Deadline for manuscript submissions: closed (15 October 2023) | Viewed by 4011

Special Issue Editors


Guest Editor
School of Natural and Computational Sciences (SNCS), Massey University (Albany Campus), Palmerston North 4442, New Zealand
Interests: artificial intelligence and its applications

Guest Editor
College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Interests: big data; AI; recommender systems

Guest Editor
Faculty of Information Technology and Electrical Engineering, University of Oulu, 90570 Oulu, Finland
Interests: quantum software engineering; software process improvement; multi-criteria decision analysis; DevOps; microservices architecture; AI ethics; agile software development; soft computing; evidence-based software engineering

Guest Editor
School of Computer Science, Qufu Normal University, Rizhao 276826, China
Interests: recommender systems and big data

Special Issue Information

Dear Colleagues,

With the emergence and popularity of cell phones and cameras, we have witnessed an unprecedented growth in the volume of images and videos. To process these image and video data effectively and efficiently, image and video processing (IVP) has been studied for use in various multimedia-related computing applications. However, some IVP problems remain unsolved: how to adapt generic image or video feature representations to specific deployments to improve accuracy; how to utilize the latest learning methods to solve real-life visual problems, such as preventing pollution with remote sensing images and detecting anomalies in car parts using CCTV camera video; and how to build cloud platforms that can cope with a seemingly unlimited supply of content coming from traditional media sources. When IVP is introduced into real-life applications, many interesting issues and challenges arise. Therefore, we expect new technologies, models, frameworks and architectures to tackle IVP problems in a big data context.

This Special Issue serves as a forum to bring together active researchers from both industry and academia to exchange their opinions and submit original works on recent advances in image and video processing.

Topics include, but are not limited to, the following:

  • Feature representation and modeling in IVP;
  • Image and video analysis and segmentation;
  • Synthesis, rendering, and visualization;
  • Motion estimation, registration and fusion;
  • Image and video perception and quality models;
  • Interpretation and understanding;
  • Detection, recognition, and classification;
  • Understanding remote sensing images;
  • Restoration and enhancement;
  • Image and video labeling and retrieval;
  • Interpolation, super-resolution, and mosaicing;
  • Biometrics, forensics, and security;
  • Compression, coding, and transmission;
  • Image and video systems and applications;
  • Biomedical and biological image processing;
  • Document analysis and processing;
  • Deep learning for image and video;
  • Stereoscopic, multi-view, and 3D processing;
  • Cloud-enhanced mobile media applications and platform;
  • Architecture of cloud image/video platform;
  • Cloud/edge-based IVP applications;
  • Trust and security in visual computing on Cloud/edge.

Prof. Dr. Ruili Wang
Prof. Dr. Lianyong Qi
Dr. Arif Ali Khan
Dr. Weiyi Zhong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image
  • video
  • feature representation
  • data processing
  • learning

Published Papers (3 papers)


Research

22 pages, 2143 KiB  
Article
Optimization of the Generative Multi-Symbol Architecture of the Binary Arithmetic Coder for UHDTV Video Encoders
by Grzegorz Pastuszak
Electronics 2023, 12(22), 4643; https://doi.org/10.3390/electronics12224643 - 14 Nov 2023
Viewed by 578
Abstract
Previous studies have shown that the application of the M-coder in the H.264/AVC and H.265/HEVC video coding standards allows for highly parallel implementations without decreasing maximal frequencies. Although the primary limitation on throughput, originating from the range register update, can be eliminated, other limitations are associated with low register processing. Their negative impact is revealed at higher degrees of parallelism, leading to a gradual throughput saturation. This paper presents optimizations introduced to the generative hardware architecture to increase throughputs and hardware efficiencies. Firstly, it can process more than one bypass-mode subseries in one clock cycle. Secondly, aggregated contributions to the codestream are buffered before the low register update. Thirdly, the number of contributions used to update the low register in one clock cycle is decreased to save resources. Fourthly, the maximal one-clock-cycle renormalization shift of the low register is increased from 32 to 64 bit positions. As a result of these optimizations, the binary arithmetic coder, configured for series lengths of 27 and 2 symbols, increases the throughput from 18.37 to 37.42 symbols per clock cycle for high-quality H.265/HEVC compression. The logic consumption increases from 205.6k to 246.1k gates when synthesized on 90 nm TSMC technology. The design can operate at 570 MHz.
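The range and low register updates the abstract refers to come from binary arithmetic coding, where each symbol narrows a working interval. A toy floating-point sketch can illustrate the mechanism (this is not the integer M-coder, the renormalization logic, or the paper's multi-symbol hardware; `encode`, `decode`, and the static probability `p1` are illustrative assumptions):

```python
def encode(bits, p1):
    """Encode a bit sequence given P(bit=1) = p1; returns a fraction in [0, 1).
    `low` and `rng` play the role of the low and range registers: each symbol
    shrinks the interval [low, low + rng) to the sub-interval of that symbol."""
    low, rng = 0.0, 1.0
    for b in bits:
        split = rng * (1.0 - p1)   # portion of the interval assigned to bit 0
        if b == 0:
            rng = split            # keep the lower sub-interval
        else:
            low += split           # low register update: shift past the 0-part
            rng -= split
    return low + rng / 2           # any value inside the final interval decodes

def decode(code, p1, n):
    """Recover n bits by retracing the same interval subdivisions."""
    low, rng, out = 0.0, 1.0, []
    for _ in range(n):
        split = rng * (1.0 - p1)
        if code < low + split:
            out.append(0)
            rng = split
        else:
            out.append(1)
            low += split
            rng -= split
    return out
```

Hardware coders such as the M-coder replace the floating-point interval with fixed-width integer registers and renormalize (shift) them as precision is consumed; the paper's optimizations target exactly those low-register update and renormalization steps.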
(This article belongs to the Special Issue New Technology of Image & Video Processing)

21 pages, 3671 KiB  
Article
A Novel Standardized Collaborative Online Model for Processing and Analyzing Remotely Sensed Images in Geographic Problems
by Xueshen Zhang, Qiulan Wu, Feng Zhang, Xiang Sun, Huarui Wu, Shumin Wu and Xuefei Chen
Electronics 2023, 12(21), 4394; https://doi.org/10.3390/electronics12214394 - 24 Oct 2023
Viewed by 925
Abstract
In recent years, remote sensing image processing technology has developed rapidly, and the variety of remote sensing images has increased. Solving a geographic problem often requires multiple remote sensing images to be used together, and it is difficult for a single analyst to become proficient in processing every type of remote sensing image. Therefore, multiple image processing analysts must collaborate to solve geographic problems. However, the large data volumes of remote sensing images, and the computing resources their analysis consumes, present a barrier to collaboration among multidisciplinary remote sensing analysts. Consequently, a collaborative analysis process must support the online processing and analysis of remote sensing images and standardize the collaborative workflow. To address these issues, a hierarchical collaborative online processing and analysis framework was developed in this paper. The framework defines a clear collaborative analysis structure and identifies the online image processing and analysis activities in which participants can engage to successfully conduct collaborative processes. In addition, a collaborative process construction model and an online remote sensing image processing analysis model were developed to help participants create a standard collaborative online image processing and analysis process. To demonstrate the feasibility and effectiveness of the framework and model, this paper developed a collaborative online post-disaster assessment process for a real forest fire event, using radar and optical remote sensing images and based on the BPMN 2.0 and OGC standards.
Based on the results, the proposed framework provides a hierarchical collaborative remote sensing image processing and analysis process with well-defined stages and activities to guide the participants' mutual collaboration. Additionally, the proposed model can help participants develop a standardized collaborative online image processing process in terms of process structure and information interactions.
(This article belongs to the Special Issue New Technology of Image & Video Processing)

18 pages, 9920 KiB  
Article
Improving the Accuracy of Lane Detection by Enhancing the Long-Range Dependence
by Bo Liu, Li Feng, Qinglin Zhao, Guanghui Li and Yufeng Chen
Electronics 2023, 12(11), 2518; https://doi.org/10.3390/electronics12112518 - 2 Jun 2023
Cited by 4 | Viewed by 1679
Abstract
Lane detection is a common task in computer vision that involves identifying the boundaries of lanes on a road from an image or a video. Improving the accuracy of lane detection is of great help to advanced driver assistance systems and autonomous driving systems that help cars identify and keep to the correct lane. Current high-accuracy models of lane detection are mainly based on artificial neural networks. Among them, CLRNet is the latest well-known model, which attains high lane detection accuracy. However, in some scenarios, CLRNet attains lower lane detection accuracy, and we revealed that this is caused by insufficient global dependence information. In this study, we enhanced CLRNet and proposed a new model called NonLocal CLRNet (NLNet). NonLocal is an algorithmic mechanism that captures long-range dependence. NLNet employs NonLocal to acquire more long-range dependence information, or global information, and then applies the acquired information to a Feature Pyramid Network (FPN) in CLRNet to improve lane detection accuracy. We trained NLNet on the CULane dataset. The experimental results showed that NLNet outperformed state-of-the-art models in terms of accuracy in most scenarios, particularly in the no-line and night scenarios. This study is very helpful for developing more accurate lane detection models.
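The long-range dependence that a NonLocal operation captures can be sketched in plain Python: every position attends to every other position via a softmax-weighted sum, plus a residual connection. This is a minimal sketch, not NLNet code; identity embeddings stand in for the learned 1x1 convolutions of the actual non-local block, and `nonlocal_block` is an illustrative name:

```python
import math

def nonlocal_block(x):
    """x: list of N feature vectors (flattened spatial positions).
    Each output position aggregates information from ALL positions,
    which is what gives the block its long-range (global) dependence,
    in contrast to a convolution's local receptive field."""
    n = len(x)
    out = []
    for i in range(n):
        # pairwise similarity of position i with every position j (dot product)
        sims = [sum(a * b for a, b in zip(x[i], x[j])) for j in range(n)]
        m = max(sims)
        w = [math.exp(s - m) for s in sims]
        z = sum(w)
        w = [v / z for v in w]                      # softmax over all positions
        # weighted sum over the whole feature map, one channel at a time
        agg = [sum(w[j] * x[j][c] for j in range(n)) for c in range(len(x[i]))]
        out.append([xi + ai for xi, ai in zip(x[i], agg)])  # residual connection
    return out
```

In a model like NLNet, such a block would be applied to FPN feature maps so that distant evidence (e.g., the far end of a faded lane marking) can inform each local prediction.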
(This article belongs to the Special Issue New Technology of Image & Video Processing)
