Biologically Inspired Vision and Image Processing

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: closed (20 January 2024)

Special Issue Editor


Dr. Shaobing Gao
Guest Editor
Department of Computer Science, Sichuan University, Chengdu 610065, China
Interests: biologically inspired vision and image processing

Special Issue Information

Dear Colleagues,

The brain's visual system is a complex and efficient image-processing system, and it is an important source of theory and technological innovation for computer vision. Brain-inspired computing and brain emulation are key routes to theoretical innovation and technological breakthroughs in the new generation of artificial intelligence. On the one hand, computational simulation helps to clarify or predict some of the information-processing mechanisms of the brain's visual system; on the other hand, it provides a series of new general-purpose computing models and key enabling technologies for engineering applications centered on intelligent environment perception. This Biologically Inspired Vision and Image Processing (BIVIP) Special Issue welcomes original, unpublished contributions. Topics include (but are not limited to):

  • Models of neurons at various levels of the visual system;
  • Neural coding and decoding of visual information;
  • Neural networks for local visual circuits;
  • Visual-mechanism-inspired deep neural networks;
  • Visual models for image processing;
  • Visual-mechanism-inspired models for computer vision applications;
  • Hardware implementations of visual models;
  • Artificial-vision-related software and hardware;
  • Visual models for temporal information processing;
  • Receptive-field-based models;
  • Biologically inspired novel spiking neural networks and optimization methods;
  • Dynamic visual information processing based on event cameras.

Dr. Shaobing Gao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual cognitive computing
  • brain simulation
  • computational neuroscience
  • biologically inspired computer vision
  • artificial intelligence

Published Papers (5 papers)

Research

16 pages, 2456 KiB  
Article
A Biologically Inspired Movement Recognition System with Spiking Neural Networks for Ambient Assisted Living Applications
by Athanasios Passias, Karolos-Alexandros Tsakalos, Ioannis Kansizoglou, Archontissa Maria Kanavaki, Athanasios Gkrekidis, Dimitrios Menychtas, Nikolaos Aggelousis, Maria Michalopoulou, Antonios Gasteratos and Georgios Ch. Sirakoulis
Biomimetics 2024, 9(5), 296; https://doi.org/10.3390/biomimetics9050296 - 15 May 2024
Abstract
This study presents a novel solution for ambient assisted living (AAL) applications that utilizes spiking neural networks (SNNs) and reconfigurable neuromorphic processors. As demographic shifts result in an increased need for eldercare, due to a large elderly population that favors independence, there is a pressing need for efficient solutions. Traditional deep neural networks (DNNs) are typically energy-intensive and computationally demanding. In contrast, this study turns to SNNs, which are more energy-efficient and mimic biological neural processes, offering a viable alternative to DNNs. We propose asynchronous cellular automaton-based neurons (ACANs), which stand out for their hardware-efficient design and ability to reproduce complex neural behaviors. By utilizing the remote supervised method (ReSuMe), this study improves spike train learning efficiency in SNNs. We apply this to movement recognition in an elderly population, using motion capture data. Our results highlight a high classification accuracy of 83.4%, demonstrating the approach’s efficacy in precise movement activity classification. This method’s significant advantage lies in its potential for real-time, energy-efficient processing in AAL environments. Our findings not only demonstrate SNNs’ superiority over conventional DNNs in computational efficiency but also pave the way for practical neuromorphic computing applications in eldercare.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)
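
For readers new to the remote supervised method, the following toy Python sketch shows the flavor of a ReSuMe-style weight update on binary spike trains; the constant term, exponential trace, and learning rate are illustrative assumptions, not the paper's implementation on the neuromorphic processor.

```python
import numpy as np

def resume_update(w, s_in, s_des, s_out, lr=0.01, a=0.05, tau=5.0):
    """Toy discrete-time ReSuMe-style update for one synapse.

    s_in, s_des, s_out: equal-length binary spike trains (1 = spike) for
    the presynaptic input, the desired output, and the actual output.
    The weight is pushed up at desired-spike times and down at actual-
    spike times, each scaled by a constant term plus an exponentially
    decaying trace of recent presynaptic activity (the learning window).
    """
    trace = 0.0
    for t in range(len(s_in)):
        trace = trace * np.exp(-1.0 / tau) + s_in[t]  # presynaptic trace
        err = s_des[t] - s_out[t]                     # supervision signal
        w += lr * err * (a + trace)
    return w

# Example: nudge a synapse toward producing a spike at t = 2.
w = resume_update(0.5, s_in=[1, 0, 1, 0], s_des=[0, 0, 1, 0], s_out=[0, 1, 0, 0])
```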

19 pages, 1818 KiB  
Article
Multi-Modal Enhancement Transformer Network for Skeleton-Based Human Interaction Recognition
by Qianshuo Hu and Haijun Liu
Biomimetics 2024, 9(3), 123; https://doi.org/10.3390/biomimetics9030123 - 20 Feb 2024
Abstract
Skeleton-based human interaction recognition is a challenging task in the field of vision and image processing. Graph Convolutional Networks (GCNs) have achieved remarkable performance by modeling the human skeleton as a topology. However, existing GCN-based methods have two problems: (1) Existing frameworks cannot effectively take advantage of the complementary features of different skeletal modalities, as there is no information transfer channel between the various specific modalities. (2) Limited by the structure of the skeleton topology, it is hard to capture and learn information about two-person interactions. To solve these problems, inspired by the human visual neural network, we propose a multi-modal enhancement transformer (ME-Former) network for skeleton-based human interaction recognition. ME-Former includes a multi-modal enhancement module (ME) and a context progressive fusion block (CPF). More specifically, each ME module consists of a multi-head cross-modal attention block (MH-CA) and a two-person hypergraph self-attention block (TH-SA), which are responsible for enhancing the skeleton features of a specific modality from other skeletal modalities and for modeling spatial dependencies between joints using that specific modality, respectively. In addition, we propose a two-person skeleton topology and a two-person hypergraph representation; the TH-SA block embeds their structural information into the self-attention to better learn two-person interaction. The CPF block progressively transforms the features of different skeletal modalities from low-level features to higher-order global contexts, making the enhancement process more efficient. Extensive experiments on the benchmark NTU-RGB+D 60 and NTU-RGB+D 120 datasets consistently verify the effectiveness of our proposed ME-Former, which outperforms state-of-the-art methods.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)
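
As a rough illustration of the multi-head cross-modal attention (MH-CA) idea, the PyTorch sketch below lets one skeleton modality (e.g., joints) attend to another (e.g., bones); the dimensions, residual fusion, and modality names are assumptions for illustration, not the ME-Former design itself.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Toy cross-modal attention: queries come from one skeleton modality
    and keys/values from another, so each modality can borrow the other's
    complementary cues. Dimensions are illustrative, not from the paper."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, joints, bones):
        # joints, bones: (batch, num_joints, dim) feature sequences
        enhanced, _ = self.attn(query=joints, key=bones, value=bones)
        return self.norm(joints + enhanced)  # residual fusion

# Example: enhance joint features with bone features.
joints, bones = torch.randn(8, 25, 64), torch.randn(8, 25, 64)
fused = CrossModalAttention()(joints, bones)
```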

21 pages, 16277 KiB  
Article
Complex-Exponential-Based Bio-Inspired Neuron Model Implementation in FPGA Using Xilinx System Generator and Vivado Design Suite
by Maruf Ahmad, Lei Zhang, Kelvin Tsun Wai Ng and Muhammad E. H. Chowdhury
Biomimetics 2023, 8(8), 621; https://doi.org/10.3390/biomimetics8080621 - 18 Dec 2023
Abstract
This research investigates the implementation of complex-exponential-based neurons in FPGA, which can pave the way for implementing bio-inspired spiking neural networks to compensate for the existing computational constraints in conventional artificial neural networks. The increasing use of extensive neural networks and the complexity of models in handling big data lead to higher power consumption and delays. Hence, finding solutions to reduce computational complexity is crucial for addressing power consumption challenges. The complex exponential form effectively encodes oscillating features like frequency, amplitude, and phase shift, streamlining the demanding calculations typical of conventional artificial neurons by leveraging the simple phase addition of complex exponential functions. The article implements two-neuron and multi-neuron versions of such a model using the Xilinx System Generator and Vivado Design Suite, employing 8-bit, 16-bit, and 32-bit fixed-point data format representations. The study evaluates the accuracy of the proposed neuron model across different FPGA implementations while also providing a detailed analysis of operating frequency, power consumption, and resource usage for the hardware implementations. BRAM-based Vivado designs outperformed Simulink designs regarding speed, power, and resource efficiency. Specifically, the Vivado BRAM-based approach supported up to 128 neurons, showcasing optimal LUT and FF resource utilization. These outcomes inform the choice of an optimal design procedure for implementing spiking neural networks on FPGAs.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)
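
The arithmetic shortcut at the heart of this approach, that multiplying unit complex exponentials reduces to adding their phases, can be sketched in a few lines of Python; the encoding and readout below are illustrative assumptions, not the paper's fixed-point FPGA design.

```python
import numpy as np

def ce_neuron(input_phases, weight_phases):
    """Toy complex-exponential neuron. Each input is a unit complex
    exponential exp(j*theta), so applying a weight exp(j*phi) is a cheap
    phase addition rather than a full multiply-accumulate:
        exp(j*theta) * exp(j*phi) = exp(j*(theta + phi)).
    The weighted inputs are then summed and read out as phase/magnitude.
    """
    summed = np.sum(np.exp(1j * (input_phases + weight_phases)))
    return np.angle(summed), np.abs(summed)

# Example: two inputs with phases pi/4 and pi/3, weights as phase shifts.
phase, mag = ce_neuron(np.array([np.pi / 4, np.pi / 3]),
                       np.array([0.1, -0.2]))
```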

20 pages, 8675 KiB  
Article
Perceiving like a Bat: Hierarchical 3D Geometric–Semantic Scene Understanding Inspired by a Biomimetic Mechanism
by Chi Zhang, Zhong Yang, Bayang Xue, Haoze Zhuo, Luwei Liao, Xin Yang and Zekun Zhu
Biomimetics 2023, 8(5), 436; https://doi.org/10.3390/biomimetics8050436 - 19 Sep 2023
Abstract
Geometric–semantic scene understanding is a spatial intelligence capability that is essential for robots to perceive and navigate the world. However, understanding a natural scene remains challenging for robots because of restricted sensors and time-varying situations. In contrast, humans and animals are able to form a complex neuromorphic concept of the scene they move in. This neuromorphic concept captures geometric and semantic aspects of the scenario and reconstructs the scene at multiple levels of abstraction. This article seeks to reduce the gap between robot and animal perception by proposing an ingenious scene-understanding approach that seamlessly captures geometric and semantic aspects in an unexplored environment. We propose two types of biologically inspired environment perception methods, i.e., a set of elaborate biomimetic sensors and a brain-inspired parsing algorithm for scene understanding, that enable robots to perceive their surroundings like bats. Our evaluations show that the proposed scene-understanding system achieves competitive performance in image semantic segmentation and volumetric–semantic scene reconstruction. Moreover, to verify the practicability of the proposed method, we also conducted real-world geometric–semantic scene reconstruction in an indoor environment with our self-developed drone.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)
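
To make the geometric–semantic coupling concrete, here is a toy sketch that fuses per-point semantic labels into a voxel grid by majority vote; the grid layout, names, and voting scheme are illustrative assumptions rather than the paper's hierarchical reconstruction.

```python
import numpy as np

def fuse_semantics(points, labels, origin, res, grid_shape, num_classes):
    """Toy volumetric-semantic fusion: drop labeled 3D points into a voxel
    grid and keep a running majority label per voxel. Points are assumed
    to lie inside the grid. Returns the per-voxel label map."""
    counts = np.zeros(grid_shape + (num_classes,), dtype=int)
    idx = np.floor((points - origin) / res).astype(int)  # (N, 3) voxel ids
    for (i, j, k), lab in zip(idx, labels):
        counts[i, j, k, lab] += 1                        # accumulate votes
    return counts.argmax(axis=-1)                        # majority label

# Example: two points voting in a 4x4x4 grid with 3 semantic classes.
pts = np.array([[0.1, 0.1, 0.1], [0.12, 0.1, 0.1]])
label_map = fuse_semantics(pts, [2, 2], origin=np.zeros(3), res=0.1,
                           grid_shape=(4, 4, 4), num_classes=3)
```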

18 pages, 5550 KiB  
Article
A Novel Underwater Image Enhancement Using Optimal Composite Backbone Network
by Yuhan Chen, Qingfeng Li, Dongxin Lu, Lei Kou, Wende Ke, Yan Bai and Zhen Wang
Biomimetics 2023, 8(3), 275; https://doi.org/10.3390/biomimetics8030275 - 27 Jun 2023
Cited by 1
Abstract
Continuous exploration of the ocean has made underwater image processing an important research field, and plenty of CNN (convolutional neural network)-based underwater image enhancement methods have emerged over time. However, the feature-learning ability of existing CNN-based underwater image enhancement methods is limited. These networks are designed to be complicated or to embed other algorithms for better results, and thus cannot simultaneously meet the requirements of a good underwater image enhancement effect and real-time performance. Building on the composite backbone network (CBNet), which had already been introduced to underwater image enhancement, we propose OECBNet (optimal underwater image-enhancing composite backbone network) to obtain a better enhancement effect and a shorter running time. Herein, a comprehensive study of different composite architectures for an underwater image enhancement network was carried out by comparing the number of backbones, connection strategies, pruning strategies for composite backbones, and auxiliary losses. A CBNet with optimal performance was thus obtained. Finally, the obtained network was compared with state-of-the-art underwater enhancement networks. The experiments showed that our optimized composite backbone network produced better-enhanced images than existing CNN-based methods.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)
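
For orientation on what "composite backbone" means here, the sketch below pairs an assisting backbone with a lead backbone and fuses them stage by stage; the two-backbone setup, stage widths, and fusion by addition are illustrative assumptions, not the optimal configuration identified by the paper's comparisons.

```python
import torch
import torch.nn as nn

class CompositeBackbone(nn.Module):
    """Toy composite backbone in the spirit of CBNet: the assisting
    backbone's stage outputs are added into the lead backbone's stages,
    so the lead backbone sees progressively enriched features."""

    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        def make_stages():
            ins = (3,) + widths[:-1]
            return nn.ModuleList(
                nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
                for c_in, c_out in zip(ins, widths))
        self.assist, self.lead = make_stages(), make_stages()

    def forward(self, x):
        a = l = x
        for s_a, s_l in zip(self.assist, self.lead):
            a = torch.relu(s_a(a))      # assisting branch
            l = torch.relu(s_l(l)) + a  # same-level composition
        return l

# Example: a 3-channel image through the composite backbone.
feats = CompositeBackbone()(torch.randn(1, 3, 64, 64))
```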
