Symmetry Applied in Computer Vision, Automation, and Robotics

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 2406

Special Issue Editors


Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: image processing; point cloud processing; artificial intelligence; intelligent visual surveillance and plant phenotyping

Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: computer vision; deep learning; brain science; natural language processing; evolutionary computation

Guest Editor
College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
Interests: model predictive control; complex systems

Special Issue Information

Dear Colleagues,

Computer vision has become an indispensable part of many fields, such as manufacturing, health care, and transportation, and it has profoundly influenced our daily lives (e.g., through algorithms in smartphones and cameras, search engines, and social networks). Automation and robotic systems, such as self-driving vehicles, autonomous robots, and unmanned workshops and plants, have also had a significant impact on society and human life. It is therefore essential to continue advancing research on computer vision, automation, and robotics.

Symmetry plays a crucial role in many aspects of computer vision, automation, and robotics. This Special Issue, entitled “Symmetry Applied in Computer Vision, Automation, and Robotics”, covers the theory, phenomena, and research concerning symmetry in these fields, and it also aims to address symmetry (and asymmetry) in its widest sense. We cordially invite researchers to contribute original, high-quality research papers that will inspire advances in computer vision, image processing, 3D sensing, automation, control systems and control engineering, robotics, robot control, optimization, and their symmetry-related applications. We encourage submissions on a wide range of topics, including, but not limited to, the following:

  • Learning-based modeling of automation systems with symmetry;
  • Advanced controller design for automation systems with symmetry;
  • Model predictive control based on symmetry;
  • Motion planning and control strategies with symmetry;
  • Symmetry in robot design and morphology;
  • Human–robot interaction with symmetry;
  • Multi-robot systems and swarm robotics with symmetry;
  • Learning and adaptation in robotics with symmetry;
  • Robot manipulation and grasping with symmetry;
  • Robot mapping and localization with symmetry;
  • Symmetry in computer vision;
  • Image processing with symmetry;
  • Point cloud processing with symmetry;
  • Pattern analysis and machine learning in computer vision with symmetry applications;
  • Deep learning on images or other regular data forms with symmetry.

Dr. Dawei Li
Dr. Xuesong Tang
Dr. Xin Cai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • symmetry phenomenon and theory
  • symmetry-based applications
  • dynamical systems
  • optimization
  • optimal control
  • model predictive control
  • learning-based control
  • learning-based modelling
  • robot design and morphology
  • multi-robot systems
  • robot manipulation and grasping
  • motion planning
  • robot mapping and localization
  • computer vision
  • image processing
  • pattern analysis and machine learning
  • deep learning

Published Papers (3 papers)


Research

26 pages, 34712 KiB  
Article
Research on LFD System of Humanoid Dual-Arm Robot
by Ze Cui, Lang Kou, Zenghao Chen, Peng Bao, Donghai Qian, Lang Xie and Yue Tang
Symmetry 2024, 16(4), 396; https://doi.org/10.3390/sym16040396 - 28 Mar 2024
Viewed by 630
Abstract
Although robots have been widely used in a variety of fields, enabling them to perform multiple tasks in the way that humans do remains difficult. To address this, we investigate a learning from demonstration (LFD) system with our independently designed symmetrical humanoid dual-arm robot. We present a novel action feature matching algorithm that accurately transforms human demonstration data into task models that robots can directly execute, considerably improving LFD’s generalization capabilities. In our studies, we used motion capture cameras to record human demonstrations, which included combinations of simple actions (the action layer) and a succession of complicated operational tasks (the task layer). For the action-layer data, we employed Gaussian mixture models (GMMs) to construct an action primitive library. For the task-layer data, we created a “keyframe” segmentation method to transform the data into a series of action primitives and build another action primitive library. Guided by our algorithm, the robot successfully imitated complex human tasks. The results show excellent task learning and execution, providing an effective solution for robots to learn from human demonstrations and significantly advancing robot technology.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
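The abstract above describes a “keyframe” segmentation step that splits a demonstrated task into action primitives. As a rough illustration of the idea only (the paper’s actual criterion is not given here; the low-speed heuristic, threshold, and function names below are assumptions), a demonstration trajectory can be cut at near-stationary points:

```python
import numpy as np

def keyframe_segments(trajectory, dt=0.01, speed_thresh=0.05):
    """Split a demonstrated trajectory (T x D samples) into segments at
    low-speed 'keyframes' -- a hypothetical segmentation criterion."""
    vel = np.diff(trajectory, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    # candidate keyframes: local speed minima below a threshold
    keys = [i for i in range(1, len(speed) - 1)
            if speed[i] < speed_thresh
            and speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]]
    bounds = [0] + keys + [len(trajectory) - 1]
    return [trajectory[a:b + 1] for a, b in zip(bounds[:-1], bounds[1:])]

# toy demonstration: move, pause (keyframe region), move again
t1 = np.linspace([0.0, 0.0], [1.0, 0.0], 50)
pause = np.tile([1.0, 0.0], (20, 1)) \
    + 1e-4 * np.random.default_rng(0).standard_normal((20, 2))
t2 = np.linspace([1.0, 0.0], [1.0, 1.0], 50)
demo = np.vstack([t1, pause, t2])
segs = keyframe_segments(demo)
print(len(segs))  # more than one segment, split inside the pause
```

Each resulting segment would then be a candidate for matching against an action primitive library.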

15 pages, 20617 KiB  
Article
Automatic Control of Virtual Cameras for Capturing and Sharing User Focus and Interaction in Collaborative Virtual Reality
by Junhyeok Lee, Dongkeun Lee, Seowon Han, Hyun K. Kim and Kang Hoon Lee
Symmetry 2024, 16(2), 228; https://doi.org/10.3390/sym16020228 - 13 Feb 2024
Viewed by 672
Abstract
As VR technology advances and network speeds rise, social VR platforms are gaining traction. These platforms enable multiple users to socialize and collaborate within a shared virtual environment using avatars. Virtual reality, with its ability to augment visual information, offers distinct advantages for collaboration over traditional methods. Prior research has shown that merely sharing another person’s viewpoint can significantly boost collaborative efficiency. This paper presents an innovative non-verbal communication technique designed to enhance the sharing of visual information. By employing virtual cameras, our method captures where participants are focusing and what they are interacting with, then displays these data above their avatars. The direction of the virtual camera is automatically controlled by considering the user’s gaze direction, the position of the object the user is interacting with, and the positions of other objects around that object. The automatic adjustment of these virtual cameras and the display of captured images are symmetrically conducted for all participants engaged in the virtual environment. This approach is especially beneficial in collaborative settings, where multiple users work together on a shared structure of multiple objects. We validated the effectiveness of our proposed technique through an experiment with 20 participants tasked with collaboratively building structures using block assembly.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
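The abstract above says the virtual camera direction is derived from the user’s gaze, the interacted object, and surrounding objects. A minimal sketch of one such control law (the blending weights and function names are assumptions, not the paper’s actual formulation) simply normalizes a weighted combination of the three directions:

```python
import numpy as np

def camera_direction(gaze_dir, cam_pos, target_pos, neighbor_pos,
                     w=(0.5, 0.3, 0.2)):
    """Blend the user's gaze direction with the directions toward the
    interacted object and the centroid of nearby objects.
    The weights w are hypothetical."""
    to_target = target_pos - cam_pos
    to_neighbors = np.mean(neighbor_pos, axis=0) - cam_pos
    dirs = [np.asarray(gaze_dir, float), to_target, to_neighbors]
    # unit-normalize each direction, then take the weighted sum
    blended = sum(wi * d / np.linalg.norm(d) for wi, d in zip(w, dirs))
    return blended / np.linalg.norm(blended)

d = camera_direction(
    gaze_dir=np.array([0.0, 0.0, 1.0]),       # user looks forward
    cam_pos=np.array([0.0, 1.6, 0.0]),        # camera at head height
    target_pos=np.array([0.5, 1.0, 2.0]),     # interacted block
    neighbor_pos=np.array([[0.4, 1.0, 2.2],   # nearby blocks
                           [0.7, 0.9, 1.8]]),
)
print(d)  # unit vector pointing between gaze and the blocks
```

Updating this direction every frame and smoothing it over time would give the automatic camera behavior the abstract describes.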

23 pages, 19390 KiB  
Article
Semi-Symmetrical, Fully Convolutional Masked Autoencoder for TBM Muck Image Segmentation
by Ke Lei, Zhongsheng Tan, Xiuying Wang and Zhenliang Zhou
Symmetry 2024, 16(2), 222; https://doi.org/10.3390/sym16020222 - 12 Feb 2024
Viewed by 735
Abstract
Deep neural networks are effectively utilized for the instance segmentation of muck images from tunnel boring machines (TBMs), providing real-time insights into the surrounding rock condition. However, the high cost of obtaining quality labeled data limits the widespread application of this method. Addressing this challenge, this study presents a semi-symmetrical, fully convolutional masked autoencoder designed for self-supervised pre-training on extensive unlabeled muck image datasets. The model features a four-tier sparse encoder for down-sampling and a two-tier sparse decoder for up-sampling, connected via a conventional convolutional neck, forming a semi-symmetrical structure. This design enhances the model’s ability to capture essential low-level features, including geometric shapes and object boundaries. Additionally, to circumvent the trivial solutions in pixel regression that the original masked autoencoder faced, Histogram of Oriented Gradients (HOG) descriptors and Laplacian features have been integrated as novel self-supervision targets. Testing shows that the proposed model can effectively discern essential features of muck images in self-supervised training. When applied to subsequent end-to-end training tasks, it enhances the model’s performance, increasing the prediction accuracy of Intersection over Union (IoU) for muck boundaries and regions by 5.9% and 2.4%, respectively, outperforming the enhancements made by the original masked autoencoder.
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)
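The abstract above replaces raw-pixel regression targets with HOG descriptors and Laplacian features. As a small sketch of the Laplacian part only (a direct 3×3 convolution; the paper’s exact feature extraction and function names are assumptions), the target emphasizes edges and boundaries rather than absolute intensities:

```python
import numpy as np

def laplacian_target(img):
    """Compute a Laplacian feature map as a regression target for masked
    patches, instead of raw pixels (sketch of the idea only)."""
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    # valid 3x3 convolution with the Laplacian kernel
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# a flat region gives zero response; an intensity edge does not,
# which is why such targets avoid trivial pixel-regression solutions
flat = np.ones((8, 8))
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
print(np.abs(laplacian_target(flat)).max(),
      np.abs(laplacian_target(edge)).max())
```

HOG descriptors serve the same purpose at the level of local gradient orientations rather than second derivatives.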
