Search Results (4)

Search Parameters:
Keywords = triplet ontological semantic model

26 pages, 16392 KB  
Article
TOSD: A Hierarchical Object-Centric Descriptor Integrating Shape, Color, and Topology
by Jun-Hyeon Choi, Jeong-Won Pyo, Ye-Chan An and Tae-Yong Kuc
Sensors 2025, 25(15), 4614; https://doi.org/10.3390/s25154614 - 25 Jul 2025
Cited by 1 | Viewed by 647
Abstract
This paper introduces a hierarchical object-centric descriptor framework called TOSD (Triplet Object-Centric Semantic Descriptor). The goal of this method is to overcome the limitations of existing pixel-based and global feature embedding approaches. To this end, the framework adopts a hierarchical representation that is explicitly designed for multi-level reasoning. TOSD combines shape, color, and topological information without depending on predefined class labels. The shape descriptor captures the geometric configuration of each object. The color descriptor focuses on internal appearance by extracting normalized color features. The topology descriptor models the spatial and semantic relationships between objects in a scene. These components are integrated at both object and scene levels to produce compact and consistent embeddings. The resulting representation covers three levels of abstraction: low-level pixel details, mid-level object features, and high-level semantic structure. This hierarchical organization makes it possible to represent both local cues and global context in a unified form. We evaluate the proposed method on multiple vision tasks. The results show that TOSD performs competitively compared to baseline methods, while maintaining robustness in challenging cases such as occlusion and viewpoint changes. The framework is applicable to visual odometry, SLAM, object tracking, global localization, scene clustering, and image retrieval. In addition, this work extends our previous research on the Semantic Modeling Framework, which represents environments using layered structures of places, objects, and their ontological relations.
(This article belongs to the Special Issue Event-Driven Vision Sensor Architectures and Application Scenarios)
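The abstract describes a two-level fusion scheme (object-level descriptors pooled into a scene-level embedding) but no implementation details. The sketch below illustrates one plausible reading; the descriptor dimensions, the L2-normalized concatenation, the mean pooling, and all function names are assumptions, not the paper's actual method.

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    """Scale a vector to unit length so descriptors are comparable."""
    return v / (np.linalg.norm(v) + eps)

def object_embedding(shape_desc, color_desc, topo_desc):
    """Fuse per-object shape, color, and topology descriptors into one
    compact object-level embedding (hypothetical fusion rule)."""
    parts = [l2_normalize(np.asarray(d, dtype=float))
             for d in (shape_desc, color_desc, topo_desc)]
    return l2_normalize(np.concatenate(parts))

def scene_embedding(object_embeddings):
    """Aggregate object embeddings into a scene-level embedding."""
    return l2_normalize(np.mean(object_embeddings, axis=0))

# Toy example: three objects with random stand-in descriptors.
objs = np.stack([object_embedding(np.random.rand(8),   # shape / geometry
                                  np.random.rand(4),   # color / appearance
                                  np.random.rand(6))   # topology / relations
                 for _ in range(3)])
scene = scene_embedding(objs)
```

Mean pooling is only one choice of aggregator; the paper's actual object-to-scene integration may differ.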

14 pages, 1251 KB  
Article
Construction of a 3D Model Knowledge Base Based on Feature Description and Common Sense Fusion
by Pengbo Zhou and Sheng Zeng
Appl. Sci. 2023, 13(11), 6595; https://doi.org/10.3390/app13116595 - 29 May 2023
Cited by 2 | Viewed by 2297
Abstract
Three-dimensional models represent the shape and appearance of real-world objects virtually, enabling users to gain a comprehensive and accurate understanding by observing them from multiple perspectives. Semantic retrieval of 3D models is closer to human understanding, but semantic annotation of 3D models is difficult to automate, and constructing an easy-to-use 3D model knowledge base remains challenging. This paper proposes a method for building a 3D model knowledge base that improves the intelligent management and reuse of 3D models. Knowledge about the 3D models is drawn from two sources: mapping rules constructed between 3D model features and semantics, and extraction from a common-sense database. First, a viewpoint orientation is established and semantic transformation rules for different feature values are defined; the representational degree of each feature is graded by classifying how closely a contour approximates a regular shape under different perspectives, and a semantic description of the contour is generated automatically and combined with spatial orientation. Then, a 3D model visual knowledge ontology is designed top-down, based on the upper ontology of a machine-readable comprehensive knowledge base and the relational structure of the ConceptNet ontology. Finally, an entity dictionary and a relation dictionary covering attribute names and attribute values are built on a weighted directed graph representation whose carrier is a semantic dictionary integrated with a sparse matrix; the sparse matrix records the index information of the knowledge triplets, forming the 3D model knowledge base. The feasibility of the method is demonstrated through semantic retrieval and reasoning on the label meshes dataset and a cultural relics dataset.
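The final step of the pipeline (entity and relation dictionaries plus a sparse matrix indexing knowledge triplets) can be pictured with a short sketch. The dictionaries, triplets, and one-matrix-per-relation layout below are illustrative assumptions, not the paper's data or storage scheme.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical entity and relation dictionaries (IDs are illustrative).
entities = {"vase": 0, "handle": 1, "round_contour": 2}

# Knowledge triplets: (head entity, relation, tail entity).
triplets = [("vase", "HasA", "handle"),
            ("vase", "HasProperty", "round_contour")]

def relation_matrix(rel, n=len(entities)):
    """Sparse head-by-tail matrix whose nonzero cells index the
    triplets of a single relation."""
    rows = [entities[h] for h, r, t in triplets if r == rel]
    cols = [entities[t] for h, r, t in triplets if r == rel]
    return coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

# Query: which entities does "vase" have?
has_a = relation_matrix("HasA").tocsr()
print(has_a[entities["vase"]].nonzero()[1])   # -> [1], i.e., "handle"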

19 pages, 9521 KB  
Article
A Flexible Semantic Ontological Model Framework and Its Application to Robotic Navigation in Large Dynamic Environments
by Sunghyeon Joo, Sanghyeon Bae, Junhyeon Choi, Hyunjin Park, Sangwook Lee, Sujeong You, Taeyoung Uhm, Jiyoun Moon and Taeyong Kuc
Electronics 2022, 11(15), 2420; https://doi.org/10.3390/electronics11152420 - 3 Aug 2022
Cited by 8 | Viewed by 3150
Abstract
Advanced research in robotics has allowed robots to navigate diverse environments autonomously. However, conducting complex tasks while handling unpredictable circumstances is still challenging for robots. Robots must plan tasks by understanding their working environments beyond metric information, and they need countermeasures for various situations. In this paper, we propose a semantic navigation framework based on a Triplet Ontological Semantic Model (TOSM) to manage the various conditions that affect task execution. The framework allows robots with different kinematics to perform tasks in indoor and outdoor environments. We define TOSM-based semantic knowledge and generate a semantic map for the domains. The robots execute tasks according to their characteristics by converting inferred knowledge into the Planning Domain Definition Language (PDDL). Additionally, to make the framework sustainable, we establish a policy for maintaining the map and re-planning when unexpected situations arise. Experiments on four different kinds of robots across four scenarios validate the scalability and reliability of the proposed framework.
(This article belongs to the Special Issue AI in Mobile Robotics)
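The abstract states that inferred TOSM knowledge is converted to PDDL but does not show the conversion. Below is a minimal sketch of such a step; the predicate names, facts, and rendering function are hypothetical, not the paper's actual domain definition.

```python
# Hypothetical facts inferred from a TOSM-style semantic map.
facts = [("robot_at", "robot1", "room_a"),
         ("connected", "room_a", "corridor"),
         ("connected", "corridor", "room_b")]
goal = ("robot_at", "robot1", "room_b")

def to_pddl_problem(name, domain, facts, goal):
    """Render inferred knowledge as a PDDL problem definition."""
    objs = sorted({arg for f in facts for arg in f[1:]})
    init = "\n    ".join(f"({' '.join(f)})" for f in facts)
    return (f"(define (problem {name}) (:domain {domain})\n"
            f"  (:objects {' '.join(objs)})\n"
            f"  (:init\n    {init})\n"
            f"  (:goal ({' '.join(goal)})))")

print(to_pddl_problem("nav-task", "tosm-nav", facts, goal))
```

A PDDL planner would then search the domain's operators for an action sequence that satisfies the goal.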

30 pages, 6013 KB  
Article
Autonomous Navigation Framework for Intelligent Robots Based on a Semantic Environment Modeling
by Sung-Hyeon Joo, Sumaira Manzoor, Yuri Goncalves Rocha, Sang-Hyeon Bae, Kwang-Hee Lee, Tae-Yong Kuc and Minsung Kim
Appl. Sci. 2020, 10(9), 3219; https://doi.org/10.3390/app10093219 - 5 May 2020
Cited by 41 | Viewed by 10752
Abstract
Humans have an innate ability to model, perceive, and plan in their environment while simultaneously performing tasks; replicating this ability remains a challenging problem in the study of robotic cognition. We address this issue by proposing a neuro-inspired cognitive navigation framework composed of three major components: a semantic modeling framework (SMF), a semantic information processing (SIP) module, and a semantic autonomous navigation (SAN) module, which together enable the robot to perform cognitive tasks. The SMF creates an environment database using the Triplet Ontological Semantic Model (TOSM) and builds semantic models of the environment. Environment maps generated from these semantic models are stored in an on-demand database and downloaded by the SIP and SAN modules when the robot requires them. The SIP module contains active environment perception components for recognition and localization, and it feeds relevant perception information to the behavior planner so that tasks can be performed safely. The SAN module uses a behavior planner connected to a knowledge base and a behavior database, which it queries during action planning and execution. The main contributions of our work are the development of the TOSM, the integration of the SMF, SIP, and SAN modules into a single framework, and the interaction between these components based on findings from cognitive science. We deploy our cognitive navigation framework on a mobile robot platform, considering implicit and explicit constraints for autonomous robot navigation in a real-world environment. The robotic experiments demonstrate the validity of our proposed framework.
(This article belongs to the Special Issue Intelligent Control and Robotics)
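The module interaction the abstract outlines, with the SMF serving on-demand maps that the SIP and SAN modules download when required, can be caricatured in a few classes. Everything below (class names, methods, and return values) is a schematic assumption rather than the authors' implementation.

```python
class SMF:
    """Semantic modeling framework: builds and serves TOSM-based maps."""
    def __init__(self):
        self._db = {}                        # env id -> semantic map

    def build(self, env_id, semantic_model):
        self._db[env_id] = semantic_model

    def fetch(self, env_id):
        return self._db[env_id]              # on-demand download

class SIP:
    """Semantic information processing: recognition and localization."""
    def __init__(self, smf):
        self.smf = smf

    def localize(self, env_id, observation):
        env_map = self.smf.fetch(env_id)
        return {"observation": observation, "map": env_map}  # stub pose

class SAN:
    """Semantic autonomous navigation: behavior planning and execution."""
    def __init__(self, smf, sip):
        self.smf, self.sip = smf, sip

    def plan(self, env_id, goal):
        self.sip.localize(env_id, observation="camera_frame")
        return ["perceive", f"navigate_to:{goal}"]  # stub behavior plan

smf = SMF()
smf.build("lab", {"places": ["room_a"], "objects": ["desk"]})
print(SAN(smf, SIP(smf)).plan("lab", "room_a"))
```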
