Technical Note

A Study on Vector-Based Processing and Texture Application Techniques for 3D Object Creation and Visualization

by Donghwi Kang, Jeongyeon Kim, Jongchan Lee, Haeju Lee, Jihyeok Kim and Jungwon Byun *
WISEiTECH Research Institute, Gwacheon 13824, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(7), 4011; https://doi.org/10.3390/app15074011
Submission received: 27 February 2025 / Revised: 26 March 2025 / Accepted: 1 April 2025 / Published: 5 April 2025

Abstract:
This study proposes a technique for generating 3D objects from Shapefile-based 2D spatial data and converting them to comply with the CityGML 3.0 standard. In particular, the proposed Wise Interpolated Texture (hereafter referred to as WIT) technique optimizes texture mapping and enhances visual quality. High-resolution Z-values were extracted using DEM data, and computational efficiency was improved by applying the constrained Delaunay triangulation algorithm. This study implemented more realistic visual representations using high-resolution orthorectified imagery (hereafter referred to as orthoimages) TIF files and improved data retrieval speed compared to existing raster methods through vector-based processing techniques. In this research, data weight reduction, parallel processing, and polygon simplification algorithms were applied to optimize the 3D model generation speed. Additionally, the WIT technique minimized discontinuity between textures and improved UV mapping alignment to achieve more natural and uniform textures. Experimental results confirmed that the proposed technique improved texture mapping speed, enhanced rendering quality, and increased large-scale data processing efficiency compared to conventional methods. Nevertheless, limitations still exist in real-time data integration and optimization of large-scale 3D models. Future research should consider dynamic modeling reflecting real-time image data, BIM data integration, and large-scale texture streaming techniques.

1. Introduction

1.1. Research Background and Necessity

Recent advancements in virtual environment technology have led to rapid progress in reconstructing real-world environments. The concept of a digital twin has emerged as a crucial technology for efficiently integrating, processing, and visualizing large-scale 3D spatial data, and various industrial applications utilizing this technology continue to emerge [1]. Three-dimensional terrain models are essential components of digital twin technology, presenting challenges in the rapid and precise processing of extensive datasets. In particular, the optimization of 3D object generation speed, enhancement of texture mapping quality, and reduction in memory consumption have become primary focal points of research [2]. Modeling complex urban structures as 3D objects serves as an essential component in various domains including smart cities, urban planning, and disaster management, with CityGML being widely utilized as a standardized data format for these applications. The recently established CityGML 3.0 standard [3] has emerged as a pivotal framework for urban modeling through enhanced data management and interoperability [4]. However, the high-resolution texture mapping process remains technically complex and necessitates optimized data processing techniques [5]. A 3D model with meticulously applied textures enhances visual realism, improves the accuracy of spatial analysis, and facilitates intuitive interpretation in decision-making processes [6].

1.2. Research Objectives and Scope

This study aims to propose an optimized technique for generating Level of Detail 1 (LoD1) 3D objects in a digital twin environment and converting them into the CityGML 3.0 standard format. The study area was set within a 1 km radius of Suseo Station in Seoul, utilizing Shapefile-based 2D spatial data to analyze the efficiency of 3D object generation and texture mapping. The data used in this study were collected in various formats, including orthoimages, satellite images, aerial photographs, buildings, roads, administrative districts, and DEM data, as provided by the national operating platform [7]. The key information is summarized in Table 1.
The primary scope and objectives of this study are as follows:
  • Optimization of 3D Object Generation Process Efficiency
    This study generates 3D objects by integrating high-resolution Digital Elevation Models (DEMs) with Shapefile-based 2D spatial data [8], including buildings, roads, and administrative districts. To enhance processing speed and reduce file size, a structured triangular mesh is generated using constrained Delaunay triangulation (CDT), which is then combined with the Douglas–Peucker algorithm to develop an efficient data processing method [9,10].
  • Enhancement of Texture Mapping Quality
    High-resolution textures are applied to the generated 3D objects using orthoimages (TIF files). To improve texture alignment accuracy, this study proposes a UV mapping approach utilizing the Wise Interpolated Texture (WIT) technique. The WIT technique aims to reduce texture discontinuities and minimize lighting inconsistencies through interpolation and optimization of texture coordinates. This method enables smoother transitions and sharper details, enhancing accuracy and visual consistency compared to conventional UV mapping techniques.
  • Optimization of Vector-based Data Processing and Memory Usage Efficiency
    To address memory consumption issues arising from large-scale datasets, an efficient algorithm was designed. By incorporating parallel processing techniques and utilizing the CityGML Validator tool [11], the proposed approach reduces resource consumption during data transformation and validation while improving processing speed.
As a result, this study focuses on generating interoperable 3D models that comply with the CityGML 3.0 standard, making them applicable across various digital twin platforms. Furthermore, it aims to propose efficient processing techniques for large-scale spatial data, ensuring optimized data handling and integration.

1.3. Research Differentiation and Contributions

This study proposes an innovative approach that integrates 3D object generation, texture mapping, and data optimization, providing differentiated contributions from existing research:
  • Optimization of 3D Object Generation Processing Speed: The system generates precise 3D objects by integrating Shapefile-based 2D spatial data (buildings, roads, administrative districts) with high-resolution Digital Elevation Model (DEM) data. It generates structured triangular meshes using Delaunay triangulation [12], which is combined with the Douglas–Peucker algorithm to enhance computational speed and optimize data size [13].
  • Enhancement of Texture Mapping Quality: This study applies the WIT technique using orthoimages. The proposed technique enhances texture alignment accuracy by refining UV mapping precision and improves visual quality by minimizing lighting inconsistencies and masking artifacts [14,15].
  • Optimization of Vector-based Spatial Information Data Processing: This study performs memory usage reduction through efficient data compression and algorithm optimization in large-scale datasets. By implementing CityGML Validator and parallel processing technologies, it reduces resource consumption during data conversion and validation processes [16,17].
This research presents CityGML 3.0-validated three-dimensional models, establishing a comprehensive foundation for practical implementation across multiple domains, including smart city development, disaster management systems, and urban planning initiatives.

2. Related Work

2.1. Prior Research on 3D Object Generation Techniques

Various techniques have been researched for efficient processing of large-scale spatial data and implementing accurate modeling in 3D object generation. One study proposed a simplified building modeling technique using cell decomposition and primitive instancing, which improved processing speed. Nevertheless, this approach had limitations in integrating complex urban structures and large-scale terrain data [18].
Delaunay triangulation is a widely utilized technique for generating precise 3D meshes, ensuring structural stability and efficiency of the mesh. Previous studies, however, have not sufficiently utilized parallel processing techniques to improve processing speed when handling large-scale data and have not fully resolved real-time processing issues in large datasets [19].
Despite advancements in mesh generation techniques, real-time processing of large-scale datasets remains an unresolved challenge due to the limited application of parallel processing and GPU acceleration technologies [20,21]. This study aims to address these challenges by incorporating optimized parallel processing techniques and GPU-based acceleration to enhance the speed and efficiency of 3D object generation, particularly in large-scale urban modeling.

2.2. Enhancement of Texture Mapping Quality

Texture mapping, as a key technology determining the visual realism of 3D models, has been studied with various techniques utilizing high-resolution images. One study proposed a texture mapping technique using orthoimages that improved texture alignment accuracy and enhanced texture-model correspondence through UV mapping [22]. Nevertheless, challenges remain in fully resolving issues such as texture distortion, lighting inconsistencies, and data interference.
Furthermore, high-resolution texture data present challenges of large data size and reduced processing speed. To address these issues, techniques have been proposed that apply multi-resolution textures or utilize high-resolution textures only in critical areas. However, these techniques have limited application to specific scenarios and face constraints in universal application across diverse terrains and urban structures [23].
However, challenges persist in addressing texture distortion, lighting inconsistencies, and data interference, which significantly degrade visual quality [24,25]. In particular, existing research lacks efficient technical approaches for processing high-resolution texture data while maintaining optimal performance. To overcome these limitations, this study enhances texture mapping quality by refining UV mapping techniques and applying texture correction algorithms, ensuring more accurate texture alignment and reducing visual artifacts [26,27].

2.3. Vector-Based Spatial Information Processing Technologies

Vector-based data are essential for the precise representation and management of spatial information, requiring optimization techniques for processing large-scale data. One study improved GIS analysis accuracy and efficiency by standardizing building height data in LoD1 models [28]. However, this research primarily focused on building-centric data processing and did not address the integrated processing of various data types (e.g., roads, terrain).
Traditional raster-based GIS processing has been widely used in spatial analysis, including Digital Elevation Model (DEM) interpolation, satellite image classification, and land-use mapping [29,30]. Raster-based methods represent spatial information as a fixed grid of pixels, making them suitable for continuous spatial phenomena such as terrain modeling and remote sensing. However, these methods suffer from high memory consumption, computational inefficiency in large-scale queries, and reduced precision in representing discrete geographic features (e.g., buildings, roads) [31,32].
To address memory usage issues in processing large-scale vector data, data compression techniques and parallel processing have been proposed. Notably, GPU-based vector data processing demonstrates the potential for performance scalability proportional to dataset size. These techniques, however, primarily focus on single workflow processes, limiting their application in the integrated processing of multi-source data and generating models compliant with standards such as CityGML.
Despite these advancements, integrated management of large-scale vector data remains inefficient due to excessive memory usage and resource consumption. Additionally, CityGML 3.0 schema validation continues to impose significant computational overhead, limiting its applicability in large-scale urban modeling. To address these issues, this study proposes an optimized approach utilizing parallel processing and efficient data validation techniques with the CityGML Validator to improve processing efficiency and scalability [33].

3. Proposed Procedure

3.1. Vector-Based 3D Object Generation Procedure

This study proposes a procedure for generating LoD1-level 3D objects by combining 2D spatial information data with high-resolution Digital Elevation Model (DEM) data. This procedure is designed to optimize 3D object generation processing speed, visual quality, and vector-based data processing, as shown in Figure 1.

3.1.1. Two-Dimensional to Three-Dimensional Spatial Data Collection

Shapefile format 2D spatial information data serve as the primary source for this study, shown in Table 2, Table 3, Table 4 and Table 5, encompassing urban elements such as buildings, roads, and administrative districts. These data provide detailed attributes including object boundaries, areas, and types, forming the foundation for 3D object generation. Additionally, high-resolution DEM data are utilized for assigning Z-values (elevation) to 3D objects.
The tabular formats, as shown in the tables, are not arbitrary but reflect standardized spatial data schemas defined by national or municipal geospatial authorities in Korea. These schemas ensure consistency, interoperability, and ease of parsing when converting 2D data into 3D object models.
Table 2 represents the building layer, containing detailed attributes such as unique building identifiers (BUL_MAN_NO, SIG_CD), physical characteristics (GRO_FLO_CO, UND_FLO_CO), and semantic information (BULD_NM_DC, BDTYP_CD). These attributes are used not only to classify and extrude buildings in 3D but also to assign texture and height data.
Table 3 shows the road data schema, which is essential for establishing road networks and the relationships between buildings and streets. Attributes such as ROAD_BT (road breadth) and ROAD_LT (road length) are utilized in the geometric construction and rendering of road structures.
Table 4 outlines administrative districts, enabling spatial filtering and analysis based on regions such as neighborhoods or municipalities, which is especially important in urban modeling and jurisdictional zoning.
Table 5 defines the Digital Elevation Model (DEM) metadata. Parameters such as spatial extent (X_Min, Y_Max) and resolution (Pixel_Width, Pixel_Height) are critical for assigning Z-values (elevation) to flat 2D geometries, effectively converting them into 3D models with topographic realism.

3.1.2. Preprocessing of 3D Geospatial Data

In this study, the collected 2D spatial information data and Digital Elevation Model (DEM) data underwent a series of preprocessing steps to ensure consistency across all datasets. The first step involved the conversion and unification of coordinate systems to establish a consistent spatial reference framework.
Initially, the coordinate systems of each dataset, including buildings, administrative districts, road networks, and DEM, were identified and documented. As these datasets originated from different sources, they often used differing coordinate reference systems. To resolve this, all data were converted into a common reference system, EPSG:5186, as specified in Table 6.
The transformation process was conducted using QGIS 3.34.7 and the GDAL library, ensuring reliable and standardized projection. To minimize potential accuracy loss during this conversion, bilinear interpolation was applied, particularly for raster data such as DEM.
Following the coordinate system unification, additional preprocessing was conducted to improve the geometric accuracy and efficiency of the spatial data. This phase included noise removal and data refinement across multiple layers, including buildings, roads, administrative districts, and DEM, as shown in Figure 2.
For building data, refinement involved detecting and correcting incomplete boundaries such as unclosed polygons and inconsistent orientation (e.g., mixed clockwise and counterclockwise directions). Extremely small objects were either removed or retained selectively, depending on their relevance to the analysis scope.
In the case of road data, disconnected road segments were linked to ensure network continuity, while redundant or overlapping features were removed to clean the dataset.
For administrative district boundaries, similar geometric corrections were applied, resolving issues related to unclosed polygons and inconsistent orientation to maintain spatial integrity and topological accuracy.
The DEM data also underwent a noise removal step, in which boundary elevation values were adjusted to align harmoniously with surrounding cells. This ensured continuity in elevation representation, which is crucial for accurate Z-value assignment in 3D modeling.
To enhance processing efficiency, the datasets were then clipped and reduced to focus on the defined study area. Specifically, a circular region with a 1 km radius centered around Suseo Station in Seoul, Republic of Korea, was extracted from the full dataset. After clipping, the Douglas–Peucker sampling algorithm was applied to simplify the geometries. This technique reduces the number of coordinate points while preserving overall shape fidelity, thereby optimizing memory usage and improving processing speed during 3D object generation.
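As an illustration of this simplification step, the Douglas–Peucker recursion can be sketched in pure Python (a hypothetical minimal implementation for 2D polylines; the actual pipeline presumably relies on a GIS library's built-in simplifier):

```python
import math

def _perp_dist(pt, a, b):
    """Perpendicular distance from pt to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:  # degenerate chord: fall back to point distance
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping only points farther than epsilon
    from the chord between the current segment's endpoints."""
    if len(points) < 3:
        return list(points)
    # Find the point with the maximum distance from the end-to-end chord.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:  # keep the farthest point and recurse on both halves
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]  # all intermediate points are dropped
```

A tolerance of roughly the DEM pixel size would be a natural starting point, trading shape fidelity for vertex count.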

3.1.3. Generation of Vector-Based 3D Objects

This study generates LoD1-level 3D objects through triangular mesh generation and elevation value assignment based on 2D spatial information data. This process focuses on representing urban terrain and structures in 3D while ensuring mesh structure precision and elevation value accuracy.
Triangular Mesh Generation: this stage constructs the geometric structure of objects based on 2D vector data, utilizing a constrained Delaunay triangulation algorithm for precise and stable triangular mesh generation, as shown in Figure 3.
To ensure spatial accuracy and realism in the 3D modeling process, several geometric principles were applied during triangular mesh generation. First, terrain continuity was preserved by maintaining seamless connections between adjacent terrain surfaces and spatial objects. This approach prevents spatial discontinuities and geometric distortion during rendering and analysis.
In addition, the mesh division density was dynamically adjusted in areas with significant elevation variation to accurately reflect terrain curvature. This adaptive meshing strategy helps capture subtle elevation changes and ensures topographic fidelity.
Furthermore, in urban regions containing artificial structures such as buildings and roads, triangular meshes were generated while maintaining geometric consistency with existing spatial data. This ensured that the 3D representations of urban elements remained structurally coherent and aligned with the original 2D datasets.
Following the generation of the triangular mesh, elevation values (Z-values) were assigned to each mesh vertex using numerical terrain data derived from the DEM. This mapping process allowed the 3D objects to accurately represent terrain height variations, contributing to a more realistic three-dimensional representation of the spatial environment. To enhance the visual and structural continuity of the terrain, interpolation techniques were applied within each triangular region, resulting in smoother and more natural elevation transitions across the surface.
In cases where the resolution of the DEM data was lower than that of the generated triangular mesh, bilinear interpolation was employed to estimate intermediate elevation values. This technique ensured that the elevation data were seamlessly integrated into the finer mesh structure by calculating interpolated heights based on the surrounding four DEM grid points. As a result, discrepancies caused by resolution differences were effectively minimized, thereby improving the overall precision and continuity of the terrain representation within the 3D model, as shown in Figure 4.
Bilinear interpolation estimates the elevation f(x, y) at an arbitrary point (x, y) from the four surrounding DEM grid points (x_1, y_1), (x_2, y_1), (x_1, y_2), and (x_2, y_2), each with a known elevation value. With α = (x - x_1)/(x_2 - x_1) and β = (y - y_1)/(y_2 - y_1), the interpolation formula is expressed as follows:
f(x, y) = f(x_1, y_1)·(1 - α)(1 - β) + f(x_2, y_1)·α(1 - β) + f(x_1, y_2)·(1 - α)β + f(x_2, y_2)·αβ
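As an illustration, this interpolation can be sketched in Python (a hypothetical helper, with α and β taken as the normalized offsets of the query point inside the DEM cell):

```python
def bilinear(x, y, x1, y1, x2, y2, q11, q21, q12, q22):
    """Bilinear interpolation of elevation at (x, y) inside the DEM cell
    bounded by (x1, y1) and (x2, y2); qij is the known elevation at (xi, yj)."""
    alpha = (x - x1) / (x2 - x1)  # normalized x offset within the cell
    beta = (y - y1) / (y2 - y1)   # normalized y offset within the cell
    return (q11 * (1 - alpha) * (1 - beta)
            + q21 * alpha * (1 - beta)
            + q12 * (1 - alpha) * beta
            + q22 * alpha * beta)
```

At the cell center the result is simply the mean of the four corner elevations, and at any corner it reproduces that corner's value exactly.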
To ensure structural accuracy and semantic consistency, elevation assignment was adjusted based on object type. For fixed structures such as buildings, a constant Z-value was applied to each object in order to prevent distortion caused by terrain undulations.
In contrast, continuous elements such as roads and natural terrain were processed using dynamic interpolation based on DEM data. This allowed these features to more accurately reflect the underlying topography, resulting in smoother and more realistic elevation transitions throughout the model.
By applying this object-specific elevation adjustment strategy, the proposed 3D object generation procedure effectively balanced geometric precision and visual coherence, while also enhancing large-scale data processing efficiency and reducing memory consumption during rendering and analysis.
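To make the object-specific strategy concrete, a minimal LoD1 extrusion sketch is shown below (hypothetical helper names, not the paper's implementation; the constant base Z per building follows the text, while the height would come from attributes such as floor counts):

```python
def extrude_lod1(footprint, base_z, height):
    """Extrude a 2D building footprint into LoD1 bottom and top rings.

    footprint: list of (x, y) vertices of the building polygon;
    base_z: one constant terrain elevation per building, sampled from
    the DEM (prevents distortion from terrain undulation, per the text);
    height: building height, e.g. derived from storey count attributes.
    """
    bottom = [(x, y, base_z) for x, y in footprint]
    top = [(x, y, base_z + height) for x, y in footprint]
    return bottom, top
```

Wall faces would then connect each bottom edge to the corresponding top edge, yielding the prism-like solids that characterize LoD1.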

3.2. Texture Data Acquisition and Preprocessing

The proposed technique aims to improve texture accuracy by preprocessing the collected publicly available data using attribute information. The overall workflow consists of three main stages: data collection, texture conversion, and data validation and standardization, as shown in Table 7 and Table 8.
Additionally, all data are converted into a consistent coordinate system (EPSG:5186) to maintain spatial data coherence and minimize potential errors in subsequent processing.

3.2.1. Texture Acquisition

The first stage of the workflow involves collecting texture data from various public data sources. These data primarily consist of orthoimages, satellite imagery, and aerial photographs provided by national operating platforms. Data are collected in various formats considering scalability and compatibility, where map sheet numbers can be interpreted to define precise geographical ranges of each tile, allowing texture alignment based on converted coordinates.

3.2.2. Texture Processing and Transformation

Collected texture data are efficiently converted using attribute information. For orthoimages, texture data position and size are precisely adjusted by referencing map sheet numbers and coordinate systems. During this process, merged texture-based UV mapping and geographic coordinate assignment techniques are applied to ensure accurate mapping of texture data to 3D objects. Converted textures are optimized for quality by reviewing resolution, contrast, and color consistency, while texture boundaries are adjusted to reduce visual discontinuity. Detailed explanations are provided in Section 3.3 and Section 3.4.

3.2.3. Data Verification and Standardization

In the final stage, GIS software (e.g., QGIS 3.34.7) is utilized to validate alignment between converted textures and 3D models. Particular attention is paid to verifying precise texture placement and evaluating compatibility with CityGML components. This process focuses on maintaining consistency between texture data and 3D models while ensuring data interoperability through standard format compliance. Following validation, all data are finalized to meet CityGML 3.0 standards. Detailed explanations are provided in Section 3.5.

3.3. Texture Mapping Process

The proposed algorithm focuses on mapping texture data to 3D models with high accuracy while maximizing visual quality through optimization and lightweighting. This process consists of five main algorithms: geographic coordinate assignment, merged texture generation, UV coordinate extraction, texture transformation, and visualization verification (see Figure 5).

3.3.1. Geographic Coordinate Assignment Algorithm

The geographic coordinate assignment algorithm includes the process of calculating geographical boundaries based on map sheet numbers embedded in TIF files and converting them into GeoTIFF format. This process is a crucial step for precise texture mapping to 3D models, aiming to ensure geographic information accuracy and visual consistency. The main process consists of five stages: map sheet number interpretation, coordinate system conversion, pixel resolution calculation, GeoTIFF data generation, and storage.
  • Map Sheet Number Interpretation: Map sheet numbers, structured as hierarchical strings, provide critical information for determining the geographical boundaries of texture data. By analyzing these numbers, latitude and longitude are calculated to define precise tile location and size. The first two digits represent the integer value of latitude, with subsequent digits defining detailed coordinates through scale units of 0.25 and 0.025 degrees for secondary and tertiary numbers, respectively.
    longitude = (120 + map sheet number[2]) + 0.25 × (second-order longitude number) + 0.025 × (third-order longitude number),
    latitude = map sheet number[0:2] + 0.25 × (second-order latitude number) + 0.025 × (third-order latitude number)
  • Coordinate System Transformation: The initial latitude/longitude (x_Geo, y_Geo) calculated from map sheet numbers are defined in WGS84 or another geographic coordinate system. These are converted into a target coordinate system (x_EPSG, y_EPSG), such as EPSG:5186, to adjust texture positions for 3D model compatibility. The conversion is performed using the ProjNet library, which supports various EPSG standards [34].
    (x_EPSG, y_EPSG) = Transform_EPSG(x_Geo, y_Geo)
  • Pixel-Level Resolution Calculation: Following coordinate system conversion, pixel size is calculated to ensure consistency with geographical coordinates. This information is essential for converting texture data to GeoTIFF format. Resolution is calculated from the converted minimum/maximum coordinate values, with the pixel scale defined by the overall texture extent and image resolution.
    x_scale = (x_max_EPSG - x_min_EPSG) / PixelWidth_Image
    y_scale = (y_max_EPSG - y_min_EPSG) / PixelHeight_Image
  • Creation of GeoTIFF Data: TIF file data are converted into GeoTIFF format based on the calculated coordinate information. Coordinate information is assigned to each pixel, with conversion results stored in GeoTIFF files.
    x_Pixel_EPSG = x_min_EPSG + i · x_scale
    y_Pixel_EPSG = y_max_EPSG - j · y_scale
    Pixel (i, j) coordinates are processed and stored in the GeoTransform array during generation.
  • GeoTIFF Export: Finally, data are stored in GeoTIFF format using the GDAL library. The storage process includes GeoTransform values and coordinate system information, producing files compatible with software such as QGIS 3.34.7 (see Figure 6).
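The coordinate-assignment stages above can be sketched without GDAL as follows (hypothetical helper functions; the map-sheet digit layout is an assumption based on the description in this section, and the GeoTransform tuple follows the conventional GDAL north-up layout):

```python
def sheet_lower_left(sheet_no, lon2, lat2, lon3, lat3):
    """Lower-left (lon, lat) of a tile from its map sheet number.

    Assumed layout per the text: sheet_no[0:2] is the integer latitude;
    '12' + sheet_no[2] forms the integer longitude; lon2/lat2 and
    lon3/lat3 are second- and third-order sub-tile indices, applied
    at 0.25 deg and 0.025 deg steps, respectively.
    """
    lon = int("12" + sheet_no[2]) + 0.25 * lon2 + 0.025 * lon3
    lat = int(sheet_no[0:2]) + 0.25 * lat2 + 0.025 * lat3
    return lon, lat

def geotransform(x_min, y_max, x_max, y_min, width_px, height_px):
    """GDAL-style GeoTransform for a north-up image: pixel (i, j) maps to
    (x_min + i * x_scale, y_max - j * y_scale); the y step is negative."""
    x_scale = (x_max - x_min) / width_px
    y_scale = (y_max - y_min) / height_px
    return (x_min, x_scale, 0.0, y_max, 0.0, -y_scale)
```

In the actual pipeline the lon/lat corner would first pass through the ProjNet transformation to EPSG:5186 before the GeoTransform is written.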

3.3.2. Texture Merging

The texture merging algorithm involves combining multiple GeoTIFF tile files into a single large-scale GeoTIFF. This process focuses on maintaining coordinate system consistency while aligning pixel data and correctly positioning them within the merged dataset. The process consists of five main stages: overall boundary calculation, merged dataset creation, tile data reading and merging, GeoTransform definition, and merged GeoTIFF generation.
Overall Boundary Calculation: the geographical boundaries of each GeoTIFF tile are analyzed to determine the total size and boundaries of the merged dataset.
The GeoTIFF files’ GeoTransform structure is as follows in Table 9:
Tile minimum/maximum boundaries are calculated using the GeoTransform values:
x_min = GeoTransform[0]
y_max = GeoTransform[3]
x_max = x_min + GeoTransform[1] × RasterWidth
y_min = y_max + GeoTransform[5] × RasterHeight
The overall extent (overallExtent) is then calculated by comparing the minimum and maximum coordinates of all tiles:
overallExtent[0] = min(min X values)
overallExtent[1] = min(min Y values)
overallExtent[2] = max(max X values)
overallExtent[3] = max(max Y values)
  • Merged Dataset Creation: The merged GeoTIFF dataset is generated based on the overall boundary. Pixel size is calculated using GeoTransform values, and a memory-based dataset is created using the GDAL library, initializing resolution, data type, and number of bands.
totalWidth = (overallExtent[2] - overallExtent[0]) / GeoTransform[1]
totalHeight = (overallExtent[3] - overallExtent[1]) / |GeoTransform[5]|
Tile Data Reading and Merging: the GeoTransform values of each tile are read to calculate its position in the merged dataset.
x_offset = (x_tile_min - overallExtent[0]) / GeoTransform[1]
y_offset = (overallExtent[3] - y_tile_max) / |GeoTransform[5]|
Pixel data from each tile are read and written to the corresponding location in the merged dataset. Overlapping data between tiles are handled through a simple overwriting approach by subsequent tiles.
  • GeoTransform Definition: the GeoTransform array for the merged dataset is defined based on overall boundary values and pixel sizes.
  • Merged GeoTIFF Generation: The merged data are saved as a final GeoTIFF file, including metadata (coordinate system, boundaries, resolution, etc.). The resulting GeoTIFF file is compatible with software such as QGIS 3.34.7.
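The boundary and offset arithmetic of the merging stages can be sketched as follows (hypothetical helpers; the real pipeline reads and writes pixel data through GDAL):

```python
def overall_extent(tile_extents):
    """Union bounding box of tiles given as (x_min, y_min, x_max, y_max)."""
    return (min(e[0] for e in tile_extents),
            min(e[1] for e in tile_extents),
            max(e[2] for e in tile_extents),
            max(e[3] for e in tile_extents))

def tile_offset(tile_x_min, tile_y_max, extent, x_scale, y_scale):
    """Pixel offset of a tile inside the merged raster.

    North-up convention: the raster origin is the top-left corner, so the
    row offset grows downward from the overall maximum Y. x_scale and
    y_scale are positive pixel sizes.
    """
    x_off = round((tile_x_min - extent[0]) / x_scale)
    y_off = round((extent[3] - tile_y_max) / y_scale)
    return x_off, y_off
```

Each tile's pixel block is then written at its (x_off, y_off) position; overlaps are resolved by the simple last-writer-wins overwrite described above.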

3.3.3. UV Coordinate Calculation

The coordinate system used in 3D models (world or local coordinates) differs from the 2D coordinate system used in GeoTIFF texture images. Therefore, converting 3D coordinates to the 2D UV coordinate system is necessary for proper texture mapping. The primary objective is to normalize each vertex position into the [0, 1] range and return the resulting coordinates:
U = (x - x_min) / (x_max - x_min)
V = (y - y_min) / (y_max - y_min)
The UV coordinate transformation algorithm converts 3D model coordinates into GeoTIFF texture coordinates, optimizing texture mapping and enhancing mapping accuracy.
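A minimal sketch of this normalization (hypothetical helper; note that image V axes are often flipped top-to-bottom at export time, a detail omitted here):

```python
def uv_from_xy(x, y, x_min, x_max, y_min, y_max):
    """Normalize a vertex's planar (x, y) into [0, 1] UV texture space,
    where the bounds are the texture's geographic extent."""
    u = (x - x_min) / (x_max - x_min)
    v = (y - y_min) / (y_max - y_min)
    return u, v
```

Once every vertex carries a (u, v) pair, the GeoTIFF's geographic metadata is no longer needed at render time, which is why it can be dropped during compression (Section 3.3.5).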

3.3.4. Texture Transformation

The texture transformation algorithm emphasizes maintaining visual quality while optimizing the performance of the 3D model. After extracting UV coordinates, it encompasses crucial processes including texture image resolution optimization, mipmapping (multi-resolution texture) generation, compression, and storage.
  • Resolution Adjustment: resolution adjustment involves modifying the original texture size to achieve optimal resolution.
  • Since UV mapping denotes the positional ratio of each pixel in the image, having sufficient resolution for appropriate color information mapping is advantageous for optimization.
  • However, texture sizes used in 3D models can impact performance and quality if they are either too large or too small, necessitating appropriate resolution settings.
  • The texture size is adjusted by applying a uniform scaling ratio S to the original width and height, balancing visual quality with computational efficiency.
W_new = W_origin × S
H_new = H_origin × S
S represents the scaling ratio (0 < S ≤ 1, e.g., S = 0.5 indicates 50% reduction).
When reducing texture resolution, interpolation techniques must be employed to minimize quality degradation.
Multi-Resolution Texture: Mipmapping is a technique that optimizes GPU performance by pre-generating multiple resolution versions of the original texture. This approach enables the application of low-resolution textures to distant objects while maintaining high-resolution textures for nearby objects. The mipmapping process generates textures at half-size for each successive level.
W_i = W_(i−1) / 2,  H_i = H_(i−1) / 2
Each mipmap is created at half the resolution of its predecessor, maintaining smooth downscaling effects through interpolation techniques. For instance, given the original image size of 1024 × 1024, mipmap levels are generated as follows in Table 10.
The generated mipmaps are stored in GPU memory. During rendering and visualization services, the camera can select appropriate texture levels based on depth (pixel depth).
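The halving recurrence above produces a fixed chain of levels. A minimal sketch (with a hypothetical `mipmap_levels` name) that enumerates them:

```python
def mipmap_levels(w, h):
    """List the (width, height) of each mipmap level, halving until 1x1.

    Level 0 is the original texture; each level i is half the size of
    level i-1, clamped at 1 pixel per axis.
    """
    levels = [(w, h)]
    while w > 1 or h > 1:
        w, h = max(1, w // 2), max(1, h // 2)
        levels.append((w, h))
    return levels
```

For a 1024 × 1024 original, this yields 11 levels: 1024², 512², 256², down to 1 × 1.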

3.3.5. Texture Compression and Storage

Optimized textures are stored using compression formats to reduce GPU memory usage. UV coordinates represent the correspondence between each vertex of the 3D model and its location in the 2D texture image. Consequently, once UV coordinates are properly generated, the elimination of geographical coordinates from GeoTIFF does not create issues. The commonly used formats are as follows in Table 11.
The texture compression process involves applying appropriate formats considering the original texture’s resolution, quality requirements, and intended use. The compressed textures undergo quality verification and mipmapping validation in the 3D rendering environment. The final storage format is determined based on the comparative analysis of GPU loading speed and quality metrics.

3.4. Algorithms and Optimization Procedures for Efficient Processing of Vector Data

The proposed technique aims to enhance visualization performance through optimization and transformation of spatial data utilizing a vector-based processing approach. The complete workflow comprises four primary stages: data simplification, spatial indexing, interpolation processing, and optimized data generation and validation (see Figure 7).

3.4.1. Data Simplification

To effectively utilize vector data in 3D models, a technique that reduces computational complexity while maintaining the original data’s integrity is essential. This is achieved by implementing Level of Detail (LOD) techniques to adjust data granularity and improve rendering performance. The Douglas–Peucker algorithm is employed during the simplification process to optimize linear vector data. The simplification process is executed by calculating the perpendicular distance between a curve point and the reference line using the following mathematical approach:
d(P_i, P_0 P_n) = |(x_LOD,n − x_LOD,0)(y_LOD,0 − y_Orig,i) − (x_LOD,0 − x_Orig,i)(y_LOD,n − y_LOD,0)| / √((x_LOD,n − x_LOD,0)² + (y_LOD,n − y_LOD,0)²)
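The distance test and the recursive Douglas–Peucker split can be sketched as follows. This is an illustrative reimplementation (with hypothetical names), not the study's code; libraries such as Shapely expose the same algorithm as `simplify()`.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b
    (the d(P_i, P_0 P_n) term of the Douglas-Peucker criterion)."""
    (x, y), (x0, y0), (xn, yn) = p, a, b
    num = abs((xn - x0) * (y0 - y) - (x0 - x) * (yn - y0))
    den = math.hypot(xn - x0, yn - y0)
    return num / den if den else math.hypot(x - x0, y - y0)

def douglas_peucker(pts, eps):
    """Drop points closer than eps to the chord; recurse on both halves
    around the farthest point otherwise."""
    if len(pts) < 3:
        return list(pts)
    dmax, idx = max((perp_dist(pts[i], pts[0], pts[-1]), i)
                    for i in range(1, len(pts) - 1))
    if dmax <= eps:
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[:idx + 1], eps)
    right = douglas_peucker(pts[idx:], eps)
    return left[:-1] + right
```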

3.4.2. Spatial Indexing

In the next step, spatial indexing techniques are applied so that the expanded data can be searched quickly. We use an OcTree, which recursively partitions 3D space into eight octants: as the subdivision depth d increases, the cells become smaller, allowing the region containing the required information to be located rapidly.
CellSize_Octree = (x_max^EPSG − x_min^EPSG) / 2^d
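The cell-size formula above is straightforward to compute; a small sketch (hypothetical helper names) showing the leaf size at a given depth and the cell index containing a coordinate:

```python
def octree_cell_size(x_min, x_max, depth):
    """Edge length of a leaf cell after `depth` subdivisions: each level
    splits every cell in half along each axis, giving 2^d cells per axis."""
    return (x_max - x_min) / (2 ** depth)

def cell_index(x, x_min, cell_size):
    """Index along one axis of the leaf cell containing coordinate x."""
    return int((x - x_min) // cell_size)
```

For a 1024 m extent at depth 3, leaf cells are 128 m wide, and x = 300 m falls in cell 2.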

3.4.3. Interpolation Processing

The interpolation processing phase focuses on restoring missing data values and maintaining data connectivity. Linear interpolation techniques are employed to fill missing elevation values and color information based on adjacent data averages. Digital Elevation Models (DEMs) and orthoimages are utilized to create seamless connections between terrain and building boundaries.
V(x, y) = Σ_i (B_i.V × B_i.W),  B_n.W = B_n.D / Σ_i (B_i.D)
Specifically, the Wise Interpolated Texture (WIT) technique was applied to minimize texture discontinuities and enhance the visual quality of interpolated data. This approach reduces color differences and boundary mismatches that may occur when combining multiple textures, optimizing the process to maintain a more uniform and high-resolution texture.
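The weighted blend above can be sketched as a normalized weighted average. The snippet below is illustrative only: the paper does not specify how the per-neighbor term D is derived, so treating it as a generic weight (inverse distance being a common choice for IDW-style blending) is an assumption.

```python
def interpolate_value(neighbors):
    """Normalized weighted average of neighboring samples.

    `neighbors` is a list of (value, d) pairs; each weight is d divided
    by the sum of all d (the B_n.W = B_n.D / sum(B_i.D) term), so the
    weights sum to 1.  Taking d as inverse distance gives IDW-style
    blending -- an assumption, as the source leaves D unspecified.
    """
    total = sum(d for _, d in neighbors)
    return sum(v * d / total for v, d in neighbors)
```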

3.4.4. Data Validation and Standardization

The final stage involves validating and standardizing the generated data to ensure accuracy and consistency. The process verifies compliance with CityGML 3.0 standards to guarantee interoperability while comparing vector and raster data to evaluate speed, capacity, and accuracy, thereby demonstrating the advantages of the vector-based approach. The quality assessment of interpolated textures using the WIT technique confirmed improvements in visual continuity and boundary processing performance compared to traditional raster-based texture mapping methods, which rely on fixed-resolution pixel grids. These conventional methods often suffer from aliasing artifacts, resolution-dependent blurring, and distortions when applied to curved or non-rectangular surfaces.
Experiments conducted using OpenGL 2.21.0 and CesiumJS 1.123 demonstrated that the WIT (Wise Interpolated Texture) implementation maintained rendering quality while reducing texture discontinuities and enhancing visual coherence.

3.5. CityGML Transformation and Schema Validation

LoD1-level 3D objects generated in this study are converted into CityGML 3.0 standard format, with schema validation performed to ensure data consistency and interoperability. CityGML, an international standard for storing structured 3D city model data, is designed to represent various spatial elements including buildings, roads, and terrain. This study automated the CityGML conversion process and implemented validation procedures to ensure compliance with the CityGML 3.0 schema (Table 12).

3.5.1. CityGML Transformation

The CityGML transformation, essential for structured representation of 3D objects, converts LoD1-level 3D models to CityGML 3.0 format through the following steps:
  • Attribute mapping.
  • Spatial coordinate transformation.
  • Application of CityGML structure for terrain and building objects.
  • CityGML file generation and XML structuring.
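As a sketch of the XML-structuring step, the snippet below assembles a heavily simplified Building element with Python's standard `xml.etree.ElementTree`. The namespace URIs follow the usual CityGML 3.0 / GML 3.2 conventions, the element layout is illustrative only (a conformant LoD1 solid wraps its geometry in `gml:Solid`/surface structures), and `lod1_building` is a hypothetical helper.

```python
import xml.etree.ElementTree as ET

# Namespace URIs assumed from CityGML 3.0 / GML 3.2 conventions.
NS = {
    "core": "http://www.opengis.net/citygml/3.0",
    "bldg": "http://www.opengis.net/citygml/building/3.0",
    "gml": "http://www.opengis.net/gml/3.2",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def lod1_building(gml_id, footprint, height):
    """Wrap a 2D footprint and an extrusion height in a minimal
    CityGML-style Building element (illustrative structure only)."""
    bldg = ET.Element(f"{{{NS['bldg']}}}Building",
                      {f"{{{NS['gml']}}}id": gml_id})
    solid = ET.SubElement(bldg, f"{{{NS['bldg']}}}lod1Solid")
    pos = ET.SubElement(solid, f"{{{NS['gml']}}}posList")
    pos.text = " ".join(f"{x} {y} {height}" for x, y in footprint)
    return bldg
```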

3.5.2. CityGML Schema Validation

Following the CityGML transformation, it is essential to validate whether the transformed data comply with the CityGML 3.0 standard schema. In this study, we employed the CityGML Validator tool to automatically verify the consistency of the transformed data.
As shown in Table 13, it was confirmed that the generated LoD1 objects and texture information were accurately converted in accordance with the CityGML 3.0 standard and that each object and attribute was properly defined. When validation of a CityGML file fails, the corresponding error information is reported in the detailed items.
The validation process consists of three main components:
  • Since CityGML 3.0 utilizes XML-based data formats, it is crucial to verify that the transformed XML files comply with CityGML 3.0 XSD (schema definition) specifications.
  • This study validated the transformed CityGML data against the CityGML 3.0 XSD files to ensure there were no schema violations.
Key validation criteria include the following:
  • Verification of attribute definitions (such as gml:id and gml:posList) in accordance with the CityGML schema.
  • Examination of correct XML structure implementation for objects including buildings, transportation, and ReliefFeature.
  • Validation of coordinate and geometric structure (gml:Polygon, gml:Surface) compliance with schema requirements.
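Lightweight structural checks in the spirit of the criteria above can be sketched with the standard library; full conformance checking would validate against the CityGML 3.0 XSD files with a schema-aware parser. The `check_poslist` helper below is hypothetical and checks only two of the listed criteria: presence of a `gml:id` on the root element, and that every `gml:posList` holds 3D coordinates (a multiple of three numbers).

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml/3.2"

def check_poslist(xml_text):
    """Return a list of structural errors (empty list means both checks pass)."""
    root = ET.fromstring(xml_text)
    errors = []
    for el in root.iter():
        if el.tag == f"{{{GML}}}posList":
            n = len(el.text.split())
            if n % 3 != 0:
                errors.append(f"posList has {n} values; not 3D coordinates")
    if f"{{{GML}}}id" not in root.attrib:
        errors.append("root element missing gml:id")
    return errors
```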
To verify the geometric consistency of transformed 3D objects, we employed Topology Checker and OGR Geometry Validator to examine the geometric structure of each object.
The primary validation criteria include the following:
  • Verification of closed surface conditions for building objects.
  • Assessment of correct TIN structure implementation in terrain mesh.
  • Inspection for duplicate gml:id or reference errors.
Using the CityGML Validator, we examined the proper definition of object attributes and reviewed for the inclusion of undefined properties.
Key error validation criteria include the following:
  • Verification of proper BuildingPart and BuildingInstallation relationships within building objects.
  • Assessment of clear definitions for TrafficArea and AuxiliaryTrafficArea in road objects.
  • Validation of elevation value representation (Z-values in gml:posList).
The verified CityGML file is shown in Figure 8.

4. Experimental Results and Discussion

4.1. Analysis of Vector-Based Processing Results

The first evaluation measures the average query response time per request for raster-based and vector-based data processing. The second evaluation assesses the memory consumption required for each approach.
In the raster-based approach, the average query response time per request consists of the following:
  • 2 s for initial loading.
  • 0.2 s for searching.
  • Additional time required due to memory allocation overhead.
In the vector-based approach, the average query response time per request is the following:
  • 0.23 s for searching.
  • Additional time required due to memory allocation overhead, though significantly lower than in the raster-based technique.
Regarding memory consumption, the raster-based approach requires 1.3 GB, including the image loader, while the vector-based approach requires 38 MB, including the database connector.
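A measurement harness of this kind separates the one-time initial load (raster decode or database connect) from the per-query search time. The sketch below is illustrative of how such figures could be collected, not the study's benchmark code; the `benchmark` name and its signature are assumptions.

```python
import time

def benchmark(query_fn, requests, warm_loader=None):
    """Measure one-time load time and average per-request search time.

    warm_loader, if given, is called once and timed separately,
    mirroring the loading/searching breakdown reported above.
    """
    load_time = 0.0
    if warm_loader is not None:
        t0 = time.perf_counter()
        warm_loader()
        load_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    for req in requests:
        query_fn(req)
    search_time = (time.perf_counter() - t0) / max(1, len(requests))
    return load_time, search_time
```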
The performance of conventional raster-based models and vector-based interpolation models for 3D terrain information modeling is analyzed in an OpenGL 2.21.0 environment, as detailed in the following discussion.
A comparative analysis of the two data processing approaches revealed no significant differences in the visualized model performance, as illustrated in Figure 9.
In addition to the previously mentioned query response time and memory consumption, differences in data structure also influenced storage requirements.
The detailed comparison of the two approaches is presented below in Table 14.
The vector-based approach provides numerous advantages over the raster-based technique in terms of manageability and performance. It is considered a suitable method for cases where efficient resource utilization is required.
To further verify the trends observed in the initial evaluation, two additional experiments were conducted using progressively more complex datasets. These tests aimed to assess the scalability of both raster-based and vector-based approaches in terms of query response time, memory consumption, and storage efficiency.
In the second experiment, a dataset with higher resolution and increased spatial complexity was used. The raster-based approach showed a proportional rise in processing time, with an initial loading time of 2.8 s and a retrieval time of 0.27 s; memory consumption increased to 1.5 GB due to additional image data processing. Meanwhile, the vector-based approach remained efficient: retrieval time slightly improved to 0.21 s due to optimized indexing, and memory usage increased modestly to 45 MB.
The third experiment further escalated the dataset complexity, incorporating multi-layered spatial information. The raster-based model continued to degrade, with loading time increasing to 3.2 s and retrieval time reaching 0.31 s; memory consumption grew significantly to 1.8 GB, reflecting the limitations of fixed-size image storage. In contrast, the vector-based model maintained a 0.22 s retrieval time and 50 MB memory usage, demonstrating its ability to handle more complex datasets efficiently (Table 15 and Table 16).
Across all tests, the vector-based approach consistently outperformed the raster-based method in terms of memory efficiency and scalability. While both methods produced similar visualization results, the vector-based technique proved superior in managing large datasets with minimal performance loss. To further validate the effectiveness of the vector-based approach, spatial accuracy and shape consistency were analyzed through comparative evaluations between the 2D spatial data and the generated 3D objects. The evaluation focused on measuring coordinate consistency, placement accuracy, and shape similarity to ensure that the transformation process maintained high fidelity to the original 2D data. The results, summarized in Table 17, confirm that the vector-based approach achieves an average shape similarity index (SSI) of 0.92 and maintains positional accuracy within an acceptable error range of 80 m, demonstrating its reliability in spatial data processing.
These findings reinforce the advantages of the vector-based technique, confirming not only its computational efficiency but also its ability to maintain high spatial accuracy when transforming 2D data into 3D objects. This makes it an optimal solution for large-scale spatial data processing applications where both performance and accuracy are critical.

4.2. Visual Quality Comparison Before and After Texture Mapping

While the vector-based processing approach exhibits superior performance in terms of efficiency and scalability, differences in texture mapping quality must also be considered. To evaluate these differences, a comparative analysis was conducted between Bing Maps, V-World, and the WIT method through visual inspection of rendered images (see Figure 10). Bing Maps and V-World are leading geospatial mapping platforms that employ cutting-edge technologies to ensure high-quality texture rendering and seamless user experience. Bing Maps utilizes multi-resolution texture mapping combined with progressive refinement techniques, allowing for smooth transitions across different zoom levels. Additionally, it employs hierarchical image tiling and lossless compression algorithms, ensuring efficient texture storage and retrieval. These methods enable Bing Maps to provide high-fidelity geospatial visualization with minimal performance overhead. V-World, as a nationally backed geospatial platform, integrates Cesium.js, a widely recognized WebGL-based 3D rendering engine, to facilitate dynamic terrain visualization. This platform utilizes LOD (Level of Detail) optimization, photorealistic texture blending, and occlusion culling techniques, ensuring that high-resolution imagery is efficiently rendered without unnecessary computational load. Its terrain-adaptive rendering system dynamically adjusts texture resolution based on real-time user interaction, optimizing both performance and accuracy. Given the sophistication of these industry-standard approaches, the proposed WIT technique was evaluated against them to determine its potential improvements in texture optimization and spatial accuracy.
The results indicate that vector-based processing significantly reduces texture inconsistencies and enhances visual coherence. By applying the WIT technique, high-resolution textures were optimized and interpolated, leading to smoother transitions and sharper details compared to conventional methods. Additionally, memory usage was reduced, and overall performance improved (Table 18).
The vector-based approach offers several advantages in real-time rendering and large-scale GIS data processing environments:
  • Preserved quality at high zoom levels without pixelation.
  • Dynamic resolution adjustment, enabling optimized rendering based on system resources.
  • Enhanced location-based accuracy through GPS performance optimization.
These findings confirm that the vector-based WIT technique outperforms traditional raster-based mapping services, making it more suitable for real-time GIS applications and high-performance 3D visualization. In addition to improvements in rendering performance, an objective texture quality evaluation was conducted to compare WIT against industry-standard mapping services, Bing Maps and V-World. To ensure the reliability of the results, the evaluation was performed across ten different geographic locations, including Suseo-dong (Seoul), Jungang-dong (Cheonan), and Myeongji-dong (Busan). These locations were selected to represent diverse urban and suburban environments, ensuring that the findings are not biased by a single dataset.
The evaluation focused on two key metrics (Table 19):
  • Edge Sharpness (Laplacian Variance), which measures the clarity and definition of edges within the texture.
  • Contrast (Pixel Intensity Standard Deviation, STD), which indicates the level of detail preservation by analyzing brightness variations across pixels.
The results indicate that WIT significantly outperforms Bing Maps and V-World in texture clarity and contrast. The 3008.24 Laplacian Variance score demonstrates that WIT produces sharper edges, preserving fine details in buildings and terrain structures. Additionally, WIT achieves the highest contrast value (78.87), effectively maintaining texture details without excessive smoothing, resulting in a more visually coherent and high-fidelity rendering. These enhancements make WIT particularly advantageous for large-scale GIS applications requiring high-resolution 3D rendering, detailed feature distinction, and enhanced terrain visualization.
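Both metrics above are simple to compute from a grayscale image. The sketch below implements them with NumPy only (writing out the 3 × 3 Laplacian convolution explicitly); in practice `cv2.Laplacian(img, cv2.CV_64F).var()` is the common shorthand for the first metric. Function names are hypothetical.

```python
import numpy as np

def edge_sharpness(gray):
    """Variance of the Laplacian response (higher = sharper edges).

    Applies the 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] over the
    interior pixels, then returns the variance of the response.
    """
    g = gray.astype(np.float64)
    h, w = g.shape
    lap = (g[0:h-2, 1:w-1] + g[2:h, 1:w-1] +
           g[1:h-1, 0:w-2] + g[1:h-1, 2:w] - 4 * g[1:h-1, 1:w-1])
    return lap.var()

def contrast(gray):
    """Standard deviation of pixel intensities (detail-preservation proxy)."""
    return gray.astype(np.float64).std()
```

A uniform image scores zero on both metrics, while a high-frequency pattern such as a checkerboard scores high, matching the intuition that sharper, more detailed textures yield larger values.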

4.3. Performance Evaluation of 3D Object Generation and Processing Speed

To assess the efficiency of the proposed vector-based 3D object generation method, a comprehensive performance evaluation was conducted based on total processing time, with a dataset size of up to 1 GB.
The evaluation criteria included the following:
  • Vector data loading.
  • Mesh generation.
  • DEM mapping.
The proposed method was compared against Method A [18] and Method B [21] to verify its superiority.
The total processing time was measured from the start of 3D object generation to the final output, including vector data conversion, texture mapping, and mesh generation.
The experimental results demonstrate that the proposed method achieved an average processing time of 15 s, which is 300% faster than Method A (Table 20).
The breakdown of execution time per processing stage is summarized in the table below.
The performance improvements achieved by the proposed method are summarized as follows:
  • The proposed vector-based approach significantly reduces total processing time, demonstrating a threefold improvement over Method A.
  • Unlike previous methods that lack a structured mesh generation and DEM integration step, the proposed approach optimally processes spatial data, enabling efficient 3D object creation.
  • The results indicate that the proposed vector-based method is highly suitable for a large-scale 3D modeling technique, ensuring faster data transformation, texture application, and terrain mapping.
These findings confirm that the proposed techniques outperform conventional approaches in terms of processing speed and computational efficiency, making them ideal for real-time and high-resolution 3D geospatial applications.

4.4. Experimental Environment and Dataset

The experiment was conducted in a general medium-specification laptop environment rather than using high-end GPUs or workstations. This setup ensures that the proposed method operates effectively in standard user environments without requiring specialized hardware. The development and system specifications are summarized in Table 21 and Table 22.
To evaluate the performance of the proposed method, high-resolution orthoimages and DEM datasets were utilized, as detailed in Table 23.
Delaunay triangulation was applied for 3D object generation, while Douglas–Peucker simplification was employed for vector-based processing. The triangulation resolution levels, edge constraints, and simplification tolerance parameters are also specified in Table 24.
These configurations ensured a realistic simulation of large-scale spatial data processing in a vector-based 3D modeling environment while maintaining computational efficiency. The experiment parameters were designed to reflect real-world applications, allowing for a robust evaluation of the method’s scalability and performance.
Dataset Type | Coverage Area | Resolution | File Size
Orthoimage | 2 km × 2 km (4 images) | 0.25 m/pixel | 1200 MB
DEM (Digital Elevation Model) | 30 km × 30 km | 90 m/pixel | 30 MB
Other | 1 km radius of Suseo Station | — | 610 MB

5. Ethical Considerations

5.1. Data Reliability and Accuracy

The reliability and accuracy of input data in the 3D object generation process are critical factors that determine the credibility of research outcomes. This study is based on officially provided spatial data, and outliers were removed during the preprocessing stage to ensure data integrity. Additionally, interpolation techniques were employed to minimize data distortion.
To maintain precision in terrain and building boundaries during the LoD1 model generation, DEM data were utilized for accurate Z-value extraction. Additionally, bilinear interpolation was implemented to enhance the reliability of Z-values within the final 3D models.
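Bilinear interpolation of a Z-value blends the four DEM cells surrounding a query point. The helper below is a minimal sketch with a hypothetical name; it assumes the query coordinate has already been converted to fractional grid (column, row) space with 0 ≤ x < cols − 1 and 0 ≤ y < rows − 1.

```python
import math

def bilinear_z(dem, x, y):
    """Bilinearly interpolated Z-value from a DEM grid.

    `dem` is indexed as dem[row][col]; (x, y) are fractional grid
    coordinates (col, row).  Interpolates along x on the two bracketing
    rows, then along y between the results.
    """
    c0, r0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - c0, y - r0
    z00, z01 = dem[r0][c0], dem[r0][c0 + 1]
    z10, z11 = dem[r0 + 1][c0], dem[r0 + 1][c0 + 1]
    top = z00 * (1 - dx) + z01 * dx
    bot = z10 * (1 - dx) + z11 * dx
    return top * (1 - dy) + bot * dy
```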

5.2. Ethical Issues in the Use of Spatial Data

Spatial data can be utilized in various fields such as urban planning, disaster response, and smart city development. However, its indiscriminate use may lead to privacy violations and legal issues. To ensure ethical data usage, this study exclusively utilized legally provided public data and open geographic information systems (GISs) while explicitly excluding any data containing personally identifiable information (PII) or sensitive location details (Table 25).

5.3. Ensuring Accuracy and Fairness in 3D Object Modeling

Maintaining fairness and balance in data representation is crucial in the application of 3D modeling techniques. Special attention must be given to avoid overrepresentation or underrepresentation of specific regions, buildings, or roads during the modeling process. To prevent data loss and distortion, a comparative analysis of multi-resolution source data was conducted.
Using only high-resolution data may lead to excessive computational overhead, while relying solely on low-resolution data can reduce the level of spatial detail. Therefore, to ensure both accuracy and fairness, this study adopted an optimal data resolution tailored to the requirements of LoD1-level modeling, maintaining a balanced approach to 3D object representation.

5.4. Data Privacy and Security Issues

The DEM data used in this study are classified as restricted information under domestic security regulations, requiring strict management, particularly regarding Z-values derived from DEM data due to national security and privacy concerns.
To ensure compliance with legal and ethical standards, the following security measures were implemented:
  • Restricted Use of Non-Public Data: this study exclusively utilized open-access DEM data, avoiding the use of high-precision DEM datasets that are legally restricted for public disclosure under domestic regulations.
  • Accuracy Adjustment and Height Interpolation: To prevent the exposure of sensitive terrain details, the dataset’s resolution was adjusted to an appropriate level. Additionally, interpolation techniques were applied to derive approximate height values rather than directly using precise elevation data for specific buildings.
  • Impact of DEM Resolution on Accuracy and Model Fidelity: The resolution adjustment of DEM data directly affects the accuracy of elevation modeling and spatial representation. In applications requiring high-precision terrain modeling, such as disaster management and infrastructure planning, reduced DEM resolution may lead to errors in slope estimation and height calculations, which can impact decision making. To mitigate such effects, this study conducted an error analysis comparing different resolution levels and selected an optimized resolution that balances privacy protection and modeling accuracy. The findings indicate that, while lower-resolution DEMs introduce some generalization errors, they remain within acceptable tolerance limits for most urban modeling applications.
  • Compliance with Security Policies: All spatial data used in this study adhered to the National Geographic Information Institute (NGII) data provision guidelines. The resulting datasets and research outputs were securely stored and processed on private servers, ensuring compliance with data protection policies.
By implementing these ethical and security considerations, this study maintains data reliability and fairness while preventing potential legal and societal issues related to spatial data usage. Additionally, the effects of DEM resolution adjustments on elevation accuracy have been analyzed to ensure that privacy measures do not critically compromise the usability of the data in precision-dependent applications.

6. Conclusions

This study developed a 3D object generation and visualization technique using 2D spatial data and proposed a process for converting these objects into compliance with the CityGML 3.0 standard. In particular, this study introduced a realistic visualization approach by incorporating DEM-based elevation extraction and texture application, aiming to enhance the accuracy and usability of 3D models.
The results confirmed that the proposed technique enables efficient LoD1 object generation and ensures compliance with standard modeling processes. Furthermore, by applying a vector-based interpolative texture mapping technique, this study achieved higher visual fidelity compared to conventional 3D models. Nevertheless, certain limitations exist in the present study, and these limitations warrant further investigation in subsequent research.

6.1. Summary of Research Findings

This study proposed a technique for generating LoD1-level 3D objects using Shapefile-based 2D spatial data, converting them into the CityGML 3.0 standard, and applying textures using orthoimages. To achieve accurate elevation representation, high-resolution Z-values were extracted from DEM data, and constrained Delaunay triangulation (CDT) was applied to enable an efficient 3D object generation process.
Subsequently, the Appearance module was utilized to accurately map textures onto LoD1 objects, ensuring realistic visual representation. Finally, the compliance of the generated LoD1 objects and textured CityGML 3.0 files with CityGML standards was validated using the CityGML Validator tool.

6.2. Key Contributions and Achievements

This study proposed a more efficient and precise 3D modeling technique by integrating vector-based 3D object generation, texture mapping, and optimized data processing techniques, surpassing traditional approaches in terms of speed, visual quality, and processing efficiency.
The key contributions and achievements of this research are as follows:
  • Enhanced 3D Model Generation Speed and Visual Quality: the proposed method improves 3D object generation speed, enhances visual realism, and optimizes data processing performance, ensuring a more efficient and scalable modeling workflow.
  • Optimized Data Processing Efficiency: by applying vector-based processing techniques, the method significantly reduces query time and minimizes memory consumption compared to traditional raster-based approaches, improving the efficiency of large-scale spatial data handling.
  • Efficient 3D Object Generation with Reduced Computational Load: This study introduces an optimized triangulation and polygon simplification algorithm, effectively reducing computational overhead while maintaining geometric accuracy in 3D object creation. The method was validated in an OpenGL 2.21.0 environment, confirming its capability to generate optimized 3D terrain and urban models efficiently.
  • Advanced Texture Mapping with WIT (Wise Interpolated Texture) Technique: This study applies the WIT technique to minimize discontinuities between interpolated textures and optimize high-resolution texture mapping based on orthoimages. This approach improves loading speed, reduces texture distortion through refined UV mapping alignment, and enhances visual fidelity in 3D representations. The experimental results demonstrate that the WIT-based texture mapping delivers smoother and more uniform visual outputs compared to commercial mapping services.
  • Superior 3D Object Generation Performance: Performance evaluations indicate that the proposed method significantly outperforms existing approaches in processing speed, effectively handling vector data conversion, mesh generation, and DEM mapping with higher efficiency.
  • CityGML 3.0 Compliance and Validation: by automating data transformation and validation processes for compliance with the CityGML 3.0 standard, this study improves the practicality of large-scale 3D urban modeling, ensuring seamless integration with standardized geospatial data frameworks.
Through these technological advancements, this study contributes to the efficient processing of large-scale spatial data, optimized texture mapping, and standardized data transformation techniques. Based on these improvements, the proposed approach demonstrates applicability across various domains, including smart cities, disaster management, and urban planning, highlighting its potential for practical implementation in real-world geospatial applications.

6.3. Limitations and Future Research Directions

Although this study applied elevation-based 3D object generation and WIT texture mapping, several limitations remain:
  • This research was conducted using static spatial data, and the integration of real-time captured imagery was not considered.
  • While CityGML 3.0 is well suited for representing building exteriors, it has limitations in detailed indoor modeling.
  • This study did not address the challenges associated with complex textures on non-rectangular surfaces or dynamic lighting conditions, which are critical factors in realistic urban modeling.
To address these challenges, future research should focus on the following:
  • Integration of real-time data to enhance the dynamic adaptability of 3D city models.
  • Optimization of large-scale models to improve computational efficiency for extensive urban environments.
  • Fusion with BIM (building information modeling) data to enable detailed indoor representation and improve semantic data integration.
  • Domain-specific optimized modeling approaches, tailored for applications such as smart cities, disaster management, and digital twins.
  • Development of advanced texture mapping techniques capable of handling non-rectangular surfaces and adapting to dynamic lighting conditions to improve realism and applicability in real-world scenarios.
By overcoming these limitations, future research can broaden the applicability of 3D urban modeling technologies, offering more effective solutions across various industries and scientific disciplines.

Author Contributions

Conceptualization, J.K. (Jeongyeon Kim), D.K. and J.B.; methodology, J.K. (Jeongyeon Kim), D.K. and J.B.; software, J.K. (Jeongyeon Kim) and D.K.; validation, J.B. and J.K. (Jihyeok Kim); formal analysis, J.K. (Jeongyeon Kim) and D.K.; investigation, J.L. and H.L.; data curation, J.K. (Jeongyeon Kim) and D.K.; writing—original draft preparation, J.K. (Jeongyeon Kim) and D.K.; writing—review and editing, J.K. (Jeongyeon Kim) and D.K.; visualization, D.K.; supervision, J.B.; project administration, J.K. (Jihyeok Kim); funding acquisition, J.K. (Jihyeok Kim). All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Land, Infrastructure and Transport of Korea and the Korea Agency for Infrastructure Technology Advancement (KAIA) through the Research Center for Digital Land Information under Grant No. RS-2022-00143804.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S. Geospatial Data Processing for 3D City Model Generation, Management and Visualization. ISPRS Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-1/W1, 527–534. [Google Scholar]
  2. Sheppard, S.R.J. Guidance for Crystal Ball Gazers: Developing a Code of Ethics for Landscape Visualization. Landsc. Urban Plan. 2001, 54, 183–199. [Google Scholar]
  3. Kutzner, T.; Smyth, C.S.; Nagel, C.; Coors, V.; Vinasco-Alvarez, D.; Ishimaru, N.; Yao, Z.; Heazel, C.; Kolbe, T.H. CityGML 3.0 Standard; OGC City Geography Markup Language (CityGML) Part 2: GML Encoding Standard; Open Geospatial Consortium: Arlington, VA, USA, 2023. [Google Scholar]
  4. Kutzner, T.; Chaturvedi, K.; Kolbe, T.H. CityGML 3.0: New Functions Open Up New Applications. PFG 2020, 88, 43–61. [Google Scholar] [CrossRef]
  5. Zhang, E.; Mischaikow, K.; Turk, G. Feature-Based Surface Parameterization and Texture Mapping. ACM Trans. Graph 2005, 24, 1–27. [Google Scholar]
  6. Ying, Y.; Koeva, M.N.; Kuffer, M.; Zevenbergen, J.A. Urban 3D Modelling Methods: A State-of-the-Art Review. ISPRS Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B4, 699–706. [Google Scholar] [CrossRef]
  7. Alidoost, F.; Arefi, H.; Tombari, F. 2D Image-To-3D Model: Knowledge-Based 3D Building Reconstruction (3DBR) Using Single Aerial Images and Convolutional Neural Networks (CNNs). Remote Sens. 2019, 11, 2219. [Google Scholar] [CrossRef]
  8. Lee, D.T.; Lin, A.K. Generalized Delaunay Triangulation for Planar Graphs. Discret. Comput. Geom. 1986, 1, 201–217. [Google Scholar]
  9. Burton, E.; Goldsmith, J.; Koenig, S.; Kuipers, B.; Mattei, N.; Walsh, T. Ethical Considerations in Artificial Intelligence Courses. AI Mag. 2017, 38, 22–33. [Google Scholar] [CrossRef]
  10. Burt, P.J.; Adelson, E.H. A Multiresolution Spline with Application to Image Mosaics. ACM Trans. Graph. 1983, 2, 217–236. [Google Scholar]
  11. Marnat, L.; Gautier, C.; Colin, C.; Gesquiere, G. PY3DTILERS: An Open Source Toolkit for Creating and Managing 2D/3D Geospatial Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, X-4/W3, 165–172. [Google Scholar] [CrossRef]
  12. Sheppard, S.R.J.; Cizek, P. The Ethics of Google Earth: Crossing Thresholds from Spatial Data to Landscape Visualization. J. Environ. Manag. 2009, 90, 2102–2117. [Google Scholar] [CrossRef]
  13. Prada, F.; Kazhdan, M.; Chuang, M.; Hoppe, H. Gradient-Domain Processing within a Texture Atlas. ACM Trans. Graph. 2018, 37, 154. [Google Scholar]
  14. Jeong, D.; Lee, C.; Choi, Y.; Jeong, T. Building Digital Twin Data Model Based on Public Data. Buildings 2024, 14, 2911. [Google Scholar] [CrossRef]
  15. Feist, S.; Jacques de Sousa, L.; Sanhudo, L.; Poças Martins, J. Automatic Reconstruction of 3D Models from 2D Drawings: A State-of-the-Art Review. Eng 2024, 5, 784–800. [Google Scholar] [CrossRef]
  16. Allène, C.; Pons, J.-P.; Keriven, R. Seamless Image-Based Texture Atlases Using Multi-Band Blending. In Proceedings of the 2008 19th International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar]
  17. Bae, S.K.; Kim, J.O. Detecting Underground Geospatial Features for Protection of Rights in 3D Space: Korean Cases. Appl. Sci. 2021, 11, 1102. [Google Scholar] [CrossRef]
  18. Beil, C.; Ruhdorfer, R.; Coduro, T.; Kolbe, T.H. Detailed Streetspace Modelling for Multiple Applications: Discussions on the Proposed CityGML 3.0 Transportation Model. ISPRS Int. J. Geo Inf. 2020, 9, 603. [Google Scholar]
  19. Malhotra, A.; Raming, S.; Schildt, M.; Frisch, J.; van Treeck, C. CityGML Model Generation Using Parametric Interpolations. Proc. Inst. Civ. Eng. Smart Infrastruct. Constr. 2022, 174, 102–120. [Google Scholar]
  20. Beil, C.; Kolbe, T.H. Combined Modelling of Multiple Transportation Infrastructure within 3D City Models and Its Implementation in CityGML 3.0. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, VI-4/W1, 29–36. [Google Scholar]
  21. Carr, N.A.; Hart, J.C. Meshed Atlases for Real-Time Procedural Solid Texturing. ACM Trans. Graph. 2002, 21, 106–115. [Google Scholar] [CrossRef]
  22. Li, S.; Wang, S.; Guan, Y.; Xie, Z.; Huang, K.; Wen, M.; Zhou, L. A High-Performance Cross-Platform Map Rendering Engine for Mobile Geographic Information System (GIS). ISPRS Int. J. Geo Inf. 2019, 8, 427. [Google Scholar] [CrossRef]
  23. Gröger, G.; Kolbe, T.H.; Nagel, C.; Häfele, K.-H. OGC City Geography Markup Language (CityGML) Encoding Standard, version 2.0; OGC 12-019; Open Geospatial Consortium (OGC): Arlington, VA, USA, 2012. [Google Scholar]
  24. He, H.; Yu, J.; Cheng, P.; Wang, Y.; Zhu, Y.; Lin, T.; Dai, G. Automatic, Multiview, Coplanar Extraction for CityGML Building Model Texture Mapping. Remote Sens. 2022, 14, 50. [Google Scholar] [CrossRef]
  25. Prandi, F.; Devigili, F.; Soave, M.; Di Staso, U.; De Amicis, R. 3D Web Visualization of Huge CityGML Models. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, XL-3/W3, 601–605. [Google Scholar] [CrossRef]
  26. Kim, S.; Rhee, S.; Kim, T. Digital Surface Model Interpolation Based on 3D Mesh Models. Remote Sens. 2019, 11, 24. [Google Scholar] [CrossRef]
  27. Burrough, P.A.; McDonnell, R.A. Principles of Geographical Information Systems; Oxford University Press: Oxford, UK, 1998. [Google Scholar]
  28. Smith, A.; Stirling, A.; Berkhout, F. The governance of sustainable socio-technical transitions. Res. Policy 2005, 34, 1491–1510. [Google Scholar]
  29. Biljecki, F.; Ledoux, H.; Stoter, J.; Vosselman, G. The Variants of an LOD of a 3D Building Model and Their Influence on Spatial Analyses. ISPRS J. Photogramm. Remote Sens. 2016, 116, 42–54. [Google Scholar]
  30. Heazel, C. OGC City Geography Markup Language (CityGML) 3.0 Conceptual Model: User Guide, version 1.0; OGC 20-066; Open Geospatial Consortium (OGC): Arlington, VA, USA, 2021. [Google Scholar]
  31. Mao, B.; Ban, Y.; Harrie, L. A Multiple Representation Data Structure for Dynamic Visualisation of Generalised 3D City Models. ISPRS J. Photogramm. Remote Sens. 2011, 66, 198–208. [Google Scholar]
  32. TUDelft3D. CityGML Schema Validation. GitHub Repository. 2003. Available online: https://github.com/tudelft3d/CityGML-schema-validation (accessed on 21 August 2024).
  33. Wang, Y.; Li, M.; Zhou, Q.; Zhang, Y. A Deep Learning-Based Method for Texture Mapping and Enhancement in 3D Urban Models Using High-Resolution Remote Sensing Data. Geo Spat. Inf. Sci. 2024, 27, 1–14. [Google Scholar]
  34. EPSG Standards. EPSG Geodetic Parameter Registry. IOGP: London, UK. 2024. Available online: https://epsg.org/home.html (accessed on 25 July 2024).
Figure 1. Vector-based 3D object generation flowchart.
Figure 2. Workflow of spatial data refinement and clipping for study area preprocessing.
Figure 3. Constrained Delaunay triangulation and spatial partitioning.
Figure 4. Generated triangular mesh.
Figure 5. Texture mapping process flowchart.
Figure 6. GeoTIFF export and coordinate validation in QGIS 3.34.7.
Figure 7. Efficient processing of vector data optimization flowchart.
Figure 8. Generated CityGML file of result.
Figure 9. Performance comparison of visualization between standard and interpolated models.
Figure 10. Comparative analysis of Bing Maps, V-World, and Wise Interpolated Texture.
Table 1. Data Formats Used.

| No | Name | Description |
|----|------|-------------|
| 1 | Building Data | Building name, number of above-ground floors, number of underground floors, coordinates, usage, etc. |
| 2 | Road Data | Road name, start point, end point, base interval, length, coordinates, etc. |
| 3 | Administrative District Data | Administrative division name, district code, coordinates, etc. |
| 4 | DEM Data | Includes minimum and maximum X, Y values |
| 5 | Orthoimages Data | Includes minimum and maximum X, Y values |
Table 2. Building Data Information.

| No. | Column Name | Type | Byte |
|-----|-------------|------|------|
| 1 | BUL_MAN_NO (PK) | NUMBER | 7 |
| 2 | SIG_CD (PK) | VARCHAR2 | 5 |
| 3 | RN_CD | VARCHAR2 | 7 |
| 4 | RDS_MAN_NO | NUMBER | 12 |
| 5 | BSI_INT_SN | NUMBER | 10 |
| 6 | EQB_NAM_SN | VARCHAR2 | 10 |
| 7 | BULD_SE_CD | NUMBER | 1 |
| 8 | BULD_MNNM | NUMBER | 5 |
| 9 | BULD_ENG_NM | VARCHAR2 | 5 |
| 10 | BULD_NM_DC | VARCHAR2 | 100 |
| 11 | BDTYP_CD | VARCHAR2 | 5 |
| 12 | BUL_DPN_SE | VARCHAR2 | 1 |
| 13 | POS_BUL_NM | VARCHAR2 | 40 |
| 14 | EMC_CD | VARCHAR2 | 3 |
| 15 | GRO_FLO_CO | NUMBER | 3 |
| 16 | UND_FLO_CO | NUMBER | 3 |
Table 3. Road Data Information.

| No. | Column Name | Type | Byte |
|-----|-------------|------|------|
| 1 | SIG_CD (PK) | VARCHAR2 | 5 |
| 2 | RDS_MAN_NO (PK) | NUMBER | 12 |
| 3 | RN | VARCHAR2 | 80 |
| 4 | RN_CD | VARCHAR2 | 7 |
| 5 | ENG_RN | VARCHAR2 | 80 |
| 6 | NTFC_CD | VARCHAR2 | 8 |
| 7 | WDR_RD_CD | VARCHAR2 | 10 |
| 8 | ROA_CLS_SE | VARCHAR2 | 2 |
| 9 | RDS_DPN_SE | VARCHAR2 | 1 |
| 10 | RBP_CN | VARCHAR2 | 80 |
| 11 | REP_CN | VARCHAR2 | 80 |
| 12 | ROAD_BT | NUMBER | 12 |
| 13 | ROAD_LT | NUMBER | 12 |
| 14 | BSI_INT | VARCHAR2 | 5 |
| 15 | OPERT_DE | VARCHAR2 | 14 |
Table 4. Administrative District Information.

| No. | Column Name | Type | Byte |
|-----|-------------|------|------|
| 1 | EMD_CD (PK) | VARCHAR2 | 10 |
| 2 | EMD_ENG_NM | VARCHAR2 | 40 |
Table 5. Digital Elevation Model Spatial Extent and Resolution Parameters.

| No. | Column Name | Type |
|-----|-------------|------|
| 1 | X_Min | Number |
| 2 | Y_Min | Number |
| 3 | X_Max | Number |
| 4 | Y_Max | Number |
| 5 | X_Scale | Number |
| 6 | Y_Scale | Number |
| 7 | Pixel_Width | Number |
| 8 | Pixel_Height | Number |
| 9 | EPSG Code | Number |
Table 6. Coordinate Systems and Their EPSG Codes.

| No. | Coordinate System | EPSG Code |
|-----|-------------------|-----------|
| 1 | Korean 1995 | 4166 |
| 2 | WGS 84 | 4326 |
| 3 | NAD83/Washington North | 2855 |
| 4 | NAD83/New York Long Island | 2263 |
| 5 | Korean 1985/Unified CS | 5178 |
| 6 | KGD2002/Unified CS | 5179 |
| 7 | Korea 2000/Central Belt 2010 | 5186 |
| 8 | JGD2011 | 6697 |
| 9 | ETRS89/UTM zone 32N | 25832 |
Table 7. Image Properties and Resolution.

| No. | Column Name | Data |
|-----|-------------|------|
| 1 | Image Size | 9252 × 11,508 |
| 2 | Width | 9252 pixels |
| 3 | Height | 11,508 pixels |
| 4 | Horizontal Resolution | 96 DPI |
| 5 | Vertical Resolution | 96 DPI |
Table 8. Sheet Number Components and Description.

| No. | Index | Description | Length |
|-----|-------|-------------|--------|
| 1 | Latitude | Base latitude integer value of tile | 2 digits |
| 2 | Longitude Group Number | Map sheet longitude group (12~18) | 1 digit |
| 3 | Secondary Map Sheet Number | Subdivided into 4 × 4 for latitude/longitude (0.25-degree units) | 2 digits |
| 4 | Tertiary Map Sheet Number | Subdivided into 10 × 10 for latitude/longitude (0.025-degree units) | 3 digits |
Additionally, all data are converted into a consistent coordinate system such as EPSG 5186 to maintain spatial data coherence and minimize potential errors in subsequent processing.
Table 9. Description of GDAL GeoTransform Structure.

| No. | Index | Element Name | Description |
|-----|-------|--------------|-------------|
| 1 | GeoTransform[0] | x_min | Upper-left x coordinate of image |
| 2 | GeoTransform[1] | x_scale | Pixel size in x direction |
| 3 | GeoTransform[2] | x_rotation | Default 0 if no rotation |
| 4 | GeoTransform[3] | y_max | Upper-left y coordinate of image |
| 5 | GeoTransform[4] | y_rotation | Default 0 if no rotation |
| 6 | GeoTransform[5] | y_scale | Pixel size in y direction |
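The six coefficients in Table 9 define GDAL's standard affine pixel-to-world mapping: x = GT[0] + col·GT[1] + row·GT[2] and y = GT[3] + col·GT[4] + row·GT[5]. A minimal sketch of that mapping in plain Python (the `gt` values below are illustrative, not taken from the study data):

```python
def pixel_to_geo(gt, col, row):
    """Map a pixel (col, row) to a georeferenced (x, y) coordinate
    using the six GDAL GeoTransform coefficients of Table 9."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up transform: 0.25 m pixels, no rotation terms.
gt = (200000.0, 0.25, 0.0, 550000.0, 0.0, -0.25)
print(pixel_to_geo(gt, 0, 0))      # upper-left corner -> (200000.0, 550000.0)
print(pixel_to_geo(gt, 100, 100))  # -> (200025.0, 549975.0)
```

Note that `y_scale` is negative for north-up imagery, so row indices increase southward while y coordinates decrease.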
Table 10. Mipmap Level Hierarchy and Texture Resolutions.

| No. | Mipmap Level | Width × Height |
|-----|--------------|----------------|
| 1 | Level 0 | 1024 × 1024 |
| 2 | Level 1 | 512 × 512 |
| 3 | Level 2 | 256 × 256 |
| 4 | Level 3 | 128 × 128 |
| 5 | Level 4 | 64 × 64 |
| 6 | Level 5 | 32 × 32 |
| 7 | Level 6 | 16 × 16 |
| 8 | Level 7 | 8 × 8 |
| 9 | Level 8 | 4 × 4 |
| 10 | Level 9 | 2 × 2 |
| 11 | Level 10 | 1 × 1 |
The generated mipmaps are stored in GPU memory. During rendering and visualization, the appropriate texture level is selected per fragment based on the camera's pixel depth.
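The hierarchy in Table 10 follows the standard mipmap halving rule: each level halves both dimensions (clamped to 1 px) until a 1 × 1 texel remains. A short, graphics-API-independent sketch that reproduces the chain:

```python
def mipmap_chain(width, height):
    """Successively halve each dimension (minimum 1 px) down to 1x1,
    reproducing the level hierarchy of Table 10."""
    levels = [(width, height)]
    while levels[-1] != (1, 1):
        w, h = levels[-1]
        levels.append((max(1, w // 2), max(1, h // 2)))
    return levels

chain = mipmap_chain(1024, 1024)
print(len(chain))           # 11 levels: Level 0 .. Level 10
print(chain[0], chain[-1])  # (1024, 1024) (1, 1)
```

A 1024 × 1024 base texture therefore yields exactly the 11 levels listed in Table 10, adding roughly one third of the base texture's memory footprint in total.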
Table 11. Major Texture Compression Formats.

| No. | Format | Advantages | Disadvantages |
|-----|--------|------------|---------------|
| 1 | GeoTIFF | High-resolution geographic coordinates; GIS software (e.g., QGIS 3.34.7) compatibility | Large file size; poor rendering performance |
| 2 | JPEG (JPG) | Small file size; fast loading; web/mobile optimized | Quality loss from lossy compression |
| 3 | PNG | Lossless compression; transparency support; high compatibility | Large file size; low real-time rendering performance |
| 4 | Khronos Texture | GPU optimization; rendering performance; mipmapping support | Limited support in common image editors |
Table 12. List of CityGML mapping modules and their associated namespace prefixes.

| No. | CityGML Module | Namespace Prefix |
|-----|----------------|------------------|
| 1 | CityGML Core | core |
| 2 | Appearance | app |
| 3 | Building | buld |
| 4 | CityObjectGroup | grp |
| 5 | Generics | gen |
| 6 | Transportation | tran |
| 7 | TexturedSurface | tex |
Table 13. CityGML Validation Results: Compliance Check Summary with Success and Error Details.

| No. | File Name | Status | Details |
|-----|-----------|--------|---------|
| 1 | Building.gml | Fail | Missing schema definition |
| 2 | Load.gml | Success | Success |
| 3 | Transportation.gml | Fail | Geometry validation error |
| 4 | AdministrativeDistrict.gml | Success | Success |
Table 14. Comparison of Raster-Based vs. Vector-Based Processing.

| No. | Category | Raster-Based Approach | Vector-Based Approach |
|-----|----------|-----------------------|-----------------------|
| 1 | Processing Type | File-based | Database-based |
| 2 | Data Structure | R (Byte), G (Byte), B (Byte) | RLB, GLB, BLB; RRB, GRB, BRB; RRT, GRT, BRT; RLT, GLT, BLT (Byte each); Resolution (Byte) |
| 3 | Average Query Time | 2.2 s (loading: 2.0 s + retrieval: 0.2 s) | 0.23 s (retrieval only) |
| 4 | Additional Processing Time | Memory allocation overhead | Additional overhead including memory allocation |
| 5 | Memory Consumption | 1.3 GB (including image loader) | 38 MB (including DB connector) |
| 6 | File Size | 12 MB | 52 MB |
| 7 | Advantages | Small file size per pixel unit; high-speed processing; fast color retrieval | Adaptive size per region; resolution-independent transmission bandwidth; supports decimal-level precision; supports hierarchical LoD structure for multi-resolution representation |
| 8 | Disadvantages | High bandwidth requirement for high-resolution images; fixed size per region; integer-based data representation; requires separate files for LoD (mipmap) configuration | Larger file size per pixel unit; slower processing speed; slower color retrieval |
Table 15. Query Response Time (seconds).

| Approach | 1st | 2nd | 3rd | Average ± Std. Dev. |
|----------|-----|-----|-----|---------------------|
| Raster-Based | 2.2 | 2.8 | 3.2 | 2.73 ± 0.51 |
| Vector-Based | 0.23 | 0.21 | 0.22 | 0.22 ± 0.01 |
Table 16. Memory Consumption (GB/MB).

| Approach | 1st | 2nd | 3rd | Average ± Std. Dev. |
|----------|-----|-----|-----|---------------------|
| Raster-Based | 1.3 GB | 1.5 GB | 1.8 GB | 1.53 ± 0.25 GB |
| Vector-Based | 38 MB | 45 MB | 50 MB | 44.3 ± 6.0 MB |
Table 17. Location Accuracy and Shape Consistency Evaluation.

| No | Metric | Evaluation Standard | Result |
|----|--------|---------------------|--------|
| 1 | Coordinate Consistency | Mean Error (ME), Maximum Error (MaxE) | 30 m~80 m |
| 2 | Placement Accuracy | Mean Positional Error (MPE) < 100 m | 80 m |
| 3 | Shape Accuracy | Shape Similarity Index (SSI) > 0.9 | 0.92 |
Table 18. Comparison of Original vs. Optimized OpenGL 2.21.0 Rendering.

| No. | Category | Original | Optimized | Improvement Rate |
|-----|----------|----------|-----------|------------------|
| 1 | File Size (MB) | 796 | 216 | 72.86% reduction |
| 2 | Loading Speed (s) | 4.74 | 1.74 | 63.29% reduction |
| 3 | FPS (Average Rendering Performance) | 48.86 | 55.56 | 13.71% improvement |
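The improvement rates in Table 18 are plain relative changes with respect to the original values. A quick arithmetic check, using the values from the table:

```python
def reduction(before, after):
    """Percentage reduction from 'before' to 'after'."""
    return round((before - after) / before * 100, 2)

def improvement(before, after):
    """Percentage improvement from 'before' to 'after'."""
    return round((after - before) / before * 100, 2)

print(reduction(796, 216))        # file size:    72.86 % reduction
print(reduction(4.74, 1.74))      # loading time: 63.29 % reduction
print(improvement(48.86, 55.56))  # FPS:          13.71 % improvement
```

All three reproduce the percentages reported in the table.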
Table 19. Objective Texture Quality Comparison.

| No | Metric | Bing Maps | V-World | WIT |
|----|--------|-----------|---------|-----|
| 1 | Edge Sharpness (Laplacian Variance) | 898.63 | 1471.99 | 3008.24 |
| 2 | Contrast (Pixel Intensity STD) | 71.32 | 75.35 | 78.87 |
Table 20. Performance Comparison of 3D Object Generation Methods.

| No. | Processing Stage | Method A (s) | Method B (s) | Proposed Method (s) |
|-----|------------------|--------------|--------------|---------------------|
| 1 | Vector Data Loading | 0.5 | 0.5 | 0.5 |
| 2 | Mesh Generation | X | X | 6.5 |
| 3 | DEM Mapping | X | X | 8 |
| 4 | Total Processing Time | 45 | 30 | 15 |
Table 21. System Specifications.

| Category | Details |
|----------|---------|
| Operating System | Windows 11 (64-bit) |
| Programming Languages | C#, Python |
| Development Tools | Visual Studio 2022 |
| 3D Rendering Engine | OpenGL 2.21.0 |
| Database | PostgreSQL + PostGIS |
| Other Environment | .NET 8 |
| GPU | NVIDIA GeForce RTX 3050 |
| CPU | Intel® Core™ i5-11400H @ 2.70 GHz |
| RAM | 32 GB |
Table 22. Library Specifications.

| Library | Version | Key Functionality | Source |
|---------|---------|-------------------|--------|
| GDAL | 3.10.0 | GIS data transformation and processing | OSGEO |
| NetTopologySuite | 1.14.0.1 | Spatial data processing | NuGet |
| PostgreSQL | 9.0.3 | Spatial data loading | NuGet |
| lxml | 4.9.3 | XML, HTML parsing | PyPI |
| Silk.Net | 2.21.0 | .NET-based graphics API support | NuGet |
| SixLabors.ImageSharp | 3.1.5 | General image processing and transformation | NuGet |
| BitMiracle.LibTiff.Classic | 2.4.649 | TIFF image reading/writing, GIS image support | NuGet |
Table 23. Dataset Specifications.

| Dataset Type | Coverage Area | Resolution | File Size |
|--------------|---------------|------------|-----------|
| Orthoimage | 2 km × 2 km (4 images) | 0.25 m/pixel | 1200 MB |
| DEM (Digital Elevation Model) | 30 km × 30 km | 90 m/pixel | 30 MB |
| Other | 1 km radius of Suseo Station | - | 610 MB |
Table 24. Experimental Parameters.

| Parameter | Value |
|-----------|-------|
| Triangulation Method | Delaunay |
| Resolution Level | Medium |
| Max Edge Length | 5 m |
| Simplification Algorithm | Douglas–Peucker |
| Tolerance (ε) | 5 m |
| Input Vector Type | 2D Line/3D Polyline |
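Table 24 lists Douglas–Peucker with a tolerance of ε = 5 m as the polygon simplification step. As a self-contained illustration of the algorithm (the coordinates below are hypothetical, not the study data), the recursion keeps the point farthest from the chord whenever its perpendicular distance exceeds ε:

```python
import math

def douglas_peucker(points, epsilon):
    """Recursive Douglas-Peucker simplification of a 2D polyline."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    # Find the interior point farthest from the chord (first..last).
    max_d, max_i = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        if norm:
            d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        else:  # degenerate chord: distance to the (coincident) endpoints
            d = math.hypot(x - x1, y - y1)
        if d > max_d:
            max_d, max_i = d, i
    if max_d <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:max_i + 1], epsilon)
    right = douglas_peucker(points[max_i:], epsilon)
    return left[:-1] + right  # drop the duplicated split point

line = [(0, 0), (10, 1), (20, -1), (30, 0), (40, 12), (50, 0)]
print(douglas_peucker(line, 5.0))  # -> [(0, 0), (30, 0), (40, 12), (50, 0)]
```

Vertices within 5 m of the simplified chords are discarded while the 12 m excursion at (40, 12) is preserved, which is the behavior the tolerance parameter controls.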
Table 25. Data Sources Used.

| No | Name | Description | Year of Production | Source |
|----|------|-------------|--------------------|--------|
| 1 | National Grid System | Square grid with assigned national point numbers | 2024 | Address Information Portal |
| 2 | Continuous Numerical Map | Map containing only geometric information, replacing attribute data with text and symbols | 2024 | National Spatial Information Platform |
| 3 | DEM (Digital Elevation Model) | Numerical elevation model | 2022 | - |
| 4 | Orthoimages | Orthorectified aerial imagery | 2023 | - |
| 5 | Background Map | 2D map API | Updated monthly | V-World |
| 6 | Building Integrated Information (Master) | Integrated dataset combining spatial building information from continuous numerical maps and the building administration system | - | - |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
