Search Results (8)

Search Parameters:
Keywords = voxelisation

14 pages, 9459 KB  
Article
Non-Uniform Voxelisation for Point Cloud Compression
by Bert Van hauwermeiren, Leon Denis and Adrian Munteanu
Sensors 2025, 25(3), 865; https://doi.org/10.3390/s25030865 - 31 Jan 2025
Viewed by 1081
Abstract
Point cloud compression is essential for the efficient storage and transmission of 3D data in various applications, such as virtual reality, autonomous driving, and 3D modelling. Most existing compression methods employ voxelisation, uniformly partitioning 3D space into voxels for more efficient compression. However, uniform voxelisation may not capture the underlying geometry of complex scenes effectively. In this paper, we propose a novel non-uniform voxelisation technique for point cloud geometry compression. Our method adaptively adjusts voxel sizes based on local point density, preserving geometric details while enabling more accurate reconstructions. Through comprehensive experiments on the well-known benchmark datasets ScanNet, ModelNet and ShapeNet, we demonstrate that our approach achieves better compression ratios and reconstruction quality than traditional uniform voxelisation methods. The results highlight the potential of non-uniform voxelisation as a viable and effective alternative, offering improved performance for point cloud geometry compression in a wide range of real-world scenarios. Full article
(This article belongs to the Section Sensing and Imaging)
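The adaptive voxel sizing described in the abstract can be pictured with a minimal sketch along these lines, assuming an octree-style subdivision driven by the point count per cell; the threshold and depth limit below are illustrative, not the authors' parameters.

```python
# Minimal sketch of density-adaptive voxelisation (not the authors' exact method):
# cells keep splitting while they hold more points than a threshold, so dense
# regions end up covered by smaller voxels.
import numpy as np

def adaptive_voxelise(points, max_points_per_voxel=64, max_depth=8):
    """Return a list of (origin, size) cubes covering the cloud."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    size = float((hi - lo).max()) or 1.0
    voxels = []

    def split(origin, size, idx, depth):
        if len(idx) == 0:
            return
        if len(idx) <= max_points_per_voxel or depth == max_depth:
            voxels.append((origin, size))          # leaf voxel
            return
        half = size / 2.0
        centre = origin + half
        pts = points[idx]
        octant = ((pts[:, 0] >= centre[0]).astype(int) * 4 +
                  (pts[:, 1] >= centre[1]).astype(int) * 2 +
                  (pts[:, 2] >= centre[2]).astype(int))
        for o in range(8):
            child_origin = origin + half * np.array([o // 4, (o // 2) % 2, o % 2])
            split(child_origin, half, idx[octant == o], depth + 1)

    split(lo.astype(float), size, np.arange(len(points)), 0)
    return voxels

# Example: a synthetic cloud with one dense cluster gets finer voxels there.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.uniform(0, 1, (500, 3)),
                   rng.normal(0.8, 0.02, (2000, 3))])
print(len(adaptive_voxelise(cloud)), "adaptive voxels")
```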

21 pages, 5253 KB  
Article
Using Voxelisation-Based Data Analysis Techniques for Porosity Prediction in Metal Additive Manufacturing
by Abraham George, Marco Trevisan Mota, Conor Maguire, Ciara O’Callaghan, Kevin Roche and Nikolaos Papakostas
Appl. Sci. 2024, 14(11), 4367; https://doi.org/10.3390/app14114367 - 22 May 2024
Cited by 3 | Viewed by 1867
Abstract
Additive manufacturing workflows generate large amounts of data in each phase, which can be very useful for monitoring process performance and predicting the quality of the finished part if used correctly. In this paper, a framework is presented that utilises machine learning methods to predict porosity defects in printed parts. Data from process settings, in-process sensor readings, and post-process computed tomography scans are first aligned and discretised using a voxelisation approach to create a training dataset. A multi-step classification system is then proposed to classify the presence and type of porosity in a voxel, which can then be utilised to find the distribution of porosity within the build volume. Titanium parts were printed using a laser powder bed fusion system. Two discretisation techniques based on voxelisation were utilised: a defect-centric and a uniform discretisation method. Different machine learning models, feature sets, and other parameters were also tested. Promising results were achieved in identifying porous voxels; however, the classification accuracy requires improvement before the approach can be applied industrially. The potential of the voxelisation-based framework for this application, and its ability to incorporate data from different stages of the additive manufacturing workflow as well as different machine learning models, were clearly demonstrated. Full article
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)
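To illustrate the alignment-by-voxelisation step in the abstract, here is a hedged sketch assuming uniform discretisation: sensor readings and CT defect locations are binned into one shared grid, so each voxel becomes a training sample. The voxel size, feature choice and variable names are illustrative assumptions, not values from the paper.

```python
# Each occupied voxel yields (features from sensor readings, label from CT).
import numpy as np

def voxel_index(xyz, origin, voxel_size):
    return tuple(np.floor((xyz - origin) / voxel_size).astype(int))

def build_training_table(sensor_points, sensor_values, defect_points,
                         origin, voxel_size=0.5):
    features, porous = {}, set()
    for p, v in zip(sensor_points, sensor_values):
        features.setdefault(voxel_index(p, origin, voxel_size), []).append(v)
    for p in defect_points:
        porous.add(voxel_index(p, origin, voxel_size))
    rows = []
    for key, vals in features.items():
        vals = np.asarray(vals)
        rows.append((key, vals.mean(), vals.max(), int(key in porous)))
    return rows  # (voxel index, mean signal, max signal, porous?)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, (1000, 3))       # synthetic in-process sensor readings
vals = rng.normal(1.0, 0.1, 1000)
defects = rng.uniform(0, 10, (5, 3))      # synthetic CT-detected pore locations
table = build_training_table(pts, vals, defects, origin=np.zeros(3))
print(len(table), "voxel samples;", sum(r[3] for r in table), "labelled porous")
```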

22 pages, 3205 KB  
Review
Voxelisation Algorithms and Data Structures: A Review
by Mitko Aleksandrov, Sisi Zlatanova and David J. Heslop
Sensors 2021, 21(24), 8241; https://doi.org/10.3390/s21248241 - 9 Dec 2021
Cited by 39 | Viewed by 11277
Abstract
Voxel-based data structures, algorithms, frameworks, and interfaces have been used in computer graphics and many other applications for decades. There is a general need for digital representations, such as voxels, that secure unified data structures, multi-resolution options, robust validation procedures and flexible algorithms for different 3D tasks. In this review, we evaluate the most common properties and algorithms for voxelisation of 2D and 3D objects. Many voxelisation algorithms and their characteristics are presented, targeting points, lines, triangles, surfaces and solids as geometric primitives. For lines, we identify three groups of algorithms, where the first two achieve different voxelisation connectivity, while the third covers voxelisation of curves. Surface voxelisation is generally preferable to solid voxelisation, as it can be performed faster and requires less memory if voxels are stored sparsely. We also evaluate the available voxel data structures, splitting them into static and dynamic grids according to how frequently they are updated. Static grids are dominated by SVO-based data structures focusing on memory footprint reduction and attribute preservation, where SVDAG and SSVDAG are the most advanced methods. The state-of-the-art dynamic voxel data structure is NanoVDB, which is superior to the rest in terms of speed and support for out-of-core processing and data management, the key to handling large, dynamically changing scenes. Overall, this is the first review to evaluate the available voxelisation algorithms for different geometric primitives as well as voxel data structures. Full article
(This article belongs to the Section Navigation and Positioning)
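As a small illustration of why sparse storage favours surface voxelisation, consider this sketch in the spirit of voxel hashing (not a reimplementation of any reviewed method): only occupied voxels get an entry, so a thin surface costs far less to store than the full dense grid. The resolution and demo geometry are arbitrary assumptions.

```python
# Hash-based sparse voxel grid: dictionary keyed by integer voxel coordinates.
import numpy as np

class SparseVoxelGrid:
    def __init__(self, voxel_size):
        self.voxel_size = voxel_size
        self.cells = {}                      # (i, j, k) -> attribute

    def insert(self, xyz, value=1):
        key = tuple(np.floor(np.asarray(xyz) / self.voxel_size).astype(int))
        self.cells[key] = value

    def occupied(self):
        return len(self.cells)

# Voxelise only the surface of a unit sphere and compare with the dense count.
n = 128
grid = SparseVoxelGrid(voxel_size=2.0 / n)
rng = np.random.default_rng(2)
theta = rng.uniform(0, np.pi, 20000)
phi = rng.uniform(0, 2 * np.pi, 20000)
surface = np.c_[np.sin(theta) * np.cos(phi),
                np.sin(theta) * np.sin(phi),
                np.cos(theta)]
for p in surface:
    grid.insert(p)
print(f"occupied: {grid.occupied()} of {n**3} dense cells")
```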

24 pages, 35800 KB  
Article
Integration of Remote Sensing Data into a Composite Voxel Model for Environmental Performance Analysis of Terraced Vineyards in Tuscany, Italy
by Jakub Tyc, Defne Sunguroğlu Hensel, Erica Isabella Parisi, Grazia Tucci and Michael Ulrich Hensel
Remote Sens. 2021, 13(17), 3483; https://doi.org/10.3390/rs13173483 - 2 Sep 2021
Cited by 10 | Viewed by 3867
Abstract
Understanding socio-ecological systems and the discovery, recovery and adaptation of land knowledge are key challenges for sustainable land use. The analysis of sustainable agricultural systems and practices, for instance, requires interdisciplinary and transdisciplinary research and coordinated data acquisition, data integration and analysis. However, datasets acquired using remote sensing, geospatial analysis and simulation techniques are often limited by narrow disciplinary boundaries and therefore fall short of enabling a holistic approach across multiple domains and scales. In this work, we demonstrate a new workflow for interdisciplinary data acquisition and integration, focusing on terraced vineyards in Tuscany, Italy. We used multi-modal data acquisition and performed data integration via a voxelised point cloud that we term a composite voxel model. The latter facilitates a multi-domain and multi-scale data-integrated approach for advancing the discovery and recovery of land knowledge. This approach enables integration, correlation and analysis of data pertaining to different domains and scales in a single data structure. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
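One way to picture the composite voxel model, under loose assumptions, is a voxel grid whose cells each carry attributes from several acquisition domains, so cross-domain queries run against a single data structure. The attribute names, voxel size and fusion rule (per-voxel mean) below are illustrative, not taken from the paper.

```python
# Voxelise a point cloud and fuse several per-point attribute layers per voxel.
import numpy as np
from collections import defaultdict

def make_composite_voxels(points, attributes, voxel_size=0.25):
    """attributes: dict of name -> per-point array, fused by per-voxel mean."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(lambda: defaultdict(list))
    for i, k in enumerate(map(tuple, keys)):
        voxels[k]["point_count"].append(1)
        for name, values in attributes.items():
            voxels[k][name].append(values[i])
    return {k: {name: len(v) if name == "point_count" else float(np.mean(v))
                for name, v in attrs.items()}
            for k, attrs in voxels.items()}

rng = np.random.default_rng(4)
pts = rng.uniform(0, 5, (2000, 3))
composite = make_composite_voxels(
    pts, {"temperature_c": rng.normal(22, 3, 2000),   # hypothetical thermal layer
          "ndvi": rng.uniform(0, 1, 2000)})            # hypothetical vegetation index
k, attrs = next(iter(composite.items()))
print(k, attrs)
```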

18 pages, 5859 KB  
Article
An Efficient Approach to Automatic Construction of 3D Watertight Geometry of Buildings Using Point Clouds
by Yuanzhi Cai and Lei Fan
Remote Sens. 2021, 13(10), 1947; https://doi.org/10.3390/rs13101947 - 17 May 2021
Cited by 20 | Viewed by 4061
Abstract
Recent years have witnessed an increasing use of 3D models in general, and 3D geometric models of the built environment specifically, for various applications, owing to the advancement of mapping techniques for accurate 3D information. Depending on the application scenario, various types of approaches exist to automate the construction of 3D building geometry. However, in those studies, less attention has been paid to watertight geometries derived from point cloud data, which are useful for building energy management and simulation. To this end, an efficient reconstruction approach is introduced in this study, involving the following key steps. The point cloud data are first voxelised for ray-casting analysis to obtain the 3D indoor space. By projecting it onto a horizontal plane, an image representing the indoor area is obtained and used for room segmentation. The 2D boundary of each room candidate is extracted using new grammar rules and is extruded using the room height to generate 3D models of individual room candidates. Room connection analyses are applied to the individual models to determine the locations of doors and the topological relations between adjacent room candidates, forming an integrated and watertight geometric model. The proposed approach was tested using point cloud data representing six building sites with distinct spatial configurations of rooms, corridors and openings. The experimental results showed that accurate watertight building geometries were successfully created. The average differences between the point cloud data and the geometric models obtained ranged from 12 to 21 mm. The maximum computation time was less than 5 min for a point cloud of approximately 469 million data points, which is more efficient than typical reconstruction methods in the literature. Full article
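A hedged sketch of one intermediate step described in the abstract, projecting occupied voxels onto a horizontal plane to obtain a 2D image of the indoor area for room segmentation; the grid resolution and the synthetic room are illustrative assumptions, not the paper's values.

```python
# Voxelise a point cloud and collapse occupied voxels along z into a 2D image.
import numpy as np

def project_to_floorplan(points, voxel_size=0.05):
    lo = points.min(axis=0)
    idx = np.floor((points - lo) / voxel_size).astype(int)
    nx, ny = idx[:, 0].max() + 1, idx[:, 1].max() + 1
    image = np.zeros((nx, ny), dtype=np.int32)
    occupied = np.unique(idx, axis=0)                  # unique (i, j, k) voxels
    np.add.at(image, (occupied[:, 0], occupied[:, 1]), 1)
    return image  # higher counts = taller occupied column (likely indoor space)

rng = np.random.default_rng(5)
room = rng.uniform([0, 0, 0], [4, 3, 2.5], (50000, 3))  # a simple 4 m x 3 m room
plan = project_to_floorplan(room)
print(plan.shape, "floorplan cells; max column height:", plan.max())
```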

19 pages, 7428 KB  
Article
A Comparative Study about Data Structures Used for Efficient Management of Voxelised Full-Waveform Airborne LiDAR Data during 3D Polygonal Model Creation
by Milto Miltiadou, Neill D. F. Campbell, Darren Cosker and Michael G. Grant
Remote Sens. 2021, 13(4), 559; https://doi.org/10.3390/rs13040559 - 4 Feb 2021
Cited by 5 | Viewed by 5995
Abstract
In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used approach to interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult, so it is important to research which data structures are more appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run-time while voxelising and interpreting full-waveform LiDAR data for 3D polygonal model creation, and they are the following: (1) 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) Octree; (4) Integral Volumes, which allows finding the sum of any cuboid region in constant time; (5) Octree Max/Min, an upgraded octree; and (6) Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. It is shown that Integral Volumes is the most time-efficient data structure, but it requires the most memory. Furthermore, 1D-Array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not need to allocate memory for empty voxels and, as shown in the tests conducted, therefore allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory. Each tested data structure has different benefits and downsides; therefore, each application should be examined individually. Full article
(This article belongs to the Special Issue Lidar Remote Sensing in 3D Object Modelling)
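The Integral Volumes structure is essentially a 3D summed-area table: one cumulative-sum pass over the dense grid, after which the sum of intensities inside any axis-aligned cuboid comes from eight corner lookups. A minimal sketch follows, using random voxel intensities rather than LiDAR data; as the abstract notes, the trade-off is that the full dense volume, empty voxels included, must be kept in memory.

```python
import numpy as np

def integral_volume(vol):
    """Prefix sums with a zero-padded border so corner lookups stay simple."""
    s = np.zeros(tuple(d + 1 for d in vol.shape), dtype=np.int64)
    s[1:, 1:, 1:] = vol.cumsum(0).cumsum(1).cumsum(2)
    return s

def cuboid_sum(s, lo, hi):
    """Sum of vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] via inclusion-exclusion."""
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (s[x1, y1, z1] - s[x0, y1, z1] - s[x1, y0, z1] - s[x1, y1, z0]
            + s[x0, y0, z1] + s[x0, y1, z0] + s[x1, y0, z0] - s[x0, y0, z0])

vol = np.random.default_rng(6).integers(0, 10, (64, 64, 64))
s = integral_volume(vol)
lo, hi = (5, 10, 20), (30, 40, 50)
assert cuboid_sum(s, lo, hi) == vol[5:30, 10:40, 20:50].sum()
print("cuboid sum:", cuboid_sum(s, lo, hi))
```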

22 pages, 3451 KB  
Article
Detecting Dead Standing Eucalypt Trees from Voxelised Full-Waveform Lidar Using Multi-Scale 3D-Windows for Tackling Height and Size Variations
by Milto Miltiadou, Athos Agapiou, Susana Gonzalez Aracil and Diofantos G. Hadjimitsis
Forests 2020, 11(2), 161; https://doi.org/10.3390/f11020161 - 31 Jan 2020
Cited by 7 | Viewed by 3874
Abstract
In southern Australia, many native mammals and birds rely on hollows for shelter, and hollows are more likely to exist on dead trees. Therefore, detection of dead trees could be useful in managing biodiversity. Detecting dead standing trees (snags) versus dead fallen trees (Coarse Woody Debris, CWD) is a very different task from a classification perspective. This study focuses on improving the detection of dead standing eucalypt trees from full-waveform LiDAR. Eucalypt trees have irregular shapes, making their delineation challenging. Additionally, since the study area is a native forest, trees vary significantly in height, density and size, so methods resistant to these challenges are needed. A previous study showed that detection of dead standing trees without tree delineation is possible; this was achieved by using single-size 3D-windows to extract structural features from voxelised full-waveform LiDAR and characterise dead (positive samples) and live (negative samples) trees for training a classifier. This paper builds on that work by proposing the use of multi-scale 3D-windows to tackle the height and size variations of trees. Both the single-size 3D-windows approach and the new multi-scale 3D-windows approach were implemented for comparison. The accuracy of the results was calculated using precision and recall, and it was shown that the multi-scale 3D-windows approach performs better than the single-size 3D-windows approach. This opens up possibilities for applying the proposed approach to other native-forest applications. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
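A minimal sketch of the multi-scale 3D-window idea, assuming simple occupancy and mean-intensity features over cubic windows of a few sizes around a candidate voxel; the window sizes and features are illustrative, not those used in the paper.

```python
# Concatenate features from several window sizes so the descriptor tolerates
# trees of different heights and sizes.
import numpy as np

def multiscale_features(volume, centre, window_sizes=(3, 7, 15)):
    feats = []
    for w in window_sizes:
        half = w // 2
        lo = [max(c - half, 0) for c in centre]
        hi = [min(c + half + 1, s) for c, s in zip(centre, volume.shape)]
        block = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        feats += [float((block > 0).mean()), float(block.mean())]
    return np.array(feats)   # one row of a training matrix for the classifier

vox = np.random.default_rng(7).random((80, 80, 40))
vox[vox < 0.9] = 0.0                      # sparse synthetic "intensity" volume
print(multiscale_features(vox, centre=(40, 40, 20)))
```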

20 pages, 7601 KB  
Article
Individual Rubber Tree Segmentation Based on Ground-Based LiDAR Data and Faster R-CNN of Deep Learning
by Jiamin Wang, Xinxin Chen, Lin Cao, Feng An, Bangqian Chen, Lianfeng Xue and Ting Yun
Forests 2019, 10(9), 793; https://doi.org/10.3390/f10090793 - 11 Sep 2019
Cited by 75 | Viewed by 6303
Abstract
Rubber trees in southern China are often impacted by natural disturbances that can result in a tilted tree body. Accurate crown segmentation for individual rubber trees from scanned point clouds is an essential prerequisite for accurate tree parameter retrieval. In this paper, three plots of different rubber tree clones, PR107, CATAS 7-20-59, and CATAS 8-7-9, were taken as the study subjects. Using data collected with ground-based mobile light detection and ranging (LiDAR), a voxelisation method based on the scanned tree trunk data was proposed, and deep images (i.e., images normally used for deep learning) were generated through frontal and lateral projection transforms of the point clouds in each voxel with a length of 8 m and a width of 3 m. These images provided the training and testing samples for a faster region-based convolutional neural network (Faster R-CNN). The Faster R-CNN was trained on 802 deep images with pre-marked trunk locations and used to automatically recognize trunk locations in the 359 deep images of the testing samples. Finally, the point clouds for the lower part of each trunk were extracted through a back-projection transform from the recognized trunk locations in the testing samples and used as seed points for a region-growing algorithm to accomplish individual rubber tree crown segmentation. Compared with the visual inspection results, the recognition rate of our method reached 100% for the deep images of the testing samples when the images contained one or two trunks or the trunk information was slightly occluded by leaves. For the complicated cases, i.e., multiple or overlapping trunks in one deep image or a trunk appearing in two adjacent deep images, the recognition accuracy of our method was greater than 90%. Our work represents a new method that combines a deep learning framework with point cloud processing for individual rubber tree crown segmentation based on ground-based mobile LiDAR data. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
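The frontal and lateral projection step can be sketched as rasterising the points of one voxel onto the x-z and y-z planes to form count images of the kind fed to a detector; the pixel size and synthetic trunk below are illustrative assumptions, and the Faster R-CNN itself is not shown.

```python
# Project the points of one 8 m x 3 m voxel onto two orthogonal planes.
import numpy as np

def project_voxel_points(points, plane="frontal", pixel_size=0.05):
    """Rasterise points to a 2D count image on the chosen projection plane."""
    axes = (0, 2) if plane == "frontal" else (1, 2)   # (x, z) or (y, z)
    uv = points[:, axes]
    lo = uv.min(axis=0)
    ij = np.floor((uv - lo) / pixel_size).astype(int)
    img = np.zeros((ij[:, 0].max() + 1, ij[:, 1].max() + 1), dtype=np.int32)
    np.add.at(img, (ij[:, 0], ij[:, 1]), 1)
    return img

rng = np.random.default_rng(8)
trunk = np.c_[rng.normal(4.0, 0.1, 3000),        # x within an 8 m long voxel
              rng.normal(1.5, 0.1, 3000),        # y within a 3 m wide voxel
              rng.uniform(0.0, 2.0, 3000)]       # z: lower 2 m of the trunk
frontal = project_voxel_points(trunk, "frontal")
lateral = project_voxel_points(trunk, "lateral")
print("frontal image", frontal.shape, "lateral image", lateral.shape)
```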