Search Results (42)

Search Parameters:
Keywords = three-dimensional science learning

26 pages, 11753 KiB  
Article
A Transformer-Based Approach for Efficient Geometric Feature Extraction from Vector Shape Data
by Longfei Cui, Xinyu Niu, Haizhong Qian, Xiao Wang and Junkui Xu
Appl. Sci. 2025, 15(5), 2383; https://doi.org/10.3390/app15052383 - 23 Feb 2025
Viewed by 774
Abstract
The extraction of shape features from vector elements is essential in cartography and geographic information science, supporting a range of intelligent processing tasks. Traditional methods rely on different machine learning algorithms tailored to specific types of line and polygon elements, limiting their general applicability. This study introduces a novel approach called “Pre-Trained Shape Feature Representations from Transformers (PSRT)”, which utilizes transformer encoders designed with three self-supervised pre-training tasks: coordinate masking prediction, coordinate offset correction, and coordinate sequence rearrangement. This approach enables the extraction of general shape features applicable to both line and polygon elements, generating high-dimensional embedded feature vectors. These vectors facilitate downstream tasks like shape classification, pattern recognition, and cartographic generalization. Our experimental results show that PSRT can extract vector shape features effectively without needing labeled samples and is adaptable to various types of vector features. Compared to the methods without pre-training, PSRT enhances training efficiency by over five times and improves accuracy by 5–10% in tasks such as line element matching and polygon shape classification. This innovative approach offers a more unified, efficient solution for processing vector shape data across different applications. Full article
(This article belongs to the Section Earth Sciences)
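As a concrete illustration of the coordinate-masking pre-training task named in the abstract, the sketch below trains a small transformer encoder to reconstruct masked polyline vertices. It is a minimal, assumption-laden example (layer sizes, masking ratio, and the synthetic polylines are placeholders, and positional encodings are omitted), not the authors' PSRT code.

```python
import torch
import torch.nn as nn

class CoordMaskEncoder(nn.Module):
    """Embeds (x, y) vertices and reconstructs the ones that were masked out."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)                 # per-vertex coordinate embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)                  # predict the hidden coordinates

    def forward(self, coords, mask):
        x = self.embed(coords)
        x = x.masked_fill(mask.unsqueeze(-1), 0.0)         # hide the masked vertices
        h = self.encoder(x)                                # per-vertex shape embeddings
        return self.head(h), h

model = CoordMaskEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    coords = torch.rand(8, 32, 2)                          # 8 synthetic polylines, 32 vertices each
    mask = torch.rand(8, 32) < 0.15                        # mask roughly 15% of the vertices
    pred, _ = model(coords, mask)
    loss = (pred - coords)[mask].pow(2).mean()             # penalize only the masked positions
    opt.zero_grad(); loss.backward(); opt.step()
```

The per-vertex embeddings h play the role of the high-dimensional shape feature vectors that downstream tasks such as shape classification or matching would consume.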

25 pages, 9167 KiB  
Review
Modeling LiDAR-Derived 3D Structural Metric Estimates of Individual Tree Aboveground Biomass in Urban Forests: A Systematic Review of Empirical Studies
by Ruonan Li, Lei Wang, Yalin Zhai, Zishan Huang, Jia Jia, Hanyu Wang, Mengsi Ding, Jiyuan Fang, Yunlong Yao, Zhiwei Ye, Siqi Hao and Yuwen Fan
Forests 2025, 16(3), 390; https://doi.org/10.3390/f16030390 - 22 Feb 2025
Viewed by 830
Abstract
The aboveground biomass (AGB) of individual trees is a critical indicator for assessing urban forest productivity and carbon storage. In the context of global warming, it plays a pivotal role in understanding urban forest carbon sequestration and regulating the global carbon cycle. Recent advances in light detection and ranging (LiDAR) have enabled the detailed characterization of three-dimensional (3D) structures, significantly enhancing the accuracy of individual tree AGB estimation. This review examines studies that use LiDAR-derived 3D structural metrics to model and estimate individual tree AGB, identifying key metrics that influence estimation accuracy. A bibliometric analysis of 795 relevant articles from the Web of Science Core Collection was conducted using R Studio (version 4.4.1) and VOSviewer 1.6.20 software, followed by an in-depth review of 80 papers focused on urban forests, published after 2010 and selected from the first and second quartiles of the Chinese Academy of Sciences journal ranking. The results show the following: (1) Dalponte2016 and watershed are more widely used among 2D raster-based algorithms, and 3D point cloud-based segmentation algorithms offer greater potential for innovation; (2) tree height and crown volume are important 3D structural metrics for individual tree AGB estimation, and biomass indices that integrate these parameters can further improve accuracy and applicability; (3) machine learning algorithms such as Random Forest and deep learning consistently outperform parametric methods, delivering stable AGB estimates; (4) LiDAR data sources, point cloud density, and forest types are important factors that significantly affect the accuracy of individual tree AGB estimation. Future research should emphasize deep learning applications for improving point cloud segmentation and 3D structure extraction accuracy in complex forest environments. Additionally, optimizing multi-sensor data fusion strategies to address data matching and resolution differences will be crucial for developing more accurate and widely applicable AGB estimation models. Full article
(This article belongs to the Section Urban Forestry)

25 pages, 9659 KiB  
Article
Water Quality by Spectral Proper Orthogonal Decomposition and Deep Learning Algorithms
by Shaogeng Zhang, Junqiang Lin, Youkun Li, Boran Zhu, Di Zhang, Qidong Peng and Tiantian Jin
Sustainability 2025, 17(1), 114; https://doi.org/10.3390/su17010114 - 27 Dec 2024
Cited by 1 | Viewed by 896
Abstract
Water quality plays a pivotal role in human health and environmental sustainability. However, traditional water quality prediction models are limited by high model complexity and long computation time, whereas AI models often struggle with high-dimensional time series and lack physical interpretability. This paper proposes a two-dimensional water quality surrogate model that couples physical numerical models and AI. The model employs physical simulation results as input, applies spectral proper orthogonal decomposition to reduce the dimensionality of the simulation results, utilizes a long short-term memory neural network for matrix forecasting, and reconstructs the two-dimensional concentration field. The simulation and predictive performance of the surrogate model were systematically evaluated through four design scenarios and three sampling dataset lengths, with a particular focus on the convection–diffusion zone and high-concentration zone. The results indicated that the model achieves high prediction accuracy for up to 7 h into the future, with sampling dataset lengths ranging from 20 to 80 h. Specifically, the model achieved an average R2 of 0.92, a MAE of 0.38, and a MAPE of 1.77%, demonstrating its suitability for short-term water quality predictions. The methodology and findings of this study demonstrate the significant potential of integrating spectral proper orthogonal decomposition and deep learning for water quality prediction. By overcoming the limitations of traditional models, the proposed surrogate model provides high-accuracy predictions with enhanced physical interpretability, even in complex, dynamic environments. This work offers a practical tool for rapid responses to water pollution incidents and supports improved watershed water quality management by effectively capturing pollutant diffusion dynamics. Furthermore, the model’s scalability and adaptability make it a valuable resource for addressing intelligent management in environmental science. Full article
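The surrogate pipeline described above (reduce the simulated 2D field, forecast the reduced coefficients, reconstruct the field) can be sketched as follows. This toy version uses a plain POD via SVD rather than the spectral POD of the paper, random data in place of physical simulation output, and placeholder sizes throughout.

```python
import numpy as np
import torch
import torch.nn as nn

T, H, W, r = 200, 32, 32, 8                          # time steps, grid size, retained modes
snapshots = np.random.rand(T, H * W)                 # stand-in for simulated 2D concentration fields
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
coeffs = (snapshots - mean) @ Vt[:r].T               # (T, r) modal coefficients

class Forecaster(nn.Module):
    def __init__(self, r, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(r, hidden, batch_first=True)
        self.out = nn.Linear(hidden, r)
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1])                    # next modal-coefficient vector

seq = torch.tensor(coeffs, dtype=torch.float32)
look_back = 10
X = torch.stack([seq[i:i + look_back] for i in range(T - look_back)])
y = seq[look_back:]
model = Forecaster(r)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):                              # one-step-ahead training
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()

next_coeff = model(X[-1:]).detach().numpy()
field = (next_coeff @ Vt[:r] + mean).reshape(H, W)   # reconstructed next 2D concentration field
```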

15 pages, 3259 KiB  
Article
Towards Sustainable Material Design: A Comparative Analysis of Latent Space Representations in AI Models
by Ulises Martin Casado, Facundo Ignacio Altuna and Luis Alejandro Miccio
Sustainability 2024, 16(23), 10681; https://doi.org/10.3390/su162310681 - 5 Dec 2024
Viewed by 1256
Abstract
In this study, we employed machine learning techniques to improve sustainable materials design by examining how various latent space representations affect AI performance in property prediction. We compared three fingerprinting methodologies: (a) neural networks trained on specific properties, (b) encoder–decoder architectures, and (c) traditional Morgan fingerprints. Their encoding quality was quantitatively compared by using these fingerprints as inputs for a simple regression model (Random Forest) to predict glass transition temperatures (Tg), a critical parameter in determining material performance. We found that the task-specific neural networks achieved the highest accuracy, with a mean absolute percentage error (MAPE) of 10% and an R2 of 0.9, significantly outperforming encoder–decoder models (MAPE: 19%, R2: 0.76) and Morgan fingerprints (MAPE: 24%, R2: 0.6). In addition, we used dimensionality reduction techniques, such as principal component analysis (PCA) and t-distributed stochastic neighbour embedding (t-SNE), to gain insights into the models’ abilities to learn molecular features relevant to Tg. By offering a more profound understanding of how chemical structures influence AI-based property predictions, this approach enables the efficient identification of high-performing materials in applications that range from water decontamination to polymer recyclability with minimal experimental effort, promoting a circular economy in materials science. Full article
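The Morgan-fingerprint baseline in this comparison is straightforward to reproduce in outline: hash each molecule to a fixed-length bit vector and fit a Random Forest regressor on it. The sketch below uses placeholder SMILES strings and Tg values, not the study's dataset, and evaluates on the training molecules purely for brevity.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score

# Placeholder molecules and glass transition temperatures (K); not the study's data.
smiles = ["CC(=O)OC1=CC=CC=C1C(=O)O", "c1ccccc1", "CCO", "CC(C)CC(C)(C)C"]
tg = [310.0, 250.0, 150.0, 200.0]

def morgan_fingerprint(smi, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smi)
    return np.array(list(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)))

X = np.array([morgan_fingerprint(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, tg)
pred = model.predict(X)        # in practice, evaluate on held-out molecules
print("MAPE:", mean_absolute_percentage_error(tg, pred), "R2:", r2_score(tg, pred))
```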

16 pages, 4572 KiB  
Article
Latent Space Representation of Human Movement: Assessing the Effects of Fatigue
by Thomas Rousseau, Gentiane Venture and Vincent Hernandez
Sensors 2024, 24(23), 7775; https://doi.org/10.3390/s24237775 - 4 Dec 2024
Viewed by 1209
Abstract
Fatigue plays a critical role in sports science, significantly affecting recovery, training effectiveness, and overall athletic performance. Understanding and predicting fatigue is essential to optimize training, prevent overtraining, and minimize the risk of injuries. The aim of this study is to leverage Human Activity Recognition (HAR) through deep learning methods for dimensionality reduction. The use of Adversarial AutoEncoders (AAEs) is explored to assess and visualize fatigue in a two-dimensional latent space, focusing on both semi-supervised and conditional approaches. By transforming complex time-series data into this latent space, the objective is to evaluate motor changes associated with fatigue within the participants’ motor control by analyzing shifts in the distribution of data points and providing a visual representation of these effects. It is hypothesized that increased fatigue will cause significant changes in point distribution, which will be analyzed using clustering techniques to identify fatigue-related patterns. The data were collected using a Wii Balance Board and three Inertial Measurement Units, which were placed on the hip and both forearms (distal part, close to the wrist) to capture dynamic and kinematic information. The participants followed a fatigue-inducing protocol that involved repeating sets of 10 repetitions of four different exercises (Squat, Right Lunge, Left Lunge, and Plank Jump) until exhaustion. Our findings indicate that the AAE models are effective in reducing data dimensionality, allowing for the visualization of fatigue’s impact within a 2D latent space. The latent space representation provides insights into motor control variations, revealing patterns that can be used to monitor fatigue levels and optimize training or rehabilitation programs. Full article
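A minimal sketch of the adversarial-autoencoder idea used here, with a 2D latent space, is given below. The network sizes, the flattened "sensor window" input, and the training schedule are illustrative assumptions rather than the study's semi-supervised or conditional AAE models.

```python
import torch
import torch.nn as nn

dim_in, dim_z = 60, 2                             # flattened sensor window -> 2D latent space
enc = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_z))
dec = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_in))
disc = nn.Sequential(nn.Linear(dim_z, 32), nn.ReLU(), nn.Linear(32, 1))  # prior vs. encoded z

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    x = torch.rand(32, dim_in)                    # placeholder for windowed IMU/balance-board data
    z = enc(x)
    # Autoencoder update: reconstruct the window and try to fool the discriminator.
    loss_ae = nn.functional.mse_loss(dec(z), x) + bce(disc(z), torch.ones(32, 1))
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()
    # Discriminator update: samples from the 2D prior are "real", encoded points are "fake".
    loss_d = bce(disc(torch.randn(32, dim_z)), torch.ones(32, 1)) \
           + bce(disc(enc(x).detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# After training, enc(x) yields 2D points whose distribution shifts can be inspected for fatigue.
```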

13 pages, 1586 KiB  
Article
Usable STEM: Student Outcomes in Science and Engineering Associated with the Iterative Science and Engineering Instructional Model
by Nancy B. Songer, Julia E. Calabrese, Holly Cordner and Daniel Aina
Educ. Sci. 2024, 14(11), 1255; https://doi.org/10.3390/educsci14111255 - 16 Nov 2024
Cited by 1 | Viewed by 916
Abstract
While our world consistently presents complicated, interdisciplinary problems with STEM foundations, most pre-university curricula do not encourage drawing on multidisciplinary knowledge in the sciences and engineering to create solutions. We developed an instructional approach, Iterative Science and Engineering (ISE), that cycles through scientific investigation and engineering design and culminates in constructing a solution to a local environmental challenge. Next, we created, revised, and evaluated a six-week ISE curricular program, Invasive Insects, culminating in 6th–9th-grade students building traps to mitigate local invasive insect populations. Over three Design-Based Research (DBR) cycles, we gathered and analyzed identical pre- and post-test data from 554 adolescents to address the research question: what three-dimensional (3D) science and engineering knowledge do adolescents demonstrate over three DBR cycles associated with a curricular program following the Iterative Science and Engineering instructional approach? Results document students’ statistically significant improvements, with differential outcomes across cycles. For example, most students demonstrated significant learning of 3D science and engineering argument construction in all cycles; still, students only significantly improved engineering design when they performed guided reflection on their designs and physically built a second trap. Our results suggest that the development, refinement, and empirical evaluation of an ISE curricular program led to students’ design, building, evaluation, and sharing of their learning of mitigating local invasive insect populations. To address complex, interdisciplinary challenges, we must provide opportunities for fluid and iterative STEM learning through scientific investigation and engineering design cycles. Full article
(This article belongs to the Special Issue Advancing Science Learning through Design-Based Learning)

22 pages, 1961 KiB  
Review
The Impact of Climate Change and Urbanization on Compound Flood Risks in Coastal Areas: A Comprehensive Review of Methods
by Xuejing Ruan, Hai Sun, Wenchi Shou and Jun Wang
Appl. Sci. 2024, 14(21), 10019; https://doi.org/10.3390/app142110019 - 2 Nov 2024
Cited by 4 | Viewed by 4843
Abstract
Many cities worldwide are increasingly threatened by compound floods resulting from the interaction of multiple flood drivers. Simultaneously, rapid urbanization in coastal areas, which increases the proportion of impervious surfaces, has made the mechanisms and simulation methods of compound flood disasters more complex. This study employs a comprehensive literature review to analyze 64 articles on compound flood risk under climate change from the Web of Science Core Collection from 2014 to 2024. The review identifies methods for quantifying the impact of climate change factors such as sea level rise, storm surges, and extreme rainfall, as well as urbanization factors like land subsidence, impervious surfaces, and drainage systems on compound floods. Four commonly used quantitative methods for studying compound floods are discussed: statistical models, numerical models, machine learning models, and coupled models. Due to the complex structure and high computational demand of three-dimensional joint probability statistical models, along with the increasing number of flood drivers complicating the grid interfaces and frameworks for coupling different numerical models, most current research focuses on the superposition of two disaster-causing factors. The joint impact of three or more climate change-driving factors on compound flood disasters is emerging as a significant future research trend. Furthermore, urbanization factors are often overlooked in compound flood studies and should be considered when establishing models. Future research should focus on exploring coupled numerical models, statistical models, and machine learning models to better simulate, predict, and understand the mechanisms, evolution processes, and disaster ranges of compound floods under climate change. Full article

22 pages, 7126 KiB  
Article
Exploring Downscaling in High-Dimensional Lorenz Models Using the Transformer Decoder
by Bo-Wen Shen
Mach. Learn. Knowl. Extr. 2024, 6(4), 2161-2182; https://doi.org/10.3390/make6040107 - 25 Sep 2024
Viewed by 2170
Abstract
This paper investigates the feasibility of downscaling within high-dimensional Lorenz models through the use of machine learning (ML) techniques. This study integrates atmospheric sciences, nonlinear dynamics, and machine learning, focusing on using large-scale atmospheric data to predict small-scale phenomena through ML-based empirical models. The high-dimensional generalized Lorenz model (GLM) was utilized to generate chaotic data across multiple scales, which was subsequently used to train three types of machine learning models: a linear regression model, a feedforward neural network (FFNN)-based model, and a transformer-based model. The linear regression model uses large-scale variables to predict small-scale variables, serving as a foundational approach. The FFNN and transformer-based models add complexity, incorporating multiple hidden layers and self-attention mechanisms, respectively, to enhance prediction accuracy. All three models demonstrated robust performance, with correlation coefficients between the predicted and actual small-scale variables exceeding 0.9. Notably, the transformer-based model, which yielded better results than the others, exhibited strong performance in both control and parallel runs, where sensitive dependence on initial conditions (SDIC) occurs during the validation period. This study highlights several key findings and areas for future research: (1) a set of large-scale variables, analogous to multivariate analysis, which retain memory of their connections to smaller scales, can be effectively leveraged by trained empirical models to estimate irregular, chaotic small-scale variables; (2) modern machine learning techniques, such as FFNN and transformer models, are effective in capturing these downscaling processes; and (3) future research could explore both downscaling and upscaling processes within a triple-scale system (e.g., large-scale tropical waves, medium-scale hurricanes, and small-scale convection processes) to enhance the prediction of multiscale weather and climate systems. Full article
(This article belongs to the Topic Big Data Intelligence: Methodologies and Applications)
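A toy stand-in for the downscaling setup described above is sketched below: a linear model is trained to predict one chaotic variable from others. It uses the classic three-variable Lorenz-63 system as a simple proxy for the paper's high-dimensional generalized Lorenz model, so the choice of "large-scale" and "small-scale" variables is an illustrative assumption only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import LinearRegression

def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz63, (0.0, 100.0), [1.0, 1.0, 1.0], t_eval=np.linspace(0.0, 100.0, 10000))
X = sol.y[:2].T          # treat x and y as the "large-scale" predictors
target = sol.y[2]        # treat z as the "small-scale" variable to be downscaled

split = 8000             # train on the first part of the trajectory, validate on the rest
reg = LinearRegression().fit(X[:split], target[:split])
corr = np.corrcoef(reg.predict(X[split:]), target[split:])[0, 1]
print(f"correlation on held-out data: {corr:.3f}")
```

The FFNN- and transformer-based models in the paper replace the linear regressor while keeping the same large-scale-to-small-scale prediction setup.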

26 pages, 5826 KiB  
Article
An Efficient Task Implementation Modeling Framework with Multi-Stage Feature Selection and AutoML: A Case Study in Forest Fire Risk Prediction
by Ye Su, Longlong Zhao, Hongzhong Li, Xiaoli Li, Jinsong Chen and Yuankai Ge
Remote Sens. 2024, 16(17), 3190; https://doi.org/10.3390/rs16173190 - 29 Aug 2024
Cited by 1 | Viewed by 1377
Abstract
As data science advances, automated machine learning (AutoML) gains attention for lowering barriers, saving time, and enhancing efficiency. However, with increasing data dimensionality, AutoML struggles with large-scale feature sets. Effective feature selection is crucial for efficient AutoML in multi-task applications. This study proposes an efficient modeling framework combining a multi-stage feature selection (MSFS) algorithm and AutoSklearn, a robust and efficient AutoML framework, to address high-dimensional data challenges. The MSFS algorithm includes three stages: mutual information gain (MIG), recursive feature elimination with cross-validation (RFECV), and a voting aggregation mechanism, ensuring comprehensive consideration of feature correlation, importance, and stability. Based on multi-source and time series remote sensing data, this study pioneers the application of AutoSklearn for forest fire risk prediction. Using this case study, we compare MSFS with five other feature selection (FS) algorithms, including three single FS algorithms and two hybrid FS algorithms. Results show that MSFS selects half of the original features (12/24), effectively handling collinearity (eliminating 11 out of 13 collinear feature groups) and increasing AutoSklearn’s success rate by 15%, outperforming two FS algorithms with the same number of features by 7% and 5%. Among the six FS algorithms and non-FS, MSFS demonstrates the highest prediction performance and stability with minimal variance (0.09%) across five evaluation metrics. MSFS efficiently filters redundant features, enhancing AutoSklearn’s operational efficiency and generalization ability in high-dimensional tasks. The MSFS–AutoSklearn framework significantly improves AutoML’s production efficiency and prediction accuracy, facilitating the efficient implementation of various real-world tasks and the wider application of AutoML. Full article
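The three MSFS stages summarized above (mutual information screening, RFECV, and a voting/aggregation step) can be approximated with standard scikit-learn components, as in the hedged sketch below. The synthetic data, the choice of classifier, and the simplified final stage are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV, mutual_info_classif
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the 24 remote-sensing features and a binary fire/no-fire label.
X, y = make_classification(n_samples=500, n_features=24, n_informative=8, random_state=0)

# Stage 1: mutual information gain screening, keep the top half of the features.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-12:]

# Stage 2: recursive feature elimination with cross-validation on the screened set.
rfecv = RFECV(RandomForestClassifier(n_estimators=100, random_state=0),
              cv=StratifiedKFold(5), scoring="accuracy").fit(X[:, keep], y)

# Stage 3 (simplified): in place of the paper's voting aggregation across runs,
# simply retain the features RFECV kept.
selected = keep[rfecv.support_]
print("selected feature indices:", sorted(selected.tolist()))
```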

83 pages, 2747 KiB  
Review
Mathematical Tools for Simulation of 3D Bioprinting Processes on High-Performance Computing Resources: The State of the Art
by Luisa Carracciuolo and Ugo D’Amora
Appl. Sci. 2024, 14(14), 6110; https://doi.org/10.3390/app14146110 - 13 Jul 2024
Cited by 2 | Viewed by 1625
Abstract
Three-dimensional (3D) bioprinting belongs to the wide family of additive manufacturing techniques and employs cell-laden biomaterials. In particular, these materials, named “bioink”, are based on cytocompatible hydrogel compositions. To be printable, a bioink must have certain characteristics before, during, and after the printing process. These characteristics include achievable structural resolution, shape fidelity, and cell survival. In previous centuries, scientists created mathematical models to understand how physical systems function. Only recently, with the quick progress of computational capabilities, have high-fidelity and high-efficiency “computational simulation” tools been developed based on such models and used as a proxy for real-world learning. Computational science, or “in silico” experimentation, is the term for this novel strategy that supplements pure theory and experiment. Moreover, a certain level of complexity characterizes the architecture of contemporary powerful computational resources, known as high-performance computing (HPC) resources, partly due to the great heterogeneity of their structure. Lately, scientists and engineers have begun to develop and use computational models more extensively to better understand the bioprinting process, rather than relying solely on experimental research, because of the large number of possible combinations of geometrical parameters and material properties, as well as the abundance of available bioprinting methods. This requires a new effort in designing and implementing computational tools capable of efficiently and effectively exploiting the potential of new HPC systems available in the Exascale Era. The final goal of this work is to offer an overview of the models, methods, and techniques that can be used for “in silico” experimentation of the physicochemical processes underlying the 3D bioprinting of cell-laden materials, thanks to the use of up-to-date HPC resources. Full article

21 pages, 928 KiB  
Article
Developing and Validating an Instrument for Assessing Learning Sciences Competence of Doctoral Students in Education in China
by Xin Wang, Baohui Zhang and Hongying Gao
Sustainability 2024, 16(13), 5607; https://doi.org/10.3390/su16135607 - 30 Jun 2024
Viewed by 2041
Abstract
Learning sciences competence refers to a necessary professional competence for educators, which is manifested in their deep understanding of learning sciences knowledge, positive attitudes, and scientific thinking and skills in conducting teaching practice and research. It is of paramount importance for doctoral students in education to develop their competence in the field of learning sciences. This will enhance their abilities to teach and conduct research, and guide their educational research and practice toward greater sustainability. In order to address the shortcomings of current assessment instruments, we constructed a theoretical model for assessing learning sciences competence based on the PISA 2025 framework and Piaget’s theory of knowledge. A three-dimensional assessment framework was designed, along with an initial instrument. Furthermore, the “Delphi method based on large language models (LLM)” was employed to conduct two rounds of expert consultations with the objective of testing and refining the instrument. Throughout this process, we developed a set of guidelines for engaging AI experts to improve interactions with LLM, including an invitation letter to AI experts, the main body of the questionnaire, and the general inquiry about AI experts’ perspectives. In analyzing the results of the Delphi method, we used the “threshold method” to identify and refine the questionnaire items that performed sub-optimally. This resulted in the final assessment instrument for evaluating learning sciences competence among doctoral students in education. The assessment instrument encompasses three dimensions: the knowledge of learning sciences, application of learning sciences, and attitude towards learning sciences, with a total of 40 items. These items integrate Likert scales and scenario-based questions. Furthermore, the study examined potential limitations in the item design, question type selection, and method application of the assessment instrument. The design and development of the assessment instrument provide valuable references for the standardized monitoring and sustainability development of the learning sciences competence of doctoral students in education. Full article

19 pages, 10865 KiB  
Article
Organ Segmentation and Phenotypic Trait Extraction of Cotton Seedling Point Clouds Based on a 3D Lightweight Network
by Jiacheng Shen, Tan Wu, Jiaxu Zhao, Zhijing Wu, Yanlin Huang, Pan Gao and Li Zhang
Agronomy 2024, 14(5), 1083; https://doi.org/10.3390/agronomy14051083 - 20 May 2024
Cited by 6 | Viewed by 1646
Abstract
Cotton is an important economic crop; therefore, enhancing cotton yield and cultivating superior varieties are key research priorities. The seedling stage, a critical phase in cotton production, significantly influences the subsequent growth and yield of the crop. Therefore, breeding experts often choose to measure phenotypic parameters during this period to make breeding decisions. Traditional methods of phenotypic parameter measurement require manual processes, which are not only tedious and inefficient but can also damage the plants. To effectively, rapidly, and accurately extract three-dimensional phenotypic parameters of cotton seedlings, precise segmentation of phenotypic organs must first be achieved. This paper proposes a neural network-based segmentation algorithm for cotton seedling organs, which, compared to the average precision of 75.4% in traditional unsupervised learning, achieves an average precision of 96.67%, demonstrating excellent segmentation performance. The segmented leaf and stem point clouds are used for the calculation of phenotypic parameters such as stem length, leaf length, leaf width, and leaf area. Comparisons with actual measurements yield coefficients of determination R2 of 91.97%, 90.97%, 92.72%, and 95.44%, respectively. The results indicate that the algorithm proposed in this paper can achieve precise segmentation of stem and leaf organs, and can efficiently and accurately extract three-dimensional phenotypic structural information of cotton seedlings. In summary, this study not only made significant progress in the precise segmentation of cotton seedling organs and the extraction of three-dimensional phenotypic structural information, but the algorithm also demonstrates strong applicability to different varieties of cotton seedlings. This provides new perspectives and methods for plant researchers and breeding experts, contributing to the advancement of the plant phenotypic computation field and bringing new breakthroughs and opportunities to the field of plant science research. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

24 pages, 2530 KiB  
Article
A Two-Layer Self-Organizing Map with Vector Symbolic Architecture for Spatiotemporal Sequence Learning and Prediction
by Thimal Kempitiya, Damminda Alahakoon, Evgeny Osipov, Sachin Kahawala and Daswin De Silva
Biomimetics 2024, 9(3), 175; https://doi.org/10.3390/biomimetics9030175 - 13 Mar 2024
Cited by 1 | Viewed by 1973
Abstract
We propose a new nature- and neuroscience-inspired algorithm for spatiotemporal learning and prediction based on sequential recall and vector symbolic architecture. A key novelty is the learning of spatial and temporal patterns as decoupled concepts, where the temporal pattern sequences are constructed using the learned spatial patterns as an alphabet of elements. The decoupling, motivated by cognitive neuroscience research, provides the flexibility for fast and adaptive learning with dynamic changes to data and concept drift and, as such, is better suited for real-time learning and prediction. The algorithm further addresses several key computational requirements for predicting the next occurrences based on real-life spatiotemporal data, which have been found to be challenging with current state-of-the-art algorithms. Firstly, spatial and temporal patterns are detected using unsupervised learning from unlabeled data streams in changing environments; secondly, vector symbolic architecture (VSA) is used to manage variable-length sequences; and thirdly, hyperdimensional (HD) computing-based associative memory is used to facilitate the continuous prediction of the next occurrences in sequential patterns. The algorithm has been empirically evaluated using two benchmark and three time-series datasets to demonstrate its advantages compared to the state-of-the-art in spatiotemporal unsupervised sequence learning, where the proposed ST-SOM algorithm achieves a 45% error reduction compared to the HTM algorithm. Full article
(This article belongs to the Special Issue Nature-Inspired Computer Algorithms: 2nd Edition)

23 pages, 23053 KiB  
Article
Deep Learning Investigation of Mercury’s Explosive Volcanism
by Mireia Leon-Dasi, Sebastien Besse and Alain Doressoundiram
Remote Sens. 2023, 15(18), 4560; https://doi.org/10.3390/rs15184560 - 16 Sep 2023
Cited by 2 | Viewed by 1580
Abstract
The remnants of explosive volcanism on Mercury have been observed in the form of vents and pyroclastic deposits, termed faculae, using data from the Mercury Atmospheric and Surface Composition Spectrometer (MASCS) onboard the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft. Although these features present a wide variety of sizes, shapes, and spectral properties, the large number of observations and the lack of high-resolution hyperspectral images complicate their detailed characterisation. We investigate the application of unsupervised deep learning to explore the diversity and constrain the extent of the Hermean pyroclastic deposits. We use a three-dimensional convolutional autoencoder (3DCAE) to extract the spectral and spatial attributes that characterise these features and to create cluster maps, constructing a unique framework to compare different deposits. From the cluster maps we define the boundaries of 55 irregular deposits covering 110 vents and compare the results with previous radius and surface estimates. We find that the network is capable of extracting spatial information, such as the borders of the faculae, and spectral information that together highlight the pyroclastic deposits against the background terrain. Overall, we find the 3DCAE an effective technique to analyse sparse observations in planetary sciences. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
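An illustrative sketch of a three-dimensional convolutional autoencoder of the kind described above is given below; the layer sizes and the random spectral-spatial cubes are placeholder assumptions. In the study, the learned codes are subsequently clustered to produce the cluster maps.

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """Small 3D convolutional autoencoder over spectral-spatial cubes."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, kernel_size=2, stride=2),
        )
    def forward(self, x):
        z = self.enc(x)              # compact code combining spectral and spatial structure
        return self.dec(z), z

cubes = torch.rand(4, 1, 16, 16, 16)  # placeholder (bands x rows x cols) patches
model = CAE3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    recon, codes = model(cubes)
    loss = nn.functional.mse_loss(recon, cubes)   # unsupervised reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
# The flattened `codes` would then be clustered (e.g., with k-means) to build cluster maps.
```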

13 pages, 1411 KiB  
Article
On the Nature and Utility of Crosscutting Concepts
by Jeffrey Carl Nordine and Okhee Lee
Educ. Sci. 2023, 13(7), 640; https://doi.org/10.3390/educsci13070640 - 22 Jun 2023
Cited by 2 | Viewed by 3382
Abstract
The crosscutting concepts (CCCs) are a collection of ideas that span the science and engineering disciplines. While various standards documents have identified similar sets of ideas in the past, calls for their explicit inclusion in science and engineering instruction began in earnest only about a decade ago. When these calls began, the research base on the teaching and learning of the CCCs was limited; in the intervening years, educators have debated whether and how the CCCs are useful for supporting science and engineering learners. In this article, we summarize recent scholarship that has clarified the role of CCCs in supporting science and engineering learning. Then, we highlight two exemplary curricular units (one elementary and one secondary) that showcase CCC-informed instruction. Based upon these research and development efforts, we identify three core messages: (1) CCCs provide learners with a set of complementary lenses on phenomena, (2) CCCs are powerful tools for broadening access to science and engineering, and (3) practitioner innovations play an especially important role in the time-sensitive work of establishing a more robust research base for how CCCs can strengthen science and engineering teaching and learning. Full article
