Search Results (570)

Search Parameters:
Keywords = crowdsourced data

36 pages, 39262 KB  
Article
Exploration of Differences in Housing Price Determinants Based on Street View Imagery and the Geographical-XGBoost Model: Improving Quality of Life for Residents and Through-Travelers
by Shengbei Zhou, Qian Ji, Longhao Zhang, Jun Wu, Pengbo Li and Yuqiao Zhang
ISPRS Int. J. Geo-Inf. 2025, 14(10), 391; https://doi.org/10.3390/ijgi14100391 - 9 Oct 2025
Viewed by 259
Abstract
Street design quality and socio-economic factors jointly influence housing prices, but their intertwined effects and spatial variations remain under-quantified. Housing prices not only reflect residents’ neighborhood experiences but also stem from the spillover value of public streets perceived and used by different users. This study takes Tianjin as a case and views the street environment as an immediate experience proxy for through-travelers, combining street view images and crowdsourced perception data to extract both subjective and objective indicators of the street environment, and integrating neighborhood and location characteristics. We use Geographical-XGBoost to evaluate the relative contributions of multiple factors to housing prices and their spatial variations. The results show that incorporating both subjective and objective street information into the Hedonic Pricing Model (HPM) improves its explanatory power, while local modeling with G-XGBoost further reveals significant heterogeneity in the strength and direction of effects across different locations. Street greening, educational resources, and transportation accessibility are consistently associated with higher housing prices, but their strength varies by location. Core urban areas exhibit a “counterproductive effect” in terms of complexity and recognizability, while peripheral areas show a “barely acceptable effect,” which may increase cognitive load and uncertainty for through-travelers. In summary, street environments and socio-economic conditions jointly influence housing prices via a “corridor-side–community-side” dual-pathway: the former (enclosure, safety, recognizability) corresponds to immediate improvements for through-travelers, while the latter (education and public services) corresponds to long-term improvements for residents. Therefore, core urban areas should control design complexity and optimize human-scale safety cues, while peripheral areas should focus on enhancing public services and transportation, and meeting basic quality thresholds with green spaces and open areas. Urban renewal within a 15 min walking radius of residential areas is expected to collaboratively improve daily travel experiences and neighborhood quality for both residents and through-travelers, supporting differentiated housing policy development and enhancing overall quality of life. Full article
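The abstract names Geographical-XGBoost (G-XGBoost) but gives no implementation details. As a rough, hypothetical sketch of the local-modeling idea it describes, the snippet below fits a separate XGBoost regressor per target location with Gaussian distance-decay sample weights; the bandwidth, feature names, and synthetic data are illustrative assumptions, not the authors' code.

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))   # x/y location of each housing listing
X = rng.normal(size=(n, 3))                # e.g. greening, school access, transit access (assumed features)
y = 2 * X[:, 0] + coords[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=n)  # spatially varying effect

def local_model(target_xy, bandwidth=2.0):
    """Fit an XGBoost regressor whose training samples are weighted by distance to target_xy."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)       # Gaussian distance-decay weights
    model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X, y, sample_weight=w)
    return model

m = local_model(np.array([2.0, 8.0]))
print("local feature importances:", m.feature_importances_)

A global hedonic model would instead fit one regressor on all samples with equal weights; comparing the two is what exposes the spatial heterogeneity reported above.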

20 pages, 2710 KB  
Article
Evaluation of Urban Transport Quality Management Based on Crowdsourcing Data for the Implementation of Municipal Energy and Resource Conservation Policies
by Justyna Lemke, Tomasz Dudek, Artur Kujawski and Tygran Dzhuguryan
Energies 2025, 18(19), 5260; https://doi.org/10.3390/en18195260 - 3 Oct 2025
Viewed by 303
Abstract
One of the key challenges for city authorities is to ensure an adequate quality of life for residents while promoting sustainable urban development. Achieving this balance is closely related to transport management, which strongly affects urban quality of life, energy consumption, and resource savings. The aim of this article is to propose a new approach to assessing urban transport management quality, with a view to implementing urban energy and resource-saving policies. The assessment procedure is based on the Six Sigma methodology and is illustrated using the example of the city of Szczecin for three selected routes. Travel data were obtained from actual vehicle traffic using crowdsourcing methods. The capacity processes were assessed based on the potential capacity index and the actual capacity index, which characterise how urban traffic deviates from the most energy- and resource-efficient conditions. Customer specification limits were set based on surveys assessing residents’ expectations regarding car travel times on the analysed routes. The results show that the proposed methodology can be successfully used to assess urban transport management and to identify areas in need of improvement for sustainable transport planning. Full article
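The potential and actual capacity indices are not defined in the listing; in standard Six Sigma terms, process capability against customer specification limits is conventionally expressed as Cp and Cpk. The following minimal sketch, with illustrative travel times and assumed limits, shows that calculation.

import numpy as np

travel_times = np.array([14.2, 15.1, 13.8, 16.4, 15.9, 14.7, 17.3, 15.5])  # minutes on one route (illustrative)
LSL, USL = 10.0, 18.0   # customer specification limits from resident surveys (assumed values)

mu, sigma = travel_times.mean(), travel_times.std(ddof=1)
cp = (USL - LSL) / (6 * sigma)                 # potential capability: process spread only
cpk = min(USL - mu, mu - LSL) / (3 * sigma)    # actual capability: spread and centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")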

21 pages, 527 KB  
Article
Block-CITE: A Blockchain-Based Crowdsourcing Interactive Trust Evaluation
by Jiaxing Li, Lin Jiang, Haoxian Liang, Tao Peng, Shaowei Wang and Huanchun Wei
AI 2025, 6(10), 245; https://doi.org/10.3390/ai6100245 - 1 Oct 2025
Viewed by 288
Abstract
Industrial trademark examination enables users to apply for and manage their trademarks efficiently, promoting industrial and commercial economic development. However, there still exist many challenges, e.g., how to customize a blockchain-based crowdsourcing method for interactive trust evaluation, how to decentralize the functionalities of a centralized entity to nodes in a blockchain network instead of removing the entity directly, how to design a protocol for the method and prove its security, etc. In order to overcome these challenges, in this paper, we propose the Blockchain-based Crowdsourcing Interactive Trust Evaluation (Block-CITE for short) method to improve the efficiency and security of the current industrial trademark management schemes. Specifically, Block-CITE adopts a dual-blockchain structure and a crowdsourcing technique to record operations and store relevant data in a decentralized way. Furthermore, Block-CITE customizes a protocol for blockchain-based crowdsourced industrial trademark examination and algorithms of smart contracts to run the protocol automatically. In addition, Block-CITE analyzes the threat model and proves the security of the protocol. Security analysis shows that Block-CITE is able to defend against the malicious entities and attacks in the blockchain network. Experimental analysis shows that Block-CITE has a higher transaction throughput and lower network latency and storage overhead than the baseline methods. Full article

15 pages, 2961 KB  
Article
Evaluating GeoAI-Generated Data for Maintaining VGI Maps
by Lasith Niroshan and James D. Carswell
Land 2025, 14(10), 1978; https://doi.org/10.3390/land14101978 - 1 Oct 2025
Viewed by 314
Abstract
Geospatial Artificial Intelligence (GeoAI) offers a scalable solution for automating the generation and updating of volunteered geographic information (VGI) maps—addressing the limitations of manual contributions to crowd-source mapping platforms such as OpenStreetMap (OSM). This study evaluates the accuracy of GeoAI-generated buildings specifically, using two Generative Adversarial Network (GAN) models. These are OSM-GAN—trained on OSM vector data and Google Earth imagery—and OSi-GAN—trained on authoritative “ground truth” Ordnance Survey Ireland (OSi) vector data and aerial orthophotos. Altogether, we assess map feature completeness, shape accuracy, and positional accuracy and conduct qualitative visual evaluations using live OSM database features and OSi map data as a benchmark. The results show that OSi-GAN achieves higher completeness (88.2%), while OSM-GAN provides more consistent shape fidelity (mean HD: 3.29 m; σ = 2.46 m) and positional accuracy (mean centroid distance: 1.02 m) compared to both OSi-GAN and the current OSM map. The OSM dataset exhibits moderate average deviation (mean HD 5.33 m) but high variability, revealing inconsistencies in crowd-source mapping. These empirical results demonstrate the potential of GeoAI to augment manual VGI mapping workflows to support timely downstream applications in urban planning, disaster response, and many other location-based services (LBSs). The findings also emphasize the need for robust Quality Assurance (QA) frameworks to address “AI slop” and ensure the reliability and consistency of GeoAI-generated data. Full article
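The shape and positional checks mentioned above (Hausdorff distance, centroid distance) can be reproduced with standard geometry libraries; a minimal sketch using shapely and two illustrative footprints (not the study's data) follows. The IoU line is an additional common overlap measure, not a figure quoted in the abstract.

from shapely.geometry import Polygon

reference = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])                   # benchmark footprint (e.g. OSi/OSM)
predicted = Polygon([(0.5, 0.3), (10.4, 0.1), (10.2, 8.5), (0.2, 8.2)])   # GeoAI-generated footprint

hd = reference.hausdorff_distance(predicted)             # worst-case boundary deviation (shape accuracy)
cd = reference.centroid.distance(predicted.centroid)     # centroid offset (positional accuracy)
iou = reference.intersection(predicted).area / reference.union(predicted).area
print(f"Hausdorff: {hd:.2f} m, centroid offset: {cd:.2f} m, IoU: {iou:.2f}")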

23 pages, 881 KB  
Article
From Digital Services to Sustainable Ones: Novel Industry 5.0 Environments Enhanced by Observability
by Andrea Sabbioni, Antonio Corradi, Stefano Monti and Carlos Roberto De Rolt
Information 2025, 16(9), 821; https://doi.org/10.3390/info16090821 - 22 Sep 2025
Viewed by 410
Abstract
The rapid evolution of Information Technologies is deeply transforming manufacturing, demanding innovative and enhanced production paradigms. Industry 5.0 further advances that transformation by fostering a more resilient, sustainable, and human-centric industrial ecosystem, built on the seamless integration of all value chains. This shift requires the timely collection and intelligent analysis of vast, heterogeneous data sources, including IoT devices, digital services, crowdsourcing platforms, and, not least, human input, which is essential to drive innovation. With sustainability as a key priority, pervasive monitoring not only enables optimization to reduce greenhouse gas emissions but also plays a strategic role across the manufacturing economy. This work introduces the Observability platform for Industry 5.0 (ObsI5), a novel observability framework specifically designed to support Industry 5.0 environments. ObsI5 extends cloud-native observability tools, originally developed for IT service monitoring, into manufacturing infrastructures, enabling the collection, analysis, and control of data across both IT and OT domains. Our solution integrates human contributions as active data sources and leverages standard observability practices to extract actionable insights from all available resources. We validate ObsI5 through a dedicated test bed, demonstrating its ability to meet the strict requirements of Industry 5.0 in terms of timeliness, security, and modularity. Full article
(This article belongs to the Section Information Processes)
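The abstract describes extending cloud-native observability tools to human contributions; a minimal, hypothetical sketch of that idea using the Prometheus Python client (the metric name, label, and polling loop are assumptions, not the ObsI5 implementation) is:

import time
from prometheus_client import Gauge, start_http_server

operator_defect_reports = Gauge(
    "operator_reported_defects",
    "Defects reported by shop-floor operators, by production line",
    ["line"],
)

start_http_server(8000)   # metrics become scrapable at http://localhost:8000/metrics
while True:
    # In a real deployment this value would come from a form, chatbot, or mobile app.
    operator_defect_reports.labels(line="assembly-1").set(3)
    time.sleep(15)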

32 pages, 3201 KB  
Article
Real-Time Urban Congestion Monitoring in Jeddah, Saudi Arabia, Using the Google Maps API: A Data-Driven Framework for Middle Eastern Cities
by Ghada Ragheb Elnaggar, Shireen Al-Hourani and Rimal Abutaha
Sustainability 2025, 17(18), 8194; https://doi.org/10.3390/su17188194 - 11 Sep 2025
Viewed by 1688
Abstract
Rapid urban growth in Middle Eastern cities has intensified congestion-related challenges, yet traffic data-based decision making remains limited. This study leverages crowd-sourced travel time data from the Google Maps API to evaluate temporal and spatial patterns of congestion across multiple strategic routes in Jeddah, Saudi Arabia, a coastal metropolis with a complex road network characterized by narrow, high-traffic corridors and limited public transit. A real-time Congestion Index quantifies traffic flow, incorporating free-flow speed benchmarking, dynamic profiling, and temporal classification to pinpoint congestion hotspots. The analysis identifies consistent peak congestion windows and route-specific delays that are critical for travel behavior modeling. In addition to congestion monitoring, the framework contributes to urban sustainability by supporting reductions in traffic-related emissions, enhancing mobility equity, and improving economic efficiency through data-driven transport management. To our knowledge, this is the first study to systematically use the validated, real-time Google Maps API to quantify route-specific congestion in a Middle Eastern urban context. The approach provides a scalable and replicable framework for evaluating urban mobility in other data-sparse cities, especially in contexts where traditional traffic sensors or GPS tracking are unavailable. The findings support evidence-based transport policy and demonstrate the utility of publicly accessible traffic data for smart city integration, real-time traffic monitoring, and assisting transport authorities in enhancing urban mobility. Full article
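The exact Congestion Index formula is not given in the listing; one common formulation, delay relative to free-flow travel time, is sketched below with illustrative numbers (the Google Maps API call that would supply the durations is omitted).

import numpy as np

free_flow_min = 12.0                                # off-peak benchmark travel time for a route (assumed)
observed_min = np.array([13.1, 18.7, 25.4, 14.0])   # crowdsourced durations at four times of day (illustrative)

congestion_index = (observed_min - free_flow_min) / free_flow_min
for t, ci in zip(["06:00", "08:00", "17:00", "22:00"], congestion_index):
    level = "congested" if ci > 0.5 else "moderate" if ci > 0.2 else "free-flowing"
    print(f"{t}: CI = {ci:.2f} ({level})")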

16 pages, 2233 KB  
Article
Research on Fingerprint Map Construction and Real-Time Update Method Based on Indoor Landmark Points
by Yaning Zhu and Yihua Cheng
Sensors 2025, 25(17), 5473; https://doi.org/10.3390/s25175473 - 3 Sep 2025
Viewed by 591
Abstract
Wi-Fi base stations provide full indoor coverage, and the inertial navigation system (INS) is independent and autonomous, with high short-term positioning accuracy but errors that accumulate over time; an INS/Wi-Fi combination has therefore become the mainstream research direction in indoor positioning technology. The accuracy of Wi-Fi fingerprint maps deteriorates significantly as the environment changes over time, so automatic real-time updating of fingerprint maps is an urgent problem. This article addresses the fact that existing techniques for acquiring fingerprint point locations in real time have severely restricted such updating. For the first time, landmark points are introduced into the fingerprint map, and landmark-point fingerprints are defined to construct a new fingerprint map database structure. A method for automatic recognition of landmark points (turning points) based on inertial technology is proposed, which enables automatic and accurate collection of landmark-point fingerprints and improves the reliability of crowdsourced data. Real-time monitoring of fingerprint signal fluctuations at landmark points, combined with error models, achieves real-time and accurate updates of the fingerprint map. Real-scene experiments show that the proposed solution significantly improves the long-term stability and reliability of fingerprint maps. Full article
(This article belongs to the Section Navigation and Positioning)
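The abstract defines landmark points as turning points recognized from inertial data; a minimal, hypothetical sketch of that recognition step follows (the thresholds, window, and headings are illustrative assumptions, and consecutive detections around one turn would be merged in practice).

import numpy as np

headings_deg = np.array([0, 1, 2, 2, 3, 45, 88, 90, 91, 90, 92, 135, 178, 180])  # smoothed PDR headings

def detect_turning_points(headings, window=3, turn_threshold_deg=30.0):
    """Return sample indices where the heading changes by more than the threshold within the window."""
    turns = []
    for i in range(window, len(headings)):
        delta = abs(headings[i] - headings[i - window])
        delta = min(delta, 360 - delta)              # wrap-around heading difference
        if delta > turn_threshold_deg:
            turns.append(i)
    return turns

print("candidate landmark (turning) points at samples:", detect_turning_points(headings_deg))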

25 pages, 4433 KB  
Article
Mathematical Analysis and Performance Evaluation of CBAM-DenseNet121 for Speech Emotion Recognition Using the CREMA-D Dataset
by Zineddine Sarhani Kahhoul, Nadjiba Terki, Ilyes Benaissa, Khaled Aldwoah, E. I. Hassan, Osman Osman and Djamel Eddine Boukhari
Appl. Sci. 2025, 15(17), 9692; https://doi.org/10.3390/app15179692 - 3 Sep 2025
Viewed by 656
Abstract
Emotion recognition from speech is essential for human–computer interaction (HCI) and affective computing, with applications in virtual assistants, healthcare, and education. Although deep learning has brought significant advances in Automatic Speech Emotion Recognition (ASER), the task remains challenging given variation across speakers, subtle emotional expressions, and environmental noise. Practical deployment in this context depends on a strong, fast, scalable recognition system. This work introduces a new framework combining DenseNet121, fine-tuned for the crowd-sourced emotional multimodal actors dataset (CREMA-D), with the convolutional block attention module (CBAM). While DenseNet121’s effective feature propagation captures rich, hierarchical patterns in the speech data, CBAM sharpens the model’s focus on emotionally significant elements by applying both spatial and channel-wise attention. An advanced preprocessing pipeline, including log-Mel spectrogram transformation and normalization, further enhances the input spectrograms and strengthens resistance to environmental noise. The proposed model demonstrates superior performance. To ensure a robust evaluation under class imbalance, we report an Unweighted Average Recall (UAR) of 71.01% and an F1 score of 71.25%, alongside a test accuracy of 71.26% and a precision of 71.30%. These results establish the model as a promising solution for real-world speech emotion detection, highlighting its strong generalization capabilities, computational efficiency, and focus on emotion-specific features compared to recent work. The improvements also demonstrate practical flexibility, enabling the integration of established image recognition techniques and allowing substantial adaptability across application contexts. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
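Two concrete pieces named above, the log-Mel preprocessing and the Unweighted Average Recall, can be sketched with common libraries; the parameter values and synthetic waveform below are assumptions, not the paper's configuration.

import numpy as np
import librosa
from sklearn.metrics import recall_score, f1_score

# Log-Mel spectrogram with per-utterance normalization (a synthetic signal stands in for a CREMA-D clip).
sr = 16000
y = np.random.default_rng(0).normal(size=sr * 2).astype(np.float32)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)
log_mel = (log_mel - log_mel.mean()) / (log_mel.std() + 1e-8)

# UAR is recall averaged over classes, which keeps the metric robust to class imbalance.
y_true = ["angry", "happy", "sad", "sad", "neutral", "happy"]
y_pred = ["angry", "sad",   "sad", "sad", "neutral", "happy"]
print("UAR:", recall_score(y_true, y_pred, average="macro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))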

31 pages, 14731 KB  
Article
The CUGA Method: A Reliable Framework for Identifying Public Urban Green Spaces in Metropolitan Regions
by Borja Ruiz-Apilánez and Francesco Pilla
Land 2025, 14(9), 1751; https://doi.org/10.3390/land14091751 - 29 Aug 2025
Viewed by 696
Abstract
This study addresses the challenge of reliably identifying Public Urban Green Spaces (PUGS) in metropolitan areas, a key requirement for advancing equitable access to green infrastructure and monitoring progress toward SDG 11.7 and WHO recommendations. In the absence of consistent local datasets, we propose the Candidate Urban Green Area (CUGA) method, which integrates OpenStreetMap and Copernicus Urban Atlas data through a structured, transparent workflow. The method applies spatial and functional filters to isolate green spaces that are publicly accessible, meet minimum size and usability criteria, and are embedded within the urban fabric. We validate CUGA in the Dublin Region using a stratified random sample of 1-ha cells and compare its performance against five alternative datasets. Results show that CUGA achieves the highest classification accuracy, spatial coverage, and statistical robustness across all counties, significantly outperforming administrative, crowdsourced, and satellite-derived sources. The method also delivers greater net spatial impact in terms of green area, catchment coverage, and residential land intercepted. These findings support the use of CUGA as a reliable and transferable tool for urban green space planning, policy evaluation, and sustainability reporting, particularly in data-scarce or fragmented governance contexts. Full article
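The CUGA criteria themselves are not reproduced in the listing; the sketch below only illustrates the general pattern of spatial/functional filtering (an access attribute plus a minimum usable area), with hypothetical thresholds and geometries.

from shapely.geometry import Polygon

candidates = [
    {"name": "pocket park",  "access": "yes",     "geometry": Polygon([(0, 0), (60, 0), (60, 50), (0, 50)])},
    {"name": "private lawn", "access": "private", "geometry": Polygon([(0, 0), (40, 0), (40, 40), (0, 40)])},
    {"name": "verge strip",  "access": "yes",     "geometry": Polygon([(0, 0), (100, 0), (100, 5), (0, 5)])},
]

MIN_AREA_M2 = 2000   # assumed minimum usable size (0.2 ha); coordinates are in metres

pugs = [c for c in candidates
        if c["access"] == "yes" and c["geometry"].area >= MIN_AREA_M2]
print([c["name"] for c in pugs])   # -> ['pocket park']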

19 pages, 692 KB  
Article
Employment-Related Assistive Technology Needs in Autistic Adults: A Mixed-Methods Study
by Kaiqi Zhou, Constance Richard, Yusen Zhai, Dan Li and Hannah Fry
Eur. J. Investig. Health Psychol. Educ. 2025, 15(9), 170; https://doi.org/10.3390/ejihpe15090170 - 26 Aug 2025
Viewed by 965
Abstract
Background: Assistive technology (AT) can support autistic adults in navigating employment-related challenges. However, limited research has explored autistic adults’ actual needs and experiences with AT in the workplace. Existing studies often overlook how well current AT solutions align with the real-world demands autistic adults face across the employment process. To address this gap, this study conducted a needs assessment to explore autistic adults’ perceived AT and AT service needs across employment stages, identify satisfaction and discontinuation patterns, and examine barriers and facilitators to effective use. Methods: A total of 501 autistic adults were recruited through an online crowdsourcing platform, Prolific. Participants completed a needs assessment that included Likert-scale items and open-ended questions. Quantitative data were analyzed using descriptive statistics and weighted needs scoring procedures. Thematic analysis was applied to qualitative responses regarding satisfaction, discontinuation, and general reflections on AT use. Results: Job retention received the highest total weighted needs score, followed closely by skill development and job performance. Participants reported lower perceived needs for AT in the job development and placement domain. Qualitative findings revealed that AT was described as essential for daily functioning and independence, but barriers such as limited access, inadequate training, and social stigma affected use. Participants also emphasized the need for more person-centered and context-specific AT services. Conclusions: AT has the potential to significantly enhance employment outcomes for autistic adults. However, current services often lack personalization and alignment with real-world needs. Findings support the development of more inclusive, tailored, and accessible AT solutions across all employment stages. Full article

30 pages, 1831 KB  
Article
Integrating Cacao Physicochemical-Sensory Profiles via Gaussian Processes Crowd Learning and Localized Annotator Trustworthiness
by Juan Camilo Lugo-Rojas, Maria José Chica-Morales, Sergio Leonardo Florez-González, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Foods 2025, 14(17), 2961; https://doi.org/10.3390/foods14172961 - 25 Aug 2025
Viewed by 501
Abstract
Understanding the intricate relationship between sensory perception and physicochemical properties of cacao-based products is crucial for advancing quality control and driving product innovation. However, effectively integrating these heterogeneous data sources poses a significant challenge, particularly when sensory evaluations are derived from low-quality, subjective, and often inconsistent annotations provided by multiple experts. We propose a comprehensive framework that leverages a correlated chained Gaussian processes model for learning from crowds, termed MAR-CCGP, specifically designed for a customized Casa Luker database that integrates sensory and physicochemical data on cacao-based products. By formulating sensory evaluations as regression tasks, our approach enables the estimation of continuous perceptual scores from physicochemical inputs, while concurrently inferring the latent, input-dependent reliability of each annotator. To address the inherent noise, subjectivity, and non-stationarity in expert-generated sensory data, we introduce a three-stage methodology: (i) construction of an integrated database that unifies physicochemical parameters with corresponding sensory descriptors; (ii) application of a MAR-CCGP model to infer the underlying ground truth from noisy, crowd-sourced, and non-stationary sensory annotations; and (iii) development of a novel localized expert trustworthiness approach, also based on MAR-CCGP, which dynamically adjusts for variations in annotator consistency across the input space. Our approach provides a robust, interpretable, and scalable solution for learning from heterogeneous and noisy sensory data, establishing a principled foundation for advancing data-driven sensory analysis and product optimization in the food science domain. We validate the effectiveness of our method through a series of experiments on both semi-synthetic data and a novel real-world dataset developed in collaboration with Casa Luker, which integrates sensory evaluations with detailed physicochemical profiles of cacao-based products. Compared to state-of-the-art learning-from-crowds baselines, our framework consistently achieves superior predictive performance and more precise annotator reliability estimation, demonstrating its efficacy in multi-annotator regression settings. Of note, our unique combination of a novel database, robust noisy-data regression, and input-dependent trust scoring sets MAR-CCGP apart from existing approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)

64 pages, 20332 KB  
Review
Reviewing a Decade of Structural Health Monitoring in Footbridges: Advances, Challenges, and Future Directions
by JP Liew, Maria Rashidi, Khoa Le, Ali Matin Nazar and Ehsan Sorooshnia
Remote Sens. 2025, 17(16), 2807; https://doi.org/10.3390/rs17162807 - 13 Aug 2025
Cited by 1 | Viewed by 3466
Abstract
Aging infrastructure is a growing concern worldwide, with many bridges exceeding 50 years of service, prompting questions about their structural integrity. Over the past decade, the deterioration of bridges has driven extensive research into Structural Health Monitoring (SHM), a tool for early detection of structural deterioration, with particular emphasis on remote-sensing technologies. This review combines a scientometric analysis and a state-of-the-art review to assess recent advancements in the field. From a dataset of 702 publications (2014–2024), 171 relevant papers were analyzed, covering key SHM aspects including sensing devices, data acquisition, processing, damage detection, and reporting. Results show a 433% increase in publications, with the United States leading in output (28.65%), and Glisic, B., with collaborators forming the largest research cluster (11.7%). Accelerometers are the most commonly used sensors (50.88%), and data processing dominates the research focus (50.29%). Key challenges identified include cost (noted in 17.5% of studies), data corruption, and WSN limitations, particularly energy supply. Trends show a notable growth in AI applications (400%), and increasing interest in low-cost, crowdsource-based SHM using smartphones, MEMS, and cameras. These findings highlight both progress and future opportunities in SHM of footbridges. Full article

20 pages, 4189 KB  
Article
Improving Wildfire Simulations via Geometric Primitive Analysis in Noisy Crowdsourced Data
by Ioannis Karakonstantis and George Xylomenos
Appl. Sci. 2025, 15(16), 8844; https://doi.org/10.3390/app15168844 - 11 Aug 2025
Viewed by 332
Abstract
A key challenge in real-time wildfire simulation is data acquisition from dynamic sources, such as user-submitted data collected via mobile phones. Information obtained from firefighting personnel in the field, or even bystanders, typically outperforms pre-existing information in terms of its spatial and time resolution and can be used to execute more accurate fire simulations; these can be continuously updated as new data are added. However, combining data from users with heterogeneous knowledge backgrounds and biased conceptual barriers introduces additional distortion to what we know about an evolving wildfire. We examine the problem of resolving geometric ambiguity, where users submit duplicate or distorted spatial entries about a modeled wildfire, under real-time constraints. We argue that an optimization algorithm from the Ant Colony Optimization family is a strong candidate to tackle this problem, taking into account the nature of the submitted data and the limitations introduced by mobile phones. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

27 pages, 7810 KB  
Article
Mutation Interval-Based Segment-Level SRDet: Side Road Detection Based on Crowdsourced Trajectory Data
by Ying Luo, Fengwei Jiao, Longgang Xiang, Xin Chen and Meng Wang
ISPRS Int. J. Geo-Inf. 2025, 14(8), 299; https://doi.org/10.3390/ijgi14080299 - 31 Jul 2025
Viewed by 611
Abstract
Accurate side road detection is essential for traffic management, urban planning, and vehicle navigation. However, existing research mainly focuses on road network construction, lane extraction, and intersection identification, while fine-grained side road detection remains underexplored. Therefore, this study proposes a road segment-level side road detection method based on crowdsourced trajectory data: First, considering the geometric and dynamic characteristics of trajectories, SRDet introduces a trajectory lane-change pattern recognition method based on mutation intervals to distinguish the heterogeneity of lane-change behaviors between main and side roads. Secondly, combining geometric features with spatial statistical theory, SRDet constructs multimodal features for trajectories and road segments, and proposes a potential side road segment classification model based on random forests to achieve precise detection of side road segments. Finally, based on mutation intervals and potential side road segments, SRDet utilizes density peak clustering to identify main and side road access points, completing the fitting of side roads. Experiments were conducted using 2021 Beijing trajectory data. The results show that SRDet achieves precision and recall rates of 84.6% and 86.8%, respectively. This demonstrates the superior performance of SRDet in side road detection across different areas, providing support for the precise updating of urban road navigation information. Full article
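The random-forest step for classifying potential side road segments can be illustrated with scikit-learn; the features and synthetic data below are assumptions chosen for demonstration, not the SRDet feature set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(0, 1, n),   # lateral offset of the trajectory cluster from the main-road centreline
    rng.uniform(0, 1, n),   # share of trajectories showing mutation-interval (lane-change) patterns
    rng.uniform(0, 1, n),   # mean speed relative to the main road
])
y = (0.5 * X[:, 0] + 0.4 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=n) > 0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))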

13 pages, 736 KB  
Article
Birding via Facebook—Methodological Considerations When Crowdsourcing Observations of Bird Behavior via Social Media
by Dirk H. R. Spennemann
Birds 2025, 6(3), 39; https://doi.org/10.3390/birds6030039 - 28 Jul 2025
Viewed by 736
Abstract
This paper outlines a methodology to compile geo-referenced observational data of Australian birds acting as pollinators of Strelitzia sp. (Bird of Paradise) flowers and dispersers of their seeds. Given the absence of systematic published records, a crowdsourcing approach was employed, combining data from natural history platforms (e.g., iNaturalist, eBird), image hosting websites (e.g., Flickr) and, in particular, social media. Facebook emerged as the most productive channel, with 61.4% of the 301 usable observations sourced from 43 ornithology-related groups. The strategy included direct solicitation of images and metadata via group posts and follow-up communication. The holistic, snowballing search strategy yielded a unique, behavior-focused dataset suitable for analysis. While the process exposed limitations due to user self-censorship on image quality and completeness, the approach demonstrates the viability of crowdsourced behavioral ecology data and contributes a replicable methodology for similar studies in under-documented ecological contexts. Full article
