Search Results (655)

Search Parameters:
Keywords = interoperability standard

28 pages, 341 KiB  
Review
Revolutionizing Data Exchange Through Intelligent Automation: Insights and Trends
by Yeison Nolberto Cardona-Álvarez, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(5), 194; https://doi.org/10.3390/computers14050194 - 17 May 2025
Abstract
This review paper presents a comprehensive analysis of the evolving landscape of data exchange, with a particular focus on the transformative role of emerging technologies such as blockchain, field-programmable gate arrays (FPGAs), and artificial intelligence (AI). We explore how the integration of these technologies into data management systems enhances operational efficiency, precision, and security through intelligent automation and advanced machine learning techniques. The paper also critically examines the key challenges facing data exchange today, including issues of interoperability, the demand for real-time processing, and the stringent requirements of regulatory compliance. Furthermore, it underscores the urgent need for robust ethical frameworks to guide the responsible use of AI and to protect data privacy. In addressing these challenges, the paper calls for innovative research aimed at overcoming current limitations in scalability and security. It advocates for interdisciplinary approaches that harmonize technological innovation with legal and ethical considerations. Ultimately, this review highlights the pivotal role of collaboration among researchers, industry stakeholders, and policymakers in fostering a digitally inclusive future—one that strengthens data exchange practices while upholding global standards of fairness, transparency, and accountability. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
20 pages, 2736 KiB  
Article
Clinical Validation and Post-Implementation Performance Monitoring of a Neural Network-Assisted Approach for Detecting Chronic Lymphocytic Leukemia Minimal Residual Disease by Flow Cytometry
by Jansen N. Seheult, Gregory E. Otteson, Matthew J. Weybright, Michael M. Timm, Wenchao Han, Dragan Jevremovic, Pedro Horna, Horatiu Olteanu and Min Shi
Cancers 2025, 17(10), 1688; https://doi.org/10.3390/cancers17101688 - 17 May 2025
Abstract
Background: Flow cytometric detection of minimal residual disease (MRD) in chronic lymphocytic leukemia (CLL) is complex, time-consuming, and subject to inter-operator variability. Deep neural networks (DNNs) offer potential for standardization and efficiency improvement, but require rigorous validation and monitoring for safe clinical implementation. Methods: We evaluated a DNN-assisted human-in-the-loop approach for CLL MRD detection. Initial validation included method comparison against manual analysis (n = 240), precision studies, and analytical sensitivity verification. Post-implementation monitoring comprised four components: daily electronic quality control, input data drift detection, error analysis, and attribute acceptance sampling. Laboratory efficiency was assessed through a timing study of 161 cases analyzed by five technologists. Results: Method comparison demonstrated 97.5% concordance with manual analysis for qualitative classification (sensitivity 100%, specificity 95%) and excellent correlation for quantitative assessment (r = 0.99, Deming slope = 0.99). Precision studies confirmed high repeatability and within-laboratory precision across multiple operators. Analytical sensitivity was verified at 0.002% MRD. Post-implementation monitoring identified 2.97% of cases (26/874) with input data drift, primarily high-burden CLL and non-CLL neoplasms. Error analysis showed the DNN alone achieved 97% sensitivity compared to human-in-the-loop-reviewed results, with 13 missed cases (1.5%) showing atypical immunophenotypes. Attribute acceptance sampling confirmed 98.8% of reported negative cases were true negatives. The DNN-assisted workflow reduced average analysis time by 60.3% compared to manual analysis (4.2 ± 2.3 vs. 10.5 ± 5.8 min). Conclusions: The implementation of a DNN-assisted approach for CLL MRD detection in a clinical laboratory provides diagnostic performance equivalent to expert manual analysis while substantially reducing analysis time. 
Comprehensive performance monitoring ensures ongoing safety and effectiveness in routine clinical practice. This approach provides a model for responsible AI integration in clinical laboratories, balancing automation benefits with expert oversight. Full article

23 pages, 1686 KiB  
Systematic Review
Methods for Assessing the Ecosystem Service of Honey Provisioning by the European Honey Bee (Apis mellifera L.): A Systematic Review
by Ildikó Arany and Bálint Czúcz
Sustainability 2025, 17(10), 4533; https://doi.org/10.3390/su17104533 - 15 May 2025
Abstract
Honey bees (Apis mellifera L.) provide several valuable ecosystem services, including honey provisioning. While pollination by honey bees is well-studied, the scientific assessment of honey-provisioning capacity (HPC) has received less attention. In this study, we performed a qualitative systematic review (critical interpretive synthesis) to identify the main types of models that can be used to map and assess honey provision as an ecosystem service, together with the background and implications of the use of these methodological approaches in the scientific literature (WOS, Scopus, search date: 5 July 2022, resulting in an initial pool of 281 studies). From the initial list, we retained only those studies that presented concrete case studies modelling the capacity of specific sites, landscapes or regions for provisioning honey by A. mellifera (17 studies). We identified three main model types in the reviewed studies: (A) simple rule-based models (“matrix” models), (B) extended rule-based models (incorporating bee foraging-range simulations), and (C) predictive statistical models. The vast majority of studies used rule-based approaches, with varying levels of complexity in their input data and output metrics. Key decision points in the modeling process, including the treatment of seasonality, spatial variability in floral resources, and bee foraging behavior, were identified. We also identified possible sources of methodological uncertainties and suggested potential approaches by which to improve the accuracy and robustness of HPC assessments. 
Furthermore, our experiences also suggest that critical interpretive synthesis has a wide range of applicability in the study of ecosystem services, with great potential for advancing the interoperability of assessment methodologies. Full article
(This article belongs to the Section Environmental Sustainability and Applications)

23 pages, 2597 KiB  
Review
Life Cycle Assessment in the Early Design Phase of Buildings: Strategies, Tools, and Future Directions
by Deepak Kumar, Kranti Kumar Maurya, Shailendra K. Mandal, Basit A. Mir, Anissa Nurdiawati and Sami G. Al-Ghamdi
Buildings 2025, 15(10), 1612; https://doi.org/10.3390/buildings15101612 - 10 May 2025
Abstract
The construction industry plays a significant role in global warming, accounting for 42% of primary energy use and 39% of greenhouse gas (GHG) emissions worldwide. Life Cycle Assessment (LCA) has emerged as a key methodology for evaluating environmental impacts throughout a building’s life cycle, yet its integration in the early design phase remains limited. This review aims to examine strategies and tools for incorporating LCA in the early design phase to enhance sustainability in building construction. The objectives of this study are: (1) to identify the main challenges in integrating LCA into early design workflows, (2) to analyze and compare LCA tools suitable for early-stage assessments, and (3) to explore emerging trends and technological advancements. A systematic literature review was employed using the Scopus database to analyze existing literature, identifying current practices, challenges, and technological advancements in early-stage LCA implementation. A total of 56 studies were identified for the review. The results highlight the growing adoption of Building Information Modeling (BIM), Artificial Intelligence (AI), and parametric modeling in streamlining LCA integration. Despite these advancements, barriers such as data scarcity, lack of standardization, and interoperability issues persist. Key findings suggest that simplified and computational LCA tools can improve accessibility and real-time decision-making during early-stage design. The study concludes that enhancing data availability, refining methodologies, and fostering collaboration between architects, engineers, and policymakers are crucial for mainstreaming LCA in sustainable building design. This review provides actionable insights to bridge the gap between sustainability goals and early-stage design decisions and frameworks, ultimately supporting a more environmentally responsible construction industry. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

30 pages, 3732 KiB  
Systematic Review
A Bibliometric and Systematic Review of Carbon Footprint Tracking in Cross-Sector Industries: Emerging Tools and Technologies
by Nishan Adhikari, Hailin Li and Bhaskaran Gopalakrishnan
Sustainability 2025, 17(9), 4205; https://doi.org/10.3390/su17094205 - 7 May 2025
Abstract
The Paris Agreement’s pressing global mandate to limit global warming to 1.5 degrees Celsius above pre-industrial levels by 2030 has placed immense pressure on energy-consuming industries and businesses to deploy robust, advanced, and accurate monitoring and tracking of carbon footprints. This critical issue is examined through a systematic review of English-language studies (2015–2024) retrieved from three leading databases: Scopus (n = 1528), Web of Science (n = 1152), and GreenFILE (n = 271). The selected literature collectively highlights key carbon footprint tracking methods. The resulting dataset is subjected to bibliometric and scientometric analysis after refinement through deduplication and screening, based on the PRISMA framework. Methodologically, the analysis integrated the following: (1) evaluating long-term trends via the Mann–Kendall and Hurst exponent tests; (2) exploring keywords and country-based contributions using VOSviewer (v1.6.20); (3) applying Bradford’s law of scattering and Leimkuhler’s model; and (4) investigating authorship patterns and networks through Biblioshiny (v4.3.0). Further, based on eligibility criteria, 35 papers were comprehensively reviewed to investigate the emerging carbon footprint tracking technologies such as life cycle assessment (LCA), machine learning (ML), artificial intelligence (AI), blockchain, and data analytics. This study identified three main challenges: (a) lack of industry-wide standards and approaches; (b) real-time tracking of dynamic emissions using LCA; and (c) need for robust frameworks for interoperability of these technologies. 
Overall, our systematic review identifies the current state and trends of technologies and tools used in carbon emissions tracking in cross-sectors such as industries, buildings, construction, and transportation and provides valuable insights for industry practitioners, researchers, and policymakers to develop uniform, integrated, scalable, and compliant carbon tracking systems and support the global shift to a low-carbon and sustainable economy. Full article
(This article belongs to the Section Energy Sustainability)

13 pages, 3512 KiB  
Article
Measuring Lower-Limb Kinematics in Walking: Wearable Sensors Achieve Comparable Reliability to Motion Capture Systems and Smartphone Cameras
by Peiyu Ma, Qingyao Bian, Jin Min Kim, Khalid Alsayed and Ziyun Ding
Sensors 2025, 25(9), 2899; https://doi.org/10.3390/s25092899 - 4 May 2025
Abstract
Marker-based, IMU-based (6-axis IMU), and smartphone-based (OpenCap) motion capture methods are commonly used for motion analysis. The accuracy and reliability of these methods are crucial for applications in rehabilitation and sports training. This study compares the accuracy and inter-operator reliability of inverse kinematics (IK) solutions obtained from these methods, aiming to assist researchers in selecting the most appropriate system. For most lower limb inverse kinematics during walking motion, the IMU-based method and OpenCap show comparable accuracy to marker-based methods. The IMU-based method demonstrates higher accuracy in knee angle (5.74 ± 0.80 versus 7.36 ± 3.14 deg, with p = 0.020) and ankle angle (7.47 ± 3.91 versus 8.20 ± 3.00 deg, with p = 0.011), while OpenCap shows higher accuracy than IMU in pelvis tilt (5.49 ± 2.22 versus 4.28 ± 1.47 deg, with p = 0.013), hip adduction (6.10 ± 1.35 versus 4.06 ± 0.78 deg, with p = 0.019) and hip rotation (6.09 ± 1.74 versus 4.82 ± 2.30 deg, with p = 0.009). The inter-operator reliability of the marker-based method and the IMU-based method shows no significant differences in most motions except for hip adduction (evaluated by the intraclass correlation coefficient-ICC, 0.910 versus 0.511, with p = 0.016). In conclusion, for measuring lower-limb kinematics, wearable sensors (6-axis IMUs) achieve comparable accuracy and reliability to the gold standard, marker-based motion capture method, with lower equipment requirements and fewer movement constraints during data acquisition. Full article
(This article belongs to the Special Issue Sensors for Biomechanical and Rehabilitation Engineering)

14 pages, 1271 KiB  
Article
Cognitive Electronic Unit for AI-Guided Real-Time Echocardiographic Imaging
by Emanuele De Luca, Emanuele Amato, Vincenzo Valente, Marianna La Rocca, Tommaso Maggipinto, Roberto Bellotti and Francesco Dell’Olio
Appl. Sci. 2025, 15(9), 5001; https://doi.org/10.3390/app15095001 - 30 Apr 2025
Abstract
Echocardiography is a fundamental tool in cardiovascular diagnostics, providing radiation-free real-time assessments of cardiac function. However, its accuracy strongly depends on operator expertise, resulting in inter-operator variability that affects diagnostic consistency. Recent advances in artificial intelligence have enabled new applications for real-time image classification and probe guidance, but these typically rely on large datasets and specialized hardware such as GPU-based or embedded accelerators, limiting their clinical adoption. Here, we address this challenge by developing a cognitive electronic unit that integrates convolutional neural network (CNN) models and an inertial sensor for assisted echocardiography. We show that our system—powered by an NVIDIA Jetson Orin Nano—can effectively classify standard cardiac views and differentiate good-quality from poor-quality ultrasound images in real time even when trained on relatively small datasets. Preliminary results indicate that the combined use of CNN-based classification and inertial sensor-based feedback can reduce inter-operator variability and may also enhance diagnostic precision. By lowering barriers to data acquisition and providing real-time guidance, this system has the potential to benefit both novice and experienced sonographers, helping to standardize echocardiographic exams and improve patient outcomes. Further data collection and model refinements are ongoing, paving the way toward a more robust and widely applicable clinical solution. Full article
(This article belongs to the Special Issue Recent Progress and Challenges of Digital Health and Bioengineering)

24 pages, 1798 KiB  
Article
HEalthcare Robotics’ ONtology (HERON): An Upper Ontology for Communication, Collaboration and Safety in Healthcare Robotics
by Penelope Ioannidou, Ioannis Vezakis, Maria Haritou, Rania Petropoulou, Stavros T. Miloulis, Ioannis Kouris, Konstantinos Bromis, George K. Matsopoulos and Dimitrios D. Koutsouris
Healthcare 2025, 13(9), 1031; https://doi.org/10.3390/healthcare13091031 - 30 Apr 2025
Abstract
Background: Healthcare robotics needs context-aware policy-compliant reasoning to achieve safe human–agent collaboration. The current ontologies fail to provide healthcare-relevant information and flexible semantic enforcement systems. Methods: HERON represents a modular upper ontology which enables healthcare robotic systems to communicate and collaborate while ensuring safety during operations. The system enables domain-specific instantiations through SPARQL queries and SHACL-based constraint validation to perform context-driven logic. The system models robotic task interactions through simulated eldercare, diagnostic, and surgical support scenarios that follow ethical and regulatory standards. Results: The validation tests demonstrated HERON’s capacity to enable safe and explainable autonomous operations in changing environments. The semantic constraints enforced proper eligibility for roles and privacy conditions and policy override functionality during agent task execution. The HERON system demonstrated compatibility with healthcare IT systems and adaptability to the GDPR and other policy frameworks. Conclusions: The semantically rich framework of HERON establishes an interoperable foundation for healthcare robotics. The system architecture maintains an open design which enables HL7/FHIR standard integration and robotic middleware compatibility. HERON demonstrates superior healthcare-specific capabilities through its evaluation against SUMO, HL7, and MIMO. Future research will focus on optimizing HERON for low-resource clinical environments while extending its applications to remote care, emergency triage, and adaptive human–robot collaboration. Full article
(This article belongs to the Section TeleHealth and Digital Healthcare)

19 pages, 5766 KiB  
Article
Tree-to-Me: Standards-Driven Traceability for Farm-Level Visibility
by Ya Cho, Arbind Agrahari Baniya and Kieran Murphy
Agronomy 2025, 15(5), 1074; https://doi.org/10.3390/agronomy15051074 - 28 Apr 2025
Abstract
Traditional horticultural information systems lack fine-grained, transparent on-farm event traceability, often providing only high-level post-harvest summaries. These systems also fail to standardise and integrate diverse data sources, ensure data privacy, and scale effectively to meet the demands of modern agriculture. Concurrently, rising requirements for global environmental, social, and governance (ESG) compliance, notably Scope 3 emissions reporting, are driving the need for farm-level visibility. To address these gaps, this study proposes a novel traceability framework tailored to horticulture, leveraging global data standards. The system captures key on-farm events (e.g., irrigation, harvesting, and chemical applications) at varied resolutions, using decentralised identification, secure data-sharing protocols, and farmer-controlled access. Built on a progressive Web application with microservice-enabled cloud infrastructure, the platform integrates dynamic APIs and digital links to connect on-farm operations and external supply chains, resolving farm-level data bottlenecks. Initial testing on Victorian farms demonstrates its scalability potential. Pilot studies further validate its on-farm interoperability and support for sustainability claims through digitally verifiable credentials for an international horticultural export case study. The system also provides a tested baseline for integrating data to and from emerging technologies, such as farm robotics and digital twins, with potential for broader application across agricultural commodities. Full article
(This article belongs to the Section Precision and Digital Agriculture)

22 pages, 332 KiB  
Review
Personalized Medical Approach in Gastrointestinal Surgical Oncology: Current Trends and Future Perspectives
by Dae Hoon Kim
J. Pers. Med. 2025, 15(5), 175; https://doi.org/10.3390/jpm15050175 - 27 Apr 2025
Abstract
Advances in artificial intelligence (AI), multi-omic profiling, and sophisticated imaging technologies have significantly advanced personalized medicine in gastrointestinal surgical oncology. These technological innovations enable precise patient stratification, tailored surgical strategies, and individualized therapeutic approaches, thereby significantly enhancing clinical outcomes. Despite remarkable progress, challenges persist, including the standardization and integration of diverse data types, ethical concerns regarding patient privacy, and rigorous clinical validation of predictive models. Addressing these challenges requires establishing international standards for data interoperability, such as Fast Healthcare Interoperability Resources, and adopting advanced security methods, such as homomorphic encryption, to facilitate secure multi-institutional data sharing. Moreover, ensuring model transparency and explainability through techniques such as explainable AI is critical for fostering trust among clinicians and patients. The successful integration of these advanced technologies necessitates strong multidisciplinary collaboration among surgeons, radiologists, geneticists, pathologists, and oncologists. Ultimately, the continued development and effective implementation of these personalized medical strategies complemented by human expertise promise a transformative shift toward patient-centered care, improving long-term outcomes for patients with gastrointestinal cancer. Full article
(This article belongs to the Special Issue Personalized Medicine in Gastrointestinal Surgical Oncology)
18 pages, 7788 KiB  
Article
Cultural Categorization in Epigraphic Heritage Digitization
by Hamest Tamrazyan and Gayane Hovhannisyan
Heritage 2025, 8(5), 148; https://doi.org/10.3390/heritage8050148 - 24 Apr 2025
Abstract
The digitization of cultural and intellectual heritage is expanding the research scope and methodologies of the scientific discipline of Humanities. Culturally diverse epigraphic systems reveal a range of methodological impediments on the way to their integration into digital epigraphic data preservation systems—EAGLE and FAIR ontologies predominantly based on Greco-Roman cultural categorization. We suggest an interdisciplinary approach—drawing from Heritage Studies, Cultural Epistemology, and Social Semiotics—to ensure the comprehensive encoding, preservation, and accessibility of at-risk cultural artifacts. Heritage Studies emphasize inscriptions as material reflections of historical memory. Cultural Epistemology helps us to understand how different knowledge systems influence data categorization, while semiotic analysis reveals how inscriptions function within their social and symbolic contexts. Together, these methods guide the integration of culturally specific information into broader digital infrastructures. The case of Ukrainian epigraphy illustrates how this approach can be applied to ensure that local traditions are accurately represented and not flattened by standardized international systems. We argue that the same methodology can also support the digitization of other non-Greco-Roman heritage. FAIR Ontology and EAGLE vocabularies prioritize standardization and interoperability, introducing text mining, GIS mapping, and digital visualization to trace patterns across the vast body of texts from different historical periods. Standardizing valuable elements of cultural categorization and reconstructing and integrating lost or underrepresented cultural narratives will expand the capacity of the above systems and will foster greater inclusivity in Humanities research. 
Ukrainian epigraphic classification systems offer a unique, granular approach to inscription studies as a worthwhile contribution to the broader cognitive and epistemological horizons of the Humanities. Through a balanced use of specificity and interoperability principles, this study attempts to contribute to epigraphic metalanguage by challenging the monocentric ontologies, questioning cultural biases in digital categorization, and promoting open access to diverse sources of knowledge production. Full article

30 pages, 5336 KiB  
Article
Railway Cloud Resource Management as a Service
by Ivaylo Atanasov, Dragomira Dimitrova, Evelina Pencheva and Ventsislav Trifonov
Future Internet 2025, 17(5), 192; https://doi.org/10.3390/fi17050192 - 24 Apr 2025
Abstract
Cloud computing has the potential to accelerate the digital journey of railways. Railway systems are large and complex, involving many components, such as trains, tracks, signaling systems, and control systems. The application of cloud computing technologies in the railway industry has the potential to enhance operational efficiency, data management, and overall system performance. Cloud management is essential for complex systems, and the automation of management services can speed up the provisioning, deployment, and maintenance of cloud infrastructure and applications by enabling visibility across the environment. It can provide consistent and unified management over resource allocation, streamline security processes, and automate the monitoring of key performance indicators. Key railway cloud management challenges include the lack of open interfaces and standardization, which are related to the vendor lock-in problem. In this paper, we propose an approach to design railway cloud resource management as a service. Based on typical use cases, the requirements for fault and performance management of the railway cloud resources are identified. The main functionality is designed as RESTful services. The feasibility of the approach is proven by formal verification of the cloud resource management models supported by the cloud management application and services. The proposed approach is open, in contrast to proprietary solutions, and features scalability and interoperability. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

16 pages, 1226 KiB  
Article
Advanced Digital System for International Collaboration on Biosample-Oriented Research: A Multicriteria Query Tool for Real-Time Biosample and Patient Cohort Searches
by Alexandros Fridas, Anna Bourouliti, Loukia Touramanidou, Desislava Ivanova, Kostantinos Votis and Panagiotis Katsaounis
Computers 2025, 14(5), 157; https://doi.org/10.3390/computers14050157 - 23 Apr 2025
Abstract
The advancement of biomedical research depends on efficient data sharing, integration, and annotation to ensure reproducibility, accessibility, and cross-disciplinary collaboration. International collaborative research is crucial for advancing biomedical science and innovation but often faces significant barriers, such as data sharing limitations, inefficient sample management, and scalability challenges. Existing infrastructures for biosample and data repositories face challenges that limit large-scale research efforts. This study presents a novel platform designed to address these issues, enabling researchers to conduct high-quality research more efficiently and at reduced costs. The platform employs a modular, distributed architecture that ensures high availability, redundancy, and interoperability among diverse stakeholders, and integrates advanced features, including secure access management, comprehensive query functionalities, real-time availability reporting, and robust data mining capabilities. In addition, the platform supports dynamic, multi-criteria searches tailored to disease-specific patient profiles and biosample-related data across pre-analytical, post-analytical, and cryo-storage processes. By evaluating the platform’s modular architecture and pilot testing outcomes, this study demonstrates its potential to enhance interdisciplinary collaboration, streamline research workflows, and foster transformative advancements in biomedical research. A key innovation is the real-time dynamic e-consent (DRT e-consent) system, which allows donors to update their consent status in real time, ensuring compliance with ethical and regulatory frameworks such as GDPR and HIPAA. The system also supports multi-modal data integration, including genomic sequences, electronic health records (EHRs), and imaging data, enabling researchers to perform complex queries and generate comprehensive insights. Full article
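The multi-criteria cohort search combined with dynamic e-consent described above can be sketched as a filter that only returns samples whose donors currently grant consent. The field names and consent model below are hypothetical, not the platform's actual schema.

```python
# Hypothetical sketch of a consent-aware multi-criteria biosample query.
# Schema fields (sample_id, donor_id, disease, storage) are illustrative.

def search(samples, consent, **criteria):
    """Return sample IDs matching all criteria whose donor's consent
    is currently granted (dynamic e-consent: donors may revoke)."""
    results = []
    for s in samples:
        if not consent.get(s["donor_id"], False):
            continue  # donor has revoked or never granted consent
        if all(s.get(k) == v for k, v in criteria.items()):
            results.append(s["sample_id"])
    return results


samples = [
    {"sample_id": "S1", "donor_id": "D1", "disease": "T2D", "storage": "cryo"},
    {"sample_id": "S2", "donor_id": "D2", "disease": "T2D", "storage": "cryo"},
    {"sample_id": "S3", "donor_id": "D1", "disease": "CAD", "storage": "ambient"},
]
consent = {"D1": True, "D2": False}  # D2 has revoked consent
```

Because consent is checked at query time rather than baked into the index, a donor's real-time consent change takes effect on the very next search, which is the point of the dynamic model.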
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)

29 pages, 1326 KiB  
Article
A Coordination Layer for Time Synchronization in Level-4 Multi-vECU Simulation
by Hyeongrae Kim, Harim Lee and Jeonghun Cho
Electronics 2025, 14(8), 1690; https://doi.org/10.3390/electronics14081690 - 21 Apr 2025
Abstract
In automotive software development, testing and validation workloads are often concentrated at the end of the development cycle, leading to delays and late-stage issue discovery. To address this, virtual Electronic Control Units (vECUs) have gained attention for enabling earlier-stage verification. In our previous work, we developed a Level-4 vECU using a hardware-level emulator. However, when simulating multiple vECUs with independent clocks across distributed emulators, we observed poor timing reproducibility due to the lack of explicit synchronization. To solve this, we implemented an integration layer compliant with the functional mock-up interface (FMI), a widely used standard for simulation tool interoperability. The layer enables synchronized simulation between a centralized simulation master and independently running vECUs. We also developed a virtual CAN bus model to simulate message arbitration and validate inter-vECU communication behavior. Simulation results show that our framework correctly reproduces CAN arbitration logic and significantly improves timing reproducibility compared to conventional Linux-based interfaces. To improve simulation performance, the FMI master algorithm was parallelized, yielding up to an 85.2% reduction in simulation time with eight vECUs. These contributions offer a practical solution for synchronizing distributed Level-4 vECUs and lay the groundwork for future cloud-native simulation of automotive systems. Full article
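The CAN arbitration logic the framework reproduces can be sketched as follows: when several vECUs transmit in the same slot, the frame with the lowest identifier (most dominant bits) wins the bus, and losers retry in the next slot. This is an illustrative model under that standard CAN rule, not the paper's implementation.

```python
# Sketch of identifier-based CAN arbitration for a virtual bus model.
# Frame payloads and IDs below are illustrative.

def arbitrate(pending):
    """pending: list of (can_id, payload) frames queued in one slot.
    Returns (winner, losers) — lowest identifier wins arbitration."""
    if not pending:
        return None, []
    winner = min(pending, key=lambda frame: frame[0])
    losers = [f for f in pending if f is not winner]
    return winner, losers


def simulate_bus(queued):
    """Deliver all queued frames slot by slot, in arbitration order:
    losing frames are re-queued and retried in the next slot."""
    delivered = []
    pending = list(queued)
    while pending:
        winner, pending = arbitrate(pending)
        delivered.append(winner)
    return delivered
```

In a synchronized multi-vECU setup, running this arbitration inside each coordinated time step is what makes message ordering reproducible across runs, since no frame is delivered out of identifier-priority order within a slot.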

40 pages, 2062 KiB  
Review
State of the Art in Internet of Things Standards and Protocols for Precision Agriculture with an Approach to Semantic Interoperability
by Eduard Roccatello, Antonino Pagano, Nicolò Levorato and Massimo Rumor
Network 2025, 5(2), 14; https://doi.org/10.3390/network5020014 - 21 Apr 2025
Abstract
The integration of Internet of Things (IoT) technology into the agricultural sector enables the collection and analysis of large amounts of data, facilitating greater control over internal processes, resulting in cost reduction and improved quality of the final product. One of the main challenges in designing an IoT system is the need for interoperability among devices: different sensors collect information in non-homogeneous formats, which are often incompatible with each other. The user of the system is therefore forced to use different platforms and software to consult the data, making analysis complex and cumbersome. The solution to this problem lies in the adoption of an IoT standard that harmonizes the data output format. This paper first provides an overview of the standards and protocols used in precision farming and then presents a system architecture designed to collect measurements from sensors and translate them into a standard format. The standard is selected based on an analysis of the state of the art and tailored to meet the specific needs of precision agriculture. With the introduction of a connector device, the system can accommodate any number of different sensors while maintaining the output data in a uniform format. Each type of sensor is associated with a specific connector that intercepts the data intended for the database and translates it into the standard format before forwarding it to the central server. Finally, examples with real sensors are presented to illustrate the operation of the connectors and their role in an interoperable architecture, aiming to combine flexibility and ease of use with low implementation costs. Full article
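The per-sensor connector idea above can be sketched as one translation function per sensor type, each emitting the same uniform observation record. The field names below are hypothetical, loosely inspired by SensorThings-style observations, not the architecture's actual schema.

```python
# Illustrative connector sketch: each sensor type gets a translator
# from its native payload to one common observation format.
# Payload shapes and field names are hypothetical.

def soil_connector(raw):
    """Translate a soil-probe CSV line: 'probe7,2024-05-01T12:00Z,31.5'."""
    sensor, ts, value = raw.split(",")
    return {"sensor": sensor, "time": ts,
            "property": "soil_moisture", "value": float(value), "unit": "%"}


def weather_connector(raw):
    """Translate a weather-station dict with vendor-specific keys."""
    return {"sensor": raw["station"], "time": raw["ts"],
            "property": "air_temperature", "value": raw["tempC"],
            "unit": "degC"}


def ingest(observations, raw, connector):
    """Run the matching connector, then store the uniform record."""
    observations.append(connector(raw))
```

Because every connector emits the same schema, downstream consumers query one uniform store regardless of how many sensor vendors feed it, which is the interoperability payoff the paper targets.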
