Search Results (29)

Search Parameters:
Keywords = automated timed up and go test

22 pages, 2692 KB  
Article
Low-Cost AI-Enabled Optoelectronic Wearable for Gait and Breathing Monitoring: Design, Validation, and Applications
by Samilly Morau, Leandro Macedo, Eliton Morais, Rafael Menegardo, Jan Nedoma, Radek Martinek and Arnaldo Leal-Junior
Biosensors 2025, 15(9), 612; https://doi.org/10.3390/bios15090612 - 16 Sep 2025
Viewed by 388
Abstract
This paper presents the development of an optoelectronic wearable sensor system for portable monitoring of patients' movement and physiological parameters. The sensor system is based on a low-cost inertial measurement unit (IMU) and an optical fiber-integrated chest belt for breathing rate monitoring, with a wireless connection to a gateway connected to the cloud. The sensors also use artificial intelligence algorithms for clustering, classification, and regression of the data. Results show a root mean squared error (RMSE) of 0.6 BPM between the reference data and the proposed breathing rate sensor, whereas RMSEs of 0.037 m/s² and 0.27 °/s are obtained for the acceleration and angular velocity analyses, respectively. For sensor validation under different movement analysis protocols, balance and Timed Up and Go (TUG) tests performed with 12 subjects demonstrate the feasibility of the proposed device for the automation and assessment of biomechanical and physical therapy protocols. The balance tests were performed in two conditions, with a wider and a narrower base, whereas the TUG tests combined cognitive and motor tasks. The results demonstrate the influence of the change of base on the balance analysis, as well as the dual-task effect on the scores during TUG testing, where combining motor and cognitive tasks leads to smaller TUG scores due to the increased complexity of the task. Therefore, the proposed approach results in a low-cost and fully automated sensor system that can be used in different protocols for physical rehabilitation. Full article
(This article belongs to the Special Issue Wearable Biosensors and Health Monitoring)
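
The reported RMSE and the automated TUG timing both follow from straightforward signal processing. Below is a minimal sketch of the idea using synthetic NumPy data and a simple angular-velocity threshold; the threshold rule, sampling rate, and signal names are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def rmse(reference, measured):
    """Root mean squared error between reference readings and sensor readings."""
    reference, measured = np.asarray(reference), np.asarray(measured)
    return np.sqrt(np.mean((reference - measured) ** 2))

def tug_duration(gyro_z, fs=100.0, threshold=10.0):
    """Estimate TUG duration (s) as the span where |angular velocity| (deg/s)
    exceeds a movement threshold. Illustrative only."""
    active = np.flatnonzero(np.abs(gyro_z) > threshold)
    if active.size == 0:
        return 0.0
    return (active[-1] - active[0]) / fs

# Synthetic example: breathing-rate error on the order of 0.6 BPM and a ~9 s TUG trial.
ref_breaths = np.array([14.0, 15.0, 16.0, 15.0])
sensor_breaths = ref_breaths + np.random.normal(0, 0.6, size=ref_breaths.size)
print(f"breathing-rate RMSE: {rmse(ref_breaths, sensor_breaths):.2f} BPM")

t = np.arange(0, 12, 0.01)
gyro = np.where((t > 1.5) & (t < 10.5), 40 * np.sin(2 * np.pi * 0.5 * t), 0.0)
print(f"estimated TUG time: {tug_duration(gyro):.1f} s")
```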

24 pages, 5198 KB  
Article
A Markerless Vision-Based Physical Frailty Assessment System for the Older Adults
by Muhammad Huzaifa, Wajiha Ali, Khawaja Fahad Iqbal, Ishtiaq Ahmad, Yasar Ayaz, Hira Taimur, Yoshihisa Shirayama and Motoyuki Yuasa
AI 2025, 6(9), 224; https://doi.org/10.3390/ai6090224 - 10 Sep 2025
Viewed by 1232
Abstract
The geriatric syndrome known as frailty is characterized by diminished physiological reserves and heightened susceptibility to unfavorable health consequences. As the world’s population ages, it is crucial to detect frailty early and accurately in order to reduce hazards, including falls, hospitalization, and death. In particular, functional tests are frequently used to evaluate physical frailty. However, current evaluation techniques are limited in their scalability and are prone to inconsistency due to their heavy reliance on subjective interpretation and manual observation. In this paper, we present a completely automated, impartial, and comprehensive frailty assessment system that employs computer vision techniques for assessing physical frailty tests. Machine learning models have been specifically designed to analyze each clinical test. In order to extract significant features, our system analyzes the depth and joint coordinate data for important physical performance tests such as the Walking Speed Test, Timed Up and Go (TUG) Test, Functional Reach Test, Seated Forward Bend Test, Standing on One Leg Test, and Grip Strength Test. The proposed system offers consistent measurements, intelligent decision-making, and real-time feedback, in contrast to current systems, which lack real-time analysis and standardization. The experimental outcomes demonstrate strong model accuracy and conformity to clinical benchmarks. By eliminating observer dependency and improving accessibility, the proposed system can be considered a scalable and useful tool for frailty screening in clinical and remote care settings. Full article
(This article belongs to the Special Issue Multimodal Artificial Intelligence in Healthcare)
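
As a rough sketch of how one functional test can be scored from joint-coordinate streams, the snippet below estimates walking speed from a hip-joint trajectory. The joint choice, frame rate, and toy trajectory are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def walking_speed(hip_xyz, fps=30.0):
    """Estimate average walking speed (m/s) from a hip-joint trajectory.

    hip_xyz: (n_frames, 3) array of 3D hip positions in metres,
             e.g. from a depth camera / pose estimator.
    """
    hip_xyz = np.asarray(hip_xyz, dtype=float)
    step_vectors = np.diff(hip_xyz, axis=0)            # per-frame displacement
    path_length = np.linalg.norm(step_vectors, axis=1).sum()
    duration = (len(hip_xyz) - 1) / fps
    return path_length / duration

# Toy trajectory: ~4 m walked in 4 s at 30 fps -> ~1.0 m/s
frames = np.linspace(0, 4, 121)
trajectory = np.stack([frames, np.zeros_like(frames), np.ones_like(frames)], axis=1)
print(f"walking speed: {walking_speed(trajectory):.2f} m/s")
```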

19 pages, 991 KB  
Article
Enhancing Machine Learning-Based DDoS Detection Through Hyperparameter Optimization
by Shao-Rui Chen, Shiang-Jiun Chen and Wen-Bin Hsieh
Electronics 2025, 14(16), 3319; https://doi.org/10.3390/electronics14163319 - 20 Aug 2025
Viewed by 855
Abstract
In recent years, the occurrence and complexity of Distributed Denial of Service (DDoS) attacks have escalated significantly, posing threats to the availability, performance, and security of networked systems. With the rapid progression of Artificial Intelligence (AI) and Machine Learning (ML) technologies, attackers can leverage intelligent tools to automate and amplify DDoS attacks with minimal human intervention. The increasing sophistication of such attacks highlights the pressing need for more robust and precise detection methodologies. This research proposes a method to enhance the effectiveness of ML models in detecting DDoS attacks through hyperparameter tuning. By optimizing model parameters, the proposed approach improves the performance of ML models in identifying DDoS attacks. The CIC-DDoS2019 dataset is utilized in this study as it offers a comprehensive set of real-world DDoS attack scenarios across various protocols and services. The proposed methodology comprises key stages, including data preprocessing, data splitting, and model training, validation, and testing. Three ML models are trained and tuned using an adaptive GridSearchCV (Cross Validation) strategy to identify optimal parameter configurations. The results demonstrate that our method significantly improves performance and efficiency compared with the general GridSearchCV. The SVM model achieves 99.87% testing accuracy and requires approximately 28% less execution time than the general GridSearchCV. The LR model achieves 99.6830% testing accuracy with an execution time of 16.90 s, maintaining the same testing accuracy but reducing the execution time by about 22.8%. The KNN model achieves 99.8395% testing accuracy and 2388.89 s of execution time, also preserving accuracy while decreasing the execution time by approximately 63%. These results indicate that our approach enhances DDoS detection performance and efficiency, offering novel insights into the practical application of hyperparameter tuning for improving ML model performance in real-world scenarios. Full article
(This article belongs to the Special Issue Advancements in AI-Driven Cybersecurity and Securing AI Systems)
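
For context, the baseline against which the paper's adaptive strategy is compared is scikit-learn's standard GridSearchCV. A minimal sketch of plain grid search over an SVM follows; the synthetic data and the parameter grid are illustrative, not the paper's configuration or the CIC-DDoS2019 features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic binary "attack vs. benign" data standing in for preprocessed flow features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": ["scale", 0.01, 0.001],
    "svc__kernel": ["rbf"],
}

# Exhaustive grid search with 5-fold cross-validation.
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print(f"test accuracy: {search.score(X_test, y_test):.4f}")
```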

17 pages, 9327 KB  
Article
Supply-Blockchain Functional Prototype for Optimizing Port Operations Using Hyperledger Fabric
by Bidah Alkhaldi and Alauddin Al-Omary
Blockchains 2024, 2(3), 217-233; https://doi.org/10.3390/blockchains2030011 - 11 Jul 2024
Cited by 1 | Viewed by 2498
Abstract
Supply chain bottlenecks in port operations lead to significant delays and inefficiencies. Blockchain technology emerges as a viable solution, offering tamper-resistant ledgers, secure transactions, and automation capabilities. While considerable research on developing blockchain-based solutions currently exists, there is a lack of studies that specifically focus on optimizing port document management to speed up supply chain operations. In this paper, a supply-blockchain functional prototype for optimizing port operations using Hyperledger Fabric is introduced. In terms of core functionality, the prototype allows the initiation of smart contracts corresponding to functions such as creating and editing port-related documents, minimizing manual intervention and enhancing efficiency to reduce port congestion. Furthermore, it provides live tracking of completed events and transactions, facilitating transparency and streamlined oversight. The permissioned nature of Hyperledger Fabric ensures security and robust access controls, aligning well with sensitive port operations. Hyperledger Firefly and its connector framework were used as the middleware to facilitate blockchain integration and various functions of the prototype, while chaincode developed in the Go language was used to package and deploy smart contracts. The supply-blockchain framework was used as the theoretical framework for prototype development, and agile project management was adopted to ensure timely completion. The results, based on functional and performance testing, demonstrate the prototype's potential in alleviating port documentation bottlenecks and quickly delivering benefits to key stakeholders. Full article
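
The core idea, recording port document events on a tamper-resistant ledger through contract-like create/edit functions, can be illustrated without Fabric itself. The toy below mimics that behaviour with a hash-chained, append-only log in plain Python; it is a sketch of tamper evidence only, not the prototype's Go chaincode or the Fabric/Firefly APIs.

```python
import hashlib
import json
import time

class DocumentLedger:
    """Append-only, hash-chained record of port document events (toy model)."""

    def __init__(self):
        self.entries = []

    def _hash(self, payload: dict, prev_hash: str) -> str:
        body = json.dumps(payload, sort_keys=True) + prev_hash
        return hashlib.sha256(body.encode()).hexdigest()

    def record(self, doc_id: str, action: str, data: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"doc_id": doc_id, "action": action, "data": data, "ts": time.time()}
        entry = {"payload": payload, "prev_hash": prev_hash,
                 "hash": self._hash(payload, prev_hash)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev or e["hash"] != self._hash(e["payload"], prev):
                return False
            prev = e["hash"]
        return True

ledger = DocumentLedger()
ledger.record("BL-1042", "create", {"type": "bill_of_lading", "vessel": "MV Example"})
ledger.record("BL-1042", "edit", {"field": "consignee", "value": "ACME Ltd"})
print("chain valid:", ledger.verify())
```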

20 pages, 28798 KB  
Article
Analysis of Cryptographic Algorithms to Improve Cybersecurity in the Industrial Electrical Sector
by Francisco Alonso, Benjamín Samaniego, Gonzalo Farias and Sebastián Dormido-Canto
Appl. Sci. 2024, 14(7), 2964; https://doi.org/10.3390/app14072964 - 31 Mar 2024
Cited by 5 | Viewed by 2471
Abstract
This article provides a general overview of the communication protocols used in the IEC61850 standard for the automation of electrical substations. Specifically, it examines the GOOSE and R-GOOSE protocols, which are used for exchanging various types of information. The article then presents real cases of cyber attacks on the industrial sector, highlighting the importance of addressing cybersecurity in the IEC61850 standard. The text presents security drawbacks of the communication protocols mentioned earlier and briefly explains two algorithms defined in the IEC61850 standard to address them. However, the authors suggest that having only a couple of algorithms may not be sufficient to ensure digital security in substations. This article presents a study on the cryptographic algorithms ChaCha20 and Poly1305. The purpose of the study is to experimentally verify their adaptation to the strict time requirements that GOOSE must meet for their operation. These algorithms can operate independently or in combination, creating an Authenticated Encryption with Associated Data (AEAD) algorithm. Both algorithms were thoroughly reviewed and tested using GOOSE and R-GOOSE frames generated by the S-GoSV software. The computational time required was also observed. The frames were analysed using the Wireshark software. It was concluded that the algorithms are suitable for the communication requirements of electrical substations and can be used as an alternative to the cryptographic algorithms proposed under the IEC61850 standard. Full article
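
The AEAD combination studied here is available in common cryptographic libraries, which makes the kind of timing check the paper describes easy to reproduce. A minimal sketch with the Python cryptography package follows; the frame bytes and the timing loop are illustrative, not the S-GoSV frames or the paper's benchmark setup.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()          # 256-bit key
aead = ChaCha20Poly1305(key)

goose_frame = os.urandom(300)                  # stand-in for a GOOSE APDU payload
associated_data = b"goose-header"              # authenticated but not encrypted

# Encrypt, then authenticate-and-decrypt, measuring the round trip.
start = time.perf_counter()
nonce = os.urandom(12)                         # 96-bit nonce, unique per message
ciphertext = aead.encrypt(nonce, goose_frame, associated_data)
plaintext = aead.decrypt(nonce, ciphertext, associated_data)
elapsed_ms = (time.perf_counter() - start) * 1000

assert plaintext == goose_frame
print(f"encrypt+decrypt of one frame: {elapsed_ms:.3f} ms")
```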

30 pages, 6249 KB  
Systematic Review
Advances in Machine Learning Techniques Used in Fatigue Life Prediction of Welded Structures
by Sadiq Gbagba, Lorenzo Maccioni and Franco Concli
Appl. Sci. 2024, 14(1), 398; https://doi.org/10.3390/app14010398 - 31 Dec 2023
Cited by 16 | Viewed by 6241
Abstract
In the shipbuilding, construction, automotive, and aerospace industries, welding is still a crucial manufacturing process because it can be utilized to create massive, intricate structures with exact dimensional specifications. These kinds of structures are essential for urbanization, given that they are used in applications such as tanks, ships, and bridges. However, one of the most important types of structural damage in welding continues to be fatigue. Therefore, it is necessary to take this phenomenon into account at the design stage and to assess it while a structure is in use. Although traditional methodologies, including strain-life, linear elastic fracture mechanics, and stress-based procedures, are useful for diagnosing fatigue failures, these techniques are typically geometry-restricted, require a lot of computing time, are not self-improving, and have limited automation capabilities. Meanwhile, since the advent of machine learning, which can swiftly discover failure trends, cut costs and time, and pave the way for automation, many damage problems have shown promise of receiving exceptional solutions. This study seeks to provide a thorough overview of how machine learning algorithms are utilized to forecast the life span of welded structures, and it also discusses their advantages and drawbacks. Specifically, the perspectives examined are material type, application, welding method, input parameters, and output parameters. It is seen that input parameters such as arc voltage, welding speed, stress intensity factor range, crack growth parameters, stress histories, thickness, and nugget size influence output parameters such as residual stress, number of cycles to failure, impact strength, and stress concentration factors, amongst others. Steel (including high-strength steel and stainless steel) accounted for the highest frequency of material usage, while bridges were the most common area of application. Meanwhile, the predominant taxonomy of machine learning was the random/hybrid-based type. Thus, the most appropriate and reliable algorithm for any given problem in this area could ultimately be determined, opening new research and development opportunities for automation, testing, structural integrity, structural health monitoring, and damage-tolerant design of welded structures. Full article
(This article belongs to the Section Mechanical Engineering)
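
Most of the surveyed work maps welding and loading parameters to fatigue outcomes with supervised regressors. A minimal sketch of that input-to-output mapping with scikit-learn on synthetic data follows; the feature set and target are placeholders drawn from the parameter lists above, not any reviewed study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Illustrative inputs: arc voltage [V], welding speed [mm/s], stress range [MPa], thickness [mm].
X = np.column_stack([
    rng.uniform(18, 32, n),
    rng.uniform(2, 10, n),
    rng.uniform(50, 300, n),
    rng.uniform(4, 20, n),
])
# Synthetic target: log10(cycles to failure), made to depend mostly on the stress range.
y = 7.5 - 0.01 * X[:, 2] + 0.02 * X[:, 3] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
```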

32 pages, 5441 KB  
Technical Note
A Tool for Semi-Automated Extraction of Cotton Gin Energy Consumption from Power Data
by Sean P. Donohoe, Femi P. Alege and Joe W. Thomas
AgriEngineering 2023, 5(3), 1498-1529; https://doi.org/10.3390/agriengineering5030093 - 31 Aug 2023
Cited by 2 | Viewed by 1964
Abstract
The gin stand power is measurable using common tools; however, such tools typically do not detect active ginning. Detecting active ginning is important when trying to separate out the energy going to the moving parts of the gin stand (i.e., the baseline energy) versus the active energy doing work to remove the cotton fibers from the seed. Studies have shown that the gin stand is the second largest consumer of electricity in the ginning operation, while electricity accounts for nearly 17% of the average expense per bale. If active energy differences exist between cotton cultivars, there may be room to optimize and lower these expenses. The goal of the current work is to provide a method (and software tool) to analyze typical power logger data, and extract periods of active ginning, along with the energy consumed and ginning times, in a semi-automated way. The new method presented allows multiple periods of active ginning in a single file, and can separate the total energy into the active and baseline components. Other metrics of interest that the software calculates include the ginning time, and average power. Software validation using a simulated test signal showed that a 2%-or-lower error is possible with a noisy signal. Full article
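
The baseline-versus-active split that the tool performs can be approximated with a simple threshold on the power trace. Below is a sketch of that separation on a synthetic logger signal; the median-based threshold and the sample data are illustrative assumptions, not the published method.

```python
import numpy as np

def split_energy(power_w, fs_hz, baseline_w=None):
    """Split total energy (J) into baseline and active components.

    Baseline power is taken as the median of the trace (an assumption);
    anything above it is counted as active ginning energy.
    """
    power_w = np.asarray(power_w, dtype=float)
    if baseline_w is None:
        baseline_w = np.median(power_w)
    dt = 1.0 / fs_hz
    total_j = np.trapz(power_w, dx=dt)
    active_j = np.trapz(np.clip(power_w - baseline_w, 0, None), dx=dt)
    return total_j, active_j, total_j - active_j

# Synthetic trace: 20 kW idle baseline with a 10 kW ginning burst in the middle.
t = np.arange(0, 600, 1.0)                      # 10 min sampled at 1 Hz
power = 20_000 + np.where((t > 200) & (t < 400), 10_000, 0)
total, active, baseline = split_energy(power, fs_hz=1.0)
print(f"total {total/3.6e6:.2f} kWh, active {active/3.6e6:.2f} kWh, baseline {baseline/3.6e6:.2f} kWh")
```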

13 pages, 1006 KB  
Article
Psycho-Cognitive Profile and NGF and BDNF Levels in Tears and Serum: A Pilot Study in Patients with Graves’ Disease
by Alice Bruscolini, Angela Iannitelli, Marco Segatto, Pamela Rosso, Elena Fico, Marzia Buonfiglio, Alessandro Lambiase and Paola Tirassa
Int. J. Mol. Sci. 2023, 24(9), 8074; https://doi.org/10.3390/ijms24098074 - 29 Apr 2023
Cited by 4 | Viewed by 2487
Abstract
An imbalance between the mature and precursor forms of Nerve Growth Factor (NGF) and Brain-Derived Neurotrophic Factor (BDNF) in tears and serum has been suggested as a risk factor for, and an aggravator of symptoms in, ophthalmological and neuropsychiatric disturbances. Cognitive and mood alterations are reported by patients with Graves’ Orbitopathy (GO), indicating that neurotrophin alterations might be involved. To address this question, the expression levels of NGF and BDNF and their precursors in the serum and tears of GO patients were analyzed and correlated with ophthalmological and psycho-cognitive symptoms. The Hamilton Rating Scales for Anxiety (HAM-A) and Depression (HAM-D), the Temperament and Character Inventory (TCI), and the Cambridge Neuropsychological Test Automated Battery (CANTAB) were used for scoring. NGF and BDNF levels were measured using ELISA and Western blot and statistically analyzed for associations with psychiatric/ocular variable trends. GO patients show increased memorization time and level of distraction, together with high irritability and impulsiveness. Associations between HAM-A and CANTAB variables, and with some TCI dimensions, are also found. NGF and BDNF expression correlates with ophthalmological symptoms only in tears, while mature/precursor NGF and BDNF correlate with specific psycho-cognitive variables in both tears and serum. Our study is the first to show that changes in NGF and BDNF processing in tears and serum might profile ocular and cognitive alterations in these patients. Full article
(This article belongs to the Special Issue Neurotrophins: Roles and Function in Human Diseases 2.0)

18 pages, 3435 KB  
Article
Automatic Tumor Identification from Scans of Histopathological Tissues
by Mantas Kundrotas, Edita Mažonienė and Dmitrij Šešok
Appl. Sci. 2023, 13(7), 4333; https://doi.org/10.3390/app13074333 - 29 Mar 2023
Cited by 5 | Viewed by 2294
Abstract
Recent progress in the development of artificial intelligence (AI), especially machine learning (ML), makes it possible to develop automated technologies that can eliminate, or at least reduce, human error in analyzing health data. Owing to the ethics of AI usage in pathology and laboratory medicine, to the present day pathologists analyze slides of histopathologic tissue stained with hematoxylin and eosin under the microscope; by law this cannot be substituted and must remain under visual observation, as pathologists are fully accountable for the result. However, automated systems could solve complex problems that require an extremely fast response, high accuracy, or both at the same time. Such systems, based on ML algorithms, can be adapted to work with medical imaging data, for instance whole-slide images (WSIs), allowing clinicians to review a much larger number of health cases in a shorter time and to identify the preliminary stages of cancer or other diseases, improving health monitoring strategies. Moreover, the increased opportunity to forecast and control the spread of global diseases could support preliminary analysis and viable solutions. Accurate identification of a tumor, especially at an early stage, requires extensive expert knowledge, so often the cancerous tissue is identified only after its side effects are experienced. The main goal of our study was to find more accurate ML methods and techniques for detecting tumor-damaged tissue in histopathological WSIs. According to the experiments that we conducted, there was a 1% AUC difference between the training and test datasets. Over several training iterations, the U-Net model was able to reduce the model size by almost half while also improving accuracy from 0.95491 to 0.95515 AUC. Convolutional models worked well on groups of different sizes when properly trained. With the test-time augmentation (TTA) method the result improved to 0.96870, and with the addition of a multi-model ensemble, it improved to 0.96977. We found that flaws in the models can be identified and fixed using specialized analysis techniques. A correction of the image processing parameters was sufficient to raise the AUC by almost 0.3%. The result of the individual model increased to 0.96664 AUC (more than 1% better than the previous best model) after additional training data preparation. This remains an arduous task for several reasons: applying such systems globally requires maximum accuracy and further progress in the ethics of AI usage in medicine; furthermore, hospitals would need to validate such scientific inquiry while retaining patient data anonymity, so that clinical information could be systematically analyzed and improved upon by scientists, thereby proving the benefits of AI. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
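
Test-time augmentation, one of the steps that lifted the AUC here, simply averages a model's predictions over augmented copies of each image. A minimal sketch of flip-based TTA around a placeholder model follows; the `predict` function and the flip set are illustrative assumptions, not the paper's U-Net.

```python
import numpy as np

def predict(image):
    """Placeholder for a trained segmentation model returning per-pixel tumor probabilities."""
    return np.clip(image.mean(axis=-1, keepdims=True), 0, 1)

def predict_with_tta(image):
    """Average predictions over horizontal/vertical flips, undoing each flip afterwards."""
    flips = [
        (lambda x: x,              lambda p: p),
        (lambda x: x[:, ::-1],     lambda p: p[:, ::-1]),
        (lambda x: x[::-1, :],     lambda p: p[::-1, :]),
        (lambda x: x[::-1, ::-1],  lambda p: p[::-1, ::-1]),
    ]
    preds = [undo(predict(apply(image))) for apply, undo in flips]
    return np.mean(preds, axis=0)

tile = np.random.rand(256, 256, 3)              # stand-in for a WSI tile
mask_prob = predict_with_tta(tile)
print(mask_prob.shape, float(mask_prob.mean()))
```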

24 pages, 2799 KB  
Systematic Review
Smart Contracts in the Construction Industry: A Systematic Review
by Ishara Rathnayake, Gayan Wedawatta and Algan Tezel
Buildings 2022, 12(12), 2082; https://doi.org/10.3390/buildings12122082 - 28 Nov 2022
Cited by 27 | Viewed by 14303
Abstract
On-time delivery of documentation and contracts has been recognized as a crucial requirement for the successful delivery of projects. However, the construction industry still depends on time-consuming traditional contract processes, which negatively affect the overall productivity of projects in the industry. The use of Smart Contracts (SCs) is highlighted as a suitable novel technology to expedite the contract processes and establish a reliable payment environment in the construction industry. Whilst there has been an increase in the debate about the use of SCs in construction in recent years, their use in practice still seems to be in its infancy. As such, the topic will benefit from a thorough review of the benefits, drivers, barriers and strategies that can enhance the implementation of SCs in construction. This article presents the key findings from a Systematic Literature Review (SLR) on SCs in the construction industry, critically assessing existing studies on the topic. The study initially involved 171 research papers for the SLR process, and out of these, 49 research papers were selected for further analysis after reading their abstracts. A total of 30 papers were finally retained after full-text reading for the SLR. Descriptive and content analysis were used to analyse the full-text findings. The study graphically mapped the bibliographic materials by using the Visualization of Similarities (VoS) Viewer software. As per the findings, the topic has mostly been researched in Asia and the Pacific as a region and China as a country. It was noted that there were more empirical articles than theoretical studies related to SCs, evidencing the industry relevance of the issue. A total of 55% of the articles reviewed have been published in journals with a Q1 ranking. All the articles were written by multiple authors, with 30% of the journal articles having international co-authors and benefitting from the collaboration between authors. Key advantages identified in the literature go beyond contract and payment provisions and include aspects such as logistics handling, decentralized applications, business process management, automated payments, etc. Key drivers for adoption are supply chain pressure, competitive pressure, top management support, simple layout, reduction in clients’ risks, and clarity in responsibility and risk allocation, whereas the key barriers include insecurity, limited observability, incompatibility, inactive government collaboration and limited storage capacity. Key strategies to enhance the application of SCs in construction include integrating theorem proving and symbolic execution, using the selective transparency method and lock-fund system, testing the integration of SCs with other systems at the initial stage, incorporating semi-automated consensus mechanisms for payments, constructing a mechanism to actively engage with government bodies, etc. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

14 pages, 1217 KB  
Article
Personalized Deep Bi-LSTM RNN Based Model for Pain Intensity Classification Using EDA Signal
by Fatemeh Pouromran, Yingzi Lin and Sagar Kamarthi
Sensors 2022, 22(21), 8087; https://doi.org/10.3390/s22218087 - 22 Oct 2022
Cited by 30 | Viewed by 4028
Abstract
Automatic pain intensity assessment from physiological signals has become an appealing approach, but it remains a largely unexplored research topic. Most studies have used machine learning approaches built on carefully designed features based on the domain knowledge available in the literature on the time series of physiological signals. However, a deep learning framework can automate the feature engineering step, enabling the model to directly deal with the raw input signals for real-time pain monitoring. We investigated a personalized Bidirectional Long short-term memory Recurrent Neural Networks (BiLSTM RNN), and an ensemble of BiLSTM RNN and Extreme Gradient Boosting Decision Trees (XGB) for four-category pain intensity classification. We recorded Electrodermal Activity (EDA) signals from 29 subjects during the cold pressor test. We decomposed EDA signals into tonic and phasic components and augmented them to original signals. The BiLSTM-XGB model outperformed the BiLSTM classification performance and achieved an average F1-score of 0.81 and an Area Under the Receiver Operating Characteristic curve (AUROC) of 0.93 over four pain states: no pain, low pain, medium pain, and high pain. We also explored a concatenation of the deep-learning feature representations and a set of fourteen knowledge-based features extracted from EDA signals. The XGB model trained on this fused feature set showed better performance than when it was trained on component feature sets individually. This study showed that deep learning could let us go beyond expert knowledge and benefit from the generated deep representations of physiological signals for pain assessment. Full article
(This article belongs to the Section Intelligent Sensors)
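
The personalized BiLSTM classifier can be outlined in a few lines of Keras. The layer sizes, sequence length, and three-channel input (raw, tonic, and phasic EDA) below are illustrative assumptions rather than the authors' exact architecture.

```python
import tensorflow as tf

def build_bilstm(seq_len=512, n_channels=3, n_classes=4):
    """Bidirectional LSTM for 4-class pain intensity from EDA sequences."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, n_channels)),  # raw + tonic + phasic EDA
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm()
model.summary()
```

In the ensemble variant described above, the penultimate-layer activations of such a network would be concatenated with knowledge-based EDA features and passed to a gradient-boosted tree classifier.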

24 pages, 443 KB  
Article
GaSubtle: A New Genetic Algorithm for Generating Subtle Higher-Order Mutants
by Fadi Wedyan, Abdullah Al-Shishani and Yaser Jararweh
Information 2022, 13(7), 327; https://doi.org/10.3390/info13070327 - 7 Jul 2022
Cited by 4 | Viewed by 2773
Abstract
Mutation testing is an effective, yet costly, testing approach, as it requires generating and running large numbers of faulty programs, called mutants. Mutation testing also suffers from a fundamental problem, which is having a large percentage of equivalent mutants. These are mutants that produce the same output as the original program and, therefore, cannot be detected. Higher-order mutation is a promising approach that can produce hard-to-detect faulty programs called subtle mutants, with a low percentage of equivalent mutants. Subtle higher-order mutants constitute a small subset of the large space of mutants, which grows even larger as the order of mutation becomes higher. In this paper, we developed a genetic algorithm for finding subtle higher-order mutants. The proposed approach uses a new mechanism in the crossover phase and uses five selection techniques to select the mutants that go to the next generation of the genetic algorithm. We implemented a tool, called GaSubtle, that automates the process of creating subtle mutants. We evaluated the proposed approach on 10 subject programs. Our evaluation shows that the proposed crossover generates more subtle mutants than the technique used in a previous genetic algorithm, with less execution time. Results vary across the selection strategies, suggesting a dependency on the tested code. Full article
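
The overall search behind such a tool is a standard genetic algorithm over combinations of mutation operators. The skeleton below shows that loop with single-point crossover and tournament selection; the operator names and the fitness function are placeholders, and nothing here reproduces GaSubtle's new crossover mechanism or its five selection techniques.

```python
import random

MUTATION_OPERATORS = ["AOR", "ROR", "LCR", "UOI", "ABS"]   # illustrative operator names

def random_mutant(order=3):
    """A higher-order mutant modeled as an ordered list of mutation operators."""
    return [random.choice(MUTATION_OPERATORS) for _ in range(order)]

def fitness(mutant):
    """Placeholder: a real tool would run the test suite and reward mutants
    that survive many test cases (i.e. are 'subtle')."""
    return random.random()

def crossover(a, b):
    point = random.randint(1, len(a) - 1)                   # single-point crossover
    return a[:point] + b[point:]

def tournament(population, scores, k=3):
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: scores[i])]

population = [random_mutant() for _ in range(20)]
for generation in range(10):
    scores = [fitness(m) for m in population]
    population = [crossover(tournament(population, scores),
                            tournament(population, scores))
                  for _ in range(len(population))]

print("sample evolved mutant:", population[0])
```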

18 pages, 5805 KB  
Article
Method for Continuous Integration and Deployment Using a Pipeline Generator for Agile Software Projects
by Ionut-Catalin Donca, Ovidiu Petru Stan, Marius Misaros, Dan Gota and Liviu Miclea
Sensors 2022, 22(12), 4637; https://doi.org/10.3390/s22124637 - 20 Jun 2022
Cited by 30 | Viewed by 8679
Abstract
Lately, the software development industry has been going through a slow but real transformation. Software is increasingly a part of everything, and software developers are trying to cope with this exploding demand through more automation. The pipelining technique of continuous integration (CI) and continuous delivery (CD) has developed considerably due to the overwhelming demand for the deployment and deliverability of new features and applications. As a result, DevOps approaches and Agile principles have been developed, in which developers collaborate closely with infrastructure engineers to guarantee that their applications are deployed quickly and reliably. Thanks to pipeline-oriented thinking, the efficiency of projects has greatly improved. Agile practices introduce new features to the system in each sprint delivery. Those deliveries may contain well-developed features, or they may contain bugs or failures that impact the delivery. The pipeline approach described in this paper overcomes these delivery problems, improving the delivery timeline, the test load steps, and the benchmarking tasks. It decreases system interruption by integrating multiple test steps and adds stability and deliverability to the entire process. It provides standardization, which means having an established, time-tested process to use, and can also decrease ambiguity and guesswork, guarantee quality, and boost productivity. The tool is developed in an interpreted language, namely Bash, which makes it easy to integrate into any platform. Based on the experimental results, we demonstrate the value that this solution currently creates. It provides an effective and efficient way to generate, manage, customize, and automate Agile-based CI and CD projects through automated pipelines. The suggested system acts as a starting point for standard CI/CD processes, caches Docker layers for subsequent usage, and deploys highly available deliverables in a Kubernetes cluster using Helm. Changing the principles of this solution and expanding it to multiple platforms (Windows) will be addressed in future work. Full article
(This article belongs to the Special Issue Intelligent Control and Testing Systems and Applications)
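
The generator idea, templating a standard multi-stage pipeline so each project does not hand-write its own, can be sketched in a few lines. The example below emits a minimal GitLab-CI-style YAML file from a stage list; the stage names, commands, and YAML target are assumptions for illustration, not the Bash tool described in the paper.

```python
import yaml  # PyYAML

def generate_pipeline(image="python:3.11", stages=("build", "test", "benchmark", "deploy")):
    """Build a minimal CI pipeline definition as a dictionary."""
    pipeline = {"image": image, "stages": list(stages)}
    commands = {
        "build": ["docker build -t app:latest ."],
        "test": ["pytest -q"],
        "benchmark": ["python benchmarks/run.py"],
        "deploy": ["helm upgrade --install app ./chart"],
    }
    for stage in stages:
        pipeline[f"{stage}_job"] = {"stage": stage, "script": commands.get(stage, ["true"])}
    return pipeline

# Write the generated pipeline and show it.
with open(".gitlab-ci.yml", "w") as fh:
    yaml.safe_dump(generate_pipeline(), fh, sort_keys=False)
print(open(".gitlab-ci.yml").read())
```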

15 pages, 5412 KB  
Article
Development of an Automated Linear Move Fertigation System for Cotton Using Active Remote Sensing
by Stewart Bell, A. Bulent Koc, Joe Mari Maja, Jose Payero, Ahmad Khalilian and Michael Marshall
AgriEngineering 2022, 4(1), 320-334; https://doi.org/10.3390/agriengineering4010022 - 18 Mar 2022
Cited by 2 | Viewed by 4808
Abstract
Optimum nitrogen (N) application is essential to the economic and environmental sustainability of cotton production. Variable-rate N fertigation could potentially help farmers optimize N applications, but current overhead irrigation systems normally lack automated site-specific variable-rate fertigation capabilities. The objective of this study was to develop an automated variable-rate N fertigation based on real-time Normalized Difference Vegetation Index (NDVI) measurements from crop sensors integrated with a lateral move irrigation system. For this purpose, NDVI crop sensors and a flow meter integrated with Arduino microcontrollers were constructed on a linear move fertigation system at the Edisto Research and Education Center in Blackville, South Carolina. A computer program was developed to automatically apply site-specific variable N rates based on real-time NDVI sensor data. The system’s ability to use the NDVI data to prescribe N rates, the flow meter to monitor the flow of N, and a rotary encoder to establish the lateral’s position were evaluated. Results from this study showed that the system could accurately use NDVI data to calculate N rates when compared to hand calculated N rates using a two-sample t-test (p > 0.05). Linear regression analysis showed a strong relationship between flow rates measured using the flow meter and hand calculations (R2 = 0.95), as well as the measured distance travelled using the encoder and the actual distance travelled (R2 = 0.99). This study concludes that N management decisions can be automated using NDVI data from on-the-go handheld GreenSeeker crop sensors. The developed system can provide an alternative N application solution for farmers and researchers. Full article
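
At its core, the prescription step maps a real-time NDVI reading to a nitrogen rate. Below is a minimal sketch of one such mapping, a linear interpolation between a maximum rate for low-vigor cotton and a minimum rate for high-vigor cotton; the NDVI bounds, rates, and direction of the mapping are illustrative assumptions, not the study's algorithm.

```python
import numpy as np

def nitrogen_rate(ndvi, ndvi_low=0.3, ndvi_high=0.8, n_max=120.0, n_min=30.0):
    """Prescribe an N rate (kg/ha); in this toy rule, low-vigor canopy (low NDVI) gets more N."""
    ndvi = np.clip(ndvi, ndvi_low, ndvi_high)
    fraction = (ndvi - ndvi_low) / (ndvi_high - ndvi_low)
    return n_max - fraction * (n_max - n_min)

# Readings from three sensor zones along the lateral.
for zone_ndvi in (0.35, 0.55, 0.78):
    print(f"NDVI {zone_ndvi:.2f} -> {nitrogen_rate(zone_ndvi):.0f} kg N/ha")
```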

34 pages, 11943 KB  
Article
Energy Loss Impact in Electrical Smart Grid Systems in Australia
by Ashraf Zaghwan and Indra Gunawan
Sustainability 2021, 13(13), 7221; https://doi.org/10.3390/su13137221 - 28 Jun 2021
Cited by 5 | Viewed by 4978
Abstract
This research draws attention to the potential and contextual influences on energy loss in Australia’s electricity market and smart grid systems. It further examines barriers in the transition toward optimising the benefit opportunities between electricity demand and electricity supply. The main contribution of this study highlights the impact of individual end-users by controlling and automating individual home electricity profiles within the objective function set (AV) of optimum demand ranges. Three stages of analysis were accomplished to achieve this goal. Firstly, we focused on feasibility analysis using ‘weight of evidence’ (WOE) and ‘information value’ (IV) techniques to check sample data segmentation and possible variable reduction. Stage two of sensitivity analysis (SA) used a generalised reduced gradient algorithm (GRG) to detect and compare a nonlinear optimisation issue caused by end-user demand. Stage three of analysis used two methods adopted from the machine learning toolbox, piecewise linear distribution (PLD) and the empirical cumulative distribution function (ECDF), to test the normality of time series data and measure the discrepancy between them. It used PLD and ECDF to derive a nonparametric representation of the overall cumulative distribution function (CDF). These analytical methods were all found to be relevant and provided a clue to the sustainability approach. This study provides insights into the design of sustainable homes, which must go beyond the concept of increasing the capacity of renewable energy. In addition to this, this study examines the interplay between the variance estimation of the problematic levels and the perception of energy loss to introduce a novel realistic model of cost–benefit incentives. This optimisation goal contrasted with uncertainties that remain as to what constitutes the demand impact and individual house effects in diverse clustering patterns in a specific grid system. While ongoing effort is still needed to look for strategic solutions for this class of complex problems, this research shows significant contextual opportunities to manage the complexity of the problem according to the nature of the case, representing dense and significant changes in the situational complexity. Full article
(This article belongs to the Special Issue Applications of Complex System Approach in Project Management)
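
The third analysis stage compares distributions of demand data via empirical cumulative distribution functions. A minimal sketch of computing an ECDF and the maximum discrepancy between two load profiles with NumPy follows; the synthetic data stand in for demand series, and the paper's piecewise linear distribution fitting is not reproduced here.

```python
import numpy as np

def ecdf(values):
    """Return sorted values and their empirical cumulative probabilities."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

rng = np.random.default_rng(1)
demand_a = rng.normal(5.0, 1.0, 1000)           # e.g. daily household demand, kWh
demand_b = rng.normal(5.4, 1.2, 1000)

# Evaluate both ECDFs on a common grid and measure their largest gap.
grid = np.linspace(0, 10, 500)
cdf_a = np.interp(grid, *ecdf(demand_a))
cdf_b = np.interp(grid, *ecdf(demand_b))
print(f"max ECDF discrepancy: {np.max(np.abs(cdf_a - cdf_b)):.3f}")
```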