Computer Vision, Pattern Recognition and Machine Learning in Italy

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 37442

Special Issue Editors


Guest Editor
Institute of Information Science and Technologies "A. Faedo" (ISTI), National Research Council of Italy (CNR), 56124 Pisa, PI, Italy
Interests: artificial intelligence; artificial neural networks; decision support theory; computer vision; eHealth

Special Issue Information

Dear Colleagues,

Most modern technological innovations are made possible by recent advances in pattern recognition, machine learning, and computer vision.

The main aim of this Special Issue is to collect works from the vibrant Italian research community.

Works should report the main theoretical advances in the aforementioned research areas and their impact on different application contexts, such as video surveillance and biometrics, sports analysis, inspection, assistive and manufacturing technologies, smart agriculture, eHealth, environmental monitoring, intelligent transportation and construction, retail, and so on.

Dr. Marco Leo
Dr. Sara Colantonio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • pattern recognition
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

Jump to: Review

18 pages, 1910 KiB  
Article
Advancing Patient Care with an Intelligent and Personalized Medication Engagement System
by Ahsan Ismail, Muddasar Naeem, Madiha Haider Syed, Musarat Abbas and Antonio Coronato
Information 2024, 15(10), 609; https://doi.org/10.3390/info15100609 - 4 Oct 2024
Viewed by 723
Abstract
Therapeutic efficacy is affected by adherence failure, as demonstrated by WHO clinical studies reporting that only 50–70% of patients follow a treatment plan properly. Patients’ failure to take prescribed drugs is a main cause of morbidity and mortality and of increased healthcare costs. Adherence to medication could be improved with the use of patient engagement systems. Such engagement systems can incorporate a patient’s preferences and beliefs into the treatment plan, resulting in more responsive and customized treatments. However, one key limitation of existing engagement systems is their generic application. We propose a personalized framework for patient medication engagement using AI methods such as Reinforcement Learning (RL) and Deep Learning (DL). The proposed Personalized Medication Engagement System (PMES) has two major components. The first is based on an RL agent, which is trained on adherence reports and later utilized to engage a patient; after training, the RL agent can identify each patient’s patterns of responsiveness by observing and learning their response to signs, and then optimize its engagement for each individual. The second component is based on DL and is used to monitor the medication process. An additional feature of the PMES is that it is cloud-based and can be utilized remotely from anywhere. Moreover, the system is personalized, as the RL component can be trained for each patient separately, while the DL component can be trained for a given medication plan. Thus, the advantage of the proposed work is two-fold: the RL component improves adherence to medication, while the DL component minimizes medication errors. Full article
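The RL idea sketched in the abstract, learning which engagement action each patient responds to, can be illustrated with a toy epsilon-greedy bandit that picks a reminder time slot. Everything here (the slot names, response rates, and the `train_reminder_agent` helper) is hypothetical and far simpler than the paper's PMES; for reproducibility, the simulated reward is the slot's average response rate rather than a random draw.

```python
import random

def train_reminder_agent(true_response_rates, episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn which reminder slot a patient responds to.

    `true_response_rates` maps each candidate reminder slot to the (unknown
    to the agent) average probability that the patient takes the medication.
    """
    rng = random.Random(seed)
    slots = list(true_response_rates)
    estimates = {s: 0.0 for s in slots}
    counts = {s: 0 for s in slots}
    for _ in range(episodes):
        if rng.random() < eps:                 # explore a random slot
            slot = rng.choice(slots)
        else:                                  # exploit the best estimate so far
            slot = max(slots, key=estimates.get)
        # deterministic simulated reward: the slot's average adherence response
        reward = true_response_rates[slot]
        counts[slot] += 1
        # incremental mean update of the value estimate
        estimates[slot] += (reward - estimates[slot]) / counts[slot]
    return max(slots, key=estimates.get)

best = train_reminder_agent({"08:00": 0.4, "13:00": 0.7, "20:00": 0.9})
```

After enough episodes the agent's value estimates converge to the true response rates, so it settles on the most effective slot.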
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

36 pages, 1803 KiB  
Article
An Overview on the Advancements of Support Vector Machine Models in Healthcare Applications: A Review
by Rosita Guido, Stefania Ferrisi, Danilo Lofaro and Domenico Conforti
Information 2024, 15(4), 235; https://doi.org/10.3390/info15040235 - 19 Apr 2024
Cited by 7 | Viewed by 5607
Abstract
Support vector machines (SVMs) are well-known machine learning algorithms for classification and regression applications. In the healthcare domain, they have been used for a variety of tasks including diagnosis, prognosis, and prediction of disease outcomes. This review is an extensive survey on the current state-of-the-art of SVMs developed and applied in the medical field over the years. Many variants of SVM-based approaches have been developed to enhance their generalisation capabilities. We illustrate the most interesting SVM-based models that have been developed and applied in healthcare to improve performance metrics on benchmark datasets, including hybrid classification methods that combine, for instance, optimization algorithms with SVMs. We also report interesting results found in medical applications related to real-world data. Several issues around SVMs, such as the selection of hyperparameters and learning from data of questionable quality, are discussed as well. The many variants developed and introduced over the years could be useful in designing new methods to improve performance in critical fields such as healthcare, where accuracy, specificity, and other metrics are crucial. Finally, current research trends and future directions are underlined. Full article
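As a concrete reminder of what the surveyed models build on, here is a minimal soft-margin linear SVM trained by full-batch subgradient descent on the hinge loss. The toy "healthy vs. at-risk" data and helper names are invented for illustration; real medical applications would use a library implementation with kernel and hyperparameter selection, as the review discusses.

```python
# Minimal soft-margin linear SVM: minimise lam*||w||^2/2 + mean hinge loss.
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [lam * wj for wj in w]            # gradient of the L2 regulariser
        gb = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                     # inside the margin: hinge active
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# toy, linearly separable "healthy (+1) vs. at-risk (-1)" data, two features
X = [[2.0, 2.0], [3.0, 1.5], [2.5, 3.0], [-2.0, -1.0], [-3.0, -2.0], [-1.5, -2.5]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```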
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

18 pages, 6689 KiB  
Article
Exploring the Potential of Ensembles of Deep Learning Networks for Image Segmentation
by Loris Nanni, Alessandra Lumini and Carlo Fantozzi
Information 2023, 14(12), 657; https://doi.org/10.3390/info14120657 - 12 Dec 2023
Cited by 1 | Viewed by 2132
Abstract
To identify objects in images, a complex set of skills is needed that includes understanding the context and being able to determine the borders of objects. In computer vision, this task is known as semantic segmentation and it involves categorizing each pixel in an image. It is crucial in many real-world situations: for autonomous vehicles, it enables the identification of objects in the surrounding area; in medical diagnosis, it enhances the ability to detect dangerous pathologies early, thereby reducing the risk of serious consequences. In this study, we compare the performance of various ensembles of convolutional and transformer neural networks. Ensembles can be created, e.g., by varying the loss function, the data augmentation method, or the learning rate strategy. Our proposed ensemble, which uses a simple averaging rule, demonstrates exceptional performance across multiple datasets. Notably, compared to prior state-of-the-art methods, our ensemble consistently shows improvements in the well-studied polyp segmentation problem. This problem involves the precise delineation and identification of polyps within medical images, and our approach showcases noteworthy advancements in this domain, obtaining an average Dice of 0.887, which outperforms the current SOTA with an average Dice of 0.885. Full article
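The simple averaging rule the authors mention can be sketched in a few lines: average the per-model foreground probabilities pixel-wise, threshold, and score with the Dice coefficient quoted in the abstract. The 1-D "masks" below are toy values, not outputs of the paper's networks.

```python
def ensemble_average(prob_maps):
    """Pixel-wise mean of per-model foreground probabilities."""
    n = len(prob_maps)
    return [sum(m[i] for m in prob_maps) / n for i in range(len(prob_maps[0]))]

def binarize(probs, thr=0.5):
    return [1 if p >= thr else 0 for p in probs]

def dice(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 2 * inter / (sum(pred) + sum(target))

# three "models" disagree on one pixel; averaging resolves it by consensus
model_a = [0.9, 0.8, 0.2, 0.1]
model_b = [0.8, 0.6, 0.7, 0.2]
model_c = [0.7, 0.9, 0.3, 0.3]
target  = [1, 1, 0, 0]
pred = binarize(ensemble_average([model_a, model_b, model_c]))
```

The averaged probabilities are [0.8, 0.77, 0.4, 0.2], so the third pixel (predicted foreground only by model_b) is correctly suppressed.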
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

16 pages, 575 KiB  
Article
Cost-Sensitive Models to Predict Risk of Cardiovascular Events in Patients with Chronic Heart Failure
by Maria Carmela Groccia, Rosita Guido, Domenico Conforti, Corrado Pelaia, Giuseppe Armentaro, Alfredo Francesco Toscani, Sofia Miceli, Elena Succurro, Marta Letizia Hribal and Angela Sciacqua
Information 2023, 14(10), 542; https://doi.org/10.3390/info14100542 - 3 Oct 2023
Cited by 1 | Viewed by 1092
Abstract
Chronic heart failure (CHF) is a clinical syndrome characterised by symptoms and signs due to structural and/or functional abnormalities of the heart. CHF confers risk for cardiovascular deterioration events which cause recurrent hospitalisations and high mortality rates. The early prediction of these events is very important to limit serious consequences, improve the quality of care, and reduce its burden. CHF is a progressive condition in which patients may remain asymptomatic before the onset of symptoms, as observed in heart failure with a preserved ejection fraction. The early detection of underlying causes is critical for treatment optimisation and prognosis improvement. To develop models to predict cardiovascular deterioration events in patients with chronic heart failure, a real dataset was constructed and a knowledge discovery task was implemented in this study. The dataset is imbalanced, as is common in real-world applications. This posed a challenge because imbalanced datasets tend to be overwhelmed by the abundance of majority-class instances during the learning process. To address the issue, a pipeline was developed specifically to handle imbalanced data, and different predictive models were developed and compared. To enhance sensitivity and other performance metrics, we employed multiple approaches, including data resampling, cost-sensitive methods, and a hybrid method that combines both techniques, aiming to identify the most effective strategies for real scenarios with imbalanced datasets. The best model for predicting cardiovascular events achieved a mean sensitivity of 65%, a mean specificity of 55%, and a mean area under the curve of 0.71. The results show that cost-sensitive models combined with over-/under-sampling approaches are effective for the meaningful prediction of cardiovascular events in CHF patients. Full article
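One standard cost-sensitive ingredient, weighting each class inversely to its frequency so minority-class mistakes cost more, can be sketched as follows. The weighting formula mirrors scikit-learn's "balanced" heuristic, while the data and the `weighted_error` helper are invented for illustration, not taken from the paper.

```python
from collections import Counter

def balanced_class_weights(labels):
    """w_c = n_samples / (n_classes * n_c): inverse-frequency class weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

def weighted_error(y_true, y_pred, weights):
    """Misclassification cost where minority-class mistakes cost more."""
    total = sum(weights[t] for t in y_true)
    wrong = sum(weights[t] for t, p in zip(y_true, y_pred) if t != p)
    return wrong / total

# 8 stable patients (0) vs. 2 deterioration events (1): heavily imbalanced
y_true = [0] * 8 + [1] * 2
w = balanced_class_weights(y_true)
```

A classifier that always predicts "stable" scores 80% plain accuracy on this data, but under the balanced weights it incurs a 50% weighted error, exposing its uselessness on the minority class.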
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

17 pages, 5105 KiB  
Article
Investigation of a Hybrid LSTM + 1DCNN Approach to Predict In-Cylinder Pressure of Internal Combustion Engines
by Federico Ricci, Luca Petrucci, Francesco Mariani and Carlo Nazareno Grimaldi
Information 2023, 14(9), 507; https://doi.org/10.3390/info14090507 - 15 Sep 2023
Cited by 3 | Viewed by 1474
Abstract
The control of internal combustion engines is becoming increasingly challenging due to customers’ requirements for growing performance and ever-stricter emission regulations. Therefore, significant computational efforts are required to manage the large amount of data coming from the field for engine optimization, leading to increased operating times and costs. Machine-learning techniques are increasingly used in the automotive field as virtual sensors, fault-detection systems, and performance-optimization applications thanks to their real-time, low-cost implementation. Among them, the combination of long short-term memory (LSTM) with one-dimensional convolutional neural networks (1DCNN), i.e., LSTM + 1DCNN, has proved to be a promising tool for signal analysis. The architecture exploits the CNN’s ability to combine feature extraction and classification in a single adaptive learning body, together with the ability of the LSTM to follow the sequential nature of sensor measurements over time. The current research focus is on evaluating the possibility of integrating virtual sensors into the on-board control system. Specifically, the primary objective is to assess and harness the potential of advanced machine-learning technologies to replace physical sensors. As a first step towards this goal, the present work evaluates the forecasting performance of an LSTM + 1DCNN architecture. Experimental data coming from a three-cylinder spark-ignition engine under different operating conditions are used to predict the engine’s in-cylinder pressure traces. Since using in-cylinder pressure transducers in road cars is not economically viable, adopting advanced machine-learning technologies becomes crucial to avoid structural modifications while preserving engine integrity. The results show that LSTM + 1DCNN is particularly suited to predicting signals characterized by higher variability: it consistently outperforms the other architectures used for comparison, achieving average error percentages below 2%, and below 1.5% as cycle-to-cycle variability increases, demonstrating the architecture’s potential for replacing physical sensors. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

19 pages, 5691 KiB  
Article
Development of Technologies for the Detection of (Cyber)Bullying Actions: The BullyBuster Project
by Giulia Orrù, Antonio Galli, Vincenzo Gattulli, Michela Gravina, Marco Micheletto, Stefano Marrone, Wanda Nocerino, Angela Procaccino, Grazia Terrone, Donatella Curtotti, Donato Impedovo, Gian Luca Marcialis and Carlo Sansone
Information 2023, 14(8), 430; https://doi.org/10.3390/info14080430 - 1 Aug 2023
Viewed by 3188
Abstract
Bullying and cyberbullying are harmful social phenomena that involve the intentional, repeated use of power to intimidate or harm others. The ramifications of these actions are felt not just at the individual level but also pervasively throughout society, necessitating immediate attention and practical solutions. The BullyBuster project pioneers a multi-disciplinary approach, integrating artificial intelligence (AI) techniques with psychological models to comprehensively understand and combat these issues. In particular, employing AI in the project allows the automatic identification of potentially harmful content by analyzing linguistic patterns and behaviors in various data sources, including photos and videos. This timely detection enables alerts to relevant authorities or moderators, allowing for rapid interventions and potential harm mitigation. This paper, a culmination of previous research and advancements, details the potential for significantly enhancing cyberbullying detection and prevention by focusing on the system’s design and the novel application of AI classifiers within an integrated framework. Our primary aim is to evaluate the feasibility and applicability of such a framework in a real-world application context. The proposed approach is shown to tackle the pervasive issue of cyberbullying effectively. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

16 pages, 8024 KiB  
Article
NARX Technique to Predict Torque in Internal Combustion Engines
by Federico Ricci, Luca Petrucci, Francesco Mariani and Carlo Nazareno Grimaldi
Information 2023, 14(7), 417; https://doi.org/10.3390/info14070417 - 20 Jul 2023
Cited by 5 | Viewed by 1737
Abstract
To carry out increasingly sophisticated checks, which comply with international regulations and stringent constraints, on-board computational systems are called upon to manipulate a growing number of variables, provided by an ever-increasing number of real and virtual sensors. The optimization phase of an ICE passes through the control of these numerous variables, which often exhibit rapidly changing trends over time. On the one hand, the amount of data to be processed, with narrow cyclical frequencies, requires ever more powerful computational equipment. On the other hand, computational strategies and techniques are required which allow actuation times that are useful for timely and optimized control. In the automotive industry, machine learning is becoming one of the most used approaches to perform forecasting activities with reduced computational effort, due to both its cost-effectiveness and its simple and compact structure. In the present work, the nonlinear dynamic system we address is the torque estimation of an ICE through a nonlinear autoregressive with exogenous inputs (NARX) approach. Preliminary activities were performed to optimize the neural network in terms of neurons, hidden layers, and the number of input parameters to be assessed. A Shapley sensitivity analysis allowed quantification of the impact of each variable on the target prediction, and therefore a reduction in the amount of data to be processed by the architecture. In all cases analyzed, the optimized structure was able to achieve average percentage errors on the target prediction that were always lower than a critical threshold of 10%. In particular, when the dataset was augmented or the analyzed cases merged, the architecture achieved average prediction errors of about 1%, highlighting its remarkable ability to reproduce the target with fidelity. Full article
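The input layout of a NARX model, regressing the current output on lagged outputs and lagged exogenous inputs, can be sketched independently of any particular network. The torque and throttle values below are made up, and only the feature construction (not the paper's trained architecture) is shown.

```python
def narx_dataset(y, u, p=2, q=2):
    """Build (features, targets) with p autoregressive and q exogenous lags:
    each feature vector is [y[t-p..t-1], u[t-q..t-1]] and the target is y[t]."""
    X, T = [], []
    start = max(p, q)
    for t in range(start, len(y)):
        lags_y = y[t - p:t]          # past output (e.g. torque) samples
        lags_u = u[t - q:t]          # past exogenous (e.g. throttle) samples
        X.append(lags_y + lags_u)
        T.append(y[t])
    return X, T

# illustrative sequences, not engine data
torque = [1.0, 1.2, 1.1, 1.3, 1.4]
throttle = [0.2, 0.3, 0.25, 0.35, 0.4]
X, T = narx_dataset(torque, throttle)
```

Any regressor (a neural network in the paper's case) can then be fit on `X` to predict `T`.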
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

24 pages, 12290 KiB  
Article
METRIC—Multi-Eye to Robot Indoor Calibration Dataset
by Davide Allegro, Matteo Terreran and Stefano Ghidoni
Information 2023, 14(6), 314; https://doi.org/10.3390/info14060314 - 29 May 2023
Cited by 1 | Viewed by 1973
Abstract
Multi-camera systems are an effective solution for perceiving large areas or complex scenarios with many occlusions. In such a setup, accurate camera network calibration is crucial in order to localize scene elements with respect to a single reference frame shared by all the viewpoints of the network. This is particularly important in applications such as object detection and people tracking. Multi-camera calibration is also a critical requirement in several robotics scenarios, particularly those involving a robotic workcell equipped with a manipulator surrounded by multiple sensors. Within this scenario, robot-world hand-eye calibration is an additional crucial element for determining the exact position of each camera with respect to the robot, in order to provide information about the surrounding workspace directly to the manipulator. Despite the importance of the calibration process in the two scenarios outlined above, namely (i) a camera network, and (ii) a camera network with a robot, there is a lack of standard datasets available in the literature to evaluate and compare calibration methods. Moreover, they are usually treated separately and tested on dedicated setups. In this paper, we propose a general standard dataset acquired in a robotic workcell where calibration methods can be evaluated in two use cases: camera network calibration and robot-world hand-eye calibration. The Multi-Eye To Robot Indoor Calibration (METRIC) dataset consists of over 10,000 synthetic and real images of ChArUco and checkerboard patterns, each rigidly attached to the robot end-effector, which was moved in front of four cameras surrounding the manipulator from different viewpoints during image acquisition. The real images in the dataset include several multi-view image sets captured by three different types of sensor networks: Microsoft Kinect V2, Intel RealSense Depth D455 and Intel RealSense Lidar L515, to evaluate their advantages and disadvantages for calibration. Furthermore, in order to accurately analyze the effect of camera-robot distance on calibration, we acquired a comprehensive synthetic dataset, with related ground truth, with three different camera network setups corresponding to three levels of calibration difficulty depending on the cell size. An additional contribution of this work is a comprehensive evaluation of state-of-the-art calibration methods using our dataset, highlighting their strengths and weaknesses, in order to outline two benchmarks for the two aforementioned use cases. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

16 pages, 3998 KiB  
Article
Lightweight Implicit Blur Kernel Estimation Network for Blind Image Super-Resolution
by Asif Hussain Khan, Christian Micheloni and Niki Martinel
Information 2023, 14(5), 296; https://doi.org/10.3390/info14050296 - 18 May 2023
Cited by 1 | Viewed by 3353
Abstract
Blind image super-resolution (Blind-SR) is the process of leveraging a low-resolution (LR) image, with unknown degradation, to generate its high-resolution (HR) version. Most existing Blind-SR techniques use a degradation estimator network to explicitly estimate the blur kernel to guide the SR network, under the supervision of ground-truth (GT) kernels. However, such GT kernels are rarely available in real-world settings. To address this issue, it is necessary to design an implicit estimator network that can extract a discriminative blur kernel representation without relying on the supervision of ground-truth blur kernels. We design a lightweight approach for blind super-resolution that estimates the blur kernel and restores the HR image based on a deep convolutional neural network (CNN) and a deep super-resolution residual convolutional generative adversarial network. Since the blur kernel for blind image SR is unknown, following the image formation model of the blind super-resolution problem, we first introduce a neural-network-based model to estimate the blur kernel. This is achieved by (i) a Super Resolver that, from a low-resolution input, generates the corresponding SR image; and (ii) an Estimator Network generating the blur kernel from the input datum. The output of both models is used in a novel loss formulation. The proposed network is end-to-end trainable. The methodology is substantiated by both quantitative and qualitative experiments. Results on benchmarks demonstrate that our computationally efficient approach (12x fewer parameters than the state-of-the-art models) performs favorably with respect to existing approaches and can be used on devices with limited computational capabilities. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

17 pages, 6534 KiB  
Article
A Shallow System Prototype for Violent Action Detection in Italian Public Schools
by Erica Perseghin and Gian Luca Foresti
Information 2023, 14(4), 240; https://doi.org/10.3390/info14040240 - 14 Apr 2023
Cited by 5 | Viewed by 2224
Abstract
This paper presents a novel low-cost integrated system prototype, called the School Violence Detection (SVD) system, based on a 2D Convolutional Neural Network (CNN). It automatically classifies and identifies violent actions in educational environments using low-cost hardware. Moreover, the paper fills the gap of real datasets in educational environments by proposing a new one, called the Daily School Break (DSB) dataset, containing original videos recorded in an Italian high-school yard. The proposed CNN has been pre-trained with an ImageNet model and a transfer learning approach. To extend its capabilities, the DSB was enriched with online images representing students in school environments. Experimental results analyze the classification performance of the SVD and investigate how it performs on the proposed DSB dataset. The SVD, which achieves a recognition accuracy of 95%, is computationally efficient and low-cost, and could be adapted to other scenarios such as school arenas, gyms, playgrounds, etc. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

16 pages, 8063 KiB  
Article
Using a Machine Learning Approach to Evaluate the NOx Emissions in a Spark-Ignition Optical Engine
by Federico Ricci, Luca Petrucci and Francesco Mariani
Information 2023, 14(4), 224; https://doi.org/10.3390/info14040224 - 6 Apr 2023
Cited by 6 | Viewed by 2110
Abstract
Currently, machine learning (ML) technologies are widely employed in the automotive field for determining physical quantities thanks to their ability to ensure lower computational costs and faster operations than traditional methods. Within this context, the present work shows the outcomes of forecasting activities on the prediction of pollutant emissions from engines using an artificial neural network technique. Tests on an optical access engine were conducted under lean mixture conditions, which is the direction in which automotive research is developing to meet the ever-stricter regulations on pollutant emissions. A NARX architecture was utilized to estimate the engine’s nitrogen oxide emissions starting from in-cylinder pressure data and images of the flame front evolution recorded by a high-speed camera and elaborated through a Mask R-CNN technique. Based on the obtained results, the methodology’s applicability to real situations, such as metal engines, was assessed using a sensitivity analysis presented in the second part of the work, which helped identify and quantify the most important input parameters for the nitrogen oxide forecast. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

20 pages, 8149 KiB  
Article
Automatic Identification and Geo-Validation of Event-Related Images for Emergency Management
by Marco Vernier, Manuela Farinosi, Alberto Foresti and Gian Luca Foresti
Information 2023, 14(2), 78; https://doi.org/10.3390/info14020078 - 28 Jan 2023
Cited by 1 | Viewed by 2033
Abstract
In recent years, social platforms have become integrated in a variety of economic, political and cultural domains. Social media have become the primary outlets for many citizens to consume news and information and, at the same time, to produce and share online a large amount of data and meta-data. This paper presents an innovative system able to analyze visual information shared by citizens on social media during extreme events, contributing to situational awareness and supporting the people in charge of coordinating the emergency management. The system analyzes all posts containing images shared by users by taking into account: (a) the event class and (b) the GPS coordinates of the geographical area affected by the event. Then, a Single Shot Multibox Detector (SSD) network is applied to select only the posted images correctly related to the event class, and an advanced image processing procedure is used to verify whether these images are correlated with the geographical area where the emergency event is ongoing. Several experiments have been carried out to evaluate the performance of the proposed system in the context of different emergency situations caused by earthquakes, floods and terrorist attacks. Full article
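The geo-validation step, checking whether a post's GPS coordinates fall inside the affected area, can be sketched with a great-circle distance test. The radius, the coordinates, and the `geo_validated` helper are illustrative assumptions, not the paper's actual procedure, which correlates image content with the area.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def geo_validated(post_lat, post_lon, event_lat, event_lon, radius_km=30.0):
    """Keep a post only if it was geotagged inside the event area."""
    return haversine_km(post_lat, post_lon, event_lat, event_lon) <= radius_km

# hypothetical event centred near L'Aquila; one post nearby, one from Milan
event = (42.35, 13.40)
```

A post geotagged a few kilometres from the event centre passes the filter, while one from hundreds of kilometres away is discarded.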
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

Review

32 pages, 1163 KiB  
Review
Computer Vision Tasks for Ambient Intelligence in Children’s Health
by Danila Germanese, Sara Colantonio, Marco Del Coco, Pierluigi Carcagnì and Marco Leo
Information 2023, 14(10), 548; https://doi.org/10.3390/info14100548 - 6 Oct 2023
Cited by 5 | Viewed by 2019
Abstract
Computer vision is a powerful tool for healthcare applications, since it can provide objective diagnosis and assessment of pathologies that does not depend on clinicians’ skills and experience. It can also help speed up population screening, reducing healthcare costs and improving the quality of service. Several works summarise applications and systems in medical imaging, whereas less work is devoted to surveying approaches that pursue healthcare goals through ambient intelligence, i.e., by observing individuals in natural settings. Moreover, there is a lack of papers providing a survey of works exhaustively covering computer vision applications for children’s health, a particularly challenging research area considering that most existing computer vision technologies have been trained and tested only on adults. The aim of this paper is therefore to survey, for the first time in the literature, the papers addressing children’s health-related issues with ambient intelligence methods and systems relying on computer vision. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

28 pages, 1304 KiB  
Review
Exploring the State of Machine Learning and Deep Learning in Medicine: A Survey of the Italian Research Community
by Alessio Bottrighi and Marzio Pennisi
Information 2023, 14(9), 513; https://doi.org/10.3390/info14090513 - 18 Sep 2023
Cited by 1 | Viewed by 2691
Abstract
Artificial intelligence (AI) is becoming increasingly important, especially in the medical field. While AI has been used in medicine for some time, its growth in the last decade is remarkable. Specifically, machine learning (ML) and deep learning (DL) techniques have been increasingly adopted in medicine due to the growing abundance of health-related data, the suitability of such techniques for managing large datasets, and greater computational power. ML and DL methodologies are fostering the development of new “intelligent” tools and expert systems to process data, automate human–machine interactions, and deliver advanced predictive systems that are changing every aspect of scientific research, industry, and society. The Italian scientific community has been instrumental in advancing this research area. This article aims to conduct a comprehensive investigation of the ML and DL methodologies and applications used in medicine by the Italian research community in the last five years. To this end, we selected all the papers published in the last five years with at least one author affiliated with an Italian institution whose title, abstract, or keywords contain the terms “machine learning” or “deep learning” and reference a medical area. We focused our research on journal papers under the hypothesis that Italian researchers prefer to present novel but well-established research in scientific journals. We then analyzed the selected papers along different dimensions, including the medical topic, the type of data, the pre-processing methods, the learning methods, and the evaluation methods. As a final outcome, a comprehensive overview of the Italian research landscape is given, highlighting how the community has increasingly worked on a very heterogeneous range of medical problems. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

18 pages, 2425 KiB  
Review
A Systematic Review of Effective Hardware and Software Factors Affecting High-Throughput Plant Phenotyping
by Firozeh Solimani, Angelo Cardellicchio, Massimiliano Nitti, Alfred Lako, Giovanni Dimauro and Vito Renò
Information 2023, 14(4), 214; https://doi.org/10.3390/info14040214 - 1 Apr 2023
Cited by 6 | Viewed by 2714
Abstract
Plant phenotyping studies the complex characteristics of plants, with the aim of evaluating and assessing their condition and finding better exemplars. Recently, a new branch emerged in the phenotyping field, namely high-throughput phenotyping (HTP). Specifically, HTP exploits modern data-sampling techniques to gather large amounts of data that can be used to improve the effectiveness of phenotyping. Hence, HTP combines knowledge from the phenotyping domain with computer science, engineering, and data-analysis techniques. In this scenario, machine learning (ML) and deep learning (DL) algorithms have been successfully integrated with noninvasive imaging techniques, playing a key role in automation, standardization, and quantitative data analysis. This study aims to systematically review two main areas of interest for HTP: hardware and software. For each area, two influential factors were identified: for hardware, platforms and sensing equipment were analyzed; for software, the focus was on algorithms and new trends. The study was conducted following the PRISMA protocol, which allowed a wide selection of papers to be refined into a meaningful dataset of 32 articles of interest. The analysis highlighted the diffusion of ground platforms, used in about 47% of the reviewed methods, and of RGB sensors, mainly due to their competitive cost, high compatibility, and versatility. Furthermore, DL-based algorithms accounted for the larger share (about 69%) of the reviewed approaches, mainly due to their effectiveness and the attention they have received from the scientific community over the last few years. Future research will focus on improving DL models to better handle hardware-generated data, with the final aim of creating integrated, user-friendly, and scalable tools that can be deployed and used directly in the field to improve the overall crop yield. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: A systematic review of effective factors on high-throughput plant phenotyping based on ML/DL algorithms

Authors: Firozeh Solimani; Vito Renò
Affiliation: --
Abstract: Recently, a new field has emerged, called plant phenotyping, which deals with the study of the complex characteristics of plants with the aim of evaluating their condition. Undoubtedly, this goal can only be achieved by combining biological knowledge with computer science and engineering skills, especially when it comes to the huge amounts of data generated by high-throughput phenotyping (HTP) platforms. In this scenario, machine learning and deep learning algorithms, which have been successfully integrated with non-invasive imaging techniques, play a key role in the automation, standardization, and quantitative analysis of big data. In this context, we aim to conduct a systematic study on high-throughput plant phenotyping to identify the factors that are effective in evaluating the condition of plants (the aerial part of the plant and the root system). In this study, we followed the PRISMA protocol and investigated the topic through four influential factors (platforms, sensors, algorithms, and new techniques). The study covers the period from 1 January 2019 to the end of 2022. We used the Scopus database to find 1000 articles dealing with high-throughput plant phenotyping, which we filtered using inclusion and exclusion criteria. Following a thorough review, we selected 34 articles relevant to our goal. Our data show that about 65% of recent studies are aimed at analyzing phenotyping data, indicating the importance of managing the huge amounts of data produced by phenotyping platforms. Meanwhile, deep learning has taken the larger share of research, at 59%, which can indicate the better accuracy and speed of these algorithms. Future research should focus on improving deep-learning models for managing the big data generated by platforms, and also on reducing the cost of plant phenotyping for farmers by developing customized, user-friendly models.

Title: MCRC - A Novel Dataset for Multi-Camera and Robot-World Hand-Eye Calibration
Authors: Davide Allegro, Matteo Terreran, Stefano Ghidoni.
Affiliation: --
Abstract: Multi-camera systems are an effective solution for dealing with large areas or complex scenarios with many occlusions. In such a setup, accurate camera-network calibration is crucial in order to localize scene elements with respect to a single reference frame shared by all the viewpoints of the network. Multi-camera calibration is a critical requirement also in several robotics scenarios, particularly those involving a robotic workcell equipped with a manipulator surrounded by multiple sensors. Within this scenario, robot-world hand-eye calibration is an additional crucial element for determining the exact position of each camera with respect to the robot, in order to provide information about the surrounding workspace directly to the manipulator. Despite the importance of the calibration process in the two scenarios outlined above, namely i) a camera network and ii) a camera network with a robot, there is a lack of standard datasets available in the literature to evaluate and compare calibration methods. Moreover, they are usually treated separately and tested on dedicated setups. In this paper we propose a generic standard dataset acquired in a robotic workcell where both methods can be evaluated according to two benchmarks: camera network calibration and robot-world hand-eye calibration. MCRC is a Multi-Camera Robot system Calibration dataset consisting of over 10,000 synthetic and real images of ChArUco and checkerboard patterns, each rigidly attached to the robot end-effector, which was moved in front of four cameras surrounding the manipulator from different viewpoints during image acquisition. The real dataset includes several multi-view image sets captured by three different types of sensor networks: Microsoft Kinect V2, Intel RealSense Depth D455, and LiDAR L515, to evaluate their advantages and disadvantages for calibration. Furthermore, in order to accurately analyze the effect of camera-robot distance on calibration, we acquired a comprehensive synthetic dataset, with related ground truth, with three different camera-network setups corresponding to three levels of calibration difficulty depending on the cell size. An additional contribution of this work is a comprehensive evaluation of state-of-the-art calibration methods using our dataset, highlighting their strengths and weaknesses, in order to outline the two aforementioned benchmarks.
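For readers unfamiliar with the term, the robot-world hand-eye problem this dataset benchmarks is usually written (in the broader literature, not in this abstract, and up to convention choices for the transform directions) as a matrix equation over homogeneous transforms:

```latex
% Robot-world hand-eye calibration, standard AX = ZB formulation:
%   A_i : robot base -> end-effector pose at configuration i (forward kinematics)
%   B_i : camera -> calibration-pattern pose at configuration i (pattern detection)
%   X   : end-effector -> pattern transform (unknown)
%   Z   : robot base -> camera transform (unknown)
A_i X = Z B_i, \qquad i = 1, \dots, n
```

Each robot pose with a detected pattern contributes one such constraint; solving the set of equations jointly yields the camera pose Z with respect to the robot, which is exactly the quantity the benchmark evaluates.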
