Search Results (13)

Search Parameters:
Keywords = Teachable Machine

18 pages, 3987 KB  
Article
Interactive Application with Virtual Reality and Artificial Intelligence for Improving Pronunciation in English Learning
by Gustavo Caiza, Carlos Villafuerte and Adriana Guanuche
Appl. Sci. 2025, 15(17), 9270; https://doi.org/10.3390/app15179270 - 23 Aug 2025
Viewed by 1235
Abstract
Technological advances have enabled the development of innovative educational tools, particularly those aimed at supporting English as a Second Language (ESL) learning, with a specific focus on oral skills. However, pronunciation remains a significant challenge due to the limited availability of personalized learning opportunities that offer immediate feedback and contextualized practice. In this context, the present research proposes the design, implementation, and validation of an immersive application that leverages virtual reality (VR) and artificial intelligence (AI) to enhance English pronunciation. The proposed system integrates a 3D interactive environment developed in Unity, voice classification models trained using Teachable Machine, and real-time communication with Firebase, allowing users to practice and assess their pronunciation in a simulated library-like virtual setting. Through its integrated AI module, the application can analyze the pronunciation of each word in real time, detecting correct and incorrect utterances and providing immediate feedback to help users identify and correct their mistakes. The virtual environment was designed to be welcoming and user-friendly, promoting active engagement with the learning process. The application’s distributed architecture enables automated feedback generation via data flow between the cloud-based AI, the database, and the visualization interface. Results demonstrate that using 400 samples per class and a confidence threshold of 99.99% for training the AI model effectively eliminated false positives, significantly increasing system accuracy and providing users with more reliable feedback. This directly contributes to enhanced learner autonomy and improved ESL acquisition outcomes. Furthermore, user surveys conducted to gauge perceptions of the application’s usefulness as a support tool for English learning yielded an average acceptance rate of 93%. This reflects the acceptance of immersive technologies in educational contexts: the combination of VR and AI offers a realistic and user-friendly simulation environment, along with detailed word-level analysis, facilitating self-assessment and independent learning among students.
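The confidence-threshold filtering described above can be illustrated with a short sketch. The following is a minimal example, assuming a Teachable Machine model exported in Keras format with its standard keras_model.h5 and labels.txt files; the 0.9999 cutoff mirrors the 99.99% threshold reported in the paper, and the function name is illustrative.

```python
# Minimal sketch: rejecting low-confidence predictions from a
# Teachable Machine model exported in Keras format. File names follow
# Teachable Machine's standard export; the threshold mirrors the
# paper's 99.99% figure.
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]

CONFIDENCE_THRESHOLD = 0.9999  # accept only near-certain predictions

def classify(sample: np.ndarray):
    """Return the predicted label, or None when confidence is too low."""
    probs = model.predict(sample[np.newaxis, ...], verbose=0)[0]
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return None  # below threshold: no feedback rather than a false positive
    return class_names[best]
```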

14 pages, 3208 KB  
Article
Advancing Hydrogel-Based 3D Cell Culture Systems: Histological Image Analysis and AI-Driven Filament Characterization
by Lucio Assis Araujo Neto, Alessandra Maia Freire and Luciano Paulino Silva
Biomedicines 2025, 13(1), 208; https://doi.org/10.3390/biomedicines13010208 - 15 Jan 2025
Cited by 1 | Viewed by 1885
Abstract
Background: Machine learning is used to analyze images by training algorithms on data to recognize patterns and identify objects, with applications in various fields such as medicine, security, and automation. Meanwhile, histological cross-sections, whether longitudinal or transverse, expose layers of tissues or tissue mimetics, which provide crucial information for microscopic analysis. Objectives: This study aimed to employ the Google platform “Teachable Machine” to apply artificial intelligence (AI) in the interpretation of histological cross-section images of hydrogel filaments. Methods: The production of 3D hydrogel filaments involved different combinations of sodium alginate and gelatin polymers, as well as a cross-linking agent, and subsequent stretching until rupture using an extensometer. Cross-sections of stretched and unstretched filaments were created and stained with hematoxylin and eosin. Using the Teachable Machine platform, the images were grouped and used to train a model for subsequent prediction. Results: Over six hundred histological cross-section images were obtained and stored in a virtual database. Each hydrogel combination exhibited variations in coloration, and some morphological structures remained consistent. The AI efficiently identified and differentiated images of stretched and unstretched filaments. However, some confusion arose when distinguishing among variations in hydrogel combinations. Conclusions: The image prediction tool for biopolymeric hydrogel histological cross-sections built with Teachable Machine therefore proved to be an efficient strategy for distinguishing stretched from unstretched filaments.
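As a rough illustration of this prediction workflow, the sketch below batch-classifies a folder of section images with a Teachable Machine image model exported to Keras. The 224 × 224 resize and [-1, 1] normalization follow Teachable Machine's standard export snippet; the directory name and file pattern are assumptions.

```python
# Sketch: tallying predictions over a folder of histology images with
# a Teachable Machine image model exported in Keras format.
from pathlib import Path
from collections import Counter
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]

def preprocess(path: Path) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32)
    return (arr / 127.5) - 1.0  # scale pixels to [-1, 1], per TM's export

counts = Counter()
for path in Path("histology_sections").glob("*.png"):  # assumed layout
    probs = model.predict(preprocess(path)[np.newaxis, ...], verbose=0)[0]
    counts[class_names[int(np.argmax(probs))]] += 1

print(counts)  # e.g. tallies of "stretched" vs "unstretched" predictions
```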
(This article belongs to the Special Issue 3D Cell Culture Systems for Biomedical Research)

25 pages, 6143 KB  
Article
Dynamic Tracking and Real-Time Fall Detection Based on Intelligent Image Analysis with Convolutional Neural Network
by Ching-Bang Yao and Cheng-Tai Lu
Sensors 2024, 24(23), 7448; https://doi.org/10.3390/s24237448 - 22 Nov 2024
Cited by 2 | Viewed by 3168
Abstract
As many countries face rapid population aging, the supply of caregiving manpower falls far short of the increasing demand for care. A care system that can continuously recognize and track the care recipient and, at the first sign of a fall, promptly analyze the image to accurately assess the circumstances of the fall would therefore be highly valuable. This study combines the mobility of drones with the Dlib HOG algorithm and intelligent fall posture analysis, aiming to achieve real-time tracking of care recipients. Additionally, the study enhances the real-time multi-person action analysis feature of OpenPose to strengthen its analytical capabilities for various fall scenarios, enabling accurate analysis of the approximate real-time situation when a care recipient falls. In the experimental results, the system’s identification accuracy for four fall directions is higher than that of Google Teachable Machine’s Pose Project training model. In particular, there is a significant improvement in identifying backward falls, with identification accuracy increasing from 70.35% to 95%. Furthermore, the identification accuracy for forward and leftward falls also increases by nearly 14%. The experimental results therefore demonstrate that the improved identification accuracy for the four fall directions in different scenarios exceeds 95%.
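The paper's pose-based analysis is far richer than this, but a toy sketch can convey how a coarse fall direction might be read from 2D pose keypoints of the kind OpenPose outputs. Everything here, including the torso-vector heuristic and the keypoint values, is illustrative rather than the authors' method.

```python
# Toy sketch (not the authors' method): guessing a coarse fall
# direction from two 2D keypoints, neck and mid-hip, given as (x, y)
# image coordinates of the final frame.
import numpy as np

def fall_direction(neck: np.ndarray, mid_hip: np.ndarray) -> str:
    """Classify fall direction from the torso vector (neck - mid_hip)."""
    dx, dy = neck - mid_hip
    if abs(dx) > abs(dy):
        return "leftward" if dx < 0 else "rightward"
    # In image coordinates y grows downward: a neck below the hips
    # suggests the torso has pitched forward.
    return "forward" if dy > 0 else "backward"

print(fall_direction(np.array([320.0, 410.0]), np.array([300.0, 380.0])))
```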
(This article belongs to the Section Sensing and Imaging)

10 pages, 3168 KB  
Proceeding Paper
Filipino Meal Recognition Scale with Food Nutrition Calculation and Smart Application
by Andrew D. R. Demition, Zephanie Ann L. Narciso and Charmaine C. Paglinawan
Eng. Proc. 2024, 74(1), 54; https://doi.org/10.3390/engproc2024074054 - 3 Sep 2024
Viewed by 4986
Abstract
Nutritional awareness is increasingly prevalent in today’s society. As a result, more people are inclined to calculate the nutritional content of the food they eat to improve their physical fitness and balance their meals. In this study, the Internet of Things was used through a mobile application with Filipino meal recognition, integrated into a weighing scale to simplify meal recognition and nutritional calculation without individually weighing and measuring the macronutrients of each food. An ESP32 was programmed to determine the weight of the food sample. Moreover, a TensorFlow Lite model was created using Teachable Machine, with a dataset comprising three Filipino meal combinations: rice, pork adobo, and pork giniling; rice, ginataang kalabasa, and pork giniling; and rice, ginataang kalabasa, and pork adobo. The model identified 15 samples of Filipino meals per combination. The precision was 91.26% for the first meal combination, 82.73% for the second, and 85.46% for the third. One-factor ANOVA was conducted to determine the similarity of the actual and predicted macronutrient contents of the food samples, using 10 weight values of successfully recognized meals for each combination. The model recognized each Filipino food combination with an overall accuracy of 93.33%. Based on the statistical analysis performed, the predicted macronutrient contents were similar to the actual macronutrient contents of the meals.
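A minimal sketch of the recognition-plus-calculation step, assuming a Teachable Machine model exported to TensorFlow Lite: the model and label file names, the class key, and the per-100 g nutrient table are placeholders, not values from the paper.

```python
# Sketch: running a Teachable Machine TFLite export and scaling
# macronutrients by the weight reported from the scale.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

labels = [line.strip() for line in open("labels.txt")]

# Hypothetical per-100 g macronutrients (protein, fat, carbs) in grams;
# keys are assumed to match the entries in labels.txt.
NUTRIENTS_PER_100G = {"rice, pork adobo, pork giniling": (12.0, 14.0, 45.0)}

def recognize_and_compute(image: np.ndarray, weight_g: float):
    """Classify one preprocessed meal image and scale nutrients by weight."""
    interpreter.set_tensor(inp["index"], image[np.newaxis, ...].astype(np.float32))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    meal = labels[int(np.argmax(probs))]
    factor = weight_g / 100.0  # weight_g comes from the ESP32 scale reading
    protein, fat, carbs = NUTRIENTS_PER_100G[meal]
    return meal, (protein * factor, fat * factor, carbs * factor)
```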

15 pages, 6228 KB  
Article
Individual Identification of Medaka, a Small Freshwater Fish, from the Dorsal Side Using Artificial Intelligence
by Mai Osada, Masaki Yasugi, Hirotsugu Yamamoto, Atsushi Ito and Shoji Fukamachi
Hydrobiology 2024, 3(2), 119-133; https://doi.org/10.3390/hydrobiology3020009 - 13 Jun 2024
Viewed by 2170
Abstract
Individual identification is an important ability for humans, and perhaps also for non-human animals, to lead social lives. It is also desirable in laboratory experiments to keep records of each animal while rearing them en masse. However, the specific body parts or the acceptable visual angles that enable individual identification are mostly unknown for non-human animals. In this study, we investigated whether artificial intelligence (AI) could distinguish individual medaka, a model animal for biological, agrarian, ecological, and ethological studies, based on the dorsal view. Using Teachable Machine, we took photographs of adult fish (n = 4) and used the images for machine learning. To our surprise, the AI could perfectly identify the four individuals in a total of 11 independent experiments, and the identification remained valid for up to 10 days. The AI could also distinguish eight individuals, although the machine learning required more time and effort. These results clearly demonstrate that the dorsal appearances of this small spot-/stripe-less fish are polymorphic enough for individual identification. Whether these clues can be applied to laboratory experiments where individual identification would be beneficial is an intriguing theme for future research.

17 pages, 2900 KB  
Article
The Impact of Teachable Machine on Middle School Teachers’ Perceptions of Science Lessons after Professional Development
by Terri L. Kurz, Suren Jayasuriya, Kimberlee Swisher, John Mativo, Ramana Pidaparti and Dawn T. Robinson
Educ. Sci. 2024, 14(4), 417; https://doi.org/10.3390/educsci14040417 - 16 Apr 2024
Cited by 5 | Viewed by 6173
Abstract
Technological advances in computer vision and machine learning image and audio classification will continue to improve and evolve. Despite their prevalence, teachers feel ill-prepared to use these technologies to support their students’ learning. To address this, in-service middle school teachers participated in professional development, and middle school students participated in summer camp experiences that included the use of Google’s Teachable Machine, an easy-to-use interface for training machine learning classification models. An overview of Teachable Machine is provided, and lessons that highlight its use in middle school science are explained. Framed within Personal Construct Theory, an analysis of the impact of the professional development on middle school teachers’ perceptions (n = 17) of science lessons and activities is provided. Implications for future practice and future research are described.

15 pages, 7379 KB  
Article
Machine Learning Classification of Self-Organized Surface Structures in Ultrashort-Pulse Laser Processing Based on Light Microscopic Images
by Robert Thomas, Erik Westphal, Georg Schnell and Hermann Seitz
Micromachines 2024, 15(4), 491; https://doi.org/10.3390/mi15040491 - 2 Apr 2024
Cited by 4 | Viewed by 2339
Abstract
In ultrashort-pulsed laser processing, surface modification is subject to complex laser and scanning parameter studies. In addition, quality assurance systems for monitoring surface modification are still lacking. Automated laser processing routines featuring machine learning (ML) can help overcome these limitations, but they are largely absent in the literature and still lack practical applications. This paper presents a new methodology for machine learning classification of self-organized surface structures based on light microscopic images. For this purpose, three application-relevant types of self-organized surface structures are fabricated using a 300 fs laser system on hot working tool steel and stainless-steel substrates. Optical images of the hot working tool steel substrates were used to train a classification algorithm based on the open-source tool Teachable Machine from Google. The trained classification algorithm achieved very high accuracy in distinguishing the surface types both on the hot working tool steel substrate it was trained on and on the stainless-steel substrate. In addition, the algorithm achieved very high accuracy in classifying images of a specific structure class captured at different optical magnifications. The proposed methodology thus represents a simple and robust automated classification of surface structures that can serve as a basis for further development of quality assurance systems, automated process parameter recommendation, and inline laser parameter control.

31 pages, 8358 KB  
Article
Advancements in Healthcare: Development of a Comprehensive Medical Information System with Automated Classification for Ocular and Skin Pathologies—Structure, Functionalities, and Innovative Development Methods
by Ana-Maria Ștefan, Nicu-Răzvan Rusu, Elena Ovreiu and Mihai Ciuc
Appl. Syst. Innov. 2024, 7(2), 28; https://doi.org/10.3390/asi7020028 - 27 Mar 2024
Cited by 2 | Viewed by 4032
Abstract
This article introduces a groundbreaking medical information system developed in Salesforce, featuring an automated classification module for ocular and skin pathologies using Google Teachable Machine. Integrating cutting-edge technology with Salesforce’s robust capabilities, the system provides a comprehensive solution for medical practitioners. The article explores the system’s structure, emphasizing innovative functionalities that enhance diagnostic precision and streamline medical workflows. Methods used in development are discussed, offering insights into the integration of Google Teachable Machine into the Salesforce framework. This collaborative approach is a significant stride in intelligent pathology classification, advancing the field of medical information systems and fostering efficient healthcare practices.
(This article belongs to the Section Information Systems)

17 pages, 1959 KB  
Article
Comparative Analysis of Machine Learning Models for Image Detection of Colonic Polyps vs. Resected Polyps
by Adriel Abraham, Rejath Jose, Jawad Ahmad, Jai Joshi, Thomas Jacob, Aziz-ur-rahman Khalid, Hassam Ali, Pratik Patel, Jaspreet Singh and Milan Toma
J. Imaging 2023, 9(10), 215; https://doi.org/10.3390/jimaging9100215 - 9 Oct 2023
Cited by 6 | Viewed by 3725
Abstract
(1) Background: Colon polyps are common protrusions in the colon’s lumen, with potential risks of developing colorectal cancer. Early detection and intervention of these polyps are vital for reducing colorectal cancer incidence and mortality rates. This research aims to evaluate and compare the performance of three machine learning image classification models in detecting and classifying colon polyps. (2) Methods: The performance of three machine learning image classification models, Google Teachable Machine (GTM), Roboflow3 (RF3), and You Only Look Once version 8 (YOLOv8n), in the detection and classification of colon polyps was evaluated using the testing split for each model. The external validity of the test was analyzed using 90 images that were not used to train, validate, or test the models. The study used a dataset of colonoscopy images of normal colon, polyps, and resected polyps. The study assessed the models’ ability to correctly classify the images into their respective classes using precision, recall, and F1 scores generated from confusion matrix analysis and performance graphs. (3) Results: All three models successfully distinguished between normal colon, polyps, and resected polyps in colonoscopy images. GTM achieved the highest accuracy (0.99), with consistent precision, recall, and F1 scores of 1.00 for the ‘normal’ class, 0.97–1.00 for ‘polyps’, and 0.97–1.00 for ‘resected polyps’. While GTM exclusively classified images into these three categories, both YOLOv8n and RF3 were able to detect and specify the location of normal colonic tissue, polyps, and resected polyps, with YOLOv8n and RF3 achieving overall accuracies of 0.84 and 0.87, respectively. (4) Conclusions: Machine learning, particularly models like GTM, shows promising results in ensuring comprehensive detection of polyps during colonoscopies.
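Metrics like the ones reported here are conventionally derived from a confusion matrix; a minimal scikit-learn sketch (with placeholder labels, not the study's data) looks like this:

```python
# Sketch: per-class precision, recall, and F1 from predictions on a
# held-out test split. The label lists are illustrative placeholders.
from sklearn.metrics import classification_report, confusion_matrix

classes = ["normal", "polyps", "resected polyps"]
y_true = ["normal", "polyps", "resected polyps", "polyps", "normal"]
y_pred = ["normal", "polyps", "resected polyps", "normal", "normal"]

print(confusion_matrix(y_true, y_pred, labels=classes))
print(classification_report(y_true, y_pred, labels=classes, digits=2))
```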
(This article belongs to the Section Medical Imaging)

38 pages, 6704 KB  
Article
Performance Improvement of Melanoma Detection Using a Multi-Network System Based on Decision Fusion
by Hassan El-khatib, Ana-Maria Ștefan and Dan Popescu
Appl. Sci. 2023, 13(18), 10536; https://doi.org/10.3390/app131810536 - 21 Sep 2023
Cited by 10 | Viewed by 3243
Abstract
The incidence of melanoma cases continues to rise, underscoring the critical need for early detection and treatment. Recent studies highlight the significance of deep learning in melanoma detection, leading to improved accuracy. The field of computer-assisted detection is being extensively explored, especially in medicine, where the benefit is saving human lives. In this domain, this direction must be fully exploited and introduced into routine check-ups to improve patient prognosis and disease prevention, reduce treatment costs, improve population management, and strengthen patient empowerment. All these aspects were taken into consideration to implement an EHR system with an automated melanoma detection component. The first step, as presented in this paper, is to build a system based on the fusion of decisions from multiple neural networks, such as DarkNet-53, DenseNet-201, GoogLeNet, Inception-V3, InceptionResNet-V2, ResNet-50, and ResNet-101, and to compare this classifier with four other applications, Google Teachable Machine, Microsoft Azure Machine Learning, Google Vertex AI, and Salesforce Einstein Vision, based on the F1 score, for further integration into an EHR platform. We trained all models on two databases, ISIC 2020 and DermIS, to also test their adaptability to a wide range of images. Comparisons with state-of-the-art research and existing applications confirm the promising performance of the proposed system.
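Decision fusion can take several forms; the sketch below shows soft voting, one common rule in which the softmax outputs of all networks are averaged before taking the argmax. The paper's exact fusion rule may differ, and the probability values are invented for illustration.

```python
# Sketch of one common decision-fusion rule (soft voting): average the
# softmax outputs of several networks and take the argmax.
import numpy as np

def fuse_predictions(prob_matrix: np.ndarray) -> int:
    """prob_matrix: shape (n_models, n_classes) of softmax outputs."""
    return int(np.argmax(prob_matrix.mean(axis=0)))

# Three hypothetical networks scoring classes [benign, melanoma]:
probs = np.array([[0.40, 0.60],
                  [0.55, 0.45],
                  [0.20, 0.80]])
print(fuse_predictions(probs))  # -> 1 (melanoma); mean ≈ [0.383, 0.617]
```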

20 pages, 3436 KB  
Article
Deep Learning-Based Real Time Defect Detection for Optimization of Aircraft Manufacturing and Control Performance
by Imran Shafi, Muhammad Fawad Mazhar, Anum Fatima, Roberto Marcelo Alvarez, Yini Miró, Julio César Martínez Espinosa and Imran Ashraf
Drones 2023, 7(1), 31; https://doi.org/10.3390/drones7010031 - 1 Jan 2023
Cited by 37 | Viewed by 8919
Abstract
Monitoring tool conditions and sub-assemblies before final integration is essential to reducing processing failures and improving production quality in manufacturing setups. This research study proposes a real-time deep learning-based framework for identifying faulty components due to malfunctioning at different manufacturing stages in the aerospace industry. It uses a convolutional neural network (CNN) to recognize and classify intermediate abnormal states in a single manufacturing process. The manufacturing process for aircraft factory products comprises different phases; analyzing the components after integration is labor-intensive and time-consuming, which often puts the company’s stake at high risk. To overcome these challenges, the proposed AI-based system can perform inspection and defect detection and reduce the probability of components needing to be re-manufactured after being assembled. In addition, it analyzes the impact value, i.e., rework delays and costs, of manufacturing processes using a statistical process control tool on real-time data for various manufactured components. Defects are detected and classified using the CNN and Teachable Machine in the single manufacturing process during the initial stage, prior to assembling the components. The results show the significance of the proposed approach in improving operational cost management and reducing rework-induced delays. Ground tests are conducted to calculate the impact value, followed by air tests of the final assembled aircraft. The statistical results indicate a 52.88% and 34.32% reduction in time delays and total cost, respectively.
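Statistical process control of rework delays typically reduces to Shewhart-style control limits. A minimal sketch, with made-up delay figures rather than the study's data:

```python
# Sketch: Shewhart-style control limits (mean ± 3σ), the basic tool of
# statistical process control. Delay figures are placeholders.
import numpy as np

delays_hours = np.array([4.2, 3.9, 5.1, 4.4, 4.8, 3.7, 4.6])
mean, sigma = delays_hours.mean(), delays_hours.std(ddof=1)
ucl, lcl = mean + 3 * sigma, max(mean - 3 * sigma, 0.0)

for i, d in enumerate(delays_hours):
    flag = "OUT OF CONTROL" if not (lcl <= d <= ucl) else "ok"
    print(f"component {i}: {d:.1f} h  [{flag}]")
print(f"UCL = {ucl:.2f} h, LCL = {lcl:.2f} h")
```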

10 pages, 2752 KB  
Article
Feasibility of the Machine Learning Network to Diagnose Tympanic Membrane Lesions without Coding Experience
by Hayoung Byun, Seung Hwan Lee, Tae Hyun Kim, Jaehoon Oh and Jae Ho Chung
J. Pers. Med. 2022, 12(11), 1855; https://doi.org/10.3390/jpm12111855 - 7 Nov 2022
Cited by 6 | Viewed by 3021
Abstract
A machine learning platform that can be operated without coding knowledge (Teachable Machine®) has been introduced. The aim of the present study was to assess the performance of Teachable Machine® for diagnosing tympanic membrane lesions. A total of 3024 tympanic membrane images were used to train the network and validate its diagnostic performance. Tympanic membrane images were labeled as normal, otitis media with effusion (OME), chronic otitis media (COM), or cholesteatoma. According to the complexity of the categorization, Level I refers to normal versus abnormal tympanic membrane; Level II was defined as normal, OME, or COM + cholesteatoma; and Level III distinguishes between all four pathologies. In addition, eighty representative test images were used to assess the performance. Teachable Machine® automatically creates a classification network and reports diagnostic performance when images are uploaded. The mean accuracy of Teachable Machine® in classifying tympanic membranes as normal or abnormal (Level I) was 90.1%. For Level II, the mean accuracy was 89.0%, and for Level III it was 86.2%. The overall accuracy of the classification of the 80 representative tympanic membrane images was 78.75%, and the hit rates for normal, OME, COM, and cholesteatoma were 95.0%, 70.0%, 90.0%, and 60.0%, respectively. Teachable Machine® could successfully generate a diagnostic network for classifying tympanic membranes.
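The three difficulty levels amount to collapsing the four diagnostic labels into coarser groupings, so a single four-class prediction can be scored at each level. A small sketch of that mapping (label spellings are illustrative):

```python
# Sketch: scoring one four-class prediction at the paper's three
# categorization levels by collapsing labels into coarser groups.
LEVEL_I = {"normal": "normal", "OME": "abnormal",
           "COM": "abnormal", "cholesteatoma": "abnormal"}
LEVEL_II = {"normal": "normal", "OME": "OME",
            "COM": "COM+cholesteatoma", "cholesteatoma": "COM+cholesteatoma"}

def accuracy(y_true, y_pred, mapping=None):
    if mapping:
        y_true = [mapping[y] for y in y_true]
        y_pred = [mapping[y] for y in y_pred]
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["normal", "OME", "COM", "cholesteatoma"]
y_pred = ["normal", "OME", "cholesteatoma", "COM"]
print(accuracy(y_true, y_pred, LEVEL_I))   # 1.0 at Level I
print(accuracy(y_true, y_pred, LEVEL_II))  # 1.0 at Level II
print(accuracy(y_true, y_pred))            # 0.5 at Level III
```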
(This article belongs to the Section Personalized Therapy in Clinical Medicine)

12 pages, 1835 KB  
Article
Development of an Image Analysis-Based Prognosis Score Using Google’s Teachable Machine in Melanoma
by Stephan Forchhammer, Amar Abu-Ghazaleh, Gisela Metzler, Claus Garbe and Thomas Eigentler
Cancers 2022, 14(9), 2243; https://doi.org/10.3390/cancers14092243 - 29 Apr 2022
Cited by 14 | Viewed by 4117
Abstract
Background: The increasing number of melanoma patients makes it necessary to establish new strategies for prognosis assessment to ensure follow-up care. Deep-learning-based image analysis of primary melanoma could be a future component of risk stratification. Objectives: To develop a risk score for overall survival based on image analysis through artificial intelligence (AI) and validate it in a test cohort. Methods: Hematoxylin and eosin (H&E) stained sections of 831 melanomas, diagnosed from 2012 to 2015, were photographed and used to perform deep-learning-based group classification. For this purpose, Google’s freely available Teachable Machine software was used. Five hundred patient sections were used as the training cohort, and 331 sections served as the test cohort. Results: Using Google’s Teachable Machine, a prognosis score for overall survival could be developed that achieved a statistically significant prognosis estimate with an AUC of 0.694 in a ROC analysis based solely on image sections of approximately 250 × 250 µm. The prognosis group “low-risk” (n = 230) showed an overall survival rate of 93%, whereas the prognosis group “high-risk” (n = 101) showed an overall survival rate of 77.2%. Conclusions: The study supports the possibility of using deep-learning-based classification systems for risk stratification in melanoma. The AI assessment used in this study provides a significant risk estimate in melanoma, but it does not considerably improve the existing risk classification based on the TNM system.
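The reported ROC analysis and risk grouping can be mimicked in a few lines of scikit-learn; the scores, outcomes, and 0.5 cutoff below are invented for illustration, not the study's data:

```python
# Sketch: ROC AUC for a survival risk score, plus a threshold-based
# split into "low-risk" and "high-risk" groups.
import numpy as np
from sklearn.metrics import roc_auc_score

died = np.array([0, 0, 1, 0, 1, 1, 0, 0])           # 1 = death in follow-up
risk_score = np.array([0.1, 0.3, 0.8, 0.2, 0.6,     # model's risk output
                       0.9, 0.7, 0.25])
print(f"AUC = {roc_auc_score(died, risk_score):.3f}")

high_risk = risk_score >= 0.5                        # illustrative cutoff
print(f"high-risk survival rate = {1 - died[high_risk].mean():.1%}")
```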
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)
