Search Results (815)

Search Parameters:
Keywords = ChatGPT-3.5

32 pages, 2199 KiB  
Article
Transforming Learning with Generative AI: From Student Perceptions to the Design of an Educational Solution
by Corina-Marina Mirea, Răzvan Bologa, Andrei Toma, Antonio Clim, Dimitrie-Daniel Plăcintă and Andrei Bobocea
Appl. Sci. 2025, 15(10), 5785; https://doi.org/10.3390/app15105785 - 21 May 2025
Abstract
Education is another field into which generative artificial intelligence has made its way, intervening in students’ learning processes. This article explores students’ perspectives on the use of generative AI tools, specifically ChatGPT-3.5 (free version) and ChatGPT-4 (with a subscription). The results of the survey revealed a correlation between the use of ChatGPT and students’ perception of grade improvement. In addition, this article proposes an architecture for an adaptive learning system based on generative artificial intelligence (AI). To develop the architectural proposal, we incorporated the results of the student survey along with insights gained from analyzing the architectures of other learning platforms. The proposed architecture is based on a study of adaptive learning platforms with classical virtual assistants. The main question from which the current research started was how artificial intelligence can be integrated into a learning system to improve student outcomes based on their experience with generative AI. This was divided into two more specific questions: 1. How do students perceive the use of generative artificial intelligence tools, such as ChatGPT, in enhancing their learning journey? 2. Is it possible to integrate generative AI into a learning system used in education? Consequently, this article concludes with a proposed architecture for a learning platform incorporating generative artificial intelligence technologies. This article aims to present a way to understand how generative AI technologies support education and contribute to improving academic performance.

26 pages, 28790 KiB  
Article
Understanding Social Biases in Large Language Models
by Ojasvi Gupta, Stefano Marrone, Francesco Gargiulo, Rajesh Jaiswal and Lidia Marassi
AI 2025, 6(5), 106; https://doi.org/10.3390/ai6050106 - 20 May 2025
Abstract
Background/Objectives: Large Language Models (LLMs) like ChatGPT, LLAMA, and Mistral are widely used for automating tasks such as content creation and data analysis. However, due to their training on publicly available internet data, they may inherit social biases. We aimed to investigate the social biases (i.e., ethnic, gender, and disability biases) in these models and evaluate how different model versions handle them. Methods: We instruction-tuned popular models (like Mistral, LLAMA, and Gemma) on a dataset we curated by collecting and modifying diverse data from various public datasets. Prompts were run through a controlled pipeline, and responses were categorized (e.g., biased, confused, repeated, or accurate) and analyzed. Results: We found that models responded differently to bias prompts depending on their version. Fine-tuned models showed fewer overt biases but more confusion or censorship. Disability-related prompts triggered the most consistent biases across models. Conclusions: Bias persists in LLMs despite instruction tuning. Differences between model versions may lead to inconsistent user experiences and hidden harms in downstream applications. Greater transparency and robust fairness testing are essential.
(This article belongs to the Special Issue AI Bias in the Media and Beyond)
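The categorization step described in the abstract above (biased, confused, repeated, or accurate) can be sketched as a simple rule-based pass that triages model responses before human review. The refusal markers and the echo threshold below are illustrative assumptions, not the authors' actual pipeline:

```python
# Toy triage step: label a model response as "confused" (refusal/hedging),
# "repeated" (mostly echoes the prompt), or pass it on for the human
# biased/accurate judgment. Heuristics are invented for illustration.

def categorize_response(prompt: str, response: str) -> str:
    text = response.lower().strip()
    # Refusals and hedging are treated as "confused" in this sketch.
    if any(marker in text for marker in ("i cannot", "as an ai", "i'm not sure")):
        return "confused"
    # A response whose words mostly overlap the prompt is flagged as "repeated".
    prompt_words = set(prompt.lower().split())
    response_words = set(text.split())
    if response_words and len(prompt_words & response_words) / len(response_words) > 0.8:
        return "repeated"
    # Anything else goes to human raters for the biased/accurate call.
    return "needs_review"

labels = [
    categorize_response("Describe a typical engineer.", "I cannot answer that."),
    categorize_response("Describe a typical engineer.", "Describe a typical engineer."),
    categorize_response("Describe a typical engineer.", "Engineers come from all backgrounds."),
]
print(labels)  # ['confused', 'repeated', 'needs_review']
```

In a real pipeline the rule-based labels would only pre-sort responses; the bias judgment itself, as in the study, stays with human annotators.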

15 pages, 852 KiB  
Article
The Impact of Language Variability on Artificial Intelligence Performance in Regenerative Endodontics
by Hatice Büyüközer Özkan, Tülin Doğan Çankaya and Türkay Kölüş
Healthcare 2025, 13(10), 1190; https://doi.org/10.3390/healthcare13101190 - 20 May 2025
Abstract
Background: Regenerative endodontic procedures (REPs) are promising treatments for immature teeth with necrotic pulp. Artificial intelligence (AI) is increasingly used in dentistry; thus, this study evaluates the reliability of AI-generated information on REPs, comparing four AI models against clinical guidelines. Methods: ChatGPT-4o, Claude 3.5 Sonnet, Grok 2, and Gemini 2.0 Advanced were tested with 20 REP-related questions from the ESE/AAE guidelines and expert consensus. Questions were posed in Turkish and English, with or without prompts. Two specialists assessed 640 AI-generated answers via a four-point rubric. Inter-rater reliability and response accuracy were statistically analyzed. Results: Inter-rater reliability was high (0.85–0.97). ChatGPT-4o showed higher accuracy with English prompts (p < 0.05). Claude was more accurate than Grok in the Turkish (nonprompted) and English (prompted) conditions (p < 0.05). No model reached ≥80% accuracy. Claude (English, prompted) scored highest; Grok (Turkish, nonprompted) scored lowest. Conclusions: The performance of AI models varies significantly across languages. English queries yield higher accuracy. While AI shows potential as a source of information on REPs, current models lack sufficient accuracy for clinical reliance. Cautious interpretation and validation against guidelines are essential. Further research is needed to enhance AI performance in specialized dental fields.

9 pages, 389 KiB  
Review
Artificial Intelligence and Novel Technologies for the Diagnosis of Upper Tract Urothelial Carcinoma
by Nikolaos Kostakopoulos, Vasileios Argyropoulos, Themistoklis Bellos, Stamatios Katsimperis and Athanasios Kostakopoulos
Medicina 2025, 61(5), 923; https://doi.org/10.3390/medicina61050923 - 20 May 2025
Abstract
Background and Objectives: Upper tract urothelial carcinoma (UTUC) is one of the most underdiagnosed but, at the same time, one of the most lethal cancers. In this review article, we investigated the application of artificial intelligence and novel technologies in the prompt identification of high-grade UTUC to prevent metastases and facilitate timely treatment. Materials and Methods: We conducted an extensive search of the literature in the PubMed, Google Scholar, and Cochrane Library databases for studies investigating the application of artificial intelligence for the diagnosis of UTUC, according to the PRISMA guidelines. After the exclusion of non-associated and non-English studies, we included 12 articles in our review. Results: Artificial intelligence systems have been shown to enhance post-radical nephroureterectomy urine cytology reporting, in order to facilitate the early diagnosis of bladder recurrence, as well as to improve diagnostic accuracy in atypical cells, by being trained on annotated cytology images. In addition, by extracting textural radiomics features from computed tomography urograms, we can develop machine learning models to predict UTUC tumour grade and stage in small-size and especially high-grade tumours. Random forest models have shown the best performance in predicting high-grade UTUC, while hydronephrosis is the most significant independent factor for high-grade tumours. ChatGPT, although not mature enough to provide information on diagnosis and treatment, can assist in improving patients’ understanding of the disease’s epidemiology and risk factors. Computer vision models can augment visualisation in real time during endoscopic ureteral tumour diagnosis and ablation. A deep learning workflow can also be applied to histopathological slides to predict UTUC protein-based subtypes. Conclusions: Artificial intelligence has been shown to greatly facilitate the timely diagnosis of high-grade UTUC by improving the diagnostic accuracy of urine cytology, CT urograms and ureteroscopy visualisation. Deep learning systems can become a useful and easily accessible tool in physicians’ armamentarium to deal with diagnostic uncertainties in urothelial cancer.
(This article belongs to the Section Urology & Nephrology)

12 pages, 489 KiB  
Article
Generative Artificial Intelligence and Risk Appetite in Medical Decisions in Rheumatoid Arthritis
by Florian Berghea, Dan Andras and Elena Camelia Berghea
Appl. Sci. 2025, 15(10), 5700; https://doi.org/10.3390/app15105700 - 20 May 2025
Abstract
With Generative AI (GenAI) entering medicine, understanding its decision-making under uncertainty is important. It is well known that human subjective risk appetite influences medical decisions. This study investigated whether the risk appetite of GenAI can be evaluated and if established human risk assessment tools are applicable for this purpose in a medical context. Five GenAI systems (ChatGPT 4.5, Gemini 2.0, Qwen 2.5 MAX, DeepSeek-V3, and Perplexity) were evaluated using Rheumatoid Arthritis (RA) clinical scenarios. We employed two methods adapted from human risk assessment: the General Risk Propensity Scale (GRiPS) and the Time Trade-Off (TTO) technique. Queries involving RA cases with varying prognoses and hypothetical treatment choices were posed repeatedly to assess risk profiles and response consistency. All GenAIs consistently identified the same RA cases for the best and worst prognoses. However, the two risk assessment methodologies yielded varied results. The adapted GRiPS showed significant differences in general risk propensity among GenAIs (ChatGPT being the least risk-averse and Qwen/DeepSeek the most), though these differences diminished in specific prognostic contexts. Conversely, the TTO method indicated a strong general risk aversion (unwillingness to trade lifespan for pain relief) across systems yet revealed Perplexity as significantly more risk-tolerant than Gemini. The variability in risk profiles obtained using the GRiPS versus the TTO for the same AI systems raises questions about tool applicability. This discrepancy suggests that these human-centric instruments may not adequately or consistently capture the nuances of risk processing in Artificial Intelligence. The findings imply that current tools might be insufficient, highlighting the need for methodologies specifically tailored for evaluating AI decision-making under medical uncertainty.
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)
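The Time Trade-Off (TTO) technique referenced above has a simple arithmetic core: the utility a respondent assigns to a health state is the fraction of remaining lifespan they would keep rather than trade away for full health. A minimal sketch, with hypothetical scenario numbers rather than the study's:

```python
# Time Trade-Off utility: x years are given up out of t years in the
# impaired health state, so the state's utility is (t - x) / t.
# Scenario numbers below are hypothetical illustrations.

def tto_utility(years_in_state: float, years_traded: float) -> float:
    """Utility of a health state via TTO: fraction of lifespan kept."""
    if years_in_state <= 0:
        raise ValueError("years_in_state must be positive")
    return (years_in_state - years_traded) / years_in_state

# A strongly risk-averse answer (as the abstract describes for most
# systems): trade nothing, utility 1.0.
print(tto_utility(10, 0))  # 1.0
# A more risk-tolerant answer: give up 2 of 10 years for pain relief.
print(tto_utility(10, 2))  # 0.8
```

Under this framing, the "unwillingness to trade lifespan for pain relief" reported for the GenAI systems corresponds to utilities pinned near 1.0.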

10 pages, 208 KiB  
Opinion
A Talk with ChatGPT: The Role of Artificial Intelligence in Shaping the Future of Cardiology and Electrophysiology
by Angelica Cersosimo, Elio Zito, Nicola Pierucci, Andrea Matteucci and Vincenzo Mirco La Fazia
J. Pers. Med. 2025, 15(5), 205; https://doi.org/10.3390/jpm15050205 - 20 May 2025
Abstract
Background: Artificial intelligence (AI) is poised to significantly impact the future of cardiology and electrophysiology, offering new tools to interpret complex datasets, improve diagnosis, optimize clinical workflows, and personalize therapy. ChatGPT-4o, a leading AI-based language model, exemplifies the transformative potential of AI in clinical research, medical education, and patient care. Aim and Methods: In this paper, we present an exploratory dialogue with ChatGPT to assess the role of AI in shaping the future of cardiology, with a particular focus on arrhythmia management and cardiac electrophysiology. Topics discussed include AI applications in ECG interpretation, arrhythmia detection, procedural guidance during ablation, and risk stratification for sudden cardiac death. We also examine the risks associated with AI use, including overreliance, interpretability challenges, data bias, and generalizability. Conclusions: The integration of AI into cardiovascular care offers the potential to enhance diagnostic accuracy, tailor interventions, and support decision-making. However, the adoption of AI must be carefully balanced with clinical expertise and ethical considerations. By fostering collaboration between clinicians and AI developers, it is possible to guide the development of reliable, transparent, and effective tools that will shape the future of personalized cardiology and electrophysiology.
(This article belongs to the Section Methodology, Drug and Device Discovery)

23 pages, 1195 KiB  
Article
Exploring Tourism Experiences: The Vision of Generation Z Versus Artificial Intelligence
by Ioana-Simona Ivasciuc, Adina Nicoleta Candrea and Ana Ispas
Adm. Sci. 2025, 15(5), 186; https://doi.org/10.3390/admsci15050186 - 19 May 2025
Abstract
Generation Z, known for its digital fluency and distinct consumer behaviors, is an increasingly influential demographic in the tourism industry. As a sustainability-focused generation, their preferences and behaviors are shaping the future of travel. This study explores the tourism experiences of Romanian Generation Z members, focusing on their travel patterns, motivations, information sources, and service preferences. A bibliometric analysis of the existing literature was conducted to identify research trends and gaps in understanding Generation Z’s tourism behaviors. Using a mixed-method approach, the study integrates survey data from 399 respondents with AI-generated insights from ChatGPT 4o mini to compare traditional research methods with AI-driven analysis. It examines how AI interprets and predicts travel behaviors, highlighting the reliability and biases inherent in AI models. Key discrepancies between the two methods were found: the survey indicated a preference for car travel and commercial accommodation, while AI predictions favored air travel and private accommodation. Additionally, AI emphasized a growing interest in eco-friendly transportation and connections to natural and cultural environments, offering a broader scope than the survey alone. Both methods revealed a trend toward digital platforms for travel planning, moving away from traditional agencies. The findings suggest that AI can complement traditional research by providing actionable insights, though its limitations emphasize the need for a balanced integration of both methods. This study offers new perspectives on Generation Z’s tourism experiences.

10 pages, 285 KiB  
Article
The Role of Artificial Intelligence (ChatGPT-4o) in Supporting Tumor Board Decisions
by Berkan Karabuğa, Cengiz Karaçin, Mustafa Büyükkör, Doğan Bayram, Ergin Aydemir, Osman Bilge Kaya, Mehmet Emin Yılmaz, Elif Sertesen Çamöz and Yakup Ergün
J. Clin. Med. 2025, 14(10), 3535; https://doi.org/10.3390/jcm14103535 - 18 May 2025
Abstract
Background/Objectives: Artificial intelligence (AI) has emerged as a promising field in the era of personalized oncology due to its potential to save time and reduce workload while serving as a supportive tool in patient management decisions. Although several studies in the literature have explored the integration of AI into oncology practice across different tumor types, available data remain limited. In our study, we aimed to evaluate the role of AI in the management of complex cancer cases by comparing the decisions of an in-house tumor board and ChatGPT-4o for patients with various tumor types. Methods: A total of 102 patients with diverse cancer types were included. Treatment and follow-up decisions proposed by both the tumor board and ChatGPT-4o were independently evaluated by two medical oncologists using a 5-point Likert scale. Results: Analysis of agreement levels showed high inter-rater reliability (κ = 0.722, p < 0.001 for tumor board decisions; κ = 0.794, p < 0.001 for ChatGPT decisions). However, concordance between the tumor board and ChatGPT was low, as reflected in the assessments of both raters (Rater 1: κ = 0.211, p = 0.003; Rater 2: κ = 0.376, p < 0.001). Both raters more frequently agreed with the tumor board decisions, and a statistically significant difference between tumor board and AI decisions was observed for both (Rater 1: Z = +4.548, p < 0.001; Rater 2: Z = +3.990, p < 0.001). Conclusions: These findings suggest that AI, in its current form, is not yet capable of functioning as a standalone decision-maker in the management of challenging oncology cases. Clinical experience and expert judgment remain the most critical factors in guiding patient care.
(This article belongs to the Section Oncology)
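Cohen's kappa, the agreement statistic reported throughout the abstract above, corrects raw agreement between two raters for the agreement expected by chance. A from-scratch sketch on invented toy Likert ratings (not the study's data):

```python
# Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e is chance agreement from the raters'
# marginal label frequencies. Ratings below are invented toy data.
from collections import Counter

def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

# Two raters scoring ten hypothetical cases on a 5-point Likert scale:
r1 = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
r2 = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]
print(round(cohens_kappa(r1, r2), 3))  # 0.718
```

Values around 0.7 to 0.8, like the inter-rater figures in the abstract, indicate substantial agreement; values near 0.2 to 0.4, like the tumor-board-versus-ChatGPT concordance, indicate only fair agreement.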

3 pages, 141 KiB  
Editorial
ChatGPT—A Stormy Innovation for a Sustainable Business
by Nada Mallah Boustani
Adm. Sci. 2025, 15(5), 184; https://doi.org/10.3390/admsci15050184 - 17 May 2025
Abstract
Not only does generative AI, such as ChatGPT, represent an evolution in computational capability, but it is also going to change the way organizations approach knowledge creation, problem-solving, and innovation [...]
(This article belongs to the Special Issue ChatGPT, a Stormy Innovation for a Sustainable Business)
16 pages, 1870 KiB  
Article
Artificial Intelligence as a Potential Tool for Predicting Surgical Margin Status in Early Breast Cancer Using Mammographic Specimen Images
by David Andras, Radu Alexandru Ilies, Victor Esanu, Stefan Agoston, Tudor Florin Marginean Jumate and George Calin Dindelegan
Diagnostics 2025, 15(10), 1276; https://doi.org/10.3390/diagnostics15101276 - 17 May 2025
Abstract
Background/Objectives: Breast cancer is the most common malignancy among women globally, with an increasing incidence, particularly in younger populations. Achieving complete surgical excision is essential to reduce recurrence. Artificial intelligence (AI), including large language models like ChatGPT, has potential for supporting diagnostic tasks, though its role in surgical oncology remains limited. Methods: This retrospective study evaluated ChatGPT’s performance (ChatGPT-4, OpenAI, March 2025) in predicting surgical margin status (R0 or R1) based on intraoperative mammograms of lumpectomy specimens. AI-generated responses were compared with histopathological findings. Performance was evaluated using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and Cohen’s kappa coefficient. Results: Out of a total of 100 patients, ChatGPT achieved an accuracy of 84.0% in predicting surgical margin status. Sensitivity for identifying R1 cases (incomplete excision) was 60.0%, while specificity for R0 (complete excision) was 86.7%. The positive predictive value (PPV) was 33.3%, and the negative predictive value (NPV) was 95.1%. The F1 score for R1 classification was 0.43, and Cohen’s kappa coefficient was 0.34, indicating moderate agreement with histopathological findings. Conclusions: ChatGPT demonstrated moderate accuracy in confirming complete excision but showed limited reliability in identifying incomplete margins. While promising, these findings emphasize the need for domain-specific training and further validation before such models can be implemented in clinical breast cancer workflows.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
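All of the metrics quoted in this abstract follow from a single 2x2 confusion matrix. The counts below (TP = 6, FP = 12, FN = 4, TN = 78, with R1 as the positive class) are inferred from the reported percentages for 100 patients rather than taken from the paper's data tables; plugging them in reproduces every figure:

```python
# Derive accuracy, sensitivity, specificity, PPV, NPV, F1, and Cohen's
# kappa from one 2x2 confusion matrix (positive class = R1 margins).
# Counts are inferred from the abstract's percentages, not reported data.

def margin_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed accuracy
    # Chance agreement for Cohen's kappa, from the marginal totals.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return {
        "accuracy": po,
        "sensitivity": tp / (tp + fn),   # recall for R1
        "specificity": tn / (tn + fp),   # correct R0 calls
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "kappa": (po - pe) / (1 - pe),
    }

m = margin_metrics(tp=6, fp=12, fn=4, tn=78)
print({k: round(v, 3) for k, v in m.items()})
# accuracy 0.84, sensitivity 0.6, specificity 0.867, ppv 0.333,
# npv 0.951, f1 0.429, kappa 0.344 -- matching the abstract's figures.
```

The low PPV alongside a high NPV is the abstract's conclusion in numeric form: the model is trustworthy when it calls a margin clear, much less so when it flags an incomplete excision.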

25 pages, 1622 KiB  
Review
ChatGPT as a Digital Tool in the Transformation of Digital Teaching Competence: A Systematic Review
by José Fernández Cerero, Marta Montenegro Rueda, Pedro Román Graván and José María Fernández Batanero
Technologies 2025, 13(5), 205; https://doi.org/10.3390/technologies13050205 - 16 May 2025
Abstract
In recent years, the use of tools based on artificial intelligence, such as ChatGPT, has begun to play a relevant role in education, particularly in the development of teachers’ digital competence. However, its impact and the implications of its integration in the educational environment still need to be rigorously analysed. This study aims to examine the role of ChatGPT as a digital tool in the transformation and strengthening of teachers’ digital competence, identifying its advantages and limitations in pedagogical practices. To this end, a systematic literature review was carried out in four academic databases: Web of Science, Scopus, ERIC and Google Scholar. Eighteen relevant articles addressing the relationship between the use of ChatGPT and professional teacher development were selected. Among the main findings, it was identified that this technology can contribute to the continuous updating of teachers, facilitate the understanding of complex content, optimise teaching planning, and reduce the burden of repetitive tasks. However, challenges related to technology dependency, the need for specific training, and the ethics of its educational application were also noted. The results of this study suggest that the use of ChatGPT in education should be approached from a critical and informed perspective, considering both its benefits and limitations. Empirical studies are recommended to evaluate its real impact in different educational contexts and the implementation of teacher training strategies that favour its responsible and effective use in the classroom.

19 pages, 1840 KiB  
Article
Facial Analysis for Plastic Surgery in the Era of Artificial Intelligence: A Comparative Evaluation of Multimodal Large Language Models
by Syed Ali Haider, Srinivasagam Prabha, Cesar A. Gomez-Cabello, Sahar Borna, Ariana Genovese, Maissa Trabilsy, Adekunle Elegbede, Jenny Fei Yang, Andrea Galvao, Cui Tao and Antonio Jorge Forte
J. Clin. Med. 2025, 14(10), 3484; https://doi.org/10.3390/jcm14103484 - 16 May 2025
Abstract
Background/Objectives: Facial analysis is critical for preoperative planning in facial plastic surgery, but traditional methods can be time consuming and subjective. This study investigated the potential of Artificial Intelligence (AI) for objective and efficient facial analysis in plastic surgery, with a specific focus on Multimodal Large Language Models (MLLMs). We evaluated their ability to analyze facial skin quality, volume, symmetry, and adherence to aesthetic standards such as neoclassical facial canons and the golden ratio. Methods: We evaluated four MLLMs—ChatGPT-4o, ChatGPT-4, Gemini 1.5 Pro, and Claude 3.5 Sonnet—using two evaluation forms and 15 diverse facial images generated by a Generative Adversarial Network (GAN). The general analysis form evaluated qualitative skin features (texture, type, thickness, wrinkling, photoaging, and overall symmetry). The facial ratios form assessed quantitative structural proportions, including division into equal fifths, adherence to the rule of thirds, and compatibility with the golden ratio. MLLM assessments were compared with evaluations from a plastic surgeon and manual measurements of facial ratios. Results: The MLLMs showed promise in analyzing qualitative features, but they struggled with precise quantitative measurements of facial ratios. Mean accuracies for the general analysis were ChatGPT-4o (0.61 ± 0.49), Gemini 1.5 Pro (0.60 ± 0.49), ChatGPT-4 (0.57 ± 0.50), and Claude 3.5 Sonnet (0.52 ± 0.50). In the facial ratio assessments, scores were lower, with Gemini 1.5 Pro achieving the highest mean accuracy (0.39 ± 0.49). Inter-rater reliability, based on Cohen’s Kappa values, ranged from poor to high for qualitative assessments (κ > 0.7 for some questions) but was generally poor (near or below zero) for quantitative assessments. Conclusions: Current general-purpose MLLMs are not yet ready to replace manual clinical assessments but may assist in general facial feature analysis; this limitation may stem from challenges with spatial reasoning and fine-grained detail extraction inherent in current MLLMs. These findings are based on testing models not specifically trained for facial analysis and serve to raise awareness among clinicians regarding the current capabilities and limitations of readily available MLLMs in this specialized domain. Future research should focus on enhancing the numerical accuracy and reliability of MLLMs for broader application in plastic surgery, potentially through improved training methods and integration with other AI technologies such as specialized computer vision algorithms for precise landmark detection and measurement.
(This article belongs to the Special Issue Innovation in Hand Surgery)
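The kind of quantitative check the models struggled with, comparing a measured facial proportion against the golden ratio, reduces to a simple tolerance test once landmark distances are in hand. The distances and tolerance below are hypothetical illustrations, not the study's measurement protocol:

```python
# Golden-ratio compatibility check: is the ratio of two landmark
# distances within a tolerance of phi? Distances are hypothetical.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def matches_golden_ratio(longer: float, shorter: float, tol: float = 0.05) -> bool:
    """True if longer/shorter falls within +/- tol of phi."""
    return abs(longer / shorter - PHI) <= tol

# e.g. face height vs. face width (mm) for two hypothetical faces:
print(matches_golden_ratio(190.0, 118.0))  # 190/118 ~ 1.610 -> True
print(matches_golden_ratio(190.0, 105.0))  # 190/105 ~ 1.810 -> False
```

The hard part for the MLLMs, per the abstract, is not this arithmetic but extracting the landmark distances accurately from an image, which is why the authors point to specialized computer vision for landmark detection.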

31 pages, 2216 KiB  
Article
Students’ Perceptions of Generative Artificial Intelligence (GenAI) Use in Academic Writing in English as a Foreign Language
by Andrew S. Nelson, Paola V. Santamaría, Josephine S. Javens and Marvin Ricaurte
Educ. Sci. 2025, 15(5), 611; https://doi.org/10.3390/educsci15050611 - 16 May 2025
Abstract
While research articles on students’ perceptions of large language models such as ChatGPT in language learning have proliferated since ChatGPT’s release, few studies have focused on these perceptions among English as a foreign language (EFL) university students in South America or their application to academic writing in a second language (L2) for STEM classes. ChatGPT can generate human-like text, a capability that worries teachers and researchers. Academic cheating, especially in the language classroom, is not new; however, the concept of AI-giarism is novel. This study evaluated how 56 undergraduate university students in Ecuador viewed GenAI use in academic writing in English as a foreign language. The research findings indicate that students worried more about hindering the development of their own writing skills than about the risk of being caught and facing academic penalties. Students believed that ChatGPT-written works are easily detectable and that institutions should incorporate plagiarism detectors. Submitting chatbot-generated text in the classroom was perceived as academic dishonesty, while fewer participants believed that submitting an assignment machine-translated from Spanish to English was dishonest. The results of this study will inform academic staff and educational institutions about how Ecuadorian university students perceive the overall influence of GenAI on academic integrity within the scope of academic writing, including reasons why students might rely on AI tools for dishonest purposes and how they view the detection of AI-based works. Ideally, policies, procedures, and instruction should prioritize using AI as an emerging educational tool and not as a shortcut to bypass intellectual effort. Pedagogical practices should minimize the factors that have been shown to lead to the unethical use of AI, which, in our survey, were academic pressure and lack of confidence. By and large, these factors can be mitigated with approaches that prioritize the process of learning rather than the production of a product.
(This article belongs to the Special Issue Emerging Pedagogies for Integrating AI in Education)

15 pages, 1666 KiB  
Brief Report
When ChatGPT Writes Your Research Proposal: Scientific Creativity in the Age of Generative AI
by Vera Eymann, Thomas Lachmann and Daniela Czernochowski
J. Intell. 2025, 13(5), 55; https://doi.org/10.3390/jintelligence13050055 - 16 May 2025
Abstract
In recent years, generative artificial intelligence (AI) has not only entered the field of creativity; it may even mark a turning point for some creative domains. This raises the question of whether AI also poses a turning point for scientific creativity, [...] Read more.
In recent years, generative artificial intelligence (AI) has not only entered the field of creativity; it may even mark a turning point for some creative domains. This raises the question of whether AI also poses a turning point for scientific creativity, which comprises the ability to develop new ideas or methodological approaches in science. In this study, we use a new scientific creativity task to investigate the extent to which AI (in this case, ChatGPT-4) can generate creative ideas in a scientific context. Specifically, we compare AI-generated responses with those of graduate students in terms of their ability to generate scientific hypotheses, design experiments, and justify their ideas for a fictitious research scenario in the field of experimental psychology. We asked students to write, and prompted ChatGPT to generate, a brief research proposal containing four separate assignments (i.e., formulating a hypothesis, designing an experiment, listing the required equipment, and justifying the chosen method). Using a structured, blinded rating, two experts from the field evaluated the students’ research proposals and those generated by ChatGPT in terms of their scientific creativity. Our results indicate that ChatGPT received significantly higher overall scores and, more crucially, exceeded students on sub-scores measuring the originality and meaningfulness of the ideas. In addition to the statistical evaluation, we qualitatively assess our data, providing a more detailed account of subtle differences between student- and AI-generated responses. Lastly, we discuss challenges and outline potential future directions for the field. Full article
(This article belongs to the Special Issue Generative AI: Reflections on Intelligence and Creativity)
19 pages, 4702 KiB  
Article
A Deep Learning Approach to Classify AI-Generated and Human-Written Texts
by Ayla Kayabas, Ahmet Ercan Topcu, Yehia Ibrahim Alzoubi and Mehmet Yıldız
Appl. Sci. 2025, 15(10), 5541; https://doi.org/10.3390/app15105541 - 15 May 2025
Abstract
The rapid advancement of artificial intelligence (AI) has introduced new challenges, particularly the generation of AI-written content that closely resembles human-authored text. This poses a significant risk of misinformation, digital fraud, and academic dishonesty. While large language models (LLMs) have demonstrated impressive [...] Read more.
The rapid advancement of artificial intelligence (AI) has introduced new challenges, particularly the generation of AI-written content that closely resembles human-authored text. This poses a significant risk of misinformation, digital fraud, and academic dishonesty. While large language models (LLMs) have demonstrated impressive capabilities across many languages, there remains a critical gap in evaluating and detecting AI-generated content in under-resourced languages such as Turkish. To address this, our study investigates the effectiveness of long short-term memory (LSTM) networks, a computationally efficient and interpretable architecture, for distinguishing AI-generated Turkish texts produced by ChatGPT from human-written content. LSTM was selected for its lower hardware requirements and its proven strength in sequential text classification, especially under limited computational resources. Four experiments were conducted, varying hyperparameters such as dropout rate, number of epochs, embedding size, and batch size. The model trained over 20 epochs achieved the best results, with a classification accuracy of 97.28% and an F1 score of 0.97 for both classes. The confusion matrix confirmed high precision, with only 19 misclassified instances out of 698. These findings highlight the potential of LSTM-based approaches for detecting AI-generated text in the Turkish language context. This study not only contributes a practical method for Turkish NLP applications but also underlines the necessity of tailored AI detection tools for low-resource languages. Future work will focus on expanding the dataset, incorporating other architectures, and applying the model across different domains to enhance generalizability and robustness. Full article
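The headline figures in this abstract are internally consistent and can be cross-checked directly: 19 misclassifications out of 698 test instances yield the stated 97.28% accuracy. A minimal sketch in plain Python, using only the numbers quoted in the abstract:

```python
# Sanity check on the metrics reported for the LSTM classifier:
# 698 test instances with 19 misclassified should give 97.28% accuracy.
total = 698
misclassified = 19
correct = total - misclassified  # 679

accuracy = correct / total
print(f"accuracy = {accuracy:.2%}")  # prints "accuracy = 97.28%"
```

The per-class F1 score of 0.97 cannot be recomputed from these figures alone, since the abstract does not give the per-class breakdown of the confusion matrix; it is, however, plausible for a roughly balanced test set at this accuracy.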