Topic Editors

Prof. Dr. Zhitao Xiao and Dr. Guangxu Li
School of Life Sciences, Tiangong University, Tianjin 300387, China
School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China

Selected Papers from ICCAI 2023 and IMIP 2023

Abstract submission deadline: closed (25 July 2023)
Manuscript submission deadline: closed (31 October 2023)

Topic Information

Dear Colleagues,

The 2023 9th International Conference on Computing and Artificial Intelligence (ICCAI 2023) and its companion conference, the 2023 5th International Conference on Intelligent Medicine and Image Processing (IMIP 2023), are international conferences devoted to fostering synergies in research and development across computing and artificial intelligence, intelligent medicine, and image processing. They provide a communication platform for leading experts and scholars worldwide.

This Topic focuses mainly on computer technology, information technology, intelligent computing, artificial intelligence, smart healthcare, medical informatics, medical imaging, image processing, and other related areas. We cordially invite authors of selected papers from these conferences to submit extended versions of their original contributions under the conference topics.

Prof. Dr. Zhitao Xiao
Dr. Guangxu Li
Topic Editors

Keywords

  • entropy
  • information theory
  • computational mathematics
  • pattern recognition
  • signal processing
  • smart healthcare
  • bio-informatics
  • nonlinear dynamics
  • biological optics
  • image processing

Participating Journals

Journal Name   Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Computers      2.8             4.7         2012            17.7 days                 CHF 1800
Entropy        2.7             4.7         1999            20.8 days                 CHF 2600
Information    3.1             5.8         2010            18 days                   CHF 1600
Mathematics    2.4             3.5         2013            16.9 days                 CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (1 paper)

12 pages, 1044 KiB  
Article
The Generation of Articulatory Animations Based on Keypoint Detection and Motion Transfer Combined with Image Style Transfer
by Xufeng Ling, Yu Zhu, Wei Liu, Jingxin Liang and Jie Yang
Computers 2023, 12(8), 150; https://doi.org/10.3390/computers12080150 - 28 Jul 2023
Abstract
Knowing the correct positioning of the tongue and mouth for pronunciation is crucial for learning English pronunciation correctly. Articulatory animation is an effective way to address the above task and helpful to English learners. However, articulatory animations are all traditionally hand-drawn. Different situations require varying animation styles, so a comprehensive redraw of all the articulatory animations is necessary. To address this issue, we developed a method for the automatic generation of articulatory animations using a deep learning system. Our method leverages an automatic keypoint-based detection network, a motion transfer network, and a style transfer network to generate a series of articulatory animations that adhere to the desired style. By inputting a target-style articulation image, our system is capable of producing animations with the desired characteristics. We created a dataset of articulation images and animations from public sources, including the International Phonetic Association (IPA), to establish our articulation image animation dataset. We performed preprocessing on the articulation images by segmenting them into distinct areas each corresponding to a specific articulatory part, such as the tongue, upper jaw, lower jaw, soft palate, and vocal cords. We trained a deep neural network model capable of automatically detecting the keypoints in typical articulation images. Also, we trained a generative adversarial network (GAN) model that can generate end-to-end animation of different styles automatically from the characteristics of keypoints and the learned image style. To train a relatively robust model, we used four different style videos: one magnetic resonance imaging (MRI) articulatory video and three hand-drawn videos. For further applications, we combined the consonant and vowel animations together to generate a syllable animation and the animation of a word consisting of many syllables. Experiments show that this system can auto-generate articulatory animations according to input phonetic symbols and should be helpful to people for English articulation correction.
(This article belongs to the Topic Selected Papers from ICCAI 2023 and IMIP 2023)
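
The abstract describes a three-stage deep learning pipeline: a keypoint detection network locates the articulatory parts, a motion transfer network animates a static articulation image along the detected keypoint tracks, and a style transfer network re-renders the result in the target style. The sketch below only illustrates how such stages might be chained for a single phoneme animation; every class and function name in it is a hypothetical placeholder, since the paper's actual code and interfaces are not reproduced here.

# Hypothetical structural sketch of the three-stage pipeline summarized in the
# abstract: keypoint detection -> motion transfer -> style transfer.
# All names below are illustrative placeholders, not the authors' actual code.

from dataclasses import dataclass
from typing import List, Tuple

Keypoints = List[Tuple[float, float]]  # (x, y) locations of articulatory parts


@dataclass
class Frame:
    """One video or animation frame, held as raw pixel data."""
    pixels: bytes
    width: int
    height: int


def detect_keypoints(frame: Frame) -> Keypoints:
    """Stand-in for the keypoint detection network: locates the tongue,
    upper jaw, lower jaw, soft palate, and vocal cords in a frame."""
    return []  # a trained detector would regress coordinates here


def transfer_motion(source_image: Frame,
                    driving_keypoints: List[Keypoints]) -> List[Frame]:
    """Stand-in for the motion transfer network: animates the static source
    articulation image so that it follows the driving keypoint tracks."""
    return [source_image for _ in driving_keypoints]


def transfer_style(frames: List[Frame], style_image: Frame) -> List[Frame]:
    """Stand-in for the style transfer network: re-renders each frame in the
    style of the target articulation image (e.g. MRI or hand-drawn)."""
    return frames


def generate_animation(style_image: Frame,
                       driving_video: List[Frame]) -> List[Frame]:
    """End-to-end generation for one phoneme: extract keypoint motion from a
    driving video, re-target it onto the style image, then restyle the frames."""
    driving_keypoints = [detect_keypoints(f) for f in driving_video]
    animated = transfer_motion(style_image, driving_keypoints)
    return transfer_style(animated, style_image)

Combining per-phoneme outputs into syllable and word animations, as the paper describes, would then amount to concatenating the generated frame sequences.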