Edge Computing with AI

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (10 August 2022) | Viewed by 15411

Special Issue Editors


Prof. Dr. Chao-Tung Yang
Guest Editor
Department of Computer Science, Tunghai University, Taichung 40704, Taiwan
Interests: cloud computing; big data; machine learning; parallel processing

Dr. Chen-Kun Tsung
Guest Editor
Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 41170, Taiwan
Interests: cloud computing; big data; web-based applications; combinatorial optimization

Dr. Vinod Kumar Verma
Guest Editor
Department of Computer Science & Engineering, Sant Longowal Institute of Engineering & Technology, Punjab 148106, India
Interests: wireless sensor networks; trust and reputation systems; cloud computing; brain computing; internet of things; big data

Special Issue Information

Dear Colleagues,

As the computational ability of end devices grows, many of the services they provide are becoming more intelligent than ever before. For example, intelligent agriculture provides precise irrigation by sensing the environment before making a decision, and production lines in manufacturing constantly monitor the status of each piece of equipment to ensure high product quality. Since information is easily captured by end devices, artificial intelligence technology can be applied to predict future events or recognize critical features, enabling us to tackle everyday challenges with the help of artificially intelligent end devices.

In this Special Issue, we invite submissions reporting new discoveries in edge computing applications that offer artificial intelligence services, through theoretical analysis, experimental studies, and system implementations.

Prof. Dr. Chao-Tung Yang
Dr. Chen-Kun Tsung
Dr. Neil Yuwen Yen
Dr. Vinod Kumar Verma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big data
  • cloud computing
  • data mining
  • machine learning

Published Papers (5 papers)


Research

14 pages, 2885 KiB  
Article
A Data Factor Study for Machine Learning on Heterogenous Edge Computing
by Dong-Meau Chang, Tse-Chuan Hsu, Chao-Tung Yang and Junjie Yang
Appl. Sci. 2023, 13(6), 3405; https://doi.org/10.3390/app13063405 - 07 Mar 2023
Cited by 3 | Viewed by 1129
Abstract
Many factors influence how plants grow and how botanical experts distinguish them. The use of the Internet of Things (IoT) for data collection is an important part of smart agriculture, and many related studies have shown that remote data management and cloud computing make it practical to monitor the functionality of IoT devices. In automated agriculture, machine learning is needed to automatically determine whether correlations between learning factors influence plant growth patterns. In this experiment, the relevant data are collected automatically by a sensing device, and data modeling and computation are performed in an edge computing environment. The data model is then transmitted via a communication protocol to another node, which verifies the modeling and calculation results. The experimental results show that a model trained on single-point data can accurately predict the growth trend of the plants. To verify a second measurement point at a different location, the model must be trained with more than two layers in order to improve the training results and reduce errors.
(This article belongs to the Special Issue Edge Computing with AI)

14 pages, 1365 KiB  
Article
Partitioning DNNs for Optimizing Distributed Inference Performance on Cooperative Edge Devices: A Genetic Algorithm Approach
by Jun Na, Handuo Zhang, Jiaxin Lian and Bin Zhang
Appl. Sci. 2022, 12(20), 10619; https://doi.org/10.3390/app122010619 - 20 Oct 2022
Cited by 5 | Viewed by 1586
Abstract
To fully unleash the potential of edge devices, it is common to cut a neural network into multiple pieces and distribute them among available edge devices to perform inference cooperatively. To date, the problem of partitioning a deep neural network (DNN) for optimal distributed inference performance has not been adequately addressed. This paper proposes a novel layer-based DNN partitioning approach to obtain an optimal distributed deployment. To ensure the applicability of the resulting deployment scheme, this work formulates partitioning as a constrained optimization problem and puts forward an improved genetic algorithm (GA). The proposed algorithm runs approximately one to three times faster than the basic GA while achieving a better deployment.
(This article belongs to the Special Issue Edge Computing with AI)
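The layer-based partitioning described above can be sketched as a genetic algorithm searching over layer-to-device assignments. The cost model below (hypothetical per-layer compute costs plus a fixed per-cut transfer penalty), the population parameters, and the GA operators are illustrative assumptions, not the paper's actual formulation or constraints:

```python
import random

# Hypothetical per-layer compute costs and an assumed per-cut transfer
# penalty; the paper's cost model and constraints differ.
LAYER_COST = [4, 8, 6, 2, 9, 3, 7, 5]   # one entry per DNN layer
NUM_DEVICES = 3

def fitness(assignment):
    """Lower is better: bottleneck device load plus a penalty for each
    cut (adjacent layers placed on different devices)."""
    loads = [0] * NUM_DEVICES
    cuts = 0
    for layer, dev in enumerate(assignment):
        loads[dev] += LAYER_COST[layer]
        if layer > 0 and assignment[layer - 1] != dev:
            cuts += 1
    return max(loads) + 2 * cuts  # 2 = assumed transfer cost per cut

def evolve(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    n = len(LAYER_COST)
    pop = [[rng.randrange(NUM_DEVICES) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:            # point mutation
                child[rng.randrange(n)] = rng.randrange(NUM_DEVICES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

Any assignment found this way easily beats placing every layer on a single device; the paper's improved GA adds constraint handling and operators beyond this basic loop.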

14 pages, 1243 KiB  
Article
Designing a Hybrid Equipment-Failure Diagnosis Mechanism under Mixed-Type Data with Limited Failure Samples
by Cheng-Hui Chen, Chen-Kun Tsung and Shyr-Shen Yu
Appl. Sci. 2022, 12(18), 9286; https://doi.org/10.3390/app12189286 - 16 Sep 2022
Cited by 6 | Viewed by 1749
Abstract
The rarity of equipment failures results in a high level of imbalance between failure data and normal operation data, which makes the effective classification and prediction of such data difficult. Furthermore, many failure datasets are dominated by mixed-type data, which prevents standard models from adapting to this type of failure problem. In addition, the replacement cycle of production equipment increases the difficulty of collecting failure data. This paper proposes an equipment-failure diagnosis method that addresses the poor prediction accuracy caused by limited data by combining the synthetic minority oversampling technique (SMOTE) with a conditional tabular generative adversarial network (CTGAN). The proposed method can predict from limited data containing a mixture of numerical and categorical features. Experimental results indicate that the proposed method improves accuracy by 6.45% over similar methods when equipment failure data account for less than 1% of the total data.
(This article belongs to the Special Issue Edge Computing with AI)
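The SMOTE half of the combination above can be sketched in a few lines: synthetic minority samples are interpolated between existing samples and their nearest minority-class neighbours. This numeric-only sketch omits the CTGAN component, which handles the categorical columns; the data and parameters are hypothetical:

```python
import numpy as np

def smote_sketch(minority, n_new, k=3, seed=0):
    """Generate synthetic minority samples by interpolating between a
    random sample and one of its k nearest minority neighbours
    (the core SMOTE idea)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # distances from sample i to every minority sample
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)

# Toy minority class: four 2-D failure samples
minority = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [1.1, 1.2]]
new_samples = smote_sketch(minority, n_new=5)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled data stay inside the minority region rather than duplicating points outright.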

15 pages, 6271 KiB  
Article
Edge Computing Based on Federated Learning for Machine Monitoring
by Yao-Hong Tsai, Dong-Meau Chang and Tse-Chuan Hsu
Appl. Sci. 2022, 12(10), 5178; https://doi.org/10.3390/app12105178 - 20 May 2022
Cited by 5 | Viewed by 2570
Abstract
This paper provides a general solution, based on edge and cloud computing in the IoT, for machine monitoring in small and medium-sized factories. To meet real-time requirements, the edge and cloud computing models cooperate seamlessly to perform information capture, event detection, and adaptive learning. The proposed IoT system processes regional low-level features for detection and recognition in edge nodes, while cloud computing (including fog computing) handles mid- and high-level features using a federated learning network. The system fully utilizes all resources in the integrated deep learning network to achieve high-performance operation. The edge node was implemented with a simple camera embedded on a Terasic DE2-115 board to monitor machines and process data locally. Learning-based features were generated in the cloud from the data sent by the edge, and identification results were obtained by combining mid- and high-level features with a nonlinear classifier. Each factory can therefore monitor the real-time condition of its machines without operators while keeping its data private. Experimental results showed the efficiency of the proposed method compared with other methods.
(This article belongs to the Special Issue Edge Computing with AI)
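The privacy property claimed above, where each factory trains locally and only model parameters leave the site, can be sketched as federated averaging (FedAvg). The linear model, learning rate, and round counts below are illustrative stand-ins for the paper's deep network and are not taken from the paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear
    model with squared loss (a stand-in for each factory's model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, clients):
    """Server step: average client weights, weighted by local data size.
    Raw machine data never leaves the factory, only weights do."""
    total = sum(len(y) for _, y in clients)
    updates = [local_update(global_w, X, y) * (len(y) / total)
               for X, y in clients]
    return np.sum(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three factories with private local data
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = fed_avg(w, clients)
```

After a few dozen rounds the shared model recovers the underlying parameters even though no client ever transmits its data.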

21 pages, 1307 KiB  
Article
Fake News Classification Based on Content Level Features
by Chun-Ming Lai, Mei-Hua Chen, Endah Kristiani, Vinod Kumar Verma and Chao-Tung Yang
Appl. Sci. 2022, 12(3), 1116; https://doi.org/10.3390/app12031116 - 21 Jan 2022
Cited by 28 | Viewed by 6325
Abstract
Due to the openness and easy accessibility of online social media (OSM), anyone can contribute a simple paragraph of text expressing their opinion on an article they have seen. Without access-control mechanisms, many suspicious messages and accounts have been reported spreading across multiple platforms. Identifying and labeling fake news is therefore a demanding problem due to the massive amount of heterogeneous content. Machine learning (ML) and natural language processing (NLP) can enhance, speed up, and automate the analytical process, transforming this unstructured text into meaningful data and insights. In this paper, ML and NLP are combined to classify fake news based on an open, large, labeled Twitter corpus. Several state-of-the-art ML and neural network models are compared using content-only features. To enhance classification performance, term frequency-inverse document frequency (TF-IDF) features were used for ML training, while word embeddings were used for neural network training. All the traditional ML models achieve greater than 85% accuracy, and all the neural network models achieve greater than 90% accuracy. On average, the neural network models outperform the traditional ML models by approximately 6% in precision.
(This article belongs to the Special Issue Edge Computing with AI)
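The TF-IDF weighting used for the traditional ML models can be sketched directly from its definition; in practice a library implementation such as scikit-learn's TfidfVectorizer would be used. The toy documents below are hypothetical:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a list of tokenised documents.
    TF = term count / document length; IDF = log(N / number of
    documents containing the term)."""
    N = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(N / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [
    "breaking shocking claim goes viral".split(),
    "official report confirms earlier claim".split(),
    "viral post repeats shocking claim".split(),
]
vecs = tfidf(docs)
```

Terms appearing in every document (here "claim") receive zero weight, while rarer, more discriminative terms are weighted up, which is exactly why TF-IDF helps a content-only classifier separate styles of writing.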
