Big Data and AI Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 November 2024

Special Issue Editors


Prof. Dr. Jinseok Chae
Guest Editor
Department of Computer Science and Engineering, Incheon National University, Incheon 22012, Republic of Korea
Interests: internet software; mobile computing; web technology

Prof. Dr. Young-Koo Lee
Guest Editor
Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea
Interests: database systems; big data processing and analysis; query processing; data mining; machine learning; knowledge processing

Dr. Joonho Kwon
Guest Editor
School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea
Interests: big data management and analytics; XML filtering; RFID data processing

Special Issue Information

Dear Colleagues,

Big data and AI applications are emerging research fields that have attracted considerable attention from diverse disciplines, including computer science, information technology, and the social sciences. This Special Issue aims to provide an academic platform for high-quality research papers on big data and AI applications, including (but not limited to) extended versions of outstanding BigComp 2024 (https://www.bigcomputing.org/conf2024/) papers.

We invite the submission of original research contributions in the following areas, but we also welcome any original contributions that may cross the boundaries among areas or point in other novel directions:

  • Big data applications/big data as a service;
  • Big data analytics and social media;
  • Tools and systems for big data;
  • Cloud and grid computing for big data;
  • Machine learning and AI for big data;
  • Bioinformatics, multimedia, smartphones, etc.

Prof. Dr. Jinseok Chae
Prof. Dr. Young-Koo Lee
Dr. Joonho Kwon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big data
  • machine learning
  • AI
  • multimedia
  • social media

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

20 pages, 5807 KiB  
Article
Unfixed Seasonal Partition Based on Symbolic Aggregate Approximation for Forecasting Solar Power Generation Using Deep Learning
by Minjin Kwak, Tserenpurev Chuluunsaikhan, Azizbek Marakhimov, Jeong-Hun Kim and Aziz Nasridinov
Electronics 2024, 13(19), 3871; https://doi.org/10.3390/electronics13193871 - 30 Sep 2024
Abstract
Solar energy is an important alternative energy source, and forecasting solar power generation is essential for efficient power management. Due to the seasonal characteristics of weather features, seasonal data partition strategies help develop prediction models that perform better in extreme weather-related situations. Most existing studies rely on fixed season partitions, such as meteorological or astronomical seasons, with specific start and end dates. However, even for countries in the same Northern or Southern Hemisphere, seasonal changes can shift due to abnormal climate conditions such as global warming. Therefore, we propose a novel unfixed seasonal data partition based on Symbolic Aggregate Approximation (SAX) for forecasting solar power generation. Symbolic representations generated by SAX are used to select seasonal features and obtain seasonal criteria. We then employ a two-layer stacked LSTM and combine predictions from various seasonal features and partitions through ensemble methods. The datasets used in the experiments come from real-world solar panel plants in Gyeongju, South Korea, and in California, USA. The experimental results show that the proposed methods outperform non-partitioned and fixed-partition solar power generation forecasts by 2.2% to 3.5% on the Gyeongju dataset and by 1.6% to 6.5% on the California dataset.
(This article belongs to the Special Issue Big Data and AI Applications)
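
The central step of the unfixed partitioning is SAX symbolization of weather-related series. The snippet below is a minimal sketch of standard SAX (not the authors' code): it z-normalizes a series, applies Piecewise Aggregate Approximation, and maps segment means to symbols using Gaussian breakpoints. The function names, alphabet size, and the synthetic temperature signal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Piecewise Aggregate Approximation: mean of each roughly equal-width segment."""
    segments = np.array_split(series, n_segments)
    return np.array([seg.mean() for seg in segments])

def sax_symbolize(series: np.ndarray, n_segments: int = 12, alphabet_size: int = 4) -> str:
    """Convert a series into a SAX word (here, one symbol per month)."""
    z = (series - series.mean()) / (series.std() + 1e-8)  # z-normalize
    reduced = paa(z, n_segments)
    # Breakpoints that split the standard normal into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, reduced)
    return "".join(chr(ord("a") + int(s)) for s in symbols)

# Example: a synthetic yearly temperature-like signal; runs of identical
# symbols in the SAX word suggest candidate season boundaries.
days = np.arange(365)
temperature = 15 + 10 * np.sin(2 * np.pi * (days - 100) / 365)
print(sax_symbolize(temperature))
```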

13 pages, 524 KiB  
Article
Explainable Neural Tensor Factorization for Commercial Alley Revenues Prediction
by Minkyu Kim, Suan Lee and Jinho Kim
Electronics 2024, 13(16), 3279; https://doi.org/10.3390/electronics13163279 - 19 Aug 2024
Abstract
Many individuals aspire to start their own businesses and achieve financial success. Before launching a business, they must decide on a location and the type of service to offer. This decision requires collecting and analyzing various characteristics of potential locations and services, such as average revenues and foot traffic, which is challenging because it demands expert knowledge in data collection and analysis. To address this issue, we propose Neural Tensor Factorization (NeuralTF) and Explainable Neural Tensor Factorization (XNeuralTF), which automatically analyze these characteristics and predict revenues. NeuralTF integrates Tensor Factorization (TF) with a Multi-Layer Perceptron (MLP), allowing it to handle multi-dimensional tensors effectively and to learn both explicit and implicit higher-order feature interactions, leading to superior predictive performance. XNeuralTF extends NeuralTF by providing explainable recommendations for three-dimensional tensors. Additionally, we introduce two novel metrics to evaluate the explainability of recommendation models. Extensive experiments assessing both predictive performance and explainability show that XNeuralTF achieves comparable or superior performance to state-of-the-art methods while offering the highest level of explainability.
(This article belongs to the Special Issue Big Data and AI Applications)
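
As a hedged sketch of the general idea (not the paper's implementation), the PyTorch snippet below combines an explicit three-way factorization term over (alley, service, time period) embeddings with an MLP term over their concatenation; the entity names, tensor dimensions, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralTFSketch(nn.Module):
    """Tensor factorization plus MLP over a 3-way (alley, service, period) revenue tensor."""
    def __init__(self, n_alleys: int, n_services: int, n_periods: int, dim: int = 16):
        super().__init__()
        self.alley = nn.Embedding(n_alleys, dim)
        self.service = nn.Embedding(n_services, dim)
        self.period = nn.Embedding(n_periods, dim)
        # MLP over concatenated embeddings captures implicit higher-order interactions.
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, a, s, t):
        ea, es, et = self.alley(a), self.service(s), self.period(t)
        # Explicit 3-way factorization term: elementwise product, summed over latent dims.
        tf_term = (ea * es * et).sum(dim=-1, keepdim=True)
        mlp_term = self.mlp(torch.cat([ea, es, et], dim=-1))
        return (tf_term + mlp_term).squeeze(-1)  # predicted revenue

# Hypothetical usage with made-up index sizes.
model = NeuralTFSketch(n_alleys=100, n_services=20, n_periods=12)
pred = model(torch.tensor([3]), torch.tensor([7]), torch.tensor([5]))
```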

17 pages, 9296 KiB  
Article
Machine Fault Diagnosis: Experiments with Different Attention Mechanisms Using a Lightweight SqueezeNet Architecture
by Mahe Zabin, Ho-Jin Choi, Muhammad Kubayeeb Kabir, Anika Nahian Binte Kabir and Jia Uddin
Electronics 2024, 13(16), 3112; https://doi.org/10.3390/electronics13163112 - 6 Aug 2024
Abstract
As artificial intelligence technology progresses, deep learning models are increasingly utilized for machine fault classification. However, a significant drawback of current state-of-the-art models is their high computational complexity, which renders them unsuitable for deployment on portable devices. This paper presents a compact fault diagnosis model that integrates a self-attention SqueezeNet architecture with a hybrid texture representation technique based on empirical mode decomposition (EMD) and a gammatone spectrogram (GS) filter. In the model, the dominant signal is first isolated from the audio fault signals by discarding the lower intrinsic mode functions (IMFs) of EMD, and the dominant signals are then transformed into 2D texture maps using the GS filter. The generated texture maps are fed as input into the modified self-attention SqueezeNet classifier, which features reduced model width and depth, for training and validation. Different attention modules were tested, including self-attention, channel attention, spatial attention, and the convolutional block attention module (CBAM). The models were evaluated on the MIMII and ToyADMOS datasets. The experimental results demonstrate that the self-attention mechanism with SqueezeNet achieved an accuracy of 97% on previously unseen MIMII and ToyADMOS data. Furthermore, the proposed model outperformed the SqueezeNet attention model with other attention mechanisms, as well as state-of-the-art deep architectures, exhibiting higher precision, recall, and F1-scores. Lastly, t-SNE is applied to visualize the features learned by the self-attention SqueezeNet for the different fault classes of both MIMII and ToyADMOS.
(This article belongs to the Special Issue Big Data and AI Applications)
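
For illustration, the sketch below attaches a simple squeeze-and-excitation style channel-attention block to a torchvision SqueezeNet feature extractor. This is an assumed stand-in for the paper's attention variants, not its exact architecture; the two-class head, input size, and block design are assumptions, and a recent torchvision (0.13+) API is assumed.

```python
import torch
import torch.nn as nn
from torchvision.models import squeezenet1_1

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting of feature-map channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale each channel

backbone = squeezenet1_1(weights=None).features   # 512-channel output feature maps
model = nn.Sequential(
    backbone,
    ChannelAttention(512),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 2),  # e.g. normal vs. faulty machine sound (hypothetical head)
)

spectrograms = torch.randn(4, 3, 224, 224)  # batch of 3-channel texture maps
logits = model(spectrograms)
```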
