Article

Edge Artificial Intelligence for Electrical Anomaly Detection Based on Process-In-Memory Chip

1 School of Communication & Electronic Engineering, East China Normal University, Shanghai 200241, China
2 Flash Billion Semiconductor Co., Ltd., Shanghai 200003, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(21), 4255; https://doi.org/10.3390/electronics13214255
Submission received: 25 September 2024 / Revised: 25 October 2024 / Accepted: 25 October 2024 / Published: 30 October 2024
(This article belongs to the Special Issue Advanced CMOS Devices and Applications, 2nd Edition)

Abstract

Neural networks (NNs) for current feature analysis bring novel electrical safety functions to smart circuit breakers (CBs), especially for preventing fire hazards from electric vehicle/bike battery charging. In this work, edge artificial intelligence (AI) solutions for electrical anomaly detection were designed and demonstrated based on a process-in-memory (PIM) AI chip. The ultra-low power consumption and high performance of PIM AI chips enable the edge solution to be embedded in the limited space inside a circuit breaker and to detect improper battery charging with millisecond latency.

1. Introduction

A CB is an essential electrical switch designed to protect power systems by interrupting fault currents, which are typically caused by overloads or short circuits [1]. Nowadays, thanks to AI, more functions are being introduced to traditional CBs, making them intelligent CBs. These functions include continuous monitoring of electrical parameters, remote control, and, most importantly, the detection of complicated electrical abnormalities, such as improper battery charging, arc faults, and CB mechanical failures, which cannot be captured by traditional CBs.
One typical practice in the CB industry is to analyze currents with neural networks (NNs) [2,3,4,5,6], especially for detecting improper electric vehicle or electric bike battery charging [7,8,9]. Over 30% of electric vehicle/bike-related fires occur during improper charging [10,11], causing over 1600 home fires annually in China alone [12]. Such fires have severe repercussions, leading to a high death toll and substantial property damage. Thus, it is very important for an intelligent CB to detect electric vehicle/bike charging and to take proper measures.
State-of-the-art recognition results were demonstrated in several previous works for various kinds of hazard events. Yang et al. [13] and Liu et al. [14] investigated the NN detection of electric bike (E-bike) charging in households. Other similar types of electric hazard event detection were also performed by H. Park et al. [15] and Jiang et al. [16]. Table 1 shows the algorithms, accuracy, and hardware platforms used to deploy those detection functions.
Despite their decent accuracy, it is notable that none of the above detection functions were deployed on an edge device. This is understandable: the heavy computing cost of large-scale NNs and the demand for low response latency require a powerful AI solution at the edge [17,18,19,20], which is difficult to embed into an intelligent CB with limited space and without proper heat dissipation.
PIM technology is one of the emerging semiconductor approaches that combine a memory device and a processing unit (PU) into one chip, so that the chip can store and process data internally [21]. This architecture helps to overcome the limitations of traditional AI chips in terms of data processing speed and energy efficiency [21,22]. Owing to their high efficiency and low power consumption, PIM chips can be embedded into edge AI devices with limited space and no heat dissipation.
In this work, we designed an edge AI system with low-power PIM AI chips to perform current analysis. It was integrated into CBs and demonstrated high accuracy for battery charging detection with a latency of milliseconds. Section 2 (Methods) details the system from both the hardware and software aspects. The hardware part is described in a bottom-up way, from the PIM core in the AI chip to the entire system schematic. The software part covers both the sampled signal data and the NN design. Then, in Section 3 (Results), the feature extraction method, the NN structure, and the algorithm deployment in hardware are discussed and optimized separately. Finally, a conclusion is drawn on the PIM AI chip-based smart CB application in comparison with other edge chips.

2. Methods

The hardware platform we used integrates a 32-bit low-power central processing unit (CPU) with our own instruction set and a PIM-based neural processing unit (NPU), which implements computing and data storage in the same physical devices. Since this architecture combines the computing and memory units, no data transfer is needed between the processor and RAM.
The PIM core used in this study is equipped with a crossbar circuit as shown in Figure 1. The memory cell is based on programmable linear random-access memory (PLRAM) [23], an optimized NOR-FLASH device providing an excellent program/erase scheme [24,25]. Input signals are converted into a vector of voltages and mapped to the rows of the memristor-based crossbar array.
The current traversing each memristor can be modulated by altering the voltages across the rows or columns, because each memristor's terminals connect to an array row and an array column. Within the crossbar array, Ohm's law determines the current flowing through each memristor. The conductance (G = 1/R) of these memristors is carefully adjusted to emulate synaptic weights within the neural network. When the input voltages are applied to the crossbar array, the current passing through each memristor is proportional to its conductance. By measuring the current in each column, the product of the input vector and the weight matrix can be obtained. The sum of these currents, that is, the current at each output node (according to Kirchhoff's current law, as expressed in Equation (1)), represents the output of the network [26].
$I_{out} = V_{1,in}G_1 + V_{2,in}G_2 + V_{3,in}G_3 + V_{4,in}G_4$ (1)
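For illustration, the column currents described by Equation (1) amount to a vector-matrix product between the input voltages and the conductance matrix. The following NumPy sketch models this behavior numerically; the conductance and voltage values are hypothetical, and the snippet is not the chip's internal implementation:

```python
import numpy as np

# Hypothetical 4 x 4 conductance matrix G (in siemens); each column stores one
# output neuron's synaptic weights, as in the PLRAM crossbar of Figure 1.
G = 1e-6 * np.array([[1.0, 0.5, 0.2, 0.8],
                     [0.3, 0.9, 0.4, 0.1],
                     [0.7, 0.2, 0.6, 0.5],
                     [0.4, 0.8, 0.3, 0.9]])

# Input signal encoded as a vector of row voltages (in volts).
V_in = np.array([0.2, 0.5, 0.1, 0.4])

# Ohm's law gives the current through each cell (V * G); Kirchhoff's current
# law sums the cell currents along each column, which is exactly Equation (1).
I_out = V_in @ G      # one output current per column
print(I_out)          # result of the input-vector x weight-matrix product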
As shown in Figure 2, the edge AI platform is deployed on an intelligent CB main board. First, the currents of the devices are collected by current transformers (CTs). A CT measures the current in a circuit based on the principle of electromagnetic induction [27]. Subsequently, the embedded PIM chip samples the signals continuously at a rate of 16 kHz. The data are then fed to the pre-trained NN-based model for each alternating current (AC) cycle to obtain the final output. The PIM AI chip provides the output back to the CB main board, which finally gives corresponding warnings according to the results. If the CB detects an abnormal situation, it immediately takes action, such as tripping the breaker to prevent potential hazards or triggering an alert via Bluetooth to notify the relevant parties.
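The per-cycle detection flow described above can be summarized in the following minimal Python sketch. It assumes a 50 Hz grid, and the current-transformer reader and classifier are simple stand-ins rather than the actual firmware interface of the CB main board or the PIM chip:

```python
import numpy as np

SAMPLE_RATE_HZ = 16_000                        # continuous sampling rate
SAMPLES_PER_CYCLE = SAMPLE_RATE_HZ // 50       # samples in one AC cycle (50 Hz assumed)
ABNORMAL_CLASS = 2                             # electric vehicle/bike charging (Section 3.3)

def read_ct_samples(n: int) -> np.ndarray:
    """Stand-in for the current-transformer front end (here: random noise)."""
    return np.random.randn(n)

def classify_cycle(cycle: np.ndarray) -> int:
    """Stand-in for the pre-trained NN running on the PIM NPU."""
    return ABNORMAL_CLASS if np.abs(cycle).mean() > 3.0 else 0

for _ in range(10):                            # a few cycles instead of an endless loop
    cycle = read_ct_samples(SAMPLES_PER_CYCLE)
    if classify_cycle(cycle) == ABNORMAL_CLASS:
        # trip the breaker and/or send a Bluetooth alert to the relevant parties
        print("Abnormal charging detected: trip breaker / send alert")
```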

2.1. Sampling Signal

The currents of various household appliances, including air conditioners, refrigerators, hair dryers, kettles, and electric vehicles/bikes, were collected. Due to the hazardous nature of electric vehicle/bike charging inside the household, we regard all electric vehicle/bike charging as an abnormal condition in this paper. Figure 3 shows the currents of different electrical equipment passing through the CB over two cycles. It is notable that unique features can be observed within one cycle (220 sampling points). Current signals were selected for the experiments because they exhibit more pronounced characteristics across different devices. There are in total 11,497 signal samples in our dataset (9198 for training and 2299 for testing). Finally, 250 samples were used for testing on the hardware.
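As an illustration of how such a dataset can be organized, the sketch below cuts a long current recording into one-cycle windows of 220 points and splits them into training and test sets; the synthetic recording, label value, and split ratio are placeholders rather than the actual measurement pipeline:

```python
import numpy as np

POINTS_PER_CYCLE = 220

def to_cycles(recording: np.ndarray) -> np.ndarray:
    """Cut a 1-D current recording into consecutive one-cycle windows."""
    n_cycles = len(recording) // POINTS_PER_CYCLE
    return recording[: n_cycles * POINTS_PER_CYCLE].reshape(n_cycles, POINTS_PER_CYCLE)

# Synthetic stand-in for a measured current (e.g., a kettle, class 3 in Section 3.3).
recording = np.sin(2 * np.pi * np.arange(10 * POINTS_PER_CYCLE) / POINTS_PER_CYCLE)
cycles = to_cycles(recording)                  # shape: (10, 220)
labels = np.full(len(cycles), 3)

# Random split into training and test sets (the paper uses 9198/2299 samples).
rng = np.random.default_rng(0)
idx = rng.permutation(len(cycles))
split = int(0.8 * len(cycles))
train_x, train_y = cycles[idx[:split]], labels[idx[:split]]
test_x, test_y = cycles[idx[split:]], labels[idx[split:]]
```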

2.2. NN Design

NN models have been applied to tasks such as data processing, classification, and prediction since they can learn data features [28,29,30,31,32]. As shown in Figure 4, a 2-fully-connected-layer NN-based model was trained using the PyTorch (version 2.1.2) framework in the Python 3.9.18 environment [33]. After the first fully connected (FC) layer, the model uses the rectified linear unit (ReLU) activation function to improve the generalization ability and a dropout layer to avoid overfitting. We used an adaptive learning rate (starting from 1 × 10−4), adaptive moment estimation (Adam) as the optimizer to update the model parameters, negative log likelihood loss (NLLLoss) as the loss function, and 150 epochs of training [34,35]. The size of the input depends on the conversion method used. The final current classification is the class with the maximum output value calculated by the SoftMax function [36,37]. Through NN training, we obtained the weights and biases of FC1 and FC2, which need to be deployed on the PIM AI chips.
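A minimal PyTorch sketch of this model and its training configuration is given below. The input size (assumed to be 440, i.e., the concatenated DCT and DST coefficients of one 220-point cycle), the hidden width, and the dropout rate are illustrative assumptions, since they are not listed explicitly above; LogSoftmax is used in the forward pass because NLLLoss expects log-probabilities, and the predicted class is the index of the maximum output:

```python
import torch
import torch.nn as nn

class CurrentClassifier(nn.Module):
    def __init__(self, in_features=440, hidden=64, n_classes=6, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.relu = nn.ReLU()                     # improves generalization ability
        self.drop = nn.Dropout(p_drop)            # mitigates overfitting
        self.fc2 = nn.Linear(hidden, n_classes)
        self.log_softmax = nn.LogSoftmax(dim=1)   # log-probabilities for NLLLoss

    def forward(self, x):
        x = self.drop(self.relu(self.fc1(x)))
        return self.log_softmax(self.fc2(x))

model = CurrentClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, starting at 1e-4
criterion = nn.NLLLoss()                                     # negative log likelihood loss

# One illustrative training step on random tensors (the real training ran 150 epochs).
x = torch.randn(32, 440)
y = torch.randint(0, 6, (32,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
predicted = model(x).argmax(dim=1)   # class with the maximum (Log)SoftMax output
```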

3. Results

3.1. Feature Extraction

Empirically, converting time-domain signals to the frequency domain benefits NN training. We investigated two time-to-frequency conversion methods, namely, the discrete Fourier transform (DFT) and the discrete cosine/sine transform (DCT/DST) [38,39,40,41,42]. Figure 5 shows the difference between the two conversion methods (taking the signal of electric vehicle/bike battery charging as an example).
Figure 5a,b shows the results of DFT and DCT/DST, respectively. Equations (2)–(4) show the corresponding formulas. The result from DCT/DST has more data points than the DFT magnitude because DCT/DST retains both magnitude and phase information:
DFT: $A_k = \sqrt{\left(\sum_{n=0}^{N-1} x_n \cos\frac{2\pi kn}{N}\right)^2 + \left(\sum_{n=0}^{N-1} x_n \sin\frac{2\pi kn}{N}\right)^2}$, $k = 0, 1, \ldots, N-1$, (2)
DCT: $A_k^{\cos} = \sum_{n=0}^{N-1} x_n \cos\frac{2\pi kn}{N}$, $k = 0, 1, \ldots, N-1$, (3)
DST: $A_k^{\sin} = \sum_{n=0}^{N-1} x_n \sin\frac{2\pi kn}{N}$, $k = 0, 1, \ldots, N-1$, (4)
where $x_n$ denotes the $n$-th sampled point of one current cycle of length $N$.
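For reference, the sketch below evaluates the reconstructed Equations (2)–(4) directly in NumPy on a synthetic one-cycle signal; it also shows why the DCT/DST representation yields twice as many data points as the DFT magnitude:

```python
import numpy as np

def dft_dct_dst_features(x: np.ndarray):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    cos_basis = np.cos(2 * np.pi * k * n / N)
    sin_basis = np.sin(2 * np.pi * k * n / N)
    a_cos = cos_basis @ x                         # Equation (3): cosine (DCT) part
    a_sin = sin_basis @ x                         # Equation (4): sine (DST) part
    a_dft = np.sqrt(a_cos ** 2 + a_sin ** 2)      # Equation (2): DFT magnitude
    return a_dft, np.concatenate([a_cos, a_sin])  # DCT/DST keeps the phase information

x = np.sin(2 * np.pi * 3 * np.arange(220) / 220)  # synthetic one-cycle test signal
dft_feat, dct_dst_feat = dft_dct_dst_features(x)
print(dft_feat.shape, dct_dst_feat.shape)         # (220,) vs. (440,): twice the points
```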

3.2. NN Training

As shown in Figure 6, the curves represent the accuracy on the training set during NN training. This accuracy refers to the average accuracy over all classes. Clearly, the accuracy of DCT/DST (99.3%) is superior to that of DFT (93.1%). The model accuracy on the test set is the same as that on the training set.
Smaller and larger NNs were also tried, as shown in Figure 7. The smaller model (model-2, with one layer less than model-1) shows obviously lower accuracy. However, the benefits of the wider model (model-3, with 1.5 times the number of hidden-layer neurons of model-1) and the deeper model (model-4, with one layer more than model-1) on the accuracy are almost negligible. The above-discussed model-1 is therefore a good compromise between accuracy and computing cost for the current dataset. It should be noted that, in more complicated tasks with more types of devices to be classified, a larger-scale NN might be needed.

3.3. Hardware Deployment

The accuracy of the two feature extraction methods for the 6-class classification is shown in Figure 8a (0: nothing; 1: hair dryers; 2: electric vehicle/bike battery charging; 3: kettles; 4: air conditioners; 5: refrigerators). For each category, the result of DFT is also inferior to that of DCT/DST in the hardware deployment, which follows the same trend as on the Python platform. For the better method, the average recognition accuracy reaches 98%. The conversion of the software's floating-point numbers to fixed-point numbers on the PIM AI chip resulted in only a 1% decrease, indicating that the chip operates properly. In particular, the accuracy for electric vehicle/bike battery charging reaches 99%. Figure 8b shows the confusion matrix of the DCT/DST method. It is notable that misrecognition occurs with a probability of only 0.1 between class 4 (air conditioner) and class 5 (refrigerator), which has no impact on the identification of anomalies (detection of electric vehicle/bike charging). The recall, precision, and F1-score performance are listed in Table 2. The edge AI solution is shown in Figure 9.
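As an illustration of the floating-point to fixed-point conversion mentioned above, the sketch below applies a simple symmetric quantization to a weight matrix. The signed 8-bit format, scaling scheme, and layer shape are assumptions for illustration only, since the chip's actual number format is not specified here:

```python
import numpy as np

def to_fixed_point(weights: np.ndarray, n_bits: int = 8):
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / q_max                # symmetric per-tensor scale
    quantized = np.clip(np.round(weights / scale), -q_max - 1, q_max).astype(np.int8)
    return quantized, scale

w = np.random.randn(64, 440).astype(np.float32)            # e.g., trained FC1 weights (shape assumed)
w_q, s = to_fixed_point(w)
error = np.abs(w_q.astype(np.float32) * s - w).mean()      # small, consistent with the ~1% accuracy drop
print(error)
```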
We compared the performance of the PIM AI chip, two mainstream microcontroller units (MCUs), and another type of edge AI solution. We deployed the NN on each hardware platform and measured the inference time and power consumption. As shown in Table 3, the PIM AI chip has a low latency (4 ms) with the lowest power (8.66 mW) compared with the two MCUs and the other edge AI chip.
The 4 ms latency ensures that the CB detects electrical abnormalities and shuts down the current within one quarter of an AC cycle, which greatly reduces the time for a possible electric fire to be ignited. The low power consumption enables the hardware to be implemented without thermal dissipation inside the limited space of a CB. These are the advantages of using PIM AI chips over traditional MCU-based AI. It is notable that the 4 ms computing latency is still much shorter than one AC cycle (typically 16.7 ms to 20 ms), which indicates that our PIM chip still has resources to handle even more complicated tasks.

4. Conclusions

In the limited space inside a CB, a low-power PIM AI chip can be embedded without extra heat-dissipation effort. The proposed NN-based model achieves high classification accuracy for current loads and electrical abnormality detection. When a single load is working in the power system, the recognition accuracy reaches 98%. In particular, the accuracy for electric vehicle/bike battery charging is 99.9%, which means the accuracy of electrical abnormality detection reaches 99.9%. The latency is about 1 ms, far less than that of commonly used MCU implementations.

Author Contributions

Writing—original draft, J.J.; Writing—review & editing, X.Q. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Major Project, grant number 2020AAA0109001.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author Cimang Lu is employed by the company Flash Billion Semiconductor Co., Ltd., Shanghai 200003, China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Wilson, H.; Dufournet, D.; Mercure, H.; Yeckley, R. History of Circuit Breakers. In Switching Equipment; Ito, H., Ed.; Springer: Cham, Switzerland, 2021; pp. 87–104. [Google Scholar]
  2. Boyaci, A.; Becker, O.; Amihai, I. Vibration Monitoring for Medium-Voltage Circuit Breaker Drives Using Artificial Intelligence. IET Conf. Proc. 2021, 2021, 628–632. [Google Scholar]
  3. Razi-Kazemi, A.A.; Niayesh, K. Condition Monitoring of High Voltage Circuit Breakers: Past to Future. IEEE Trans. Power Deliv. 2021, 36, 740–750. [Google Scholar] [CrossRef]
  4. Ren, Z.; Nemoto, Y.; Suzuki, Y.; Takagi, M.; Morishita, H.; Gustilo, R.C.; Iwao, T. Differentiation of Numerical Simulation Result of Direct Current Circuit Breaker Interruption Process Using Artificial Intelligence. IEEJ Trans. Electr. Electron. Eng. 2023, 18, 147–149. [Google Scholar] [CrossRef]
  5. Yang, J.; Zhang, G.; Chen, B.; Wang, Y. Vibration signal augmentation method for fault diagnosis of low-voltage circuit breaker based on W-CGAN. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
  6. Wu, Y.; Zhang, J.; Yuan, Z.; Chen, H. Fault Diagnosis of Medium Voltage Circuit Breakers Based on Vibration Signal Envelope Analysis. Sensors 2023, 23, 8331. [Google Scholar] [CrossRef]
  7. Malik, J.A.; Haque, A.; Amir, M. Investigation of Intelligent Deep Convolution Neural Network for DC-DC Converters Faults Detection in Electric Vehicles Applications. In Proceedings of the 2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON), New Delhi, India, 1–3 May 2023. [Google Scholar]
  8. Vijitha, S.; Hebri, D.; Singh, S.; Manohara, M.; Ishrat, M.; Joseph, D.R. Neural Network Implementation for Battery Failure Detection in Electric Vehicles. In Proceedings of the 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), Chennai, India, 14–15 December 2023. [Google Scholar]
  9. Tong, S.; Lacap, J.H.; Park, J.W. Battery state of charge estimation using a load-classifying neural network. J. Energy Storage 2016, 7, 236–243. [Google Scholar] [CrossRef]
  10. CNTV: How to Safely Prevent Explosions of Nearly 300 million Electric Bicycle Batteries Around Us. Available online: https://m.gmw.cn/baijia/2021-09/26/35189838.html (accessed on 26 September 2021).
  11. Chow, W.K.; Chow, C.L. Electric vehicle fire hazards associated with batteries, combustibles and smoke. Int. J. Automot. Sci. Technol. 2022, 6, 165–171. [Google Scholar] [CrossRef]
  12. Sun, P.; Bisschop, R.; Niu, H.; Huang, X. A Review of Battery Fires in Electric Vehicles. Fire Technol. 2020, 56, 1361–1410. [Google Scholar] [CrossRef]
  13. Yang, F.; Yang, Y.; Wang, X.; Ouyang, X.; Shuai, C. Electric bikes charging anomaly detection from alternating current side based on big data. Eng. Appl. Artif. Intell. 2024, 136 Pt B, 109042. [Google Scholar] [CrossRef]
  14. Liu, J.; Wang, C.; Xu, L.; Wang, M.; Hu, D.; Jin, W.; Li, Y. A Study of Electric Bicycle Lithium Battery Charging Monitoring Using CNN and BiLSTM Networks Model with NILM Method. Electronics 2024, 13, 3316. [Google Scholar] [CrossRef]
  15. Park, H.P.; Kwon, G.Y.; Lee, C.K.; Chang, S.J. AI-enhanced time–frequency domain reflectometry for robust series arc fault detection in DC grids. Measurement 2024, 238, 115188. [Google Scholar] [CrossRef]
  16. Jiang, R.; Wang, Y.; Gao, X.; Bao, G.; Hong, Q.; Booth, C.D. AC Series Arc Fault Detection Based on RLC Arc Model and Convolutional Neural Network. IEEE Sens. J. 2023, 23, 14618–14627. [Google Scholar] [CrossRef]
  17. Singh, R.; Gill, S.S. Edge AI: A survey. Internet Things Cyber-Phys. Syst. 2023, 3, 71–92. [Google Scholar] [CrossRef]
  18. Shi, Y.; Yang, K.; Jiang, T.; Zhang, J.; Letaief, K.B. Communication-Efficient Edge AI: Algorithms and Systems. IEEE Commun. Surv. Tutor. 2020, 22, 2167–2191. [Google Scholar] [CrossRef]
  19. Kim, D.; Yu, C.; Xie, S.; Chen, Y.; Kim, J.Y.; Kim, B.; Kulkarni, J.P.; Kim, T.T.-H. An Overview of Processing-in-Memory Circuits for Artificial Intelligence and Machine Learning. IEEE J. Emerg. Sel. Top. Circuits Syst. 2022, 12, 338–353. [Google Scholar] [CrossRef]
  20. Kim, T.T.H.; Kim, B.; Kim, J.Y.; Kulkarni, J.P. Guest Editorial Revolution of AI and Machine Learning with Processing-in-Memory (PIM): From Systems, Architectures, to Circuits. IEEE J. Emerg. Sel. Top. Circuits Syst. 2022, 12, 333–337. [Google Scholar] [CrossRef]
  21. Chung, E.; Sohn, S.Y. Processing-in-Memory Development Strategy for AI Computing Using Main-Path and Doc2Vec Analyses. Sustainability 2023, 15, 12439. [Google Scholar] [CrossRef]
  22. Wan, W.; Kubendran, R.; Schaefer, C.; Eryilmaz, S.B.; Zhang, W.; Wu, D.; Deiss, S.; Raina, P.; Qian, H.; Gao, B.; et al. A compute-in-memory chip based on resistive random-access memory. Nature 2022, 608, 504–512. [Google Scholar] [CrossRef]
  23. Gao, S.; Yang, G.; Qiu, X.; Yang, C.; Zhang, C.; Li, B.; Gao, C.; Jiang, H.; Wang, Z.; Hu, J.; et al. Programmable Linear RAM: A New Flash Memory-based Memristor for Artificial Synapses and Its Application to Speech Recognition System. In Proceedings of the 2019 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 7–11 December 2019; pp. 14.1.1–14.1.4. [Google Scholar]
  24. Gao, S.; Cong, Y.; Zhang, Z.; Qiu, X.; Lee, C.; Zhao, Y. Superior Data Retention of Programmable Linear RAM (PLRAM) for Compute-in-Memory Application. In Proceedings of the 2020 IEEE International Reliability Physics Symposium (IRPS), Dallas, TX, USA, 28 April–30 May 2020; pp. 1–5. [Google Scholar]
  25. Zhao, L.; Gao, S.; Zhang, S.; Qiu, X.; Yang, F.; Li, J.; Chen, Z.; Zhao, Y. Neural Network Acceleration and Voice Recognition with a Flash-based In-Memory Computing SoC. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; pp. 1–5. [Google Scholar]
  26. Prezioso, M.; Merrikh-Bayat, F.; Hoskins, B.D.; Adam, G.C.; Likharev, K.K.; Strukov, D.B. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 2015, 521, 61–64. [Google Scholar] [CrossRef]
  27. Jadidian, J. A Compact Design for High Voltage Direct Current Circuit Breaker. IEEE Trans. Plasma Sci. 2009, 37, 1084–1091. [Google Scholar] [CrossRef]
  28. Wang, Z. Fast discrete sine transform algorithms. Signal Process. 1990, 19, 91–102. [Google Scholar] [CrossRef]
  29. Martucci, S.A. Symmetric convolution and the discrete sine and cosine transforms. IEEE Trans. Signal Process. 1994, 42, 1038–1051. [Google Scholar] [CrossRef]
  30. Ahmed, N.; Natarajan, T.; Rao, K.R. Discrete Cosine Transform. IEEE Trans. Comput. 1974, C-23, 90–93. [Google Scholar] [CrossRef]
  31. Pang, C.; Zhou, R.; Hu, B.; Hu, W.; El-Rafei, A. Signal and image compression using quantum discrete cosine transform. Inf. Sci. 2019, 473, 121–141. [Google Scholar] [CrossRef]
  32. Chen, W.-H.; Smith, C.; Fralick, S. A Fast Computational Algorithm for the Discrete Cosine Transform. IEEE Trans. Commun. 1977, 25, 1004–1009. [Google Scholar] [CrossRef]
  33. Imambi, S.; Prakash, K.B.; Kanagachidambaresan, G.R. PyTorch; Springer: Cham, Switzerland, 2021; pp. 87–104. [Google Scholar]
  34. Kaur, S.; Singla, J.; Nkenyereye, L.; Jha, S.; Prashar, D.; Joshi, G.P.; El-Sappagh, S.; Islam, M.S.; Islam, S.M.R. Medical Diagnostic Systems Using Artificial Intelligence (AI) Algorithms: Principles and Perspectives. IEEE Access 2020, 8, 228049–228069. [Google Scholar] [CrossRef]
  35. Feng, J.; Phillips, R.V.; Malenica, I.; Bishara, A.; Hubbard, A.E.; Celi, L.A.; Pirracchio, R. Clinical artificial intelligence quality improvement: Towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digit. Med. 2022, 5, 66. [Google Scholar] [CrossRef]
  36. Hallmann, M.; Pietracho, R.; Komarnicki, P. Comparison of Artificial Intelligence and Machine Learning Methods Used in Electric Power System Operation. Energies 2024, 17, 2790. [Google Scholar] [CrossRef]
  37. Yang, Q.; Liao, Y. A novel mechanical fault diagnosis for high-voltage circuit breakers with zero-shot learning. Expert Syst. Appl. 2024, 245, 123133. [Google Scholar] [CrossRef]
  38. Zhang, C.; Ueng, Y.-L.; Studer, C.; Burg, A. Artificial Intelligence for 5G and Beyond 5G: Implementations, Algorithms, and Optimizations. IEEE J. Emerg. Sel. Top. Circuits Syst. 2020, 10, 149–163. [Google Scholar] [CrossRef]
  39. Sunil, S.; Anjar, W. Analysis of Backpropagation Algorithm in Predicting the Most Number of Internet Users in the World. J. Online Inform. 2018, 3, 110–115. [Google Scholar]
  40. Hughes, T.W.; Minko, M.; Shi, Y.; Fan, S. Training of photonic neural networks through in situ backpropagation and gradient measurement. Optica 2018, 5, 864–871. [Google Scholar] [CrossRef]
  41. Wang, M.; Lu, S.; Zhu, D.; Lin, J.; Wang, Z. A High-Speed and Low-Complexity Architecture for Softmax Function in Deep Learning. In Proceedings of the 2018 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Chengdu, China, 26–30 October 2018. [Google Scholar]
  42. Yuan, B. Efficient hardware architecture of softmax layer in deep neural network. In Proceedings of the 2016 29th IEEE International System-on-Chip Conference (SOCC), Seattle, WA, USA, 6–9 September 2016. [Google Scholar]
  43. STM32G0x1 Reference Manual; Technical Report; STMicroelectronics: Plan-les-Ouates, Switzerland, May 2019.
  44. STM32F103xx Reference Manual; Technical Report; STMicroelectronics: Plan-les-Ouates, Switzerland, June 2015.
  45. From this AI MCU, Discussing the Renesas’ AI Ecosystem Layout. Available online: https://www.eet-china.com/news/202311229886.html (accessed on 22 November 2023).
Figure 1. The 4 × 4 crossbar circuit integrated by PLRAM [21,22,23,24].
Figure 2. Flow chart of current detection through the edge AI solution.
Figure 3. The currents of different electric loads. The electrical appliances include hair dryers, electric vehicle/bikes, kettles, air conditioners, and refrigerators. Nothing means that there is no equipment connected to the CB.
Figure 4. The structure of NN-based training model. FC1 and FC2 represent the fully connected layer 1 and fully connected layer 2.
Figure 5. The conversion results of the two methods (taking the signal of the electric vehicle/bike as an example). (a) The conversion result of the DFT method; (b) the conversion result of the DCT/DST method.
Figure 6. The accuracy on the training set during NN training.
Figure 7. Different types of NN-based models. All models have the feature extraction layer. The solid circle marks the model used in the experiment. The dotted circles indicate that the two curves use different Y-axes.
Figure 8. (a) The results of DFT and DCT/DST. (b) Confusion matrix of DCT/DST method.
Figure 9. The model of the edge AI solution.
Table 1. The algorithm, accuracy, and hardware platforms used in the related work.
Features | Yang et al. [13] | Liu et al. [14] | H. Park et al. [15] | Jiang et al. [16]
Algorithms | The Random Forest | Convolutional NN | Convolutional NN | 1-D Convolutional NN
Hazard Event | E-bike Charging | E-bike Charging | Series Arc Fault | Arc Fault
Computing Platform | N.A. | Laptop | Laptop | Laptop
Accuracy | 89% | 97% | 100% | 99.33%
Table 2. The recall, precision, and F1-score of the NN-based model with the DCT/DST method. 0, 1, 2, 3, 4, and 5 represent the different devices.
Performance | 0 | 1 | 2 | 3 | 4 | 5
Recall | 0.999 | 0.980 | 0.999 | 0.950 | 0.900 | 0.900
Precision | 0.999 | 0.980 | 0.999 | 0.905 | 0.818 | 0.999
F1-Score | 0.999 | 0.980 | 0.999 | 0.927 | 0.857 | 0.947
Table 3. Comparison of performance between mainstream MCUs and PIM AI chips: 16 MHz/64 MHz/72 MHz are the main frequencies of these MCUs.
Mainstream MCUs | Time (ms) | Power (mW)
STM32G0x1 (64 MHz) [43] | 24.3 | 118.81
STM32F103xx (72 MHz) [44] | 17.29 | 108.24
Another edge AI chip [45] | 3 | 864
PIM AI chip | 4 | 8.66
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
