Peer-Review Record

A Deep Learning Method Based on the Attention Mechanism for Hardware Trojan Detection

by Wenjing Tang 1, Jing Su 1,*, Jiaji He 2 and Yuchan Gao 1
Reviewer 1:
Reviewer 2: Anonymous
Electronics 2022, 11(15), 2400; https://doi.org/10.3390/electronics11152400
Submission received: 5 June 2022 / Revised: 28 July 2022 / Accepted: 28 July 2022 / Published: 31 July 2022
(This article belongs to the Section Circuit and Signal Processing)

Round 1

Reviewer 1 Report

The authors propose using the attention mechanism to enhance the detection of hardware Trojans across various neural network architectures.

Unfortunately, there are two major flaws in the paper:

a) The source of the data is custom and very poorly documented. Moreover, it seems to involve an FPGA, and it is not clear how the authors added a Trojan without modifying the FPGA too much (especially the place-and-route mechanism). In this form, it is impossible to judge whether the detection is really due to the Trojan or to another side effect.

b) The authors only compare against their own methods.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

1 – The abstract needs to be rewritten to highlight the main summary of the paper. It also needs to be sent to a professional English reviewer. Several sentences are very long, such as the last one.

2 – Define "hardware Trojan" in the Introduction.

3 – Figure 1 has no reference. Please check.

4 – How were the hyperparameters of the CNN model tuned? Have the authors used optimization techniques?

5 – In Table 3, how were these parameters chosen?

6 – Where is the proposed model in Tables 4 and 5?

7 – There is no comparison with related work.

8 – Add a paragraph describing the limitations of the work.

9 – The manuscript needs heavy English editing.

Author Response

Response to Reviewer 2 Comments

Point 1: The abstract needs to be rewritten to highlight the main summary of the paper. It also needs to be sent to a professional English reviewer. Several sentences are very long, such as the last one.

Response 1: We thank the reviewer for the comments and have rewritten the abstract. The updated version is attached below. If required, we will seek professional English editing services.

The manufacturing of integrated circuit chips involves multiple parties, which greatly increases the possibility of hardware Trojan insertion and poses a serious threat to the deployment of hardware devices. Traditional hardware Trojan detection methods, however, require golden chips, so their detection cost is relatively high. The attention mechanism extracts more informative features from the data, which enhances the expressiveness of the network. This paper combines the attention module with a multilayer perceptron and a convolutional neural network for hardware Trojan detection based on side-channel information, and evaluates the detection results through dedicated experiments. The results show that the proposed method significantly outperforms machine learning classification methods such as SVM and KNN in terms of accuracy, precision, recall, and F1 score, and also outperforms plain MLP and CNN models. In addition, the proposed method is effective in detecting data containing one or multiple hardware Trojans, and shows high sensitivity to the size of the dataset.

Point 2: Define Hardware Trojan in the Introduction

Response 2: We thank the reviewer for the comments. The Introduction briefly introduced the hardware Trojan, but perhaps not in enough detail, so we have revised the passage for clarity. The updated version is attached below.

A hardware Trojan refers to an intentional modification of the original circuit, or the implantation of malicious code, by an attacker during the design and manufacture of an integrated circuit chip. A hardware Trojan introduces redundant circuitry that does not belong to the chip's intended function, causing uncontrollable behaviors such as destroying the original logic structure of the underlying chip or modifying its parameters. Hardware Trojans are characterized by destructiveness, latency, mutability, and parasitism [3,4].

Point 3: Figure 1 has no references. Please check

Response 3: We thank the reviewer for the detailed feedback. The reference for Figure 1 is "A Survey on Machine Learning Against Hardware Trojan Attacks: Recent Advances and Challenges", which we have added and highlighted in the revised text.

Point 4: How were the hyperparameters of the CNN model tuned? Have the authors used optimization techniques?

Response 4: We thank the reviewer for the comments. The hyperparameters of the CNN were tuned by manual fine-tuning. In this research, the Adam optimizer was used with a learning rate of 0.001.
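
Purely as an illustration, a minimal PyTorch sketch of the optimizer setup described in this response; the tiny stand-in network and its layer sizes are hypothetical, and only the optimizer choice and learning rate come from the response.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; the architecture here is illustrative only.
model = nn.Sequential(nn.Linear(3, 20), nn.ReLU(), nn.Linear(20, 2))

# Adam optimizer with a learning rate of 0.001, as stated in Response 4.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```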

Point 5: In Table 3, how were these parameters chosen?

Response 5: We thank the reviewer for the comments. The parameters in Table 3 follow the machine learning models in "Machine Learning Techniques for Hardware Trojan Detection", with manual fine-tuning for our setting. We have added this reference and highlighted it in the revised paper.

Point 6: Where is the proposed model in Tables 4 and 5?

Response 6: MLP-Att and CNN-Att are the methods proposed in this paper. For convenience, we have bolded and highlighted them in Table 4. The method used in Table 5 is MLP-Att, which is now explained in the text.

Point 7: There is no comparison with related work.

Response 7: We thank the reviewer for the comments. In Table 4, we compare seven representative machine learning methods and evaluate their accuracy, precision, recall, and F1 score. A comparison with related work has been added in Section 5.3 and highlighted in the revised text. The updated version is attached below.

Table 7. Classification results of related network-based methods.

Model name    Accuracy (%)    Precision (%)    Recall (%)    F1-score (%)
CNN-Att       85.17           85.50            85.36         84.22
MLP-Att       84.87           85.13            84.87         84.89
[45]          74.01           75.33            72.06         77.84
[46]          81.85           79.82            79.29         80.46
[47]          84.79           81.99            80.06         79.52

  [45] Madden, K.; Harkin, J.; McDaid, L.; Nugent, C. Adding security to networks-on-chip using neural networks. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1299–1306. https://doi.org/10.1109/SSCI.2018.8628832
  [46] Reshma, K.; Priyatharishini, M.; Nirmala Devi, M. Hardware Trojan Detection Using Deep Learning Technique. In Soft Computing and Signal Processing; Advances in Intelligent Systems and Computing; Wang, J., Reddy, G., Prasad, V., Reddy, V., Eds.; Springer: Singapore, 2019; Volume 898, pp. 671–680. https://doi.org/10.1007/978-981-13-3393-4_68
  [47] Hu, T.; Dian, S.; Jiang, R. Hardware Trojan detection based on long short-term memory neural network. Computer Engineering 2020, 46(7), 110–115. https://doi.org/10.19678/j.issn.1000-3428.0055589
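
For reference, the four metrics reported in Table 7 can be computed from predictions as in the short sketch below; the labels are hypothetical and only illustrate scikit-learn's standard metric functions.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth labels (1 = Trojan inserted) and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2%}")
print(f"Precision: {precision_score(y_true, y_pred):.2%}")
print(f"Recall:    {recall_score(y_true, y_pred):.2%}")
print(f"F1-score:  {f1_score(y_true, y_pred):.2%}")
```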

Point 8: Add a paragraph to describe the limitations of the work

Response 8: We thank the reviewer for the comments. Our original submission already includes the limitations of this work, in Section 6: "However, the improved method proposed in this paper has a significant time overhead in finding the optimal parameters and requires relatively high hardware computing power, memory bandwidth, and data storage. The detection cost is magnified to some extent. In future work, we will further analyze the characteristics of the experimental data, explore more available information on the existing basis, reduce the detection cost in multiple respects, and further improve the generality of the hardware Trojan detection method."

Point 9: The manuscript needs a heavy English editing.

Response 9: Thank you for your valuable and thoughtful comments. We have carefully checked and improved the English writing in the revised manuscript.

Author Response File: Author Response.pdf

Reviewer 3 Report

1) The authors should compare their study with other machine learning methods for hardware Trojan detection on real chips.

2) The authors should explain the training model in detail.

 

Author Response

Response to Reviewer 3 Comments

Point 1: The authors should compare their study with other machine learning methods for hardware Trojan detection on real chips.

Response 1: We thank the reviewer for the comments. In Table 4, we compare seven representative machine learning methods and evaluate their accuracy, precision, recall, and F1 score. A comparison with related work has been added in Section 5.3 and highlighted in the revised text. The updated version is attached below.

Table 7. Classification results of related network-based methods.

Model name    Accuracy (%)    Precision (%)    Recall (%)    F1-score (%)
CNN-Att       85.17           85.50            85.36         84.22
MLP-Att       84.87           85.13            84.87         84.89
[45]          74.01           75.33            72.06         77.84
[46]          81.85           79.82            79.29         80.46
[47]          84.79           81.99            80.06         79.52

  [45] Madden, K.; Harkin, J.; McDaid, L.; Nugent, C. Adding security to networks-on-chip using neural networks. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1299–1306. https://doi.org/10.1109/SSCI.2018.8628832
  [46] Reshma, K.; Priyatharishini, M.; Nirmala Devi, M. Hardware Trojan Detection Using Deep Learning Technique. In Soft Computing and Signal Processing; Advances in Intelligent Systems and Computing; Wang, J., Reddy, G., Prasad, V., Reddy, V., Eds.; Springer: Singapore, 2019; Volume 898, pp. 671–680. https://doi.org/10.1007/978-981-13-3393-4_68
  [47] Hu, T.; Dian, S.; Jiang, R. Hardware Trojan detection based on long short-term memory neural network. Computer Engineering 2020, 46(7), 110–115. https://doi.org/10.19678/j.issn.1000-3428.0055589

Point 2: The authors should explain the training model in detail.

Response 2: We thank the reviewer for the comments.

We re-explain the MLP-Att model as follows:

The hardware Trojan detection model based on MLP-Attention proposed in this paper adds the attention mechanism between the input layer and the first hidden layer, which extracts more critical features and improves the efficiency of hardware Trojan detection. As shown in Figure 4, the MLP-Attention network structure consists of an input layer, multiple hidden layers, and an output layer, all connected in the same fully connected manner. First, the training data is fed into the input layer of MLP-Attention to obtain the attention distribution of the input data, i.e., the importance of each feature. For this layer, we set the number of units to 3. Then, the attention distribution from the first layer is concatenated with its element-wise product with the input data to obtain intermediate data carrying the attention influence factor. This data is then processed by the intermediate hidden layers and passed to the final output layer for classification. For the first hidden layer, we set the number of units to 20; for the second hidden layer, 50; and for the output layer, 2 or more. The detection of hardware Trojans is a binary classification problem, so we use the softmax activation function in the last output layer to obtain the probability of Trojan insertion. The expression of softmax is:

$$\mathrm{softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \ldots, K$$

The probabilities of the classes sum to 1. The output is 1 for a circuit with a Trojan inserted and 0 for a circuit without one. After 50 iterations, the model reaches convergence. Finally, in the testing phase, we feed the test set into the previously trained MLP-Attention model for hardware Trojan classification and obtain the final detection results.
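
To make the wiring concrete, the following is a minimal PyTorch sketch of how such an MLP-Attention block could be assembled. The layer widths (3, 20, 50, 2) follow the description above; the exact form of the attention layer and the concatenation step are our reading of "the attention distribution concatenated with its element-wise product with the input data", so this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLPAtt(nn.Module):
    """Sketch of an MLP with an attention layer between input and first hidden layer."""

    def __init__(self, n_features: int = 3):
        super().__init__()
        # Attention layer: a softmax-normalized distribution over the input features.
        self.attention = nn.Sequential(
            nn.Linear(n_features, n_features),
            nn.Softmax(dim=1),
        )
        # Hidden layers with 20 and 50 units; output layer with 2 units (Trojan / no Trojan).
        self.hidden1 = nn.Linear(2 * n_features, 20)  # concatenation doubles the width
        self.hidden2 = nn.Linear(20, 50)
        self.out = nn.Linear(50, 2)

    def forward(self, x):
        att = self.attention(x)                   # attention distribution over the features
        weighted = att * x                        # element-wise product with the input
        z = torch.cat([att, weighted], dim=1)     # "stitch" distribution and product together
        z = torch.relu(self.hidden1(z))
        z = torch.relu(self.hidden2(z))
        return torch.softmax(self.out(z), dim=1)  # class probabilities summing to 1

# Example: a batch of 8 samples with 3 side-channel features each.
probs = MLPAtt(n_features=3)(torch.randn(8, 3))
```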

 

We re-explain the CNN-Att model as follows:

The CNN-Attention-based hardware Trojan detection model proposed in this paper is shown in Figure 5. The input layer receives the data. After obtaining the side-channel input information, three convolutional layers are used to extract side-channel features. Each convolutional layer uses the ReLU activation function with a stride of 2. A pooling layer follows each convolutional layer to filter the extracted side-channel features and discard useless ones; it reduces the dimensionality of the convolutional output and makes the data compact. The pooling chosen for this method is max pooling, which reduces the estimated-mean bias caused by errors in the training of the convolutional layers. After the convolution and pooling layers, we add the attention module, which assigns higher weights to more important features so as to better exploit the side-channel information. We apply the softmax function to the output of the third convolutional layer and multiply the result element-wise with that output to obtain the attention information. A fully connected layer follows the attention module. Finally, the probability of the presence or absence of a Trojan is obtained using the softmax activation function. For multiple Trojans, the classification probabilities still sum to 1. During training, the number of iterations is set to 80, and the model parameters are listed in Table 1.
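
Similarly, a minimal PyTorch sketch of the CNN-Attention pipeline. The channel counts, kernel size, and 1D trace layout are hypothetical; the three stride-2 ReLU convolutions, the max pooling after each convolution, the softmax self-weighting of the last convolutional output, and the final fully connected softmax classifier follow the description above.

```python
import torch
import torch.nn as nn

class CNNAtt(nn.Module):
    """Sketch of a CNN with a softmax self-attention step before the classifier."""

    def __init__(self, n_classes: int = 2):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, stride=2, padding=1),  # stride-2 conv
                nn.ReLU(),
                nn.MaxPool1d(2),  # max pooling after each convolutional layer
            )

        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        self.fc = nn.LazyLinear(n_classes)  # fully connected layer after the attention module

    def forward(self, x):                   # x: (batch, 1, trace_length)
        f = self.features(x)
        att = torch.softmax(f, dim=-1)      # softmax over the last conv layer's output ...
        f = att * f                         # ... multiplied element-wise with itself
        return torch.softmax(self.fc(f.flatten(1)), dim=1)

# Example: a batch of 8 side-channel traces of length 256.
probs = CNNAtt()(torch.randn(8, 1, 256))
```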

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

I still don't have enough information about the experimental setup. The authors mention that they measure side-channel information but do not report precisely what this information is and how it is captured. If there is some kind of manual measurement, is it done double-blind? Can you really publish a paper based on a neural network where you don't describe thoroughly what the dataset is and how it has been built?

In Table 4, the authors evaluate both CNN and MLP. It seems difficult to use such different algorithms on the same dataset: CNNs require some kind of 1D, 2D, or 3D structure, while MLPs are used for datasets without such structure.

In Table 7, the authors compare their results to other works, but it is not clear whether the results are for different datasets or for the same one. And if it is the same, it is not clear whether it is the original dataset or the dataset built by the authors. The only acceptable method is for the authors to apply their neural network to the datasets used in other works and compare their performance to the originally reported performance.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have addressed my comments.

Author Response

Thank you again for your comments.

Round 3

Reviewer 1 Report

Dear author, it looks like there may be two different contributions in your paper:

a) A new dataset in the field of Hardware Trojan

b) The use of the attention mechanism to enhance the detection of Hardware Trojans

For contribution a) you should give all the details of this dataset, including:

What is really measured? (Power, voltage, current, electromagnetic field, with the number of channels for each variable, the number of samples, the sampling frequency, and so on.)

What is inside the chip when you make the measurements? (With all details.)

How is the Trojan triggered? If there is some kind of manual operation (e.g., with the probes), how can you be sure that the experiment is not biased by the person who makes the measurements?

You should give enough details that someone else can reproduce the experiment, in order to claim a scientific contribution. I don't see how you could give enough details in fewer than 3–4 pages just for this point.

For contribution b) you should give all the parameters used in the neural networks (which is not presently the case) and apply your contribution to the datasets used in previous works, comparing against the results reported there.

If you just compare your new algorithm to previous state-of-the-art algorithms on your new custom-built dataset, unfortunately this has no scientific value.

Finally, the attention mechanism is not new. So unless you adapt this mechanism with some kind of new feature, the contribution is quite limited.

As mentioned since the first review, my analysis is that your paper is not at all ready to be submitted for publication. Minor editing will NOT lead to an acceptance from me.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
