Article
Peer-Review Record

A Hardware Trojan-Detection Technique Based on Suspicious Circuit Block Partition

Electronics 2022, 11(24), 4138; https://doi.org/10.3390/electronics11244138
by Jiajie Mao 1, Xiaowen Jiang 2,*, Dehong Liu 3, Jianjun Chen 3 and Kai Huang 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 November 2022 / Revised: 8 December 2022 / Accepted: 10 December 2022 / Published: 12 December 2022
(This article belongs to the Special Issue Advances of Electronics Research from Zhejiang University)

Round 1

Reviewer 1 Report

By applying the Sandia Controllability/Observability Analysis Program (SCOAP), K-means clustering, and shortest-path calculation, the authors introduce a hardware Trojan detection approach. It is attractive that the authors improve the detection speed while maintaining a lower false-positive rate when the method is applied to large circuits. The following points should be considered when preparing the revised paper.

The advantages and disadvantages of machine-learning-based detection approaches should be presented briefly.

In line 103, the authors wrote “You can easily see that the OUTPUT1 of the circuit changes only when INPUT1 = INPUT2”. But in fact, I could not find OUTPUT1.

This point is not clear: “directly classify the signals with controllability/observability value greater than 254 as suspicious signals”. How did the authors select the value 254?

The obtained results are very good: “Our method have improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.” The authors should discuss them further.

Please revise the manuscript carefully to avoid errors and typos. For example, lines 366 – 368: “The specific detection time for the 914 Trojan platform are shown in Table. 4.Our method have improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.” => “The specific detection time for the 914 Trojan platform is shown in Table. 4. Our method has improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.”

Author Response

Dear Reviewer:

        We are very grateful to you for giving us an opportunity to revise our manuscript. We would like to express our gratitude and sincere thanks for your valuable comments on our manuscript entitled “A Hardware Trojan Detection Technique Based on Suspicious Circuit Block Partition” (Manuscript ID: electronics-2081403).

        We have studied your comments carefully and done our best to revise our manuscript accordingly. The following are our point-by-point responses and revisions to your questions and suggestions. Thank you again for your hard work. We look forward to hearing from you soon.

 

Best wishes!

Sincerely yours,

Authors

 

Point 1: The advantages and disadvantages of machine-learning-based detection approaches should be presented briefly.

 

Response 1: Thank you for the suggestion. We have briefly presented the advantages and disadvantages of machine-learning-based detection approaches in the third paragraph of the section “Machine learning based-detection” in Chapter 2, “Related work”.

 

Point 2: In line 103, the authors wrote “You can easily see that the OUTPUT1 of the circuit changes only when INPUT1 = INPUT2”. But in fact, I could not find OUTPUT1.

 

Response 2: Thank you for the reminder. We intended to express that the first outputs of the two circuits differ, but “OUTPUT1” was not explained anywhere in the text. We have amended the sentence to “It is obvious that OUTPUT 11 and OUTPUT 12 are different only when INPUT 1 = INPUT 2 = 1”.

 

Point 3: This point is not clear: “directly classify the signals with controllability/observability value greater than 254 as suspicious signals”. How did the authors select the value 254?

 

Response 3: As the Reviewer suggested, it is indeed better to explain why we select the value 254. We have added the reason: “It is because TetraMAX limits the value to 254 when considering the SCOAP value, which means that the signal has very poor controllability.” Besides, references 29 and 31 use the same value of 254; in this way, we can also ensure that the clustering of the SCOAP values is not affected by extreme values.
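To make the role of the cap concrete, the following is a minimal sketch (with hypothetical signal names and values, not the code used in the paper) of how signals at the 254 cap can be flagged directly while the remaining SCOAP values are separated by a simple one-dimensional two-means clustering:

```python
# Illustrative sketch only: classify signals by SCOAP value.
SCOAP_CAP = 254  # TetraMAX saturates SCOAP values at 254


def flag_suspicious(scoap_values, cap=SCOAP_CAP):
    """Signals at or above the cap are flagged directly as suspicious;
    the rest are returned for clustering. Maps signal name -> SCOAP value."""
    suspicious = {s for s, v in scoap_values.items() if v >= cap}
    remaining = {s: v for s, v in scoap_values.items() if v < cap}
    return suspicious, remaining


def two_means_1d(values, iters=20):
    """Minimal 1-D two-means (Lloyd's algorithm with two centroids
    initialized at min and max): returns a threshold between the groups."""
    lo, hi = float(min(values)), float(max(values))
    for _ in range(iters):
        mid = (lo + hi) / 2
        low = [v for v in values if v <= mid]
        high = [v for v in values if v > mid]
        if not low or not high:
            break
        lo, hi = sum(low) / len(low), sum(high) / len(high)
    return (lo + hi) / 2


# Hypothetical netlist signals and SCOAP values.
scoap = {"n1": 4, "n2": 6, "n3": 250, "n4": 255, "n5": 300}
direct, rest = flag_suspicious(scoap)            # direct == {"n4", "n5"}
threshold = two_means_1d(list(rest.values()))
clustered = {s for s, v in rest.items() if v > threshold}  # {"n3"}
```

The paper applies K-means to controllability/observability data rather than this toy 1-D split; the sketch only illustrates why pre-filtering values at the 254 cap keeps saturated values from dominating the cluster boundary.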

 

Point 4: The obtained results are very good: “Our method have improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.” The authors should discuss them further.

 

Response 4: Thank you for your valuable comment. As we mentioned, dynamic simulation is a very inefficient way to reduce the false-positive rate because of the large number of random test vectors, while static analysis only processes the netlist data and is therefore much faster. We have also added the specific average detection time for each type of benchmark to make the result more convincing.

 

Point 5: Please revise the manuscript carefully to avoid errors and typos. For example, lines 366 – 368: “The specific detection time for the 914 Trojan platform are shown in Table. 4.Our method have improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.” => “The specific detection time for the 914 Trojan platform is shown in Table. 4. Our method has improved the detection speed by more than 10 times, and it only takes 0.43 seconds on average to detect a Trojan benchmark.”

 

Response 5: Thank you for the reminder. We have read our paper carefully and proofread it repeatedly; the language problems have been corrected in the revised version.

Reviewer 2 Report

This paper describes a logical-test-based hardware Trojan detection technique on gate-level netlists. Its goal is to improve the detection speed relative to previously proposed solutions.

Although the topic is interesting and the idea seems good and effective, the quality of the paper's English is not good enough. Furthermore, the state of the art is, in my opinion, not complete enough.

 

Weaknesses:

- A careful proofread of the manuscript is required. Among others:

    - Specifiers (or the lack thereof) are repeatedly misused in the sentences. For example, in the introduction, line 17 "The IC supply chain", line 22 "can be exposed to an attacker", line 23 "can insert a malicious circuit".

    - Conjugation errors are also present, which can make the meaning of certain sentences uncertain.

 

Detailed comments:

- Figure 1 is, in my opinion, surprising. The papers I have encountered consider the foundry and third-party IP suppliers to be the most prominent threats. If the IC designer is the attacker, who can be trusted to discover the Trojan? The text referring to Figure 1 explains better that the threat is the designer of an IP that will be bought. The text explains the threat well, but the figure is, in my opinion, misleading.

- I am not used to the expression "the foundry reverse engineer the Trojan" either; I would prefer "The foundry must reverse engineer the IC in order to be able to insert a Trojan".

- I do not agree with the affirmation "Logical test (...) triggers hardware Trojans by applying random test vectors". From the very first methods at the end of the 2000s, the choice of patterns has been scrupulously studied. It is a misleading shorthand to say that "Due to the large number of random test vectors, most of the existing methods based on logical testing focus on reducing the number of test vectors."

- Pursuing the same line of questioning, I feel that the state of the art on post-silicon test-based detection methods is clearly missing. Although different in nature, these methods share a lot in common with the so-called "threshold-based detection" methods described in Section 2: finding low-controllability signals. The use of SCOAP (or COP) based tools to pursue that goal has already been investigated in that threat model (the only difference is that the foundry has to reverse engineer the layout in order to obtain the netlist, but then the similarities begin). Here is a non-exhaustive list of papers that would be of interest:

[1] Chakraborty, et al., MERO: A Statistical Approach for Hardware Trojan Detection, International Conference on Cryptographic Hardware and Embedded Systems (CHES), 2009

[2] Dupuis et al., New Testing Procedure for Finding Insertion Sites of Stealthy Hardware Trojans, Design Automation & Test in Europe (DATE), 2015

[3] Zou et al., Potential Trigger Detection for Hardware Trojans, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018

[4] Xu et al., A Novel ATPG Method to Increase Activation Probability of Hardware Trojan, 2019

- Section 4, "Threat model", does not describe the threat model but the specification of the hardware Trojan. The threat model should also include who can insert the Trojan, why, where, etc.

- I would like more detailed results for Table 4: the time for each benchmark, not only an average.

 

Editorial comments:

- Some spaces are omitted before references.

- References 20 and 28 refer to the same paper.

 

 

Nitpicking details:

- The authors sometimes use the term hardware Trojans and sometimes Trojans, please choose one denomination and stick to it.

- Figure 3(b) not only shows a Trojan trigger but also the Trojan payload.

- I am not sure Table 1 is necessary; one phrase could convey the same information.

 

Author Response

Dear Reviewer:

    We are very grateful to you for giving us an opportunity to revise our manuscript. We would like to express our gratitude and sincere thanks for your valuable comments on our manuscript entitled “A Hardware Trojan Detection Technique Based on Suspicious Circuit Block Partition” (Manuscript ID: electronics-2081403).

    We have studied your comments carefully and done our best to revise our manuscript accordingly. The following are our point-by-point responses and revisions to your questions and suggestions. Thank you again for your hard work. We look forward to hearing from you soon.

 

Best wishes!

Sincerely yours,

Authors

 

Point 1: - A careful proofread of the manuscript is required. Among others:

- Specifiers (or the lack thereof) are repeatedly misused in the sentences. For example, in the introduction, line 17 "The IC supply chain", line 22 "can be exposed to an attacker", line 23 "can insert a malicious circuit".

    - Conjugation errors are also present, which can make the meaning of certain sentences uncertain.

 

Response 1: Thank you for the reminder. We have read our paper carefully and proofread it repeatedly; the language problems have been corrected in the revised version.

 

Point 2: - Figure 1 is, in my opinion, surprising. The papers I have encountered consider the foundry and third-party IP suppliers to be the most prominent threats. If the IC designer is the attacker, who can be trusted to discover the Trojan? The text referring to Figure 1 explains better that the threat is the designer of an IP that will be bought. The text explains the threat well, but the figure is, in my opinion, misleading.

 

Response 2: As the Reviewer suggested, it is indeed better to present the threat as a third-party IP vendor rather than the IP/IC designer. We have modified the expression in Figure 1.

 

Point 3: - I am not used to the expression "the foundry reverse engineer the Trojan" either; I would prefer "The foundry must reverse engineer the IC in order to be able to insert a Trojan".

 

Response 3: Thank you for the suggestion; that phrasing is indeed better. We have reread our paper and revised such unreasonable expressions.

 

Point 4: - I do not agree with the affirmation "Logical test (...) triggers hardware Trojans by applying random test vectors". From the very first methods at the end of the 2000s, the choice of patterns has been scrupulously studied. It is a misleading shorthand to say that "Due to the large number of random test vectors, most of the existing methods based on logical testing focus on reducing the number of test vectors."

 

Response 4: Thank you for your valuable comment; the affirmation "Logical test (...) triggers hardware Trojans by applying random test vectors" is indeed not accurate enough. Logical testing triggers hardware Trojans by applying test vectors, but not necessarily random test vectors.

 

Point 5: - Pursuing the same line of questioning, I feel that the state of the art on post-silicon test-based detection methods is clearly missing. Although different in nature, these methods share a lot in common with the so-called "threshold-based detection" methods described in Section 2: finding low-controllability signals. The use of SCOAP (or COP) based tools to pursue that goal has already been investigated in that threat model (the only difference is that the foundry has to reverse engineer the layout in order to obtain the netlist, but then the similarities begin). Here is a non-exhaustive list of papers that would be of interest:

[1] Chakraborty, et al., MERO: A Statistical Approach for Hardware Trojan Detection, International Conference on Cryptographic Hardware and Embedded Systems (CHES), 2009

[2] Dupuis et al., New Testing Procedure for Finding Insertion Sites of Stealthy Hardware Trojans, Design Automation & Test in Europe (DATE), 2015

[3] Zou et al., Potential Trigger Detection for Hardware Trojans, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018

[4] Xu et al., A Novel ATPG Method to Increase Activation Probability of Hardware Trojan, 2019

 

Response 5: We are grateful that you shared this list of papers with us. We have read them and have added the state of the art on post-silicon test-based detection methods in Chapter 1.

 

Point 6: - Section 4, "Threat model", does not describe the threat model but the specification of the hardware Trojan. The threat model should also include who can insert the Trojan, why, where, etc.

 

Response 6: Thank you for your valuable comment; the threat model should indeed also state who can insert the Trojan, why, and where. We therefore introduce the third-party IP as the most serious threat: an IP vendor can directly insert a hardware Trojan into the RTL to change the function, leak information, or reduce performance and reliability. However, it is hard to find actual instances of hardware Trojans, so researchers mainly use the hardware Trojan benchmarks of Trust-Hub as the threat model.

 

Point 7: - I would like more detailed data results for Table 4, the time for each benchmark, not only an average.

 

Response 7: Thank you for your valuable comment. Unfortunately, the length of our paper does not allow us to list the detection time of all 914 Trojan benchmarks, but we have listed the average time for each type of benchmark. Because the hardware Trojan benchmarks in TRIT-TC and TRIT-TS are based on eight base circuits, the detection times for the same circuit are very close, so we believe this is an acceptable result.

 

Point 8: - Some spaces are omitted before references.

- Reference 20 and 28 refer to the same paper.

 

Response 8: Thank you for reading our article patiently and carefully. We have added the missing spaces before references and have checked the references.

 

Point 9: - The authors sometimes use the term hardware Trojans and sometimes Trojans, please choose one denomination and stick to it.

 

Response 9: Thank you for your advice; we now use the term "hardware Trojans" consistently throughout the article.

 

Point 10: - Figure 3(b) not only shows a Trojan trigger, but also the Trojan payload.

 

Response 10: It is true that Figure 3(b) shows a whole hardware Trojan inserted into a circuit. We have modified the expression accordingly.

 

Point 11: - I am not sure the Table 1 is necessary, one phrase could summarize the same information.

 

Response 11: Thank you for your advice; the table was indeed redundant, and we have deleted it.

Round 2

Reviewer 1 Report

The manuscript has been revised.

Reviewer 2 Report

All the changes made, and the responses to my remarks, suit me.

 

I would have personally merged Tables 3 and 4, but the paper can be accepted as it is.
