**2. Background**

#### *2.1. Threats to Validity*

We included studies that (1) deal with methods to attack machine learning or deep learning models, (2) protect models' intellectual property from attacks or provide methods to identify stolen models, and (3) discuss the mentor–student training schema and its limitations, such as the reduction in the number of layers, the speedup gained, and the accuracy loss.

We used multiple search strings, such as 'DNN distillation', 'mentor student training', 'teacher student training', 'DNN attacks', 'machine learning attacks', 'watermarking in DNN', 'DNN protection', 'DNN intellectual property', and 'ML and DL models protection', to retrieve peer-reviewed journal articles, conference proceedings, book chapters, and reports. We targeted five databases: IEEE Xplore, SpringerLink, Scopus, the arXiv digital library, and ScienceDirect. Google Scholar was also used extensively to search for and track cited papers on the topics of interest. Titles and abstracts were screened to identify candidate articles; the experimental results were then reviewed carefully to identify relevant baselines and successful methods.
