Article

Magnetic Prediction of Doped Two-Dimensional Nanomaterials Based on Swin–Resnet

School of Computer Science and Technology, Changchun Normal University, Changchun 130032, China
* Authors to whom correspondence should be addressed.
Coatings 2024, 14(10), 1271; https://doi.org/10.3390/coatings14101271
Submission received: 11 September 2024 / Revised: 1 October 2024 / Accepted: 2 October 2024 / Published: 3 October 2024

Abstract

Magnetism is an important property of doped two-dimensional nanostructures. By introducing dopant atoms or molecules, the electronic structure and magnetic behavior of two-dimensional nanostructures can be altered. However, the complexity of the doping process requires different strategies for the preparation and testing of various types, layers, and scales of doped two-dimensional materials using traditional techniques. This process is resource-intensive, inefficient, and can pose safety risks when dealing with chemically unstable materials. Deep learning-based methods offer an effective solution to overcome these challenges and improve production efficiency. In this study, a deep learning-based method is proposed for predicting the magnetism of doped two-dimensional nanostructures. An image dataset was constructed for deep learning using a publicly available database of doped two-dimensional nanostructures. The ResNet model was enhanced by incorporating the Swin Transformer module, resulting in the Swin–ResNet network architecture. A comparative analysis was conducted with various deep learning models, including ResNet, Res2Net, ResNeXt, and Swin Transformer, to evaluate the performance of the optimized model in predicting the magnetism of doped two-dimensional nanostructures. The optimized model demonstrated significant improvements in magnetism prediction, with a best accuracy of 90%.

1. Introduction

In recent years, there have been significant breakthroughs in the theoretical design and experimental preparation techniques of two-dimensional materials [1]. It has been discovered that when the thickness of layered materials is reduced to the limit, their properties differ completely from bulk materials [2]. These two-dimensional nanostructures exhibit distinct physical and chemical properties compared to three-dimensional materials, such as the pronounced confinement effects of electrons in two dimensions [3,4,5]. Leveraging these unique optical, electrical, and magnetic characteristics, two-dimensional nanostructures have found initial applications in fields such as biopharmaceuticals [6], aerospace [7], energy storage [8], chip manufacturing [9], and quantum computing [10], demonstrating wide-ranging applications and research value.
The doping of two-dimensional nanostructures refers to the introduction of foreign atoms or molecules to alter the electronic structure and physical properties of the materials, enabling precise control over their behavior. Compared to other defect-engineering techniques, such as material boundaries [11,12,13,14], line defects [15,16], vacancies [17,18], and adatoms [19,20], doping [21] enables precise manipulation of material properties: by selecting specific doping elements, the electronic structure and physical properties, such as conductivity and magnetism [22,23], can be finely adjusted. Additionally, doping techniques are applicable to nearly all two-dimensional materials and can achieve diverse performance modulation through the choice of doping element. Moreover, dopant atoms are typically embedded securely within the material lattice, enhancing the chemical and physical stability of the materials. However, due to the complexity of the doping process and the unique properties of two-dimensional nanostructures, traditional experimental research on such materials often faces difficulties and limitations. Fabricating and testing doped two-dimensional materials of different types, layer counts, and sizes requires designing distinct experimental approaches, a process that consumes significant resources and is inefficient. For chemically unstable two-dimensional materials, manual operations can easily lead to accidents [24].
Magnetism is an important property of doped two-dimensional nanomaterials. Two-dimensional magnetic materials [25,26] have attracted widespread attention in fields such as spintronics [26], magnetic catalysts [27], magnetic storage devices [26], sensors [9], and quantum computing [10] due to their potential applications. To gain a deeper understanding of the magnetic behavior of doped two-dimensional nanomaterials, research on predicting their magnetism has become crucial. The significance of magnetism prediction lies in using theoretical simulation and computation to predict the magnetic behavior of doped two-dimensional nanomaterials, providing guidance and a theoretical foundation for experimental studies.
In recent years, with the rapid development of deep learning technology, its application in the field of material magnetism has become increasingly widespread. Deep learning techniques establish large-scale material datasets, preprocess and extract features from the data, and then use relevant algorithms for model training to construct models that predict material properties, greatly improving prediction efficiency. Compared to traditional density functional theory (DFT) calculations and experimental testing methods, using deep learning-based magnetic prediction models is a more efficient approach that significantly reduces computational costs.
In this study, a deep learning-based image dataset was constructed using publicly available databases of doped two-dimensional nanomaterials. Improvement research was conducted on the ResNet model to better predict the magnetism of doped two-dimensional nanomaterials. This study aims to provide better tools and methods for exploring and predicting the magnetic properties of doped two-dimensional nanomaterials. It demonstrates great feasibility and application value in predicting the magnetism of doped two-dimensional nanomaterials, and holds significant importance in advancing research on the magnetism of doped two-dimensional nanomaterials.
The main contributions of this study are summarized as follows:
(1)
This research proposes a deep learning-based Swin–ResNet network, which enhances feature extraction capabilities for predicting the magnetism of doped two-dimensional nanomaterials by replacing the conventional 3 × 3 convolution in ResNet with Swin Transformer (SwinT) modules.
(2)
To address the issues of limited data and complex structures in doped two-dimensional nanomaterials, the model structure was optimized, and three-dimensional coordinate data were processed to improve data fitting. This optimization allows the model to maintain high efficiency and accuracy even with limited data.
(3)
Comparative experiments with various deep learning models demonstrate the unique superiority of the Swin–ResNet model. It effectively handles the complex task of predicting the magnetism of doped two-dimensional nanomaterials, achieving a prediction accuracy of 90%, surpassing traditional methods and other deep learning models.

2. Related Works

With the rapid development of deep learning techniques, their applications in the field of materials science have become increasingly widespread [28,29,30]. Deep learning can assist material researchers in predicting the properties of doped two-dimensional nanostructures, guiding the design of new materials, and optimizing material structures.
Deep learning-based methods for material property prediction can accurately and efficiently handle large amounts of experimental and theoretical data to predict material properties, lifespan, catalytic activity, and more. In recent years, with the development of high-throughput computing and experimental techniques, the scale of material datasets has been continuously expanding. You [31] proposed an interactive representation learning method based on molecular attribute graphs, utilizing graph neural network (GNN) models to predict molecular properties. Chen [32] combined large-scale computational databases with deep learning to construct a GNN-based model that predicts material properties accurately and with wide applicability. Choudhary [33] introduced the atomistic line graph neural network (ALIGNN), achieving accurate prediction of material properties and improving generality by feeding line-graph representations into neural networks. Ryczko [34] discussed the application of convolutional neural networks to atomistic systems, including material discovery, molecular dynamics simulations, and catalyst design. Xie [35] presented a material property prediction method based on crystal graph convolutional neural networks, which accurately predicts material properties and provides interpretable results. Na [36] constructed a graph convolutional neural network-based model for accurately predicting thermoelectric properties and analyzing the influence of doping elements. McNutt [37] introduced a deep learning approach for molecular docking, converting the molecular docking problem into an image recognition problem and using convolutional neural networks for prediction. Deep learning techniques are thus being applied ever more widely in materials science [38,39,40].
Deep learning-based techniques have also been applied to identifying and predicting the magnetic properties of doped two-dimensional nanomaterials [41,42,43,44,45,46]. However, research specifically focused on magnetic properties is still relatively limited. Predicting the magnetism of doped two-dimensional nanomaterials not only holds theoretical significance but also opens new directions and opportunities for the development of materials science. Khan [41] proposed a novel data-driven deep learning model for predicting the solutions of Maxwell’s equations in low-frequency electromagnetic devices; by introducing a probabilistic model, the prediction accuracy and the quantification of uncertainty were improved. Kwon [42] demonstrated the importance of inferring magnetic Hamiltonian parameters from magnetic domain images and showed that deep learning can estimate these parameters. Training deep neural networks with Monte Carlo-generated domain configurations verified the effectiveness of this method, suggesting that deep learning techniques can act as a bridge between experimental and theoretical methods. Demirpolat [43] introduced a deep learning architecture to predict the effect of magnetic fields on the heat transfer coefficient and used long short-term memory (LSTM) and convolutional neural network long short-term memory (CNN-LSTM) models to predict the h value of nanofluids. Pollok [44] employed the ResNeXt-50 convolutional neural network model to inversely predict the properties of individual hard magnetic materials under a given magnetic field. Li [45] designed a deep neural network framework to represent the density functional theory Hamiltonian of magnetic materials for efficient electronic structure calculations.
In summary, researchers have made progress in using machine learning and deep learning methods to predict the magnetism of doped two-dimensional nanomaterials. However, several challenges remain. Firstly, the development of databases for doped two-dimensional nanomaterials is relatively recent, and there are few publicly available databases. Secondly, most of the data for doped two-dimensional nanomaterials are structural data, represented in the form of atomic three-dimensional coordinates. This leads to challenges such as complex model structures and poor data fitting during model training. Lastly, the majority of mainstream deep learning models are designed for image data, and there are few models specifically designed for three-dimensional data. Therefore, research on doped two-dimensional nanomaterials is still at the basic level of transfer learning. Addressing these challenges is crucial for further advancements in predicting the magnetism of doped two-dimensional nanomaterials.

3. Materials and Methods

3.1. Residual Network

Residual Network [47,48] (ResNet) is a deep convolutional neural network architecture designed to address the issues of vanishing and exploding gradients during the training of deep networks, as illustrated in Figure 1. The core concept of ResNet is the introduction of residual connections, which facilitate information flow through direct connections across layers, making the network easier to train.
The ResNet architecture comprises multiple residual blocks, each formed by several convolutional layers. Depending on the number of residual blocks and convolutional layers, ResNet can be built at different depths. The bottleneck residual block is a variant that employs 1 × 1 convolutions to create a bottleneck structure, as illustrated in Figure 1. When the input and output dimensions are consistent, the input can be added directly to the output. When the dimensions differ, the shortcut must be down-sampled, typically through pooling, which adds no parameters, with zero-padding used to match the channel dimensions.
During training, the convolutional layers within the bottleneck residual blocks learn the residual function, enabling the network to learn the identity mapping as a residual, thereby alleviating the optimization challenges associated with training deep networks.
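As a point of reference, the sketch below illustrates such a bottleneck residual block in PyTorch-style code. It is a generic, minimal illustration of the 1 × 1–3 × 3–1 × 1 structure with a residual shortcut, not the authors' PaddlePaddle implementation; the 1 × 1 projection shortcut shown here is one common way to match dimensions, alongside the parameter-free pooling/zero-padding option mentioned above.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Generic ResNet bottleneck block: 1x1 reduce -> 3x3 -> 1x1 expand, plus a shortcut."""
    expansion = 4

    def __init__(self, in_channels, mid_channels, stride=1):
        super().__init__()
        out_channels = mid_channels * self.expansion
        self.conv1 = nn.Conv2d(in_channels, mid_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_channels)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(mid_channels)
        self.conv3 = nn.Conv2d(mid_channels, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU()
        # When the spatial size or channel count changes, project the shortcut so it can be added.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))

    def forward(self, x):
        identity = self.shortcut(x)               # identity mapping (possibly projected)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))           # the residual branch learns the residual function
        return self.relu(out + identity)          # add the shortcut, then activate
```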

3.2. Swin–ResNet

We built the Swin–ResNet network on a ResNet50 backbone, with a structure similar to the traditional ResNet, as depicted in Figure 2. We replaced the conv 3 × 3 module in the Bottleneck with the Swin Transformer [49] (SwinT) module to enhance its feature extraction capability. The SwinT module was adapted from the original Swin Transformer to fit the ResNet network. The experimental results demonstrate that replacing the traditional convolutional module with the SwinT module in the improved Swin–ResNet increased the accuracy to 90% compared to the traditional ResNet.
The modified Bottleneck is shown in Figure 2; it maintains the same structure as the traditional ResNet Bottleneck, with only the second convolution replaced by the SwinT module. Whereas the traditional Swin Transformer flattens its input into a one-dimensional token sequence before training, the traditional ResNet operates on two-dimensional feature maps, which are convenient for convolution operations. To adapt to the ResNet training process, we applied Reshape operations inside the SwinT module to transform the data dimensions. Additionally, LayerNorm was shifted so that it follows Window Attention (WA), the Multi-Layer Perceptron (MLP), and Shifted Window Attention (SWA), and after the SwinT module’s processing, a Conv2D convolution converts the output back into a two-dimensional feature map.
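To make the data flow concrete, the sketch below shows one way such a SwinT-style block can stand in for the 3 × 3 convolution: the two-dimensional feature map is reshaped into non-overlapping windows of tokens, window attention and an MLP are applied with LayerNorm placed after each sub-block (as described above), and the tokens are reshaped back and projected with a convolution. This is a simplified PyTorch illustration, not the authors' PaddlePaddle code; the shifted-window branch and relative position bias are omitted, and the class name, window size, and head count are assumptions. It also assumes the feature-map height and width are divisible by the window size and the channel count by the number of heads.

```python
import torch.nn as nn

class SwinTStandIn(nn.Module):
    """Hypothetical stand-in for the 3x3 convolution in a Bottleneck:
    2D feature map -> windowed self-attention + MLP (post-LayerNorm) -> 2D feature map."""

    def __init__(self, channels, window_size=7, num_heads=4, mlp_ratio=4):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(channels)   # LayerNorm placed after attention, as in the text
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * mlp_ratio),
            nn.GELU(),
            nn.Linear(channels * mlp_ratio, channels))
        self.norm2 = nn.LayerNorm(channels)   # LayerNorm placed after the MLP
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)  # back to a convolutional feature map

    def forward(self, x):                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        ws = self.ws
        # Reshape the 2D feature map into non-overlapping ws x ws windows of tokens.
        t = x.reshape(B, C, H // ws, ws, W // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)        # (B*windows, ws*ws, C)
        t = self.norm1(t + self.attn(t, t, t, need_weights=False)[0])  # window attention + residual
        t = self.norm2(t + self.mlp(t))                                # MLP + residual
        # Reshape the token sequence back into a (B, C, H, W) feature map.
        t = t.reshape(B, H // ws, W // ws, ws, ws, C)
        t = t.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return self.proj(t)
```

In the Swin–ResNet described above, such a block occupies only the position of the second convolution in each Bottleneck, so the surrounding 1 × 1 convolutions and the residual shortcut remain unchanged.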

3.3. Evaluation Indicators

In this experiment, accuracy [50,51] (ACC) is used as the primary evaluation metric to assess the performance of the proposed model. Accuracy is computed by constructing a confusion matrix of size n × n, which records the predictions of a classifier with n categories. The confusion matrix provides detailed information about the performance of the classifier by comparing the predicted categories with the actual categories. It includes elements such as true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Accuracy is the ratio of correctly identified samples to the total number of samples and is calculated using the following formula.
$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%,$$
In addition, this experiment also utilized precision [52,53,54], recall [53,54,55,56], and computed the F1-score [57,58] as further evaluation metrics for model performance. Precision [52,53,54] measures the proportion of correctly predicted positive samples among all predicted positive samples. The calculation formula for precision is as follows, typically represented as a percentage, reflecting the accuracy of all positive predictions made by the classifier.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \times 100\%,$$
Recall [53,54,55,56] is used to measure the proportion of actual positive samples correctly identified as positive by the classifier. It evaluates the classifier’s ability to identify all positive samples. The calculation formula for recall is as follows, representing the proportion of true positive predictions among all actual positive samples. A high recall indicates that the classifier effectively captures the majority of positive samples in the dataset.
$$\mathrm{Recall} = \frac{TP}{TP + FN} \times 100\%,$$
The F1-score [57,58] combines precision and recall to provide a single evaluation of the effectiveness of the classifier. It offers a comprehensive assessment by balancing precision (the ability to accurately identify positive samples) and recall (the ability to identify all positive samples). The calculation formula for the F1-score is as follows, ranging from 0% to 100%. A high F1-score indicates a good balance between precision and recall. By considering both precision and recall, the F1-score provides a comprehensive evaluation of the overall performance of the classifier.
$$F1\text{-}score = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$
These metrics have different emphases in various application scenarios and can be used together to comprehensively evaluate and optimize model performance. Accurate magnetic predictions can guide researchers in selecting the most promising materials for further study, reducing the need for costly and time-consuming experimental synthesis and characterization. This can accelerate the application of high-performance new materials in advanced technologies such as electronics and spintronics. To mitigate overfitting, we substantially increased the amount of training data through data augmentation.
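As a concrete illustration, the snippet below computes these four metrics from predicted and true labels using the formulas above. It is a minimal NumPy sketch for a binary labeling, with the magnetic class treated as positive, rather than the evaluation code used in the experiments; a multi-class evaluation applies the same counting per class of the n × n confusion matrix.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """ACC, Precision, Recall, and F1 for a binary task (1 = magnetic, 0 = non-magnetic)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    acc = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return acc, precision, recall, f1

# Example: classification_metrics([1, 0, 1, 1], [1, 0, 0, 1]) -> (0.75, 1.0, 0.667, 0.8)
```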

3.4. Training Setup

The experiment trained the proposed Swin–ResNet model using the generated dataset. By comparing with the traditional ResNet model and its related improved models, the study investigated the performance of different models in predicting the magnetism of doped two-dimensional nanomaterials. The experimental results indicate that the Swin–ResNet model performs well on key performance indicators and is more suitable for predicting the magnetism of doped two-dimensional nanomaterials compared to other improved ResNet models.
Table 1 presents the software and hardware setup for this experiment, with a server configuration of CPU: 4 Cores, GPU: Tesla V100, and utilizing the Baidu PaddlePaddle [59] platform for deep learning.

4. Results

4.1. Datasets

The dataset for this experiment was obtained from the current publicly available Computational Materials Repository [60] (CMR), which provides the latest data on doped two-dimensional nanomaterials, as shown in Figure 3. Since the downloaded database does not include image data, this study utilized the image processing tool Jmol-16.1 [61,62] to process the corresponding structural data and generate images, thereby constructing an image database of doped two-dimensional nanomaterials for practical experimental applications. The image data of the doped two-dimensional nanomaterials are illustrated in Figure 4.
In the computational materials database, there are around 17,000 data samples of doped two-dimensional nanomaterials. For this experiment, we specifically selected the subset of data that consists of doped two-dimensional nanomaterials with arsenic (As) as the main dopant element. This subset was used for model training, validation, and testing. To enhance the extraction of useful information features, data augmentation techniques were applied to the experimental image data. In this experiment, data augmentation methods, including horizontal flipping, vertical flipping, mirror symmetry, affine transformation [63,64], rotation, Gaussian noise [65], contrast adjustment, scaling, and translation, were applied. These enhanced image data are illustrated in Figure 5. These techniques enhance the model’s generalization ability by generating richer training samples through various transformations of the original training data. Specifically, horizontal and vertical flips increase data diversity by altering image orientation, enabling the model to better adapt to objects from different directions. Mirror symmetry and affine transformations further enrich geometric features, helping the model maintain recognition accuracy under different perspectives and poses. Rotation transformations improve robustness by providing multi-angle image samples. Gaussian noise introduction ensures model stability against noise interference, while contrast adjustment enhances adaptability to varying lighting conditions. Scaling and image translation enhance generalization to changes in object size and position. These data augmentation methods effectively prevent overfitting, increase recognition capability in complex real-world scenarios, and provide a solid foundation for model training.
After augmentation, the resulting image dataset consists of a total of 12,760 images of doped two-dimensional nanomaterials, with a size of 500 × 500 pixels for each image. During the experiment, the image size was uniformly reduced to 224 × 224 pixels. Additionally, the dataset required manual annotation. In this experiment, the magnetic properties of different two-dimensional nanomaterials were identified based on their corresponding magnetic moments in the structural images. The magnetic moment data from a publicly available database were used for annotating the image dataset.
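For illustration, the augmentation and resizing steps described above can be expressed with standard image transforms. The sketch below is a hypothetical torchvision-based pipeline (the study itself was run on the PaddlePaddle platform, which provides analogous transforms); the flip probabilities, rotation range, scale factors, and noise level are assumed values, not parameters reported in the paper.

```python
import torch
from torchvision import transforms

def add_gaussian_noise(img, std=0.02):
    """Add mild Gaussian noise to a tensor image; the std value is assumed."""
    return (img + std * torch.randn_like(img)).clamp(0.0, 1.0)

# Assumed pipeline mirroring the augmentations listed in the text.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),            # 500x500 source images reduced to 224x224
    transforms.RandomHorizontalFlip(p=0.5),   # horizontal flip / mirror symmetry
    transforms.RandomVerticalFlip(p=0.5),     # vertical flip
    transforms.RandomAffine(degrees=15,             # rotation
                            translate=(0.1, 0.1),   # translation
                            scale=(0.9, 1.1)),      # scaling
    transforms.ColorJitter(contrast=0.2),     # contrast adjustment
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),    # Gaussian noise added to the tensor image
])
```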

4.2. Training Results and Analysis

In this experiment, the traditional ResNet model and its improved variants were selected for comparative training, including ResNet [47], DenseNet [66], Res2Net [67], ResNeXt [68], and the traditional Swin Transformer [49] model. The experimental parameter settings are as follows: 1000 training epochs; a learning rate of 0.01; optimization with classic Stochastic Gradient Descent (SGD), Momentum, and RMSProp; Cross Entropy Loss as the loss function; and a batch size of 64. The final training results are shown in Table 2.
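For orientation, the hyperparameters listed above correspond to a training configuration along the following lines. This is a hedged PyTorch-style sketch rather than the actual PaddlePaddle training script; the model constructor, class count, and the momentum coefficient of 0.9 are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

def build_optimizer(name, params, lr=0.01):
    """Return one of the three optimizers compared in the experiments."""
    if name == "SGD":
        return torch.optim.SGD(params, lr=lr)
    if name == "Momentum":
        return torch.optim.SGD(params, lr=lr, momentum=0.9)   # momentum coefficient assumed
    if name == "RMSProp":
        return torch.optim.RMSprop(params, lr=lr)
    raise ValueError(f"unknown optimizer: {name}")

# Settings reported in the text: 1000 epochs, learning rate 0.01, batch size 64, cross-entropy loss.
EPOCHS, LR, BATCH_SIZE = 1000, 0.01, 64
criterion = nn.CrossEntropyLoss()
# model = SwinResNet50(num_classes=NUM_CLASSES)      # hypothetical constructor and class count
# optimizer = build_optimizer("SGD", model.parameters(), LR)
# for epoch in range(EPOCHS):
#     for images, labels in train_loader:            # train_loader yields batches of 64 images
#         optimizer.zero_grad()
#         loss = criterion(model(images), labels)
#         loss.backward()
#         optimizer.step()
```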
The experimental results indicate that the proposed Swin–ResNet model shows a notable superiority in predicting the magnetism of doped two-dimensional nanomaterials. Specifically, when using the SGD optimizer, Swin–ResNet achieved an accuracy (Acc) of 0.90, significantly higher than the other models: the traditional ResNet model reached an Acc of 0.82, DenseNet and ResNeXt reached 0.87 and 0.85, respectively, while Res2Net and the Swin Transformer reached only 0.76 and approximately 0.41, respectively. Moreover, Swin–ResNet also demonstrated high levels of precision, recall, and F1-score, with values of 0.87, 0.88, and 0.85, respectively.
With the Momentum optimizer, Swin–ResNet achieved an Acc of 0.89, outperforming most similar models, with only DenseNet matching this accuracy, while other models hovered around 0.86. Using the RMSProp optimizer, Swin–ResNet recorded an Acc of 0.87932, slightly lower than with Momentum and SGD, but still higher than the other models. Among them, ResNeXt demonstrated a relatively high performance, while DenseNet only reached an accuracy of 0.83.
Additionally, the Swin Transformer model performed poorly in this experiment, with an Acc of approximately 0.4, significantly lower than other models. This suggests that, while the Swin Transformer may excel in other tasks, its performance in predicting the magnetism of doped two-dimensional nanomaterials is not satisfactory. Despite its excellent performance in many vision tasks, its performance in this specific task may be influenced by several factors including task characteristics, dataset scale and diversity, feature extraction capabilities, model complexity and optimization difficulty, and global sequential modeling capabilities.
Firstly, Swin Transformer was originally designed for high-resolution image and vision tasks, while predicting the magnetism of nanomaterials may rely more on specific physical and chemical properties that might not be intuitive in image data. Although Swin Transformer excels in large and diverse datasets, it may struggle to learn effective features from smaller or specialized datasets.
Secondly, Swin Transformer focuses on the hierarchical modeling of visual features. The features required for magnetic prediction tasks might be more complex and domain-specific, which the model might struggle to effectively extract and model, leading to poor performance.
The complexity and optimization difficulty of the model are also notable factors. Swin Transformer’s complex architecture and high computational complexity may pose challenges for optimization in specific domain tasks. Significant performance differences under different optimizers indicate greater difficulty in the optimization process, possibly requiring more tuning efforts.
Lastly, unlike models such as ResNet that use convolution operations to extract local features and leverage weight sharing and local correlations to capture the local structure of nanomaterials, Swin Transformer primarily relies on self-attention mechanisms. This lack of explicit local spatial awareness may hinder its ability to fully utilize local structural information, potentially impacting the performance of magnetic property prediction tasks.
The experiment also included plotting the Loss and Acc curves of the models discussed above over 1000 training epochs, as shown in Figure 6 and Figure 7: Figure 6 presents the Loss curves and Figure 7 the Acc curves, each plotted for the SGD, Momentum, and RMSProp optimizers.
Figure 6a indicates that, during training with the SGD optimizer, Swin–ResNet starts with a high loss of 2.3791 but shows significant improvement, reducing to 1.0875 after 1000 epochs, demonstrating good convergence and stability. Compared to other models, Swin–ResNet exhibits relatively stable loss variation in later stages, particularly between 300 and 900 epochs, where its loss remains low. Figure 6b shows that with the Momentum optimizer, the initial loss for Swin–ResNet is 2.3588, also displaying steady improvement throughout the training. The final loss of 1.3666 outperforms several models, with Swin–ResNet maintaining a smooth decline despite starting with a higher loss. Figure 6c illustrates that under the RMSProp optimizer, Swin–ResNet begins with a loss of 2.3792, similar to other models. However, it consistently maintains a robust performance in the later stages, ending with a loss of 1.1447. In contrast, models like Res2Net and ResNeXt experience fluctuations at certain epochs, while Swin–ResNet demonstrates consistency and robustness in complex feature learning. Figure 6 demonstrates that Swin–ResNet exhibits superior performance across all three optimizers. The gradual decline and stability of the loss at each epoch highlight the model’s advantages in feature learning and generalization. Compared to other models, Swin–ResNet not only excels in final loss value but also showcases its ability to handle complex data features, exhibiting strong adaptability and robustness throughout the training process.
Figure 7a illustrates that under the SGD optimizer, the Acc of Swin–ResNet shows a significant upward trend as the number of training epochs increases, reaching a peak of 0.9002. This indicates the stability and effectiveness of the model during the training process. In contrast, other models such as DenseNet, Res2Net, ResNet, ResNeXt, and Swin Transformer exhibit slower growth in accuracy under the SGD optimizer, and their accuracy remains lower than that of Swin–ResNet. Figure 7b shows that under the Momentum optimizer, Swin–ResNet’s accuracy also demonstrates stable growth, increasing from 0.3958 to 0.89456. Although DenseNet and ResNeXt occasionally achieve slightly higher accuracy than Swin–ResNet in certain epochs, Swin–ResNet’s overall performance remains superior to other models. Figure 7c shows that, under the RMSProp optimizer, Swin–ResNet’s accuracy increases from 0.3958 to 0.87932. Although the increase is not as remarkable as under the SGD and Momentum optimizers, it still demonstrates the effectiveness of the model under the RMSProp optimizer.
Overall, the Swin–ResNet model exhibits high levels of accuracy under different optimizers, demonstrating its effectiveness and superiority in predicting the magnetism of doped two-dimensional nanomaterials. While other models may occasionally achieve slightly higher accuracy than Swin–ResNet in certain epochs, Swin–ResNet’s overall performance remains better than most other models, particularly under the SGD and Momentum optimizers.

5. Conclusions

The study of magnetism in doped two-dimensional nanomaterials is a critical area, facing challenges such as high complexity, significant resource consumption, risks associated with manual operations, and low efficiency in traditional methods. To address these issues, this paper proposes a deep learning-based method for predicting the magnetism of doped two-dimensional nanomaterials, aiming to improve efficiency and reduce risks. The proposed approach utilizes ResNet50 as the backbone and constructs the Swin–ResNet network by replacing the 3 × 3 convolution in the Bottleneck of the traditional ResNet with the SwinT module, enhancing the feature extraction capability. The SwinT module is a modified version based on the Swin Transformer, tailored to the ResNet network. Through comparative studies with various deep learning models such as ResNet, Res2Net, ResNeXt, and Swin Transformer in predicting the magnetism of doped two-dimensional nanomaterials, the Swin–ResNet model demonstrates high accuracy under different optimizers, especially notable under the SGD and Momentum optimizers. The experimental results show that the improved Swin–ResNet achieves an accuracy of 90%, significantly outperforming other deep learning models. These results demonstrate the effectiveness and superiority of Swin–ResNet in predicting the magnetism of doped two-dimensional nanomaterials, providing an efficient and accurate tool for research in this field, with the potential to drive further development and practical applications.
Future research can build upon and expand the current study in several key areas. Enhancing the model’s scalability and adaptability to accommodate larger and more diverse datasets is essential. Extending the model to predict properties across various types of nanomaterials will significantly increase its applicability and relevance. Additionally, integrating multimodal data sources—such as structural, chemical, and physical data—will provide a more comprehensive perspective and improve prediction accuracy. Incorporating domain-specific knowledge into the learning process can further enhance the model’s interpretability and robustness. Practical applications and experimental validations will be crucial in bridging the gap between theoretical research and real-world implementation. Collaborations with experimental researchers and industry professionals will be vital in translating predictions into practical applications, thereby driving innovation and technological advancement in the field of nanomaterials. By exploring these directions, this study not only lays the groundwork for broader applications but also makes significant contributions to the understanding of nanomaterial magnetism.

Author Contributions

C.Z. and Y.Z.: writing—original draft, resources, methodology. Y.Z.: supervision, writing—review and editing, funding acquisition, methodology. C.Z. and J.Z.: software, data curation. G.L.: resources, writing—review and editing. F.L.: data curation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of the Jilin Provincial Department of Education, China, grant number JJKH20230918KJ.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and materials used to prepare this manuscript are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Choudhary, K.; DeCost, B.; Chen, C.; Jain, A.; Tavazza, F.; Cohn, R.; Park, C.W.; Choudhary, A.; Agrawal, A.; Billinge, S.J. Recent advances and applications of deep learning methods in materials science. npj Comput. Mater. 2022, 8, 59. [Google Scholar] [CrossRef]
  2. Liu, W.; Yu, Y.; Peng, M.; Zheng, Z.; Jian, P.; Wang, Y.; Zou, Y.; Zhao, Y.; Wang, F.; Wu, F.; et al. Integrating 2D layered materials with 3D bulk materials as van der Waals heterostructures for photodetections: Current status and perspectives. InfoMat 2023, 5, e12470. [Google Scholar] [CrossRef]
  3. Butler, K.T.; Davies, D.W.; Cartwright, H.; Isayev, O.; Walsh, A. Machine learning for molecular and materials science. Nature 2018, 559, 547–555. [Google Scholar] [CrossRef] [PubMed]
  4. Han, X.; Meng-Juan, M.; Yi-Lin, W. Recent development in two-dimensional magnetic materials and multi-field control of magnetism. Acta Phys. Sin. 2021, 70, 127503. [Google Scholar] [CrossRef]
  5. Jiang, X.-H.; Qin, S.-C.; Xing, Z.-Y.; Zou, X.-Y.; Deng, Y.-F.; Wang, W.; Wang, L. Study on physical properties and magnetism controlling of two-dimensional magnetic materials. Acta Phys. Sin. 2021, 70. [Google Scholar] [CrossRef]
  6. Pramanik, S.; Das, D.S. Future prospects and commercial viability of two-dimensional nanostructures for biomedical technology. In Two-Dimensional Nanostructures for Biomedical Technology; Elsevier: Amsterdam, The Netherlands, 2020; pp. 281–302. [Google Scholar] [CrossRef]
  7. Anirudh, S.; Krishnamurthy, S.; Kandasubramanian, B.; Kumar, P. Probing into atomically thin layered nano-materials protective coating for aerospace and strategic defence application—A review. J. Alloys Compd. 2023, 968, 172203. [Google Scholar]
  8. Chen, J.; Xu, W.; Wang, H.; Ren, X.; Zhan, F.; He, Q.; Wang, H.; Chen, L. Emerging two-dimensional nanostructured manganese-based materials for electrochemical energy storage: Recent advances, mechanisms, challenges, and prospects. J. Mater. Chem. A 2022, 10, 21197–21250. [Google Scholar] [CrossRef]
  9. Li, T.; Yin, W.; Gao, S.; Sun, Y.; Xu, P.; Wu, S.; Kong, H.; Yang, G.; Wei, G. The combination of two-dimensional nanomaterials with metal oxide nanoparticles for gas sensors: A review. Nanomaterials 2022, 12, 982. [Google Scholar] [CrossRef]
  10. Sangshekan, B.; Sahrai, M.; Asadpour, S.H.; Poursamad Bonab, J. Controllable atom-photon entanglement via quantum interference near plasmonic nanostructure. Sci. Rep. 2022, 12, 677. [Google Scholar] [CrossRef]
  11. Kapp, M.W.; Eckert, J.; Renk, O. Interface Engineering at the Nanoscale: Synthesis of Low-Energy Boundaries. Adv. Eng. Mater. 2024, 2400595. [Google Scholar] [CrossRef]
  12. Gao, Z.; Leng, C.; Zhao, H.; Wei, X.; Shi, H.; Xiao, Z. The electrical behaviors of grain boundaries in polycrystalline optoelectronic materials. Adv. Mater. 2024, 36, 2304855. [Google Scholar] [CrossRef] [PubMed]
  13. Xu, K.; Sheng, X.; Mathew, A.; Flores, E.; Wang, H.; Kulkarni, Y.; Zhang, X. Mechanical Behavior and Thermal Stability of Nanocrystalline Metallic Materials with Thick Grain Boundaries. JOM 2024, 76, 2914–2928. [Google Scholar] [CrossRef]
  14. Liebeton, J.; Söffker, D. Experimental analysis of the reflection behavior of ultrasonic waves at material boundaries. arXiv, 2023; arXiv:2402.03363. [Google Scholar]
  15. Bhatt, M.D.; Kim, H.; Kim, G. Various defects in graphene: A review. RSC Adv. 2022, 12, 21520–21547. [Google Scholar] [CrossRef] [PubMed]
  16. Pornprasit, C.; Tantithamthavorn, C.K. Deeplinedp: Towards a deep learning approach for line-level defect prediction. IEEE Trans. Softw. Eng. 2022, 49, 84–98. [Google Scholar] [CrossRef]
  17. Zheng, Y.; Fu, K.; Yu, Z.; Su, Y.; Han, R.; Liu, Q. Oxygen vacancies in a catalyst for VOCs oxidation: Synthesis, characterization, and catalytic effects. J. Mater. Chem. A 2022, 10, 14171–14186. [Google Scholar] [CrossRef]
  18. Yin, K.; Yan, Z.; Fang, N.; Yu, W.; Chu, Y.; Shu, S.; Xu, M. The synergistic effect of surface vacancies and heterojunctions for efficient photocatalysis: A review. Sep. Purif. Technol. 2023, 325, 124636. [Google Scholar] [CrossRef]
  19. Liebhaber, E.; Rütten, L.M.; Reecht, G.; Steiner, J.F.; Rohlf, S.; Rossnagel, K.; von Oppen, F.; Franke, K.J. Quantum spins and hybridization in artificially-constructed chains of magnetic adatoms on a superconductor. Nat. Commun. 2022, 13, 2160. [Google Scholar] [CrossRef]
  20. Friedrich, F.; Odobesko, A.; Bouaziz, J.; Lounis, S.; Bode, M. Evidence for spinarons in Co adatoms. Nat. Phys. 2024, 20, 28–33. [Google Scholar] [CrossRef]
  21. Pei, K. Recent advances in molecular doping of organic semiconductors. Surf. Interfaces 2022, 30, 101887. [Google Scholar] [CrossRef]
  22. Makarov, D.; Volkov, O.M.; Kákay, A.; Pylypovskyi, O.V.; Budinská, B.; Dobrovolskiy, O.V. New dimension in magnetism and superconductivity: 3D and curvilinear nanoarchitectures. Adv. Mater. 2022, 34, 2101758. [Google Scholar] [CrossRef]
  23. Chilton, N.F. Molecular magnetism. Annu. Rev. Mater. Res. 2022, 52, 79–101. [Google Scholar] [CrossRef]
  24. Du, Z.; Yang, S.; Li, S.; Lou, J.; Zhang, S.; Wang, S.; Li, B.; Gong, Y.; Song, L.; Zou, X.; et al. Conversion of non-van der Waals solids to 2D transition-metal chalcogenides. Nature 2020, 577, 492–496. [Google Scholar] [CrossRef] [PubMed]
  25. Hossain, M.; Qin, B.; Li, B.; Duan, X. Synthesis, characterization, properties and applications of two-dimensional magnetic materials. Nano Today 2022, 42, 101338. [Google Scholar] [CrossRef]
  26. Elahi, E.; Dastgeer, G.; Nazir, G.; Nisar, S.; Bashir, M.; Qureshi, H.A.; Kim, D.-k.; Aziz, J.; Aslam, M.; Hussain, K. A review on two-dimensional (2D) magnetic materials and their potential applications in spintronics and spin-caloritronic. Comput. Mater. Sci. 2022, 213, 111670. [Google Scholar] [CrossRef]
  27. Wang, S.; Khazaei, M.; Wang, J.; Hosono, H. Hypercoordinate two-dimensional transition-metal borides for spintronics and catalyst applications. J. Mater. Chem. C 2021, 9, 9212–9221. [Google Scholar] [CrossRef]
  28. Olivecrona, M.; Blaschke, T.; Engkvist, O.; Chen, H. Molecular de-novo design through deep reinforcement learning. J. Cheminformatics 2017, 9, 48. [Google Scholar] [CrossRef]
  29. Viatkin, D.; Garcia-Zapirain, B.; Méndez-Zorrilla, A.; Zakharov, M. Deep learning approach for prediction of critical temperature of superconductor materials described by chemical formulas. Front. Mater. 2021, 8, 714752. [Google Scholar] [CrossRef]
  30. Yu, C.-H.; Wu, C.-Y.; Buehler, M.J. Deep learning based design of porous graphene for enhanced mechanical resilience. Comput. Mater. Sci. 2022, 206, 111270. [Google Scholar] [CrossRef]
  31. You, J.; Liu, B.; Ying, Z.; Pande, V.; Leskovec, J. Graph convolutional policy network for goal-directed molecular graph generation. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
  32. Chen, P.; Chen, J.; Yan, H.; Mo, Q.; Xu, Z.; Liu, J.; Zhang, W.; Yang, Y.; Lu, Y. Leveraging large-scale computational database and deep learning for accurate prediction of material properties. arXiv, 2021; arXiv:2112.14429v1. [Google Scholar]
  33. Choudhary, K.; DeCost, B. Atomistic line graph neural network for improved materials property predictions. NPJ Comput. Mater. 2021, 7, 185. [Google Scholar] [CrossRef]
  34. Ryczko, K.; Mills, K.; Luchak, I.; Homenick, C.; Tamblyn, I. Convolutional neural networks for atomistic systems. Comput. Mater. Sci. 2018, 149, 134–142. [Google Scholar] [CrossRef]
  35. Xie, T.; Grossman, J.C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 2018, 120, 145301. [Google Scholar] [CrossRef] [PubMed]
  36. Na, G.S.; Jang, S.; Chang, H. Predicting thermoelectric properties from chemical formula with explicitly identifying dopant effects. NPJ Comput. Mater. 2021, 7, 106. [Google Scholar] [CrossRef]
  37. McNutt, A.T.; Francoeur, P.; Aggarwal, R.; Masuda, T.; Meli, R.; Ragoza, M.; Sunseri, J.; Koes, D.R. GNINA 1.0: Molecular docking with deep learning. J. Cheminform. 2021, 13, 43. [Google Scholar] [CrossRef] [PubMed]
  38. Merchant, A.; Batzner, S.; Schoenholz, S.S.; Aykol, M.; Cheon, G.; Cubuk, E.D. Scaling deep learning for materials discovery. Nature 2023, 624, 80–85. [Google Scholar] [CrossRef] [PubMed]
  39. Gao, Y.; Yu, Z.; Chen, W.; Yin, Q.; Wu, J.; Wang, W. Recognition of rock materials after high-temperature deterioration based on SEM images via deep learning. J. Mater. Res. Technol. 2023, 25, 273–284. [Google Scholar] [CrossRef]
  40. Esmaeili-Falak, M.; Benemaran, R.S. Ensemble deep learning-based models to predict the resilient modulus of modified base materials subjected to wet-dry cycles. Geomech. Eng. 2023, 32, 583–600. [Google Scholar]
  41. Khan, A.; Ghorbanian, V.; Lowther, D. Deep learning for magnetic field estimation. IEEE Trans. Magn. 2019, 55, 1–4. [Google Scholar] [CrossRef]
  42. Kwon, H.Y.; Yoon, H.; Lee, C.; Chen, G.; Liu, K.; Schmid, A.; Wu, Y.; Choi, J.; Won, C. Magnetic Hamiltonian parameter estimation using deep learning techniques. Sci. Adv. 2020, 6, eabb0872. [Google Scholar] [CrossRef]
  43. Demirpolat, A.B.; Baykara, M. Investigation and prediction of ethylene Glycol based ZnO nanofluidic heat transfer versus magnetic effect by deep learning. Therm. Sci. Eng. Prog. 2021, 25, 101034. [Google Scholar] [CrossRef]
  44. Pollok, S.; Bjørk, R.; Jørgensen, P.S. Inverse design of magnetic fields using deep learning. IEEE Trans. Magn. 2021, 57, 1–4. [Google Scholar] [CrossRef]
  45. Li, H.; Tang, Z.; Gong, X.; Zou, N.; Duan, W.; Xu, Y. Deep-learning electronic-structure calculation of magnetic superstructures. Nat. Comput. Sci. 2023, 3, 321–327. [Google Scholar] [CrossRef] [PubMed]
  46. Li, W.; Long, L.-C.; Liu, J.-Y.; Yang, Y. Classification of magnetic ground states and prediction of magnetic moments of inorganic magnetic materials based on machine learning. Acta Phys. Sin. 2022, 71, 060202. [Google Scholar] [CrossRef]
  47. Behar, N.; Shrivastava, M. ResNet50-Based Effective Model for Breast Cancer Classification Using Histopathology Images. CMES Comput. Model. Eng. Sci. 2022, 130, 823–839. [Google Scholar] [CrossRef]
  48. Islam, W.; Jones, M.; Faiz, R.; Sadeghipour, N.; Qiu, Y.; Zheng, B. Improving performance of breast lesion classification using a ResNet50 model optimized with a novel attention mechanism. Tomography 2022, 8, 2411–2425. [Google Scholar] [CrossRef]
  49. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
  50. Arefin, S.; Chowdhury, M.; Parvez, R.; Ahmed, T.; Abrar, A.S.; Sumaiya, F. Understanding APT detection using Machine learning algorithms: Is superior accuracy a thing? In Proceedings of the 2024 IEEE International Conference on Electro Information Technology (eIT), Eau Claire, WI, USA, 30 May–1 June 2024; pp. 532–537. [Google Scholar]
  51. Lee, C.-Y.; Hung, C.-H.; Le, T.-A. Intelligent fault diagnosis for BLDC with incorporating accuracy and false negative rate in feature selection optimization. IEEE Access 2022, 10, 69939–69949. [Google Scholar] [CrossRef]
  52. Peng, J.; Zhao, H.; Zhao, K.; Wang, Z.; Yao, L. CourtNet: Dynamically balance the precision and recall rates in infrared small target detection. Expert Syst. Appl. 2023, 233, 120996. [Google Scholar] [CrossRef]
  53. Miao, J.; Zhu, W. Precision–recall curve (PRC) classification trees. Evol. Intell. 2022, 15, 1545–1569. [Google Scholar] [CrossRef]
  54. Shang, H.; Langlois, J.-M.; Tsioutsiouliklis, K.; Kang, C. Precision/recall on imbalanced test data. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 25–27 April 2023; pp. 9879–9891. [Google Scholar]
  55. Lee, S.; Kim, S. Exploring Prime Number Classification: Achieving High Recall Rate and Rapid Convergence with Sparse Encoding. arXiv, 2024; arXiv:.03363. [Google Scholar]
  56. Hou, Z.; Tipton, E. Enhancing recall in automated record screening: A resampling algorithm. Res. Synth. Methods 2024, 15, 372–383. [Google Scholar] [CrossRef]
  57. Lam, K.F.Y. Confidence Intervals for the F1 Score: A Comparison of Four Methods. arXiv, 2023; arXiv:.14621. [Google Scholar]
  58. Tan, S.C.; Zhu, S. Binary search of the optimal cut-point value in ROC analysis using the F1 score. J. Phys. Conf. Ser. 2023, 2609, 012002. [Google Scholar] [CrossRef]
  59. Ma, Y.; Yu, D.; Wu, T.; Wang, H. PaddlePaddle: An Open-Source Deep Learning Platform from Industrial Practice. Front. Data Comput. 2019, 1, 105–115. [Google Scholar] [CrossRef]
  60. Davidsson, J.; Bertoldo, F.; Thygesen, K.S.; Armiento, R. Absorption versus adsorption: High-throughput computation of impurities in 2D materials. NPJ 2d Mater. Appl. 2023, 7, 26. [Google Scholar] [CrossRef]
  61. Polik, W.F.; Schmidt, J. WebMO: Web-based computational chemistry calculations in education and research. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2022, 12, e1554. [Google Scholar] [CrossRef]
  62. Rodríguez, F.C.; Dal Peraro, M.; Abriata, L.A. Online tools to easily build virtual molecular models for display in augmented and virtual reality on the web. J. Mol. Graph. Model. 2022, 114, 108164. [Google Scholar] [CrossRef] [PubMed]
  63. Ye, S.; Wang, H.; Tan, M.; Liu, F. Recurrent affine transformation for text-to-image synthesis. IEEE Trans. Multimed. 2023, 26, 462–473. [Google Scholar] [CrossRef]
  64. Xiong, Z.; Gao, Y.; Liu, F.; Sun, H. Affine transformation edited and refined deep neural network for quantitative susceptibility mapping. NeuroImage 2023, 267, 119842. [Google Scholar] [CrossRef] [PubMed]
  65. Khmag, A. Additive Gaussian noise removal based on generative adversarial network model and semi-soft thresholding approach. Multimed. Tools Appl. 2023, 82, 7757–7777. [Google Scholar] [CrossRef]
  66. Sanghvi, H.A.; Patel, R.H.; Agarwal, A.; Gupta, S.; Sawhney, V.; Pandya, A.S. A deep learning approach for classification of COVID and pneumonia using DenseNet-201. Int. J. Imaging Syst. Technol. 2023, 33, 18–38. [Google Scholar] [CrossRef]
  67. Chen, Y.; Zheng, Y.; Xu, Z.; Tang, T.; Tang, Z.; Chen, J.; Liu, Y. Cross-domain few-shot classification based on lightweight Res2Net and flexible GNN. Knowl.-Based Syst. 2022, 247, 108623. [Google Scholar] [CrossRef]
  68. He, Y.; Kang, X.; Yan, Q.; Li, E. ResNeXt+: Attention mechanisms based on ResNeXt for malware detection and classification. IEEE Trans. Inf. Forensics Secur. 2023, 19, 1142–1155. [Google Scholar] [CrossRef]
Figure 1. ResNet network structure. (a) ResNet architecture with 50 layers. (b) Bottleneck residual block incorporating three convolutional layers.
Figure 2. Swin–ResNet structure. (a) 50-layer Swin-ResNet, with darker regions indicating enhanced modules; (b) Bottleneck module with the SwinT module, replacing the 3 × 3 convolution with a 3 × 3 SwinT; (c) SwinT module adaptable to ResNet, with data processing at input and output, and modifications to the Swin Transformer Block structure to optimize Window Attention and Shift Window Attention for transformed data.
Figure 3. Computational Materials Repository [60].
Figure 4. Generated images of doped two-dimensional nanomaterials. Each image is named in three segments: the first segment denotes the host element, the second the dopant element, and the third the doping method. Duplicate names are distinguished by appending numerical identifiers.
Figure 5. Image data after data augmentation. Each image has multiple augmented versions, with the augmented names prefixed by the augmentation method. Duplicate names are distinguished by appending numerical identifiers.
Figure 6. Loss Curves of Different Models Under Various Optimizers. (a) Loss Curves of Models with SGD Optimizer; (b) Loss Curves of Models with Momentum Optimizer; (c) Loss Curves of Models with RMSProp Optimizer.
Figure 7. Accuracy Curves of Different Models Under Various Optimizers. (a) Accuracy Curves of Models with SGD Optimizer; (b) Accuracy Curves of Models with Momentum Optimizer; (c) Accuracy Curves of Models with RMSProp Optimizer.
Table 1. Software and Hardware Setup.
Software and Hardware | Version
Python | 3.9
PaddlePaddle | 2.2
GPU | Tesla V100
Video Mem | 32 GB
CPU | 4 Cores
Table 2. Magnetic Prediction Training Results Based on Doped Two-Dimensional Nanomaterials.
Model | Optimization | Loss | ACC | Precision | Recall | F1-Score
Swin–ResNet50 | Momentum | 1.16951 | 0.89456 | 0.7907 | 0.7972 | 0.7893
 | SGD | 1.08753 | 0.90015 | 0.875 | 0.8848 | 0.8552
 | RMSProp | 1.1447 | 0.87932 | 0.8741 | 0.8708 | 0.8578
DenseNet [66] | Momentum | 0.67332 | 0.89732 | 0.7273 | 0.7273 | 0.7273
 | SGD | 0.71928 | 0.86782 | 0.6909 | 0.6515 | 0.6706
 | RMSProp | 0.62382 | 0.83371 | 0.6458 | 0.6212 | 0.6333
ResNet50 [47] | Momentum | 0.81685 | 0.8683 | 0.6818 | 0.6212 | 0.6501
 | SGD | 0.58218 | 0.82366 | 0.5802 | 0.5606 | 0.5702
 | RMSProp | 1.06963 | 0.87277 | 0.6989 | 0.6515 | 0.6744
Res2Net [67] | Momentum | 0.57755 | 0.86161 | 0.6909 | 0.6515 | 0.6706
 | SGD | 0.70895 | 0.76228 | 0.2802 | 0.3333 | 0.3045
 | RMSProp | 0.54763 | 0.87946 | 0.6818 | 0.6818 | 0.6818
ResNeXt [68] | Momentum | 1.01325 | 0.87388 | 0.7045 | 0.6818 | 0.693
 | SGD | 1.03878 | 0.85156 | 0.6515 | 0.6515 | 0.6515
 | RMSProp | 0.87954 | 0.88951 | 0.7045 | 0.6818 | 0.693
Swin Transformer [49] | Momentum | 1.65508 | 0.40953 | 0.0505 | 0.0909 | 0.0649
 | SGD | 1.64814 | 0.40513 | 0.0505 | 0.0909 | 0.0649
 | RMSProp | 1.66113 | 0.40513 | 0.0505 | 0.0909 | 0.0649
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
