Figure 1.
Graphical outlines of various attention modules. (a) SE [3]; (b) ECA [4]; (c) SCA [13]; (d) CBAM [2]; (e) ECBAM [12]; (f) CA [5]; (g) TA [6]; (h) DAA [14].
Figure 2.
Graphical outline of the proposed LCAM.
Figure 3.
LCAM framework illustrated with its block structure. The three branches, arranged from top to bottom, are the Channel Attention Branch (CAB), the Vertical Attention Branch (VAB), and the Horizontal Attention Branch (HAB).
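For intuition, the sketch below shows one plausible PyTorch realization of a three-branch gate in the spirit of Figure 3: each branch pools the input along two axes, passes the resulting vector through a lightweight 1D convolution of kernel size k (the hyperparameter ablated in Tables 1 and 2), and emits a sigmoid attention map. The branch internals and the multiplicative fusion of the three gates are illustrative assumptions; the authoritative wiring is the one drawn in Figure 3.

```python
import torch
import torch.nn as nn

class LCAMSketch(nn.Module):
    """Hypothetical sketch of a three-branch attention gate (CAB/VAB/HAB).

    Each branch pools the feature map along two axes, applies a channel-wise
    1D convolution (kernel size k), and produces a sigmoid gate. This is a
    guess at the design, not the exact LCAM of Figure 3.
    """

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_cab = nn.Conv1d(1, 1, kernel_size, padding=pad, bias=False)
        self.conv_vab = nn.Conv1d(1, 1, kernel_size, padding=pad, bias=False)
        self.conv_hab = nn.Conv1d(1, 1, kernel_size, padding=pad, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # CAB: global average pool -> (N, 1, C) -> 1D conv across channels.
        cab = self.sigmoid(self.conv_cab(x.mean(dim=(2, 3)).unsqueeze(1))).view(n, c, 1, 1)
        # VAB: pool over channels and width -> (N, 1, H) -> 1D conv along height.
        vab = self.sigmoid(self.conv_vab(x.mean(dim=(1, 3)).unsqueeze(1))).view(n, 1, h, 1)
        # HAB: pool over channels and height -> (N, 1, W) -> 1D conv along width.
        hab = self.sigmoid(self.conv_hab(x.mean(dim=(1, 2)).unsqueeze(1))).view(n, 1, 1, w)
        # Apply the three gates to the input feature map.
        return x * cab * vab * hab
```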
Figure 4.
Flow chart of the experiments, based on the hill-climbing technique, conducted to tune and determine the optimal structure for LCAM.
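The hill climbing in Figure 4 amounts to greedily accepting any single design change that improves a validation score. A minimal sketch follows, where `configs`, `evaluate`, and `start` are placeholders for the paper's configuration space and evaluation pipeline; in the Figure 4 experiments the moves correspond to changing one structural choice at a time, e.g., kernel size (Tables 1 and 2), branch combination (Tables 3 and 4), and integration point (Tables 5 and 6).

```python
def hill_climb(configs, evaluate, start, max_steps=20):
    """Greedy hill climbing over module configurations.

    Hypothetical interfaces: `configs` maps a configuration name to its
    neighbours (configurations one design change away); `evaluate` returns
    a validation score such as average verification accuracy.
    """
    current = start
    best_score = evaluate(current)
    for _ in range(max_steps):
        # Score every configuration that differs by one design choice.
        scored = [(evaluate(n), n) for n in configs[current]]
        if not scored:
            break
        score, candidate = max(scored)
        if score <= best_score:
            break  # no neighbour improves: local optimum reached
        current, best_score = candidate, score
    return current, best_score
```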
Figure 5.
ROC curves of 1:1 verification for different kernel sizes. (a) IJB-B dataset; (b) IJB-C dataset.
Figure 6.
ROC curves for different combinations of branches. (a) IJB-B dataset; (b) IJB-C dataset.
Figure 7.
ECN building block with different integration strategies. (a) ECN block; (b) ECN block with LCAM integrated after the depthwise convolution and batch normalization operation; (c) ECN block with LCAM integrated after the first pointwise convolution and PReLU activation function; (d) ECN block with LCAM integrated at the end. Here, ‘DConv’ denotes depthwise convolution, while ‘1 × 1 Conv2D’ denotes pointwise convolution.
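The three integration strategies in Figure 7(b–d) differ only in where the attention module is applied inside the block. A schematic sketch follows, with placements labeled 'D1', 'P1', and 'end' to match the ConvFaceNeXt_D1, ConvFaceNeXt_P1, and ConvFaceNeXt_LCAM variants evaluated in Tables 5 and 6; the channel sizes, kernel sizes, and residual connection are illustrative assumptions rather than the exact ECN specification.

```python
import torch
import torch.nn as nn

class ECNBlockSketch(nn.Module):
    """Sketch of an ECN-style block with a pluggable attention module.

    `placement` selects the insertion point, mirroring Figure 7:
    'D1' after depthwise conv + BN, 'P1' after the first pointwise
    conv + PReLU, and 'end' at the block output (the LCAM default).
    """

    def __init__(self, channels: int, attention: nn.Module, placement: str = "end"):
        super().__init__()
        assert placement in {"D1", "P1", "end"}
        self.placement = placement
        self.attention = attention
        # Depthwise 3x3 convolution followed by batch normalization.
        self.dconv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
        )
        # First pointwise (1x1) convolution with PReLU activation.
        self.pconv1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.PReLU(channels),
        )
        # Second pointwise (1x1) convolution, linear.
        self.pconv2 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.dconv(x)
        if self.placement == "D1":
            out = self.attention(out)
        out = self.pconv1(out)
        if self.placement == "P1":
            out = self.attention(out)
        out = self.pconv2(out)
        if self.placement == "end":
            out = self.attention(out)
        return x + out  # assumed residual connection
```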
Figure 8.
ROC curves for different integration strategies. (a) IJB-B dataset; (b) IJB-C dataset.
Figure 9.
Two random face image pairs from the LFW [40] dataset. The original image pairs are shown in (a). The corresponding Grad-CAM visualizations are depicted for the baseline in (b) and for the other attention modules in (c–j), while the proposed attention module of this work is illustrated in (k); all are arranged column-wise from left to right. (a) Original face image pairs; (b) ConvFaceNeXt; (c) SE; (d) ECA; (e) CBAM; (f) ECBAM; (g) CA; (h) SCA; (i) TA; (j) DAA; (k) LCAM.
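The heatmaps in Figures 9–11 are produced with Grad-CAM. A minimal sketch of the standard recipe is given below; the hook-based capture and the embedding-norm surrogate objective are assumptions for an embedding network without class logits, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer):
    """Minimal Grad-CAM sketch for a face embedding network.

    Hooks capture the activations and gradients of `target_layer` (e.g.,
    the last convolutional block); the map is a ReLU of the
    gradient-weighted sum of channels.
    """
    store = {}
    h1 = target_layer.register_forward_hook(
        lambda m, inp, out: store.update(act=out))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gin, gout: store.update(grad=gout[0]))

    embedding = model(x)                   # (N, D) face embedding
    model.zero_grad()
    # Assumed surrogate objective: the embedding norm stands in for a
    # class score, since a verification model has no logits.
    embedding.norm(dim=1).sum().backward()
    h1.remove(); h2.remove()

    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * store["act"]).sum(dim=1))       # (N, H, W)
    cam = cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)
    return cam  # upsample to the input resolution for overlay
```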
Figure 10.
Two random face image pairs from the CPLFW [42] dataset. The original image pairs are shown in (a). The corresponding Grad-CAM visualizations are depicted for the baseline in (b) and for the other attention modules in (c–j), while the proposed attention module of this work is illustrated in (k); all are arranged column-wise from left to right. (a) Original face image pairs; (b) ConvFaceNeXt; (c) SE; (d) ECA; (e) CBAM; (f) ECBAM; (g) CA; (h) SCA; (i) TA; (j) DAA; (k) LCAM.
Figure 11.
Two random face image pairs from the CALFW [41] dataset. The original image pairs are shown in (a). The corresponding Grad-CAM visualizations are depicted for the baseline in (b) and for the other attention modules in (c–j), while the proposed attention module of this work is illustrated in (k); all are arranged column-wise from left to right. (a) Original face image pairs; (b) ConvFaceNeXt; (c) SE; (d) ECA; (e) CBAM; (f) ECBAM; (g) CA; (h) SCA; (i) TA; (j) DAA; (k) LCAM.
Table 1.
Performance results for different kernel sizes, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---
ConvFaceNeXt_LCAM | 1.05 | 406.56 | 99.20 | 93.47 | 86.40 | 98.90 | 89.49 | 93.70 | 90.78 | 93.13
ConvFaceNeXt_L5K | 1.05 | 406.60 | 99.23 | 93.73 | 86.97 | 99.01 | 89.61 | 93.65 | 90.04 | 93.18
ConvFaceNeXt_L7K | 1.05 | 406.65 | 99.15 | 93.67 | 86.33 | 99.11 | 89.06 | 93.65 | 89.80 | 92.97
ConvFaceNeXt_L9K | 1.05 | 406.69 | 99.27 | 93.50 | 85.98 | 98.90 | 88.94 | 93.48 | 90.60 | 92.95
Table 2.
Verification accuracy (TAR@FAR) for different kernel sizes on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---
ConvFaceNeXt_LCAM | 66.97 | 80.27 | 89.49 | 74.19 | 84.01 | 91.37
ConvFaceNeXt_L5K | 41.61 | 77.22 | 88.77 | 51.57 | 80.23 | 90.77
ConvFaceNeXt_L7K | 66.80 | 80.14 | 89.06 | 74.21 | 83.78 | 91.10
ConvFaceNeXt_L9K | 67.64 | 79.86 | 89.04 | 73.60 | 83.32 | 91.01
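TAR@FAR, the metric reported in Table 2 and the subsequent IJB tables, is the fraction of genuine (matched) pairs accepted when the decision threshold is set so that only a fraction FAR of impostor (non-matched) pairs is accepted. A minimal sketch, with `genuine` and `impostor` as placeholder score arrays for the IJB protocol:

```python
import numpy as np

def tar_at_far(genuine: np.ndarray, impostor: np.ndarray, far: float) -> float:
    """True accept rate at a fixed false accept rate (TAR@FAR)."""
    # Threshold at the (1 - FAR) quantile of impostor scores, so that a
    # fraction `far` of impostor pairs scores above it.
    threshold = np.quantile(impostor, 1.0 - far)
    return float(np.mean(genuine >= threshold))

# Example at FAR = 1e-4, one of the operating points in the IJB columns.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 10_000)    # synthetic matched-pair scores
impostor = rng.normal(0.1, 0.1, 100_000)  # synthetic non-matched scores
print(tar_at_far(genuine, impostor, 1e-4))
```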
Table 3.
Performance results for different combinations of branches, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---
ConvFaceNeXt | 1.05 | 404.57 | 99.10 | 93.32 | 85.45 | 98.87 | 87.40 | 92.95 | 88.92 | 92.29
ConvFaceNeXt_CAB | 1.05 | 405.53 | 99.13 | 93.55 | 86.18 | 98.87 | 89.63 | 93.05 | 89.46 | 92.84
ConvFaceNeXt_VAB | 1.05 | 405.54 | 99.20 | 93.37 | 86.02 | 98.93 | 89.61 | 92.92 | 90.00 | 92.86
ConvFaceNeXt_HAB | 1.05 | 405.54 | 99.10 | 93.15 | 85.68 | 99.01 | 88.30 | 93.35 | 89.94 | 92.65
ConvFaceNeXt_CAB+VAB | 1.05 | 406.06 | 99.05 | 93.00 | 86.33 | 98.84 | 88.91 | 93.02 | 89.74 | 92.70
ConvFaceNeXt_CAB+HAB | 1.05 | 406.06 | 99.22 | 93.28 | 86.40 | 98.96 | 89.31 | 93.37 | 90.46 | 93.00
ConvFaceNeXt_VAB+HAB | 1.05 | 405.58 | 99.10 | 93.58 | 85.47 | 98.99 | 87.91 | 93.28 | 90.10 | 92.63
ConvFaceNeXt_LCAM | 1.05 | 406.56 | 99.20 | 93.47 | 86.40 | 98.90 | 89.49 | 93.70 | 90.78 | 93.13
Table 4.
Verification accuracy (TAR@FAR) for different combinations of branches on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---
ConvFaceNeXt | 66.11 | 79.77 | 88.22 | 73.75 | 83.27 | 90.56
ConvFaceNeXt_CAB | 65.12 | 80.16 | 89.15 | 72.54 | 83.58 | 91.23
ConvFaceNeXt_VAB | 65.50 | 79.94 | 88.97 | 73.85 | 83.53 | 90.96
ConvFaceNeXt_HAB | 64.37 | 79.29 | 88.71 | 72.45 | 82.92 | 90.81
ConvFaceNeXt_CAB+VAB | 66.59 | 80.03 | 88.93 | 74.06 | 83.63 | 91.16
ConvFaceNeXt_CAB+HAB | 64.85 | 79.77 | 89.06 | 71.55 | 83.23 | 91.14
ConvFaceNeXt_VAB+HAB | 66.79 | 79.77 | 88.66 | 73.64 | 83.21 | 90.83
ConvFaceNeXt_LCAM | 66.97 | 80.27 | 89.49 | 74.19 | 84.01 | 91.37
Table 5.
Performance results for different integration strategies, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---
ConvFaceNeXt_D1 | 1.05 | 406.51 | 99.12 | 93.28 | 86.38 | 99.01 | 90.30 | 92.88 | 90.62 | 93.08
ConvFaceNeXt_P1 | 1.05 | 408.54 | 99.20 | 93.32 | 86.00 | 99.07 | 88.80 | 93.10 | 89.60 | 92.73
ConvFaceNeXt_LCAM | 1.05 | 406.56 | 99.20 | 93.47 | 86.40 | 98.90 | 89.49 | 93.70 | 90.78 | 93.13
Table 6.
Verification accuracy (TAR@FAR) for different integration strategies on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---
ConvFaceNeXt_D1 | 67.56 | 80.39 | 88.91 | 74.34 | 83.45 | 91.04
ConvFaceNeXt_P1 | 68.09 | 79.62 | 88.27 | 74.27 | 83.11 | 90.42
ConvFaceNeXt_LCAM | 66.97 | 80.27 | 89.49 | 74.19 | 84.01 | 91.37
Table 7.
Performance results of different attention modules plugged into ConvFaceNeXt, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Atten. Module | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---|---
ConvFaceNeXt | Baseline | 1.05 | 404.57 | 99.10 | 93.32 | 85.45 | 98.87 | 87.40 | 92.95 | 88.92 | 92.29
 | SE | 1.06 | 405.54 | 99.03 | 93.05 | 85.63 | 98.80 | 87.96 | 93.00 | 89.70 | 92.45
 | ECA | 1.05 | 405.52 | 99.05 | 93.48 | 86.33 | 98.83 | 88.86 | 93.32 | 89.78 | 92.81
 | CBAM | 1.07 | 407.61 | 99.07 | 93.43 | 86.48 | 99.06 | 88.20 | 93.57 | 89.34 | 92.74
 | ECBAM | 1.05 | 407.58 | 99.27 | 93.47 | 86.22 | 98.81 | 88.99 | 93.13 | 89.10 | 92.71
 | CA | 1.07 | 407.19 | 99.10 | 93.43 | 85.82 | 98.91 | 87.89 | 93.32 | 88.92 | 92.48
 | SCA | 1.05 | 407.39 | 99.25 | 93.50 | 86.33 | 99.03 | 88.87 | 93.77 | 90.48 | 93.03
 | TA | 1.05 | 419.15 | 99.23 | 93.52 | 86.25 | 99.01 | 88.60 | 93.73 | 90.02 | 92.91
 | DAA | 1.10 | 407.50 | 99.13 | 93.55 | 86.43 | 99.01 | 89.56 | 93.35 | 89.88 | 92.99
 | LCAM | 1.05 | 406.56 | 99.20 | 93.47 | 86.40 | 98.90 | 89.49 | 93.70 | 90.78 | 93.13
Table 8.
Verification accuracy (TAR@FAR) of ConvFaceNeXt with different attention modules on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | Atten. Module | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---|---
ConvFaceNeXt | Baseline | 66.11 | 79.77 | 88.22 | 73.75 | 83.27 | 90.56
 | SE | 65.57 | 79.95 | 88.63 | 73.52 | 83.44 | 90.76
 | ECA | 67.16 | 79.69 | 88.81 | 73.69 | 83.38 | 90.90
 | CBAM | 61.23 | 79.19 | 88.99 | 71.45 | 83.02 | 91.01
 | ECBAM | 61.71 | 79.20 | 88.81 | 71.15 | 82.63 | 90.74
 | CA | 64.44 | 79.90 | 88.57 | 73.68 | 83.21 | 90.80
 | SCA | 68.12 | 80.90 | 89.28 | 74.90 | 84.41 | 91.48
 | TA | 64.68 | 80.09 | 88.86 | 71.50 | 83.01 | 90.95
 | DAA | 66.64 | 79.86 | 88.84 | 73.48 | 83.33 | 91.06
 | LCAM | 66.97 | 80.27 | 89.49 | 74.19 | 84.01 | 91.37
Table 9.
Performance results of different attention modules plugged into MobileFaceNet, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Atten. Module | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---|---
MobileFaceNet | Baseline | 1.03 | 473.15 | 99.03 | 93.18 | 85.52 | 98.91 | 87.51 | 93.35 | 88.40 | 92.27
 | SE | 1.04 | 473.83 | 99.07 | 93.75 | 85.80 | 98.87 | 88.06 | 93.40 | 88.80 | 92.54
 | ECA | 1.03 | 473.82 | 99.15 | 93.48 | 86.38 | 99.17 | 89.61 | 93.50 | 90.26 | 93.08
 | CBAM | 1.04 | 475.33 | 99.15 | 93.48 | 86.42 | 98.96 | 89.37 | 93.13 | 90.08 | 92.94
 | ECBAM | 1.03 | 475.31 | 99.18 | 93.33 | 86.65 | 98.93 | 89.86 | 93.50 | 90.26 | 93.10
 | CA | 1.04 | 474.94 | 99.12 | 93.12 | 85.60 | 99.01 | 87.87 | 93.15 | 88.74 | 92.37
 | SCA | 1.03 | 475.14 | 99.20 | 93.42 | 85.87 | 99.21 | 88.29 | 93.80 | 89.46 | 92.75
 | TA | 1.03 | 482.97 | 99.23 | 93.90 | 86.63 | 99.06 | 90.94 | 93.68 | 90.30 | 93.39
 | DAA | 1.06 | 475.21 | 99.27 | 93.43 | 86.98 | 99.03 | 90.51 | 93.88 | 90.40 | 93.36
 | LCAM | 1.03 | 474.56 | 99.18 | 93.38 | 87.10 | 99.10 | 90.83 | 93.42 | 90.94 | 93.42
Table 10.
Verification accuracy (TAR@FAR) of MobileFaceNet with different attention modules on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | Atten. Module | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---|---
MobileFaceNet | Baseline | 38.53 | 73.92 | 87.74 | 55.10 | 78.17 | 89.79
 | SE | 40.06 | 74.27 | 87.58 | 53.46 | 78.38 | 89.65
 | ECA | 64.91 | 79.97 | 88.97 | 73.73 | 83.96 | 91.14
 | CBAM | 37.48 | 75.47 | 88.60 | 60.21 | 79.97 | 90.51
 | ECBAM | 59.01 | 79.45 | 89.19 | 71.12 | 83.03 | 91.02
 | CA | 19.10 | 63.85 | 86.40 | 28.79 | 67.03 | 87.92
 | SCA | 38.82 | 72.73 | 87.87 | 48.68 | 76.74 | 89.86
 | TA | 53.45 | 78.08 | 89.04 | 62.64 | 81.23 | 90.78
 | DAA | 60.66 | 79.86 | 89.52 | 71.88 | 83.75 | 91.48
 | LCAM | 66.78 | 80.83 | 89.26 | 73.66 | 83.79 | 91.14
Table 11.
Performance results of different attention modules plugged into ProxylessFaceNAS, reported in terms of parameters, FLOPs, and verification accuracy on LFW, CALFW, CPLFW, CFP-FF, CFP-FP, AgeDB-30, and VGG2-FP. The average accuracy over the seven image-based datasets is shown in the last column. (Bold values indicate the highest value in each column.)
Model | Atten. Module | Param. (M) | FLOPs (M) | LFW | CALFW | CPLFW | CFP-FF | CFP-FP | AgeDB-30 | VGG2-FP | Average
---|---|---|---|---|---|---|---|---|---|---|---
ProxylessFaceNAS | Baseline | 3.01 | 873.95 | 98.82 | 92.63 | 84.32 | 98.76 | 86.13 | 92.23 | 87.66 | 91.51
 | SE | 3.03 | 875.03 | 98.97 | 92.98 | 84.68 | 98.84 | 87.46 | 92.92 | 88.18 | 92.00
 | ECA | 3.01 | 875.01 | 98.98 | 93.07 | 85.28 | 99.03 | 88.21 | 92.57 | 88.68 | 92.26
 | CBAM | 3.03 | 878.60 | 98.97 | 92.75 | 84.78 | 98.79 | 86.73 | 93.03 | 87.58 | 91.80
 | ECBAM | 3.02 | 878.56 | 99.15 | 92.95 | 85.50 | 98.83 | 88.84 | 92.67 | 88.64 | 92.37
 | CA | 3.03 | 876.51 | 98.85 | 92.52 | 84.75 | 98.66 | 86.23 | 92.42 | 87.34 | 91.54
 | SCA | 3.01 | 877.10 | 99.10 | 93.22 | 84.85 | 99.03 | 86.47 | 92.90 | 89.18 | 92.11
 | TA | 3.02 | 887.77 | 99.12 | 93.23 | 85.75 | 98.79 | 88.66 | 92.75 | 89.64 | 92.56
 | DAA | 3.06 | 877.21 | 98.92 | 92.85 | 85.17 | 98.83 | 87.36 | 92.30 | 88.84 | 92.04
 | LCAM | 3.02 | 876.23 | 98.93 | 93.23 | 85.65 | 98.91 | 87.71 | 93.22 | 88.96 | 92.37
Table 12.
Verification accuracy (TAR@FAR) of ProxylessFaceNAS with different attention modules on the IJB-B and IJB-C datasets. (Bold values indicate the highest value in each column.)
Model | Atten. Module | IJB-B 10⁻⁵ | IJB-B 10⁻⁴ | IJB-B 10⁻³ | IJB-C 10⁻⁵ | IJB-C 10⁻⁴ | IJB-C 10⁻³
---|---|---|---|---|---|---|---
ProxylessFaceNAS | Baseline | 53.33 | 75.79 | 86.87 | 63.18 | 78.66 | 88.90
 | SE | 44.14 | 74.34 | 87.01 | 57.90 | 77.85 | 88.91
 | ECA | 55.55 | 77.23 | 88.19 | 68.18 | 80.96 | 90.06
 | CBAM | 53.89 | 75.92 | 87.35 | 65.13 | 79.72 | 89.56
 | ECBAM | 63.02 | 78.29 | 88.10 | 69.77 | 81.70 | 90.23
 | CA | 54.74 | 75.23 | 87.02 | 66.41 | 79.47 | 89.13
 | SCA | 21.12 | 64.65 | 86.48 | 30.38 | 66.55 | 87.86
 | TA | 51.33 | 76.27 | 87.65 | 62.94 | 79.44 | 89.74
 | DAA | 40.51 | 74.34 | 87.33 | 52.39 | 77.17 | 89.17
 | LCAM | 60.03 | 78.52 | 88.78 | 69.77 | 81.79 | 90.72