Author Contributions
Conceptualization, K.J., M.M., D.T., W.K., M.Z. and M.W.; methodology, K.J., M.M., D.T. and M.Z.; software, K.J.; validation, M.M., D.T. and W.K.; formal analysis, D.T., W.K., M.Z. and M.W.; investigation, K.J., M.M., D.T., W.K. and M.Z.; resources, M.W.; data curation, K.J., M.M. and M.W.; writing—original draft, K.J., M.M., D.T. and W.K.; writing—review and editing, K.J., M.M., D.T., W.K. and M.Z.; visualization, K.J.; supervision, K.J. and M.W.; project administration, K.J. and M.W.; funding acquisition, M.W. All authors have read and agreed to the published version of the manuscript.
Figure 1.
A schematic diagram of the data fusion workflow in a semantic segmentation task.
Figure 2.
Locations of mining areas used for dataset preparation.
Figure 3.
Example annotation of mining area land cover classes used for machine learning model training (Aquablanca Mine, Spain).
Figure 4.
An illustration of an encoder–decoder network for semantic segmentation using multidimensional satellite images.
Figure 5.
Comparison of Model 1’s prediction of the excavation in the Agarak Mine, Karchevan, Armenia, against the provided ground truth mask.
Figure 6.
Comparison of Model 1’s prediction of the excavation in the Cordero Rojo Mine, Wyoming, USA, against the provided ground truth mask.
Figure 7.
Comparison of Model 1’s prediction of the preparatory work area in the Cordero Rojo Mine, Wyoming, USA, against the provided ground truth mask.
Figure 8.
Comparison of Model 1’s prediction of the excavation in the Tunstead Quarry, Buxton, UK, against the provided ground truth mask.
Figure 9.
Comparison of Model 1’s prediction of the excavation in the Noranda Mine, Jamaica, against the provided ground truth mask.
Figure 10.
Comparison of Model 2’s prediction of the dumping ground in the Rudnichny Mine, Kemerovo Oblast, Russia, against the provided ground truth mask.
Figure 11.
Comparison of Model 2’s prediction of the dumping ground in the Novotroitsk Quarries, Donetsk Oblast, Ukraine, against the provided ground truth mask.
Figure 12.
Comparison of Model 2’s prediction of the dumping ground in the Kostolac Mine, Braničevo, Serbia, against the provided ground truth mask.
Figure 13.
Comparison of Model 2’s prediction of the dumping ground in the location of the Kovdor Mine, Murmansk Oblast, Russia, against the provided ground truth mask.
Figure 14.
Comparison of Model 3’s prediction of infrastructure, TSF, and dam in the location of the Buenavista Mine, Cananea, Mexico, against the provided ground truth mask.
Figure 15.
Comparison of Model 3’s prediction of the infrastructure and TSF in the Globe–Miami Mining District, Arizona, USA, against the provided ground truth mask.
Figure 16.
Comparison of Model 3’s prediction of the infrastructure, TSF, and dam in the Bor Mine, Bor, Serbia, against the provided ground truth mask.
Figure 17.
Example predictions by Model 1, Model 2, and Model 3 of mining area components within the Bengala Coal Mine (Australia) for 2019 (a) and 2021 (b).
Figure 18.
Example predictions by Model 4 of the total transformed area within the Bengala Coal Mine (Australia) for 2019 (a) and 2021 (b).
Table 1.
Descriptions of the labels used for the annotation of mining area land cover classes.
Label | Authors' Description |
---|---|
excavation | A pit or other void created by excavation works. Sometimes includes an internal dump or other infrastructure. Water surfaces of flooded final excavations are considered reclaimed and remain unlabeled; the same applies to parts of old excavations covered with vegetation. |
dumping ground | A site where waste/overburden materials are disposed of. Both solid and slurry wastes are considered in this class, and both internal and external dumps are labeled. Sometimes includes a whole TSF. Already revegetated/redeveloped parts of dumps remain unlabeled. |
stockpile | A storage area for useful mineral. Depending on the type of extracted material, it may take on different forms and be located both inside and outside the excavation. |
settling pond | Relatively small ponds designed for settling waste material out of process water. Characteristic of some types of minerals. |
infrastructure | All infrastructure in the mining area, especially processing plants and the facilities associated with them. |
preparatory work area | The area on the overburden, in the foreground of the exploitation front, where access works such as deforestation or topsoil removal are visible. |
exploitation slope | A slope associated with the progress of exploitation. Distinguishable only for some types of surface mines. Can be associated with both useful mineral and overburden extraction. Always within the excavation label. |
transportation slope | A slope associated with material haulage. Distinguishable only for some types of surface mines. Always within the excavation label. |
dam | Dams in mining areas, including earthen dams on larger waste ponds. |
tailing storage facility (TSF) | The slurry/water component of a structure consisting of one or several embankments built to store waste materials from mineral processing, in the form of a slurry containing finely ground rock particles, water, and chemicals. Typically associated with metal ores. |
Table 2.
A comparative analysis of dataset size using splits based on the type of extracted raw material, including coal and lignite (CL), metallic ores (MO), and rock raw materials (RR), as well as splits based on surface mine types, such as opencast (OC), open-pit (OP), and mountain-top (MT).
Split | Model 1 Test | Model 1 Val | Model 1 Train | Model 2 Test | Model 2 Val | Model 2 Train | Model 3 Test | Model 3 Val | Model 3 Train | Model 4 Test | Model 4 Val | Model 4 Train |
---|---|---|---|---|---|---|---|---|---|---|---|---|
All | 38 | 41 | 1095 | 39 | 35 | 1019 | 39 | 37 | 1020 | 39 | 41 | 1127 |
RR | 21 | 21 | 588 | 19 | 17 | 502 | 33 | 33 | 378 | 21 | 22 | 595 |
MO | 11 | 12 | 308 | 11 | 12 | 320 | 12 | 12 | 293 | 11 | 12 | 320 |
CL | 7 | 7 | 167 | 7 | 8 | 175 | 6 | 8 | 166 | 7 | 8 | 175 |
MT | 15 | 16 | 172 | 13 | 13 | 146 | 13 | 13 | 151 | 15 | 16 | 172 |
OC | 7 | 6 | 162 | 7 | 5 | 162 | 5 | 5 | 156 | 7 | 7 | 168 |
OP | 24 | 23 | 639 | 22 | 22 | 594 | 22 | 22 | 610 | 24 | 24 | 638 |
Table 3.
A comparative analysis of selected metric values for each model, trained and tested using different band configurations: RGB (red, green, and blue channels), R+N (red, green, blue, and near-infrared channels), R+N+S (red, green, blue, near-infrared, and Sentinel-1 radar channels), M (all twelve Sentinel-2 multispectral channels), and M+S (all twelve Sentinel-2 multispectral and Sentinel-1 radar channels).
Bands | Model 1 IoU | Model 1 Rec | Model 1 Prec | Model 1 Acc | Model 2 IoU | Model 2 Rec | Model 2 Prec | Model 2 Acc | Model 3 IoU | Model 3 Rec | Model 3 Prec | Model 3 Acc | Model 4 IoU | Model 4 Rec | Model 4 Prec | Model 4 Acc |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RGB | 0.694 | 0.574 | 0.604 | 0.977 | 0.644 | 0.694 | 0.744 | 0.968 | 0.720 | 0.311 | 0.489 | 0.987 | 0.848 | 0.910 | 0.899 | 0.978 |
R+N | 0.657 | 0.556 | 0.567 | 0.977 | 0.643 | 0.697 | 0.735 | 0.966 | 0.703 | 0.316 | 0.452 | 0.988 | 0.843 | 0.928 | 0.900 | 0.963 |
R+N+S | 0.699 | 0.580 | 0.554 | 0.972 | 0.625 | 0.675 | 0.743 | 0.964 | 0.716 | 0.271 | 0.461 | 0.987 | 0.786 | 0.931 | 0.840 | 0.945 |
M | 0.710 | 0.579 | 0.564 | 0.972 | 0.636 | 0.685 | 0.787 | 0.966 | 0.718 | 0.272 | 0.467 | 0.985 | 0.807 | 0.938 | 0.852 | 0.958 |
M+S | 0.648 | 0.579 | 0.565 | 0.970 | 0.631 | 0.695 | 0.740 | 0.963 | 0.726 | 0.328 | 0.561 | 0.986 | 0.828 | 0.926 | 0.887 | 0.964 |
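The per-class metrics reported in Tables 3–4 and 6–8 (IoU, recall, precision, and accuracy) follow their standard confusion-matrix definitions over binary masks. The function below is a minimal sketch of those definitions, not the authors' evaluation code:

```python
import numpy as np

def binary_metrics(pred: np.ndarray, truth: np.ndarray):
    """IoU, recall, precision, and accuracy for a single class,
    given a predicted mask and a ground-truth mask (any truthy dtype)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # pixels correctly labeled as the class
    fp = np.sum(pred & ~truth)   # pixels wrongly labeled as the class
    fn = np.sum(~pred & truth)   # class pixels the model missed
    tn = np.sum(~pred & ~truth)  # correctly labeled background
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    acc = (tp + tn) / (tp + tn + fp + fn)
    return iou, rec, prec, acc
```

Note that overall accuracy includes true negatives (background), which is why the Acc columns stay high even where the per-class IoU is low.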
Table 4.
A comparative analysis of selected metric values for each model trained and tested on splits based on the type of extracted raw material, including coal and lignite (CL), metallic ores (MO), and rock raw materials (RR), as well as splits based on surface mine types, such as opencast (OC), open-pit (OP), and mountain-top (MT). Additionally, the table shows the results for general models, trained on the combined data from splits based on the type of extracted raw material (CL+MO+RR), and tested on rock raw materials (GRR), coal and lignite (GCL), metallic ores (GMO), as well as splits based on surface mine types (OC+OP+MT), and tested on opencast (GOC), open-pit (GOP), and mountain-top (GMT).
Split | Model 1 IoU | Model 1 Rec | Model 1 Prec | Model 1 Acc | Model 2 IoU | Model 2 Rec | Model 2 Prec | Model 2 Acc | Model 3 IoU | Model 3 Rec | Model 3 Prec | Model 3 Acc | Model 4 IoU | Model 4 Rec | Model 4 Prec | Model 4 Acc |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Type of extracted raw material |
RR | 0.746 | 0.668 | 0.608 | 0.973 | 0.558 | 0.593 | 0.593 | 0.976 | 0.728 | 0.318 | 0.417 | 0.984 | 0.851 | 0.932 | 0.903 | 0.978 |
GRR | 0.630 | 0.614 | 0.711 | 0.970 | 0.578 | 0.711 | 0.636 | 0.963 | 0.732 | 0.347 | 0.374 | 0.984 | 0.858 | 0.945 | 0.899 | 0.979 |
MO | 0.666 | 0.489 | 0.532 | 0.976 | 0.697 | 0.738 | 0.879 | 0.956 | 0.710 | 0.445 | 0.621 | 0.985 | 0.814 | 0.937 | 0.866 | 0.935 |
GMO | 0.752 | 0.842 | 0.830 | 0.985 | 0.755 | 0.837 | 0.870 | 0.963 | 0.723 | 0.561 | 0.610 | 0.985 | 0.823 | 0.940 | 0.872 | 0.941 |
CL | 0.604 | 0.509 | 0.553 | 0.971 | 0.790 | 0.874 | 0.869 | 0.966 | 0.689 | 0.348 | 0.647 | 0.991 | 0.820 | 0.931 | 0.866 | 0.961 |
GCL | 0.634 | 0.628 | 0.684 | 0.968 | 0.804 | 0.871 | 0.902 | 0.973 | 0.725 | 0.533 | 0.638 | 0.992 | 0.821 | 0.953 | 0.852 | 0.959 |
Mean | 0.672 | 0.555 | 0.564 | 0.973 | 0.682 | 0.735 | 0.780 | 0.966 | 0.709 | 0.370 | 0.562 | 0.987 | 0.828 | 0.933 | 0.878 | 0.958 |
Mean (G) | 0.672 | 0.695 | 0.742 | 0.974 | 0.712 | 0.806 | 0.803 | 0.966 | 0.727 | 0.480 | 0.541 | 0.987 | 0.834 | 0.946 | 0.874 | 0.960 |
Surface mine type |
MT | 0.707 | 0.537 | 0.566 | 0.965 | 0.547 | 0.618 | 0.642 | 0.979 | 0.709 | 0.622 | 0.695 | 0.983 | 0.828 | 0.942 | 0.870 | 0.976 |
GMT | 0.811 | 0.912 | 0.827 | 0.975 | 0.605 | 0.646 | 0.638 | 0.989 | 0.729 | 0.423 | 0.568 | 0.986 | 0.818 | 0.942 | 0.858 | 0.972 |
OC | 0.529 | 0.498 | 0.489 | 0.963 | 0.635 | 0.670 | 0.865 | 0.964 | 0.643 | 0.288 | 0.412 | 0.983 | 0.756 | 0.928 | 0.814 | 0.945 |
GOC | 0.605 | 0.561 | 0.535 | 0.956 | 0.675 | 0.751 | 0.805 | 0.964 | 0.777 | 0.459 | 0.695 | 0.993 | 0.796 | 0.931 | 0.844 | 0.952 |
OP | 0.739 | 0.563 | 0.631 | 0.979 | 0.640 | 0.711 | 0.775 | 0.961 | 0.746 | 0.353 | 0.439 | 0.990 | 0.816 | 0.914 | 0.881 | 0.946 |
GOP | 0.711 | 0.564 | 0.549 | 0.971 | 0.647 | 0.715 | 0.754 | 0.963 | 0.756 | 0.557 | 0.597 | 0.985 | 0.838 | 0.922 | 0.896 | 0.963 |
Mean | 0.658 | 0.533 | 0.562 | 0.969 | 0.607 | 0.666 | 0.761 | 0.968 | 0.699 | 0.421 | 0.515 | 0.985 | 0.800 | 0.928 | 0.855 | 0.956 |
Mean (G) | 0.709 | 0.679 | 0.637 | 0.967 | 0.642 | 0.704 | 0.732 | 0.972 | 0.754 | 0.480 | 0.620 | 0.988 | 0.817 | 0.932 | 0.866 | 0.962 |
Table 5.
The optimal hyperparameters, including learning rate (lr), batch size (bs), number of epochs (epc), input image size (is), optimizer (opt), backbone (bcb), architecture (arc), and loss for comparing models with the optimal band configuration using Intersection over Union values.
Hyperparameter | Model 1 | Model 2 | Model 3 | Model 4 |
---|---|---|---|---|
learning rate | | | | |
batch size | 4 | 16 | 4 | 16 |
epochs | 200 | 200 | 200 | 200 |
image size | 128 | 512 | 128 | 512 |
optimizer | adam | adamw | adamw | adam |
backbone | vgg16 | resnet50 | resnet50 | vgg16 |
architecture | linknet | Unet++ | fpn | linknet |
loss | focal | focal | crossentropy | crossentropy |
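For readability, the settings in Table 5 can be restated as plain configuration dictionaries. This is an illustrative restatement only, under the assumption that the four columns correspond to Models 1–4; the learning-rate values were not recoverable from the extracted text and are left as `None`:

```python
# Illustrative restatement of Table 5 (not the authors' training code).
# Learning rates are missing from the extracted source and left as None.
OPTIMAL_HYPERPARAMS = {
    "model_1": dict(lr=None, batch_size=4, epochs=200, image_size=128,
                    optimizer="adam", backbone="vgg16",
                    architecture="linknet", loss="focal"),
    "model_2": dict(lr=None, batch_size=16, epochs=200, image_size=512,
                    optimizer="adamw", backbone="resnet50",
                    architecture="unet++", loss="focal"),
    "model_3": dict(lr=None, batch_size=4, epochs=200, image_size=128,
                    optimizer="adamw", backbone="resnet50",
                    architecture="fpn", loss="crossentropy"),
    "model_4": dict(lr=None, batch_size=16, epochs=200, image_size=512,
                    optimizer="adam", backbone="vgg16",
                    architecture="linknet", loss="crossentropy"),
}
```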
Table 6.
The selected metric values for chosen classes in Model 1, trained on the entire dataset, evaluated on 5 specific locations from the test set.
Location | Label | IoU | Rec | Prec | Acc |
---|---|---|---|---|---|
Cordero Rojo | Excav | 0.616 | 0.864 | 0.681 | 0.616 |
Cordero Rojo | Prep. area | 0.219 | 0.249 | 0.641 | 0.219 |
Agarak | Excav | 0.674 | 0.920 | 0.716 | 0.674 |
Tunstead | Excav | 0.493 | 0.522 | 0.899 | 0.493 |
Cerovo | Excav | 0.019 | 0.589 | 0.020 | 0.020 |
Noranda | Excav | 0.231 | 0.606 | 0.272 | 0.231 |
Table 7.
The selected metric values for chosen classes in Model 2, trained on the entire dataset, evaluated on 4 specific locations from the test set.
Location | Label | IoU | Rec | Prec | Acc |
---|---|---|---|---|---|
Novotroitsk | Dump | 0.337 | 0.425 | 0.621 | 0.337 |
Kostolac | Dump | 0.436 | 0.471 | 0.856 | 0.436 |
Kovdor | Dump | 0.507 | 0.775 | 0.595 | 0.507 |
Rudnichny | Dump | 0.712 | 0.816 | 0.848 | 0.712 |
Table 8.
The selected metric values for chosen classes in Model 3, trained on the entire dataset, evaluated on 3 specific locations from the test set.
Location | Label | IoU | Rec | Prec | Acc |
---|---|---|---|---|---|
Globe–Miami | Infra | 0.464 | 0.549 | 0.750 | 0.464 |
Globe–Miami | TSF | 0.731 | 0.987 | 0.739 | 0.731 |
Bor Mine | Infra | 0.513 | 0.586 | 0.805 | 0.513 |
Bor Mine | TSF | 0.651 | 0.861 | 0.727 | 0.651 |
Bor Mine | Dam | 0.464 | 0.464 | 0.999 | 0.464 |
Buenavista | Infra | 0.509 | 0.589 | 0.787 | 0.509 |
Buenavista | TSF | 0.610 | 0.659 | 0.896 | 0.610 |
Buenavista | Dam | 0.012 | 0.019 | 0.029 | 0.012 |