A Practical Image Augmentation Method for Construction Safety Using Object Range Expansion Synthesis
Abstract
1. Introduction
- Realistic data generation: objects and backgrounds are separated, then augmented and synthesized while realistic object placement and size are maintained (a minimal sketch of this idea follows this list).
- Consideration of complex background elements: the augmentation process takes into account the presence of frequently occurring linear structures in construction environments, such as scaffolding.
- High applicability in construction automation: the generated dataset has strong potential for use in construction robots, automated inspection systems, and AI-based construction monitoring.
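To make the separation-and-recomposition idea concrete before the full method in Section 3, here is a minimal sketch. It is not the authors' implementation: OpenCV, the TELEA inpainting call, and the binary-mask convention are assumptions for illustration.

```python
# Minimal sketch of separation-and-recomposition (assumes OpenCV; not the authors' code).
import cv2
import numpy as np

def separate_object(image: np.ndarray, mask: np.ndarray):
    """Split a labeled image into an object cut-out and an inpainted background.

    `mask` is a uint8 binary mask (255 = object) derived from an existing polygon label.
    """
    obj = cv2.bitwise_and(image, image, mask=mask)      # keep object pixels only
    background = cv2.inpaint(image, mask, inpaintRadius=5,
                             flags=cv2.INPAINT_TELEA)   # fill the hole left by the object
    return obj, background

def recompose(background: np.ndarray, obj: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Paste an (optionally augmented) object back at its original position."""
    out = background.copy()
    out[mask > 0] = obj[mask > 0]   # original location keeps placement and size realistic
    return out
```

Because the object is pasted back at its original location, placement, scale, and perspective stay consistent with the scene; only the object's appearance varies across synthesized images.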
2. Literature Review
- Environmental changes: The environment at construction sites can vary considerably with the weather, season, and time of day, and these changes can compromise data consistency.
- Safety and accessibility: Owing to safety concerns, data collection may be restricted to certain areas or certain times. Additionally, many construction sites can be difficult to access, limiting the scope of data collection.
- Lack of diversity: Data collection is often limited to specific projects or sites, making it difficult to achieve diversity. This can limit the generalizability of the model.
- Labeling challenges: Labeling data from construction sites can be a time-consuming process that requires expertise, making the accurate labeling of large volumes of data challenging.
3. Methods: Object Range Expansion Synthesis (ORES)
3.1. Distinction of the Proposed Method
3.1.1. In Terms of Quantity and Variety
3.1.2. In Terms of Quality
3.2. Process of Building Synthetic Dataset
3.2.1. Image Labeling and Object Separation from Background (Figure 5A)
3.2.2. Augmentation and Synthesis of Background and Object (Figure 5B)
3.2.3. Test of Object Recognition Model
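As a rough illustration of the recognition test step, the sketch below scores predicted boxes against ground truth at the IoU thresholds swept in Section 4 (mAP50–mAP70). The [x1, y1, x2, y2] box format, greedy matching rule, and single-class setting are simplifying assumptions rather than the paper's exact protocol (the authors evaluate a YOLACT model).

```python
# Hedged sketch of IoU-threshold evaluation (simplified; not the paper's exact protocol).
def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(preds, gts, thr):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        candidates = [(iou(p, g), i) for i, g in enumerate(gts) if i not in matched]
        best_iou, best_i = max(candidates, default=(0.0, None))
        if best_i is not None and best_iou >= thr:
            matched.add(best_i)
            tp += 1
    return tp / max(len(preds), 1), tp / max(len(gts), 1)

# Sweep the same IoU thresholds as the validation tables (0.50-0.70).
for thr in (0.50, 0.55, 0.60, 0.65, 0.70):
    p, r = precision_recall([[10, 10, 50, 50]], [[12, 11, 49, 52]], thr)
    print(f"IoU>={thr:.2f}: precision={p:.2f}, recall={r:.2f}")
```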
4. Validation
4.1. Validation Scenario
4.2. Results of Validation
- Object-wise precision and recall analysis: Additional object categories will be introduced to compare precision and recall across multiple object types. This will help determine whether the low recall is a consistent issue across all categories or primarily concentrated in specific object types.
- Augmentation of complex visual conditions: In the current approach, objects and backgrounds are separated and recombined to generate synthetic data. However, the training data lacks scenarios involving occlusion, partial visibility, and lighting variation. To address this, future work will include additional augmentation techniques that simulate changes in viewpoint, illumination, and background context (see the sketch after this list). These enhancements are expected to improve recognition performance in challenging visual environments.
- Integration of real and synthetic data: To reduce the domain gap between synthetic and real-world data, a hybrid training strategy will be explored. By combining a proportion of real images with synthetic ones, it is expected that the model can benefit from the realism of actual data, while leveraging the scalability of synthetic data, thereby enhancing recall and overall robustness.
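As referenced in the second item above, a minimal sketch of two such augmentations, illumination shift and cutout-style occlusion, is given below. NumPy/OpenCV and the specific parameter ranges are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of illumination and occlusion augmentation (assumed parameters).
import cv2
import numpy as np

def vary_illumination(image: np.ndarray, gain: float = 1.3, bias: float = -20.0) -> np.ndarray:
    """Simulate a lighting change with a linear brightness/contrast shift."""
    return cv2.convertScaleAbs(image, alpha=gain, beta=bias)

def occlude(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simulate partial visibility by zeroing a random rectangle (cutout-style)."""
    h, w = image.shape[:2]
    x = int(rng.integers(0, w // 2))
    y = int(rng.integers(0, h // 2))
    bw = int(rng.integers(w // 8, w // 4))
    bh = int(rng.integers(h // 8, h // 4))
    out = image.copy()
    out[y:y + bh, x:x + bw] = 0   # a grey or textured fill would be an alternative
    return out

# Example: rng = np.random.default_rng(0); aug = occlude(vary_illumination(img), rng)
```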
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Hwang, J.; Kim, J.; Chi, S. Site-optimized training image database development using web-crawled and synthetic images. Autom. Constr. 2023, 151, 104886. [Google Scholar] [CrossRef]
- Rho, J.; Park, M.; Lee, H.-S. Automated construction progress management using computer vision-based CNN model and BIM. Korean J. Constr. Eng. Manag. 2020, 21, 11–19. [Google Scholar]
- Jung, S.-Y.; Lee, S.-K.; Park, C.-I.; Cho, S.-Y.; Yu, J.H. A Method for Detecting Concrete Cracks using Deep-Learning and Image Processing. J. Archit. Inst. Korea Struct. Constr. 2019, 35, 163–170. [Google Scholar]
- Park, M.-G.; Kim, K.-H. Development of an Automatic Classification Model for Construction Site Photos with Semantic Analysis based on Korean Construction Specification. Korean J. Constr. Eng. Manag. 2024, 25, 58–67. [Google Scholar]
- Zhang, X.; Tang, T.; Wu, Y.; Quan, T. Construction Site Fence Recognition Method Based on Multi-Scale Attention Fusion ENet Segmentation Network. In Proceedings of the 35th International Conference on Software Engineering and Knowledge Engineering, Virtual, 1–10 July 2023. [Google Scholar]
- Cortes, C.; Jackel, L.D.; Chiang, W.P. Limits on Learning Machine Accuracy Imposed by Data Quality. In Advances in Neural Information Processing Systems; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1995; pp. 239–246. [Google Scholar]
- Jain, S.; Seth, G.; Paruthi, A.; Soni, U.; Kumar, G. Synthetic data augmentation for surface defect detection and classification using deep learning. J. Intell. Manuf. 2022, 33, 1007–1020. [Google Scholar] [CrossRef]
- Man, K.; Chahl, J. A Review of Synthetic Image Data and Its Use in Computer Vision. J. Imaging 2022, 8, 310. [Google Scholar] [CrossRef]
- An, X.; Zhou, L.; Liu, Z.; Wang, C.; Li, P.; Li, Z. Dataset and benchmark for detecting moving objects in construction sites. Autom. Constr. 2021, 122, 103482. [Google Scholar] [CrossRef]
- Kim, J.; Kim, D.; Lee, S.; Chi, S. Hybrid DNN training using both synthetic and real construction images to overcome training data shortage. Autom. Constr. 2023, 149, 104771. [Google Scholar] [CrossRef]
- Duan, R.; Deng, H.; Tian, M.; Deng, Y.; Lin, J. SODA: A large-scale open site object detection dataset for deep learning in construction. Autom. Constr. 2022, 142, 104499. [Google Scholar] [CrossRef]
- Xiao, B.; Kang, S.C. Development of an image data set of construction machines for deep learning object detection. J. Comput. Civ. Eng. 2021, 35, 05020005. [Google Scholar] [CrossRef]
- Kim, H.; Kim, H.; Hong, Y.W.; Byun, H. Detecting construction equipment using a region-based fully convolutional network and transfer learning. J. Comput. Civ. Eng. 2018, 32, 04017082. [Google Scholar] [CrossRef]
- Hwang, J.; Kim, J.; Chi, S.; Seo, J. Development of training image database using web crawling for vision-based site monitoring. Autom. Constr. 2022, 135, 104141. [Google Scholar] [CrossRef]
- Son, H.; Choi, H.; Seong, H.; Kim, C. Detection of construction workers under varying poses and changing background in image sequences via very deep residual networks. Autom. Constr. 2019, 99, 27–38. [Google Scholar] [CrossRef]
- Kim, J.; Chi, S. Action recognition of earthmoving excavators based on sequential pattern analysis of visual features and operation cycles. Autom. Constr. 2019, 104, 255–264. [Google Scholar] [CrossRef]
- Zeng, T.; Wang, J.; Cui, B.; Wang, X.; Wang, D.; Zhang, Y. The equipment detection and localization of large-scale construction jobsite by far-field construction surveillance video based on improving YOLOv3 and grey wolf optimizer improving extreme learning machine. Constr. Build. Mater. 2021, 291, 123268. [Google Scholar] [CrossRef]
- Xiao, B.; Lin, Q.; Chen, Y. A vision-based method for automatic tracking of construction machines at nighttime based on deep learning illumination enhancement. Autom. Constr. 2021, 127, 103721. [Google Scholar] [CrossRef]
- Braun, A.; Borrmann, A. Combining inverse photogrammetry and BIM for automated labeling of construction site images for machine learning. Autom. Constr. 2019, 106, 102879. [Google Scholar] [CrossRef]
- Neuhausen, M.; Herbers, P.; König, M. Using synthetic data to improve and evaluate the tracking performance of construction workers on site. Appl. Sci. 2020, 10, 4948. [Google Scholar] [CrossRef]
- Assadzadeh, A.; Arashpour, M.; Brilakis, I.; Ngo, T.; Konstantinou, E. Vision-based excavator pose estimation using synthetically generated datasets with domain randomization. Autom. Constr. 2022, 134, 104089. [Google Scholar] [CrossRef]
- Baek, F.; Kim, D.; Park, S.; Kim, H.; Lee, S. Conditional generative adversarial networks with adversarial attack and defense for generative data augmentation. J. Comput. Civ. Eng. 2022, 36, 04022001. [Google Scholar] [CrossRef]
- Angah, O.; Chen, A.Y. Removal of occluding construction workers in job site image data using U-Net based context encoders. Autom. Constr. 2020, 119, 103332. [Google Scholar] [CrossRef]
- Kim, H.; Yi, J.-S. Image generation of hazardous situations in construction sites using text-to-image generative model for training deep neural networks. Autom. Constr. 2024, 166, 105615. [Google Scholar] [CrossRef]
- Hong, S.; Choi, B.; Ham, Y.; Jeon, J.; Kim, H. Massive-Scale construction dataset synthesis through Stable Diffusion for Machine learning training. Adv. Eng. Inform. 2024, 62, 102866. [Google Scholar] [CrossRef]
- Saovana, N.; Khosakitchalert, C. Assessing the Viability of Generative AI-Created Construction Scaffolding for Deep Learning-Based Image Segmentation. In Proceedings of the 2024 1st International Conference on Robotics, Engineering, Science, and Technology (RESTCON), Pattaya, Thailand, 16–18 February 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 38–43. [Google Scholar]
- Jeong, I.; Kim, J.; Lim, S.; Hwang, J.; Chi, S. Training Dataset Generation through Generative AI for Multi-Modal Safety Monitoring in Construction. In Proceedings of the International Conference on Construction Engineering and Project Management, Sapporo, Japan, 29 July–1 August 2024; Korea Institute of Construction Engineering and Management: Seoul, Republic of Korea, 2024; pp. 455–462. [Google Scholar]
- Lee, J.G.; Hwang, J.; Chi, S.; Seo, J. Synthetic image dataset development for vision-based construction equipment detection. J. Comput. Civ. Eng. 2022, 36, 04022020. [Google Scholar] [CrossRef]
- Chai, P.; Hou, L.; Zhang, G.; Tushar, Q.; Zou, Y. Generative adversarial networks in construction applications. Autom. Constr. 2024, 159, 105265. [Google Scholar] [CrossRef]
- Soltani, M.M.; Zhu, Z.; Hammad, A. Automated annotation for visual recognition of construction resources using synthetic images. Autom. Constr. 2016, 62, 14–23. [Google Scholar] [CrossRef]
- Jeong, I.; Hwang, J.; Kim, J.; Chi, S.; Hwang, B.G.; Kim, J. Vision-Based Productivity Monitoring of Tower Crane Operations during Curtain Wall Installation Using a Database-Free Approach. J. Comput. Civ. Eng. 2023, 37, 04023015. [Google Scholar] [CrossRef]
- Mahmood, B.; Han, S.; Seo, J. Implementation experiments on convolutional neural network training using synthetic images for 3D pose estimation of an excavator on real images. Autom. Constr. 2022, 133, 103996. [Google Scholar] [CrossRef]
- Xiong, R.; Tang, P. Machine learning using synthetic images for detecting dust emissions on construction sites. Smart Sustain. Built Environ. 2021, 10, 487–503. [Google Scholar] [CrossRef]
- Hong, Y.; Park, S.; Kim, H.; Kim, H. Synthetic data generation using building information models. Autom. Constr. 2021, 130, 103871. [Google Scholar] [CrossRef]
- Bang, S.; Baek, F.; Park, S.; Kim, W.; Kim, H. Image augmentation to improve construction resource detection using generative adversarial networks, cut-and-paste, and image transformation techniques. Autom. Constr. 2020, 115, 103198. [Google Scholar] [CrossRef]
- Taiwo, R.; Bello, I.T.; Abdulai, S.F.; Yussif, A.-M.; Salami, B.A.; Saka, A.; Zayed, T. Generative AI in the Construction Industry: A State-of-the-art Analysis. arXiv 2024, arXiv:2402.09939. [Google Scholar]
Requirement | Category | Item | Detail
---|---|---|---
Quantity | — | Data Volume | A sufficient amount of training data must be available to ensure effective learning of the object recognition model. The synthesis of diverse objects and backgrounds must consider changes in environmental conditions.
Quality | Diversity | Object Variety | Synthesized datasets must include a wide range of object types relevant to the construction domain.
Quality | Diversity | Background Variety | Backgrounds should represent various construction environments, including different lighting, weather conditions, and perspectives.
Quality | Diversity | Environmental Changes | In the synthesized images, the size and scale of objects should be suitably adjusted to fit the environment. The synthesized object's relationship with the background should appear natural, and its placement should realistically align with the environment.
Quality | Reality | Scale Matching | Inserted objects must be resized appropriately to match the spatial context of the background.
Quality | Reality | Visual Coherence | Object placement must appear natural in relation to the background in terms of position, depth, and perspective.
Quality | Reality | Scene Integration | The lighting, shadows, and geometry between objects and backgrounds must be visually consistent to simulate real-world integration.
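To make the Scale Matching requirement in the table above concrete, the snippet below resizes an object cut-out using a pixels-per-metre reference taken from the background. The heuristic and all names are illustrative assumptions, not a rule defined in the paper.

```python
# Illustrative scale matching (assumed heuristic; not a rule defined in the paper).
import cv2
import numpy as np

def scale_to_context(obj_img: np.ndarray,
                     obj_real_height_m: float,
                     background_px_per_m: float) -> np.ndarray:
    """Resize an object cut-out so its pixel height matches the scene's scale.

    `background_px_per_m` could come from a known reference in the background,
    e.g. a 2 m door frame spanning 180 px gives 90 px/m.
    """
    target_h = obj_real_height_m * background_px_per_m   # desired height in pixels
    scale = target_h / obj_img.shape[0]
    return cv2.resize(obj_img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA if scale < 1 else cv2.INTER_LINEAR)
```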
Category | Method | Description Based on Requirements | Authors
---|---|---|---
Real Data | Actual field collection | Data collected from actual sites has high realism, but collecting a diverse set of data requires a lot of labor. | [9,11,12,13]
Real Data | Web crawling | The data collected in practice has high realism, but there are limitations in obtaining the desired variety of data. | [14]
Augmented Data | Classic augmentation (pixel-value modification) | It is possible to increase the amount of training data with less labor, but there are limitations in enhancing the diversity of backgrounds and objects. | [15,16,17,18]
Augmented Data | Synthetic augmentation: 3D rendering | Implementing the backgrounds and objects of construction sites in 3D requires a lot of labor and has lower realism compared with actual sites. | [19,20,21]
Augmented Data | Synthetic augmentation: GAN | GAN-generated images can be realistic, but building GAN models is labor-intensive, and controlling the generated images is challenging. | [22,23]
Augmented Data | Synthetic augmentation: generative AI | Pre-trained text-to-image models can efficiently generate photorealistic images from textual prompts. Although less labor-intensive, output quality and domain relevance depend on prompt design, and precise control over image details remains limited. | [24,25,26,27]
Augmented Data | Synthetic augmentation: object infusion | Enhances diversity by inserting objects into backgrounds, making it efficient for creating large datasets. | [28]
Criteria | ORES (Object Range Expansion Synthesis) | 3D Rendering | GANs | Stable Diffusion (Text-to-Image)
---|---|---|---|---
Labor Requirements | Utilizes 2D datasets and basic image processing tools | Requires domain expertise in 3D modeling and simulation management | Requires deep learning model design, tuning, and implementation expertise | Requires prompt engineering and a basic understanding of model behavior
Data Preparation Time | Open datasets can be used with minimal preprocessing | Preparation includes 3D asset creation and lighting setup | Requires curation and annotation of training datasets | Preparation mainly involves designing and refining prompts; optional post-processing may be added
Data Synthesis Time | Supports fast batch processing; parallel generation achievable with low computing resources | Each scene must be rendered individually, which is computationally intensive | Generation time depends on training stability and convergence | Typically within seconds to minutes
Cost Efficiency | Requires no specialized hardware or licensed software | Involves substantial investment in 3D modeling tools, rendering software, and computing infrastructure | Requires significant GPU resources and training time for model development | Can utilize publicly available pretrained models; large-scale deployment may still require GPU resources
Criteria | ORES (Object Range Expansion Synthesis) | Existing Methods (3D Modeling/GANs/Generative AI)
---|---|---
Data Preparation Time | Utilizes open datasets; minimal preprocessing enables rapid implementation | Requires time-consuming preparation (3D modeling, prompt engineering, data cleaning)
Hardware Requirements | Can run on standard PCs; supports parallel processing without high-end GPUs | Demands high-performance GPUs and large-scale computation resources
Design Complexity | Simplified design with fixed object position and scale | GANs suffer from instability; generative AI is highly sensitive to prompt quality
Visual Coherence | Maintains object size, position, and background consistency for realistic images | 3D rendering requires complex light/material tuning; GANs often lack background realism
Scalability | Easily extensible via template-based insertion across object and background types | Requires domain-specific retraining or model redesign
Practical Usability | Applicable to short-term and small-to-medium-scale projects | Typically requires large research teams and advanced infrastructure
Method | Description Based on Requirements | Features |
---|---|---|
Object infusion using rendering | 3D-rendered objects can be inserted into 2D backgrounds to capture various angles, but realism in placement and angles is low | Allows for multiple angles but lacks realism, often resulting in unrealistic placements |
Object infusion using deep learning | Generates realistic data using deep learning algorithms but requires significant time and effort to develop | Provides high realism but demands substantial resources and time |
Object range expansion synthesis (ORES) | Matches various objects and backgrounds from an open dataset to synthesize new objects into existing images, creating diverse variations | Leverages open datasets to automatically generate diverse variations, ensuring high efficiency in data creation |
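The efficiency claim in the ORES row above comes from combinatorial pairing: every object cut-out can be composited onto every background. A hedged sketch of such batch generation follows; the folder layout, RGBA cut-out convention, and equal image sizes are assumptions for illustration.

```python
# Hedged sketch of batch generation by pairing object cut-outs with backgrounds.
# Assumptions: cut-outs are RGBA PNGs whose alpha channel is the object mask,
# and each cut-out matches its background's resolution.
from itertools import product
from pathlib import Path
import cv2

objects = sorted(Path("cutouts").glob("*.png"))          # assumed folder of object cut-outs
backgrounds = sorted(Path("backgrounds").glob("*.jpg"))  # assumed folder of inpainted scenes
Path("synthetic").mkdir(exist_ok=True)

for i, (obj_path, bg_path) in enumerate(product(objects, backgrounds)):
    obj = cv2.imread(str(obj_path), cv2.IMREAD_UNCHANGED)  # 4-channel BGRA image
    bg = cv2.imread(str(bg_path))
    mask = obj[:, :, 3]                      # alpha channel as the paste mask
    out = bg.copy()
    out[mask > 0] = obj[mask > 0][:, :3]     # drop alpha when pasting BGR pixels
    cv2.imwrite(f"synthetic/{i:06d}.jpg", out)
```

With N objects and M backgrounds, this yields N×M composites, which is how a comparatively small open dataset can be expanded into tens of thousands of training images.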
Authors | Method | Quantity | Environment and Object
---|---|---|---
[1] | Framework for automatically generating a training dataset by creating images and labeling target objects using web crawling and virtual reality technologies. | Web-crawled + synthetic image dataset; training dataset comprising 99,800 images in 42 min | Construction site and heavy equipment
[10] | Insertion of workers in various poses, modeled in 3D, into 2D construction site backgrounds, resulting in a 50% reduction in the real data needed. | Hybrid dataset of 10,000 worker images (5000 real + 5000 synthetic) | Construction site and worker
[14] | Automatic collection of desired images from the Internet, reflecting various visual characteristics of objects (same object, different manufacturers, etc.) for training images. Automatic image labeling using an image segmentation model. Completely random cross-over sampling of the foreground and background. | Automatically creates a training dataset comprising 5864 images in 53.5 min | Construction site
[19] | Generation of 2D synthetic images of building elements (e.g., columns, walls, and slabs) using building information modeling (BIM). Synthetic datasets are used to train deep neural network (DNN) architectures. The developed model effectively and accurately positions building elements in real construction images. | — | Building
[20] | Data generation in a 3D environment to improve the recognition of workers in hazardous situations on construction sites. Rendering using the Cycles rendering engine, with a mini-batch size of 64, based on the Darknet-53 model (pretrained on ImageNet). | 8 different scenes; 3835 frames in total; 32 tracked subjects | Construction site and worker
[21] | Generation of a large-scale labeled dataset for excavator pose estimation using domain randomization with a gaming engine. | 12,000 synthetic images (training dataset); 3000 real images (validation dataset) | Construction site and excavator
[23] | Creation of synthetic images through the removal of workers from construction site images and inpainting using a U-Net model optimized with Adam. Context encoders remove duplicated objects and inpaint the background context, with the areas requiring inpainting predefined in size and location within the image. The U-Net architecture performs direct image-to-image conversion to relax the fixed size and location constraints of context inpainting. | 5846 construction images | Panoramic view of a construction site with objects removed
[25] | Synthetic images were generated using Stable Diffusion with construction-related prompts. Context-based labeling was applied to enhance dataset quality. | 150,000 images | Construction tasks; CNN trained with context-based labeling
[26] | A text-to-image model was used to generate images of 27 construction hazard scenarios based on structured prompts. | 3585 images across 27 scenarios | Construction accidents; object relations captured
[27] | Pretrained generative AI was used to create scaffolding images. The model learned distinctive features, though image diversity was limited. | — | Scaffolding; segmentation performance evaluated
[30] | Generation by combining 3D models of construction equipment with various background images taken at construction sites. Training of a machine learning-based excavator detection model using only synthetic datasets, improving detection accuracy while reducing the time required for image annotation. | 3D model with 16 backgrounds; 765 positive images | Construction site and excavator (modeled)
[31] | Framework for object recognition during curtain wall installation work from tower cranes. Labeled images of curtain wall panels and crane hooks created with the Unity engine are inserted; training data are created and crane movements understood using an intersection-over-union (IoU) tracker. | Generates 300,000 training images per hour | Curtain wall installation site; crane hook and curtain wall panel
[32] | Building an excavator database, including the location and pose of excavators, using 3D modeling tools and game engines. | — | —
[33] | Building a training dataset by generating dust in a 3D environment and inserting it into construction site backgrounds. | 3860 synthetic dust images (training dataset); 1015 real images (test dataset) | Construction site with inserted dust
[34] | Building virtual data from real building images through BIM, with augmentation using a GAN. | — | Inside the building
[35] | Data collection using UAVs. GAN-based image inpainting with the concepts of object class probability and relative size: objects (e.g., excavators) are removed from site images and new objects (e.g., mobile cranes) are reconstructed in the vacated areas. | 544 (training dataset); 112 (test dataset) | Construction site and heavy equipment (captured by UAV)
[36] | Photorealistic images of construction accidents were synthesized using Stable Diffusion, with prompts crafted to reflect real-world safety incidents. | 2324 fall accident cases analyzed using GPT-4; 300 synthetic images generated | Safety incidents; object detection and action recognition
Our research | Efficiently generates high-quality synthetic data by separating backgrounds and objects and inpainting. Allows realistic synthesis of various object forms at their original locations, expanding object range and diversity with minimal real data and without extra labeling or 3D modeling. | Generates 30,000 synthetic images | Scaffolding, worker, ladder
Method | Ref. No. | Limitations | Planar Object | Linear Object | Specify Realistic Compositing Location | Annotation Automation | Object Range Expansion Synthesis
---|---|---|---|---|---|---|---
3D Modeling + Object Infusion | [1] | Unnaturalness from web-crawled objects, including background | Heavy equipment | ✗ | ∆ (randomly located, except upper part) | ✗ | ✗
3D Modeling + Object Infusion | [10] | Modeled worker differs from real images | Worker | ✗ | ✗ | ◯ | ✗ (new 3D model creation required)
3D Modeling + Object Infusion | [30] | Synthetic images created with random backgrounds differ from real images | Excavator | ✗ | ✗ (located without considering background) | ◯ | ✗
Object Infusion | [14] | Synthetic images do not account for ground conditions and environments at construction sites | Excavators | ✗ | ✗ (randomly located) | ✗ | ✗
Object Infusion | [24] | Image quality is highly dependent on prompt design, making detailed control difficult | Small construction tools (bucket, cord reel, hammer, tacker) | ✗ | ∆ | ✗ | ✗
Object Infusion | [35] | Discrepancy between the remaining original background and the composite background of the separated object | Heavy equipment | ✗ | ∆ (approximate positioning of the composite) | ✗ | ✗
Object Range Expansion Synthesis (our research) | — | Limited in handling dynamic environmental factors such as lighting changes and occlusion | Worker | Frame scaffolding, ladder, mobile scaffolding | ◯ (accurate placement based on existing object location information) | ◯ | ◯
Epoch | Average | mAP50 | mAP55 | mAP60 | mAP65 | mAP70 |
---|---|---|---|---|---|---|
10 | 35.55 | 94.68 | 92.39 | 85.34 | 70.18 | 32.47 |
50 | 43.84 | 98.62 | 97.60 | 97.54 | 92.04 | 68.50 |
100 | 44.90 | 98.63 | 98.63 | 97.33 | 94.46 | 72.51
150 | 44.29 | 98.74 | 98.74 | 98.36 | 92.93 | 71.05 |
200 | 44.61 | 98.70 | 98.70 | 98.35 | 93.22 | 71.90 |
250 | 44.41 | 98.74 | 98.63 | 98.29 | 92.92 | 71.28 |
Epoch | mAP50 | F1 Score | Precision | Recall
---|---|---|---|---|
250 | 98.74 | 64.59 | 99.24 | 54.55 |
Method | Dataset | Object | Algorithm | Precision | Recall | F1 Score
---|---|---|---|---|---|---
3D rendering [20] | 8 different scenes; 3835 frames in total; 32 tracked subjects | Construction site and worker | CNN (YOLOv3) | 94% | - | -
3D rendering + object infusion [1] | Web-crawled + synthetic image database; training database comprising 99,800 images in 42 min | Construction site and heavy equipment | YOLOv5 (batch size 16), Faster R-CNN | - | - | 63.11%
3D rendering + object infusion [10] | Hybrid dataset of 10,000 worker images (5000 real + 5000 synthetic) | Construction site and worker | DNN (YOLOv3) | 67.8% | - | -
3D rendering + object infusion [30] | 3D model with 16 backgrounds; 765 positive images | Construction site and excavator (modeled) | - | 75% | 98% | -
Web crawling + object detection [14] | Automatically creates 5864 images (179 s) | Construction site and heavy equipment | Faster R-CNN | 92.71% | 88.14% | -
Deep learning + object infusion [35] | 76,320 synthetic images (6 different construction sites) | Construction site and heavy equipment (captured by UAV) | Faster R-CNN | 57.13% | 60.22% | -
Object range expansion synthesis (ORES) | 30,000 synthetic images | Scaffolding | YOLACT | 99.24% | 54.55% | 64.59%