Review

Ultraviolet Radiation Transmission in Building’s Fenestration: Part II, Exploring Digital Imaging, UV Photography, Image Processing, and Computer Vision Techniques

by Damilola Adeniyi Onatayo 1,*, Ravi Shankar Srinivasan 1 and Bipin Shah 2
1 UrbSys (Urban Building Energy, Sensing, Controls, Big Data Analysis, and Visualization) Laboratory, M.E. Rinker, Sr. School of Construction Management, University of Florida, Gainesville, FL 32608, USA
2 Winbuild Inc., Fairfax, VA 22030, USA
* Author to whom correspondence should be addressed.
Buildings 2023, 13(8), 1922; https://doi.org/10.3390/buildings13081922
Submission received: 19 June 2023 / Revised: 30 June 2023 / Accepted: 14 July 2023 / Published: 28 July 2023
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract: The growing demand for sustainable and energy-efficient buildings has highlighted the need for reliable and accurate methods to detect fenestration deterioration and assess UV radiation transmission. Traditional detection techniques, such as spectrophotometers and radiometers, discussed in Part I, are often expensive and invasive, necessitating more accessible and cost-effective solutions. This study, which is Part II, provides an in-depth exploration of the concepts and methodologies underlying UV bandpass-filtered imaging, advanced image processing techniques, and the mechanisms of pixel transformation equations. The aim is to lay the groundwork for a unified approach to detecting ultraviolet (UV) radiation transmission in fenestration glazing. By exploiting the capabilities of digital imaging devices, including widely accessible smartphones, and integrating them with robust segmentation techniques and mathematical transformations, this research paves the way for an innovative and potentially democratized approach to UV detection in fenestration glazing. However, further research is required to optimize and tailor the detection methods and approaches using digital imaging, UV photography, image processing, and computer vision for specific applications in the fenestration industry and for detecting UV transmission. The complex interplay of various physical phenomena related to UV radiation, digital imaging, and the unique characteristics of fenestration glazing necessitates the development of a cohesive framework that synergizes these techniques while addressing these intricacies. While extensively reviewing existing techniques, this paper highlights these challenges and sets the direction for future research in the UV imaging domain.

1. Introduction

Architectural elements, such as fenestration components (i.e., windows), play a significant role in determining the thermal performance of a building, thereby influencing its energy performance rating. Transmission of ultraviolet (UV) radiation, which is known to have substantial consequences for human health, is an important part of fenestration performance. Furthermore, UV radiation is important in evaluating the deterioration of fenestration glazing, which can provide useful information about the thermal efficiency of the glazing materials [1,2,3]. In light of this, there is a rising demand for the acquisition and processing of visual data in contexts such as building condition assessments [4]. Spectrophotometers and radiometers are widely used to measure UV transmission in fenestration and to determine the level of deterioration of windows in buildings, but they have significant limitations [1,2,3]. Spectrophotometers are precise, but they are expensive, hard to operate, and can only deal with small samples at a time, which makes them unsuitable for everyday use in the built environment [1,2,3,5,6]. Radiometers are also costly and can be influenced by external factors, impeding their extensive application [1,2,3,7,8,9]. The present circumstances highlight an increasing demand for a tool that is both economical and capable of capturing wide-field views with high resolution [10].
To address this need, the combination of digital and optical devices has emerged as a potential solution for transforming the built environment [11]. These techniques have found widespread utility in tasks such as land surveys, structural damage monitoring, structural health assessment, and defect studies [11,12]. A particularly noteworthy advancement is the introduction of backlighting illumination in fenestration, which has amplified these technologies’ capabilities, facilitating the examination of outlines and the interior of transparent objects [13]. Consequently, developments such as digital image processing (DIP), computer vision, and computer-aided diagnosis systems have taken digital imaging applications to new heights [14]. These techniques capture multiple snapshots of a given element and process them computationally, thus reducing errors from flawed engineering judgments. With images carrying crucial information reflecting object features, their quality becomes paramount [15]. As a result, there is an ongoing pursuit among researchers for novel methods to improve the precision and consistency of image-based evaluations [13].
Further advancements in image processing and analysis, such as deep learning and computer vision techniques, have been game-changers. They have been particularly successful in applications such as object detection, helping to automate the diagnostic process and navigate issues of ambiguity and information overload [16]. Their adoption in defect target detection in transparent objects and glasses is on the rise with researchers [17,18]. Notably, Convolutional Neural Networks (CNNs) are gaining traction for defect target detection because of their ability to independently learn target characteristics, thereby improving detection capabilities [19].
These new developments can effectively address the shortcomings of traditional spectrophotometers and radiometers, promising more efficiency, affordability, and precision in different applications. As researchers continue to explore these promising tools and methods, UV transmission measurement and fenestration deterioration detection seem poised for significant improvement and innovation. Despite the potential of these advancements, the commercial and research applications of machine vision-based ultraviolet radiation detection and deterioration detection mechanisms are still nascent, given the emerging nature of this technology. This delineates a research gap that the present study aims to bridge.
This paper’s aim is to review and investigate applications of digital imaging, UV photography, image pre-processing, processing, and pixel transformation equations, and to highlight any existing gaps in the literature. For this study, the comprehensive Scopus literature database was used to systematically analyze the relevant literature. Although none of the available reviews centered on the entirety of machine vision-based detection, the tools, methods, and applications discussed in these papers provided valuable information about its viability as a replacement for existing traditional methods (using spectrophotometers and radiometers [20,21,22,23,24,25,26,27,28,29,30]).
The objectives of this review study are as follows:
  • Explore digital imaging and ultraviolet image capture process and enabling technologies.
  • Identify the present application of UV radiation detection using digital cameras.
  • Discuss the Computer Vision process and integration of image analysis for improving raw data, enhancing visual inspection, and enabling more reliable measurements and assessments.
  • Explore existing image pixel transformation equations and the mathematical relationships to image data conversion.
The novel focus of this research is on the overall process and the assessment of its subprocesses. It considers aspects such as capturing and analyzing digital images, factors affecting image quality and accuracy, and the processes essential for optimizing the performance of computer vision and deep learning algorithms. The study aims to establish a comprehensive understanding of the mathematical relationships governing the conversion of image data into practical and actionable information, laying the groundwork for future research and development in the field of UV transmission measurement. The ultimate goal is to aid future researchers in selecting the most relevant techniques for UV imaging and integrating them into a comprehensive machine vision system for radiation detection in the built environment. These objectives have not been pursued before; this study is therefore a novel attempt to lay the groundwork for them.
The structure of the paper follows a systematic format, with the research methods presented in Section 2. Section 2 further elaborates on the literature retrieval process and the review procedure. The research findings are presented in Sections 3 through 6, with Section 5 divided further into image analysis, analyzing UV images, computer vision, and an image segmentation overview. The discussion and conclusions are presented in Section 7 and Section 8.

2. Methodology

The exploration of ultraviolet (UV) imaging is an intricate and multifaceted discipline, necessitating a comprehensive review and understanding of the existing body of scholarly work. Capitalizing on a reliable scientific research platform, namely Scopus, this inquiry bypassed the Google Scholar search engine, given its incomplete Boolean operations in advanced search functions and undisclosed search result algorithms. To capture the central research concerns in this field, an inclusive search formula was crafted: TITLE-ABS-KEY (ultraviolet radiation with camera) AND PUBYEAR > 1999 AND PUBYEAR < 2024 AND (LIMIT-TO (SUBJAREA, “ENGI”) OR LIMIT-TO (SUBJAREA, “MATE”) OR LIMIT-TO (SUBJAREA, “COMP”) OR LIMIT-TO (SUBJAREA, “ENER”)). The expansive scope and reliability of the Scopus database position it as a fitting resource for the research at hand.
Keeping the time frame between 1999 and 2023, the search parameters were limited to journal papers, recognized as credible and certified repositories of scholarly knowledge. Upon incorporating keywords germane to “Cameras” research areas, the search yielded 529 bibliographic records. After a manual pruning exercise, these records were distilled to a selection of 172 key pieces of literature. To capture the interrelations within the relevant literature, the VOSviewer application was used. VOSviewer harnesses co-occurrence clustering to identify frequently appearing keywords, which typically signal central research issues, as illustrated in Figure 1.
Figure 2 showcases the network of knowledge related to keywords associated with ultraviolet radiation detection with cameras.
Given that the final selection of the literature comprised 172 research articles, Microsoft Excel was employed to systematically organize and analyze the data. Relevant information extracted from each study was meticulously recorded in different columns and separate sheets, each dedicated to a specific section of this literature review. This approach facilitated a structured and thorough analysis and aided in the synthesis of the findings.

3. Digital Imaging and UV Photography

The potency of photography as a research tool is undeniable, given its capacity to uncover facets of a subject invisible to the naked eye [30]. The advent and subsequent advancements in digital imaging and related technologies have revolutionized the sphere of photography, opening new possibilities and methods for this expressive medium [31]. Digital imaging, an optical methodology, brings into focus the observable attributes of a subject’s surface, courtesy of an array of advanced technologies and strategies. The objective of digital imaging is the creation of superior-quality images, a pursuit largely hinging on the processes of optical illumination and image acquisition [32]. The first step is the design of an optical illumination framework capable of accentuating pertinent features of an object while obscuring the undesirable ones [13]. This necessitates a meticulous examination of the object’s characteristics, the interplay between light and object, and the choice of an apt light source in order to ensure optimal illumination. Following this, the image is recorded by means of a camera sensor and an optical lens.
Influencing the quality of the image are factors such as the lens and sensor selected, each boasting unique attributes, including resolution, sensitivity, and dynamic range. Typically, industrial cameras incorporate a photosensitive apparatus based on CCD or CMOS chips, both exhibiting distinct advantages and limitations [13,33]. Upon image capture, it must be transformed from an optical signal to an electrical counterpart and subsequently digitized to facilitate computer processing [13]. In certain scenarios, an astutely designed field of view coupled with an appropriate photosensitive sensor may be of paramount importance. Thus, the entire process of digital imaging, from illumination design to image acquisition and conversion, requires meticulous planning and execution, with each step significantly impacting the final image quality and, consequently, the data obtained for research purposes.

3.1. Digital Image Acquisition

The academic research domain has seen a wide array of camera types being deployed, each possessing distinct characteristics concerning their sensitivity, lens, saved format, and processing software, tailored to fit the specific requirements of varying research fields. To illustrate, Cucci et al. [30] capitalized on a Canon EOS 6D camera, featuring an ISO sensitivity of 6400, along with a Rayfact 105 mm f4.5 UV lens and a Micro Nikkor 105 mm f4 lens. They utilized a Xenon lamp as an ancillary source, with the processing of images carried out via RawDigger x64 and Excel 2010 software for their UV imaging-based machine vision system. In contrast, Al-Mallahi et al. [34] implemented two cameras: a UV 1-CCD camera (SONY, XC-EU50) accompanied by a Pentax B2528-UV lens, and a Canon Power shot A-80. Pixel-segmented algorithms formed the basis of the image-processing method in both systems. In the context of agricultural applications, Maharlooei et al. [35] employed a Canon EOS Rebel T2i DSLR digital camera combined with a 1000 W tungsten halogen lamp as an external light source. The images were processed using MATLAB (MathWorks, Natick, MA, USA) software, R2014a version [35]. Inanici [10] explored the realm of High Dynamic Range (HDR) imaging using a Nikon Coolpix 5400 camera with a fisheye lens. The camera captured images in RadianceRGBE and LogLuv TIFF formats, employing diverse external light sources, including daylight, incandescent lamp, projector, and fluorescent, metal halide, and high-pressure sodium lamps. Images were processed using the Photosphere software.
In their analysis of granite-forming minerals, Ramil et al. [36] used a DSLR Nikon D100 camera with an F Micro-Nikkor 60 mm f/2.8D lens. Two tungsten lamps served as external sources and images were processed via customized MATLAB® software. Pedreschi et al. [36] conducted research in the agricultural domain using a Power Shot G3 (Canon, Ota-ku, Japan) camera capturing images in TIFF format with four fluorescent lamps (Philips, Natural Daylight) as an external light source. Canon Remote Capture Software (version 2.7.0) was used for image processing. Sena et al. [37] utilized an MS3100 camera (Duncan Technologies, Inc., Auburn, CA, USA) with a 7 mm focal length lens and an f-stop of 3.5 for agricultural research [38] as shown in Table 1. The external light source comprised four 150 W and six 50 W halogen lamps, and the MATLAB toolbox was used for image processing. These instances reflect the diverse range of cameras and configurations used in academic research, emphasizing the importance of the selection of suitable tools and methodologies catered to the distinct domain and requirements of each study. This is especially pertinent given the challenges in image acquisition [27]. Commercially available cameras may introduce undesired modifications to captured images, such as alterations in contrast, brightness, sharpness, gain controls, and white balance, that could potentially lead to incorrect interpretations of image data [39]. Consequently, this may compromise image reliability and reproducibility, making it challenging to compare images captured under varying conditions or times [39]. Several methods have been proposed to address these challenges, including capturing images of opaque objects using photodetectors positioned at different locations [40].
Dyer et al. [39] evaluated the impact of uneven illumination on reflected images, noticing that the upper section of the image was darker than the lower section. To mitigate such irregularities in illumination, researchers can enhance experimental procedures and apply post-processing methods. Maintaining a consistent aperture size during image capture ensures that luminance values remain constant under different conditions [10]. Mathai et al. [42] demonstrated the inspection of 3D transparent objects with a system employing two light sensors without moving the object. Fast sensors and algorithms were used to map the differences between views. By employing innovative strategies such as using photodetectors at different locations and maintaining a constant aperture size, researchers can navigate these challenges and unlock new possibilities for digital imaging in diverse academic domains. As the field of digital imaging continues to evolve, it is anticipated that future research will continue to push the boundaries of what is possible, generating new knowledge and tools that can further enhance the quality and reproducibility of digital images.

3.2. UV Imaging

The advent of photometry utilizing photographic means is by no means a novelty, with film-based images and charge-coupled device (CCD) products frequently employed for luminance evaluations [10]. As digital imaging technology has evolved, its application has spanned across diverse scientific fields, an upward trend evident in Figure 3.
Ubiquitous devices such as smartphones and cameras hold the potential for ultraviolet (UV) measurement, and despite the limitations of current UV radiation measurements, these instruments show promise [43,44,45]. Owing to its short wavelength, UV light can penetrate deeply, finding use in applications such as the detection of certificates and surface scratches on metals [13,34]. The utility of a digital camera extends beyond mere imaging; it also serves as an effective instrument for measuring light pollution, providing both photographic documentation and functioning as a light meter [46]. The integration of digital images into commonplace devices such as mobile phones and automobiles, as well as in industrial and scientific applications, is becoming increasingly common, often facilitating engineering solutions [47]. Figure 3 illustrates the surge in interest in digital imaging and UV photography based on keyword searches on Scopus. UV imaging cameras largely resemble those used for visible light, their key difference being the representation of information as a map of UV intensities across an area. The interpretation of such information is contingent on the application. Many digital cameras and smartphones employ silicon-based complementary metal-oxide-semiconductor (CMOS) sensors capable of detecting UV radiation, as shown in Figure 4 [48,49,50]. To produce high-quality images within the ultraviolet spectrum, manufacturers typically employ filters to block unwanted visible and infrared (IR) radiation [27].
Several layers constitute these sensors, including Dust-reduction, Anti-aliasing, Hot Mirror filters, Microlens, and Color Filter Array (CFA). The sensor size has implications for image quality and magnification. Frequent sensor sizes include Full Frame Sensors and APS-C Sensors, the latter being smaller than 24 × 36 mm and having a crop factor. Cameras featuring 12 MP or higher and high-quality lenses can produce superior UV images for a variety of applications [27]. The image quality is largely determined by the number of captured pixels, the effectiveness of the system’s light-focusing capacity, and the efficiency of the CCD in converting light into electrical signals [51].
Given that UV photography captures images in the UV light spectrum, which is not visible to the human eye, the RGB (red-green-blue) system is extensively employed to represent digital color images. This is predicated on the assumption that the human eye possesses red, green, and blue cones that perceive color [52]. To achieve high-quality reproducible outcomes in UV photography, it is crucial to adhere to fundamental principles, such as restricting light source wavelengths, to ensure that only the desired wavelengths are captured by the camera [27]. While most digital cameras incorporate a filter within the sensor, it may not effectively block all reflected UV light, necessitating an additional filter on the camera lens [30].
Advances in camera technology have increased the use of UV-filtered cameras and UV photography, making UV image capture more accessible and economical. The technology has spurred interest in exploring hidden features and patterns in a variety of subjects that UV light can expose, such as plants, animals, minerals, art, and forensic evidence [43]. Unlike photometers, which measure light intensity at a fixed point, UV cameras can capture UV light from various sources and angles, yielding more dynamic and diverse images. The most suitable UV wavelengths for photography range from 300 nm to 400 nm as they are close to visible light and can create vivid colors and contrasts [27]. These wavelengths are less harmful to living organisms when compared to shorter wavelengths, which can inflict significant damage to humans [53].
Successful UV photography requires an appropriate filter that blocks all visible light and IR wavelengths and transmits only UV wavelengths [54]. Known as a UV bandpass filter, it is positioned in front of the camera lens to filter incoming light.
The filtered light then reaches the camera sensor, a specialized device capable of detecting UV light. The sensor, typically a CCD or CMOS sensor sensitive to UV light within the range of 100 to 400 nm, converts the light into an electrical signal. This signal is then processed by the camera’s electronics and stored as a digital image. The camera lens plays a crucial role in focusing the UV light onto the sensor. Normal lenses, incapable of focusing properly due to the shorter wavelength of UV light, may result in blur. Therefore, lenses specifically designed for UV light are necessary to ensure image accuracy [29].
Recent studies have demonstrated that smartphone cameras equipped with filters admitting only certain wavelengths can measure significant amounts of light in the deep UVA and UVB ranges when modified for use with monochromatic and solar sources in the lab [20,55]. It is worth noting that UV filters are visually opaque, causing a delay between positioning the camera, focusing, and photographing. This delay can be minimized using filter holders that allow swift filter placement, or filter holders that attach to the lens and permit filters to be easily flipped into place, similar to those manufactured by Canon, Kodak, and Nikon [3,56]. It is crucial to dispel the misconception that imaging in both the visible and ultraviolet spectrums necessitates two cameras; the latest generation of cameras is designed for multi-purpose functionality [57]. The extent to which different camera sources are used in UV detection is detailed in Figure 5.
Because of its impact on dark noise and dark current, temperature plays a pivotal role in camera sensor calibration [48]. This effect is typically measured by altering the ambient temperature and capturing and analyzing a series of dark noise images. However, ref. [21] observed no significant increase in dark noise due to temperature variations in image sensors, noting that sensors are typically well-shielded from routine daily temperature changes, which exert minimal influence on their performance [21,22].

4. Status of UV Radiation Detection Using Cameras

The utility of camera-based systems in detecting ultraviolet (UV) radiation has been widely recognized and scrutinized within the research community, as summarized in Table 2 [2,20,22,30,58,59,60,61,62,63,64]. This growing body of evidence attests to the capacity of image sensors to yield meaningful UV irradiance data, suggesting that prevalent devices such as smartphones and digital cameras could be deployed as pragmatic, accessible tools for UV radiation research.
The research carried out by Turner et al. [60] reveals interesting findings when the outer filters of various smartphones were removed to ascertain the volume of UV radiation, specifically UVB, that these devices could detect. Similarly, ref. [20] investigated the prospect of utilizing an image sensor, equipped with supplementary filters, for UV measurements. Other researchers, such as [55], have employed this technology innovatively to construct a UV detection system using a modified Raspberry Pi camera (PiCam) and JAI Camera systems, intended to remotely monitor sulfur dioxide emissions from volcanoes. These compact, widely available technologies present a compelling opportunity for physicists, researchers, and facility managers to collect solar irradiance data and conduct analyses [25]. The responsiveness of smartphone image sensors to the UV waveband has been verified [20]. Notably, the irradiance reaching the camera’s image sensor is moderated by the camera’s lens and additional filters, which are subsequently converted into an electrical signal and further transformed into a digital reading between 0 and 255 [21]. With the ready availability of programming tools such as Python and the computer vision algorithm OpenCV, the analysis of images captured for UV radiation detection becomes increasingly feasible. These tools facilitate the examination of fenestration transmission and degradation data, having important implications for energy efficiency and occupant comfort in buildings.
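As an illustration of how such 0-255 readings can be accessed in practice, the short Python sketch below reads an 8-bit image with OpenCV and summarizes the pixel values per color channel within a region of interest. It is not drawn from any of the cited studies; the file name and region-of-interest coordinates are placeholders.

```python
# Illustrative sketch only: summarize 0-255 pixel readings per channel in an ROI.
# "uv_capture.jpg" and the ROI coordinates are hypothetical placeholders.
import cv2

img = cv2.imread("uv_capture.jpg")            # 8-bit BGR image, values 0-255
roi = img[100:300, 200:400]                   # region assumed to contain the light source

for name, channel in zip(("blue", "green", "red"), cv2.split(roi)):
    print(f"{name}: mean = {channel.mean():.1f}, std = {channel.std():.1f}")
```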
In their seminal work, Igoe et al. [58] studied the response of smartphone image sensors to clear sky solar irradiance at 305 nm, a short wavelength within the UV spectrum, under various air masses up to 9.6. Their findings corroborated that a cost-effective and portable smartphone camera sensor can successfully detect low-level direct UV irradiance at 305 nm. In [55], the study team reengineered a Raspberry Pi camera sensor by removing the lens, filter, microlens, and Bayer filter layers. They attached a UV transmissive lens and a bandpass filter to allow for the capture of images within the UV region and subsequently examined the linearity and sensitivity of the reconfigured camera sensor to UV radiation. They showed promising results for remote sensing of SO2 emissions from power station smokestacks. This underpins the potential for various UV imaging applications. Other researchers, such as Garcia et al. [59] have also investigated ways to derive a linear relationship between UV incident radiance and camera response.
Numerous studies have also highlighted various camera configurations for UV imaging. Pratt et al. [61] employed a DSLR camera equipped with a Lifepixel special bandpass filter to capture images of skin regions prone to cancer. Tamburello et al. [62] used Alta U260 cameras equipped with Asahi Spectra 10 nm FWHM XBPA310 and XBPA330 bandpass filters for imaging the sulfur dioxide flux distribution. Dawes et al. [2] used a Sony Xperia Z1 smartphone camera with two filters to explore the impact of thick optical materials on the camera’s ability to measure and monitor UVA light where direct illumination is obstructed. Igoe et al. [53] evaluated clear sky irradiances using an LG L3 smartphone camera model with a CVI Melles Griot bandpass filter for wavelengths of 340 and 380 nm, normalized using a calibrated Microtops sunphotometer. Gibbons et al. [63] used UV photography to highlight the damage to facial skin caused by previous UV exposure. They employed the Polaroid CU5 and Faraghan Medical Camera Systems with a 35-mm single-lens camera to capture images of the damage to the skin. The study demonstrated the potential of UV photography in identifying damage to the skin. These pieces of evidence underline the expansive potential of UV cameras for a plethora of UV imaging applications, from skin damage assessment to the examination of historical artifacts under UV light, fostering an era of accessibility and ease in UV radiation studies.

5. Image Analysis

Image analysis, a methodical examination of images via computational systems, follows a series of key steps, which are outlined in Figure 6 [65].
The historical trajectory of image processing, with applications extending across various fields, can be traced back to the 1920s when newspapers began to employ this technology for printing purposes. A significant milestone was reached in the 1960s with the amalgamation of image processing and computational technology, opening the door to explorations within arenas such as remote sensing [67]. Fast forward to the dawn of the 21st century, advancements in areas such as industrial inspection and forensic applications were evident, signaling the integration of image processing and computer-based applications into different domains across diverse disciplines [65].
Image analysis extracts quantitative metrics from an image’s pixel values and spatial position, without modifying the original image. The conclusions drawn from image analysis leverage the data garnered from the characteristics of the image [68]. However, despite the seeming simplicity of human visual tasks, their computational analogs may present substantial challenges, or even prove to be infeasible. This issue becomes particularly pronounced in the realm of image analysis, where the extraction of valuable information from images holds more importance than altering their appearance. Tasks such as object recognition exemplify this complexity [69]. A prevalent methodology in image analysis is isolating the region of interest (ROI) from the remaining segments of the image to gather quantifiable data regarding the subject under study [70]. The overarching goal of image analysis lies in the extraction of meaningful information from images, with the specifics of the information dependent on the level of abstraction and complexity. Simple statistical measures of the image or its regions, such as histograms, moments, or integral images, can yield some forms of information. These measures can illuminate global or local properties of the image, encompassing aspects such as color, brightness, intensity, or variance. More advanced methodologies that detect and describe structures within the image, such as lines, circles, edges, or contours, can offer additional insights. These methods can leverage spatial, geometric, or gradient information to identify and locate features of interest [71,72]. Techniques in image processing can serve to enhance images to amplify their quality or to derive more information from them, such as image sharpening, thresholding, smoothing, and edge enhancement [71,72].
Conversely, image analysis techniques can also supply inputs or feedback for image processing techniques. For instance, edge detection can aid in the identification of structures within an image, facilitating further processing [73]. Image segmentation, another useful technique, seeks to divide an image into meaningful regions that represent different objects or concepts, employing a variety of methods that vary in their criteria and complexity [74]. In certain technical applications, such as forensic and medical imaging, it is often beneficial to convert the image into a monochrome grayscale format [27]. Image enhancement facilitates the extraction of additional information that might not be directly discernible prior to enhancement. The use of an image histogram, for example, provides a quantitative analysis of large amounts of data within an image. In an 8-bit grayscale image, the intensity level of a pixel, ranging from 0 (black) to 255 (white), is indicative of its brightness. Various factors, including exposure time, dark current offset, dark noise, and incoming radiation rate, can influence the pixel’s intensity or brightness [24].
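To make the histogram step concrete, the following minimal sketch computes the intensity histogram of an 8-bit grayscale image with OpenCV, yielding the kind of quantitative summary described above. The file name is a placeholder.

```python
# Sketch: intensity histogram of an 8-bit grayscale image (0 = black, 255 = white).
import cv2

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).flatten()

print("pixels at intensity 0:", int(hist[0]))
print("pixels at intensity 255:", int(hist[255]))
print("mean intensity:", float(gray.mean()))
```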

5.1. Image Preprocessing and Digital Processing: A Comprehensive Examination

Image preprocessing serves to prepare images for further analysis by making them more amenable to machine interpretation [75]. This operation is conducted at the most rudimentary level of abstraction, with both the input and output represented as intensity images [76]. Primary fields of application encompass segmentation in conjunction with automated counting and inspection [77]. As depicted in Figure 7, the most prevalent image pre-processing techniques employed in the built environment include grey-level transformation, histogram processing, and diverse filtering-based algorithms.
Image processing entails the handling and scrutiny of images using mathematical algorithms and computational instruments, integrating an array of techniques and methods for image quality enhancement, valuable information extraction, and transformation of images into alternative representations, as illustrated in Figure 8.
Instead of directly dealing with the picture pixels, feature extraction is performed to prevent redundant data from the input image from influencing the classification [77]. Pixel values can be used to estimate the color of the light source [78].
While digital image processing surpasses its analog counterpart and encompasses numerous algorithms, it has been primarily confined to operations within the spatial domain. Various spatial filtering algorithms operate by modifying the image through different patterns. For instance, direct grayscale transformation alters each pixel with a specific function, improving the image [13]. While spatial filtering techniques are often quick and easy to implement, they may prove insufficient for more sophisticated image processing tasks. The Fourier transform can convert an image from the spatial domain to the frequency domain, with its inverse transform capable of reversing the process [79].
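A minimal sketch of this spatial-to-frequency-domain round trip is given below; the low-pass mask radius and file names are arbitrary illustrative choices, not values from the cited works.

```python
# Sketch: Fourier transform to the frequency domain, simple low-pass mask,
# and inverse transform back to the spatial domain.
import cv2
import numpy as np

gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

spectrum = np.fft.fftshift(np.fft.fft2(gray))                  # spatial -> frequency domain

rows, cols = gray.shape
r, c = np.ogrid[:rows, :cols]
mask = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= 30 ** 2  # keep low frequencies only

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real  # back to spatial domain
cv2.imwrite("low_pass.png", np.clip(filtered, 0, 255).astype(np.uint8))
```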
Figure 8. A schematic diagram of the process of digital image processing. Adapted from [80].
The primary focus in image analysis typically lies in an object or a cluster of objects existing in the real world. For instance, in photography, the object could be a human face, a building, or a landscape, whereas in quality control, the object might represent products, defects, and their surroundings [81]. Hence, identifying and analyzing these objects becomes crucial for image processing applications. In a majority of images, edges denote object boundaries, rendering them useful for segmentation, registration, and object identification. Studies indicate that approximately 90% of edges resemble each other in gray value and color images [82]. However, edge detection in color images can present challenges due to color space and intensity variations. Edge detection algorithms, such as the Sobel, Prewitt, and Roberts edge detection algorithms, are designed to address these concerns [83].
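As a brief illustration of one of these operators, the sketch below applies the Sobel edge detector with OpenCV; the file name and kernel size are illustrative assumptions.

```python
# Sketch: Sobel edge detection producing a per-pixel edge-strength map.
import cv2
import numpy as np

gray = cv2.imread("glazing.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)          # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)          # vertical gradient
magnitude = cv2.magnitude(gx, gy)                        # gradient magnitude per pixel

edges = np.uint8(255 * magnitude / magnitude.max())      # rescale to 0-255 for saving
cv2.imwrite("edges.png", edges)
```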
Image processing has been extensively utilized in a wide array of manufacturing sectors to enhance visual inspection [84]. Moreover, the technique has found substantial use in various industry sectors, including the glass industry, owing to its accuracy and efficiency [85]. A computer-based system capable of analyzing the image of a glass surface can identify and classify defects and determine product acceptability. Such algorithms have also been applied to sample images of sheets and packaging [86,87,88].
An optical noncontact method was used by [89] to detect amplitude defects on transparent polymeric or glass objects. This method employs light with a low degree of spatial and temporal coherence to avoid unwanted interference phenomena originating from the surface coating. The central concept of the image processing technique hinges on the understanding that amplitude defects influence the intensity of light passing through the examined object, showing its potential applicability to radiation transmission in glass. A statistical method based on pixel intensity variation calculation was employed by [90], with the results indicating that the less defective a glass sheet is, the lower the overall variance.
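A minimal sketch of this variance-based idea is given below; it is an illustrative reconstruction rather than the implementation reported in [90], and the file name and window size are assumptions.

```python
# Sketch: pixel intensity variance as a simple defect indicator for a back-lit
# glass image (lower overall variance suggests fewer amplitude defects).
import cv2
import numpy as np

gray = cv2.imread("glass_sheet.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

overall_variance = float(gray.var())                 # global defect indicator

# Local variance map: flags regions whose intensity deviates from their neighborhood.
mean = cv2.blur(gray, (15, 15))
mean_of_squares = cv2.blur(gray ** 2, (15, 15))
local_variance = mean_of_squares - mean ** 2

print("overall variance:", overall_variance)
print("maximum local variance:", float(local_variance.max()))
```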

Image De-Noising

Noise can be introduced to an image during capture and transmission; it can distort the true image information and severely compromise the visual quality [91]. A noisy image can be modeled as follows:
M(x, y) = N(x, y) + O(x, y)
where N(x, y) is the original image pixel value, O(x, y) is the noise in the image, and M(x, y) is the resulting noisy image [92]. It is crucial to reduce impulse noise for image processing and computer vision analysis [91,93]. Several strategies have been proposed to address and mitigate the presence of noise in images. One such strategy involves the examination of dark noise patterns, which are typically skewed significantly to the right with an extended tail. Given that pixel values cannot be negative, lognormal patterns have proven to be an effective model for analyzing this type of data [21,94]. To diminish the impact of stochastic noise, such as dark noise, an averaging approach has been proposed. This method involves the aggregation of pixel values across a multitude of images, thus exploiting the inherent randomness of noise across different images to even out the overall noise distribution [95]. Consequently, the resultant image, which is an average of the original images, exhibits a reduction in noise levels while still retaining the vital information encapsulated in the original images. Another technique that has been found to be effective in minimizing noise is sensor recalibration [96]. This process involves adjusting the sensor settings to optimize its performance, thus reducing the introduction of noise during image capture. Another approach involves using a median filter, which is a nonlinear filter widely used in digital image processing for its ability to preserve edges and reduce impulse noise [91,97,98].
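The two noise-reduction strategies described above, frame averaging and median filtering, can be sketched in a few lines of Python; the frame file names and filter size are placeholders.

```python
# Sketch: average a stack of repeated captures to suppress random noise,
# then apply a median filter to suppress remaining impulse noise.
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for i in range(10)]                            # placeholder co-registered captures

averaged = np.mean(frames, axis=0)                       # random noise averages out
denoised = cv2.medianBlur(averaged.astype(np.uint8), 3)  # median filter, 3x3 window

cv2.imwrite("denoised.png", denoised)
```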

5.2. Analyzing UV Images: Computational Tools, Adaptive Thresholding, and Programming Libraries

Igoe et al.’s [58] investigation focused on determining the distribution of pixel values across distinct color channels by analyzing ultraviolet (UV) images captured via smartphone technology. The captured images were processed through the freely available ImageJ software (imagej.nih.gov), thus enabling a study of the pixel value variation in each color channel and across the UV images [99]. The analysis yielded a high correlation (0.98) between the values obtained from the Microtops validation instrument and the smartphone-derived image values. The distribution of pixel values in UV images typically exhibits log-normal characteristics, thus making the geometric mean and standard deviations essential parameters for proper image analysis and interpretation [100,101]. Further refinement of the image analysis process can be achieved through the application of an adaptive thresholding technique. This technique involves the determination of a threshold value that enables the selection of pixels corresponding to the image of the light source. This adaptive threshold was computed as the upper bound of the 4th standard deviation from the geometric mean, adhering to the principles of geometric statistics [58].
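Under the log-normal assumption described above, the adaptive threshold reduces to the geometric mean multiplied by the fourth power of the geometric standard deviation. The sketch below is an illustrative reconstruction of that rule, not the cited authors' code; the file name is a placeholder.

```python
# Sketch: adaptive threshold at the "4th standard deviation from the geometric
# mean" for a log-normally distributed UV image.
import cv2
import numpy as np

gray = cv2.imread("uv_capture.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

pixels = gray[gray > 0]                        # exclude zeros before taking logs
log_vals = np.log(pixels)
geometric_mean = np.exp(log_vals.mean())
geometric_std = np.exp(log_vals.std())

threshold = geometric_mean * geometric_std ** 4
source_pixels = gray[gray >= threshold]        # pixels attributed to the light source

print(f"threshold = {threshold:.1f}, source pixels = {source_pixels.size}")
```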
Libraries within the Python programming language have come to the fore as vital instruments for data analysis and visualization across various domains, inclusive of image processing and computer vision techniques. These Python libraries play an instrumental role in image data interpretation and analysis. The OpenCV (Open Source Computer Vision) library is broadly employed to interpret image data and transform pixel values into grayscale representations [102]. NumPy, another Python library, is critical for handling and organizing image array data, hence facilitating efficient data manipulation [103,104]. The SciPy library complements NumPy in creating a robust numerical computing environment for scientific research. SciPy is used to designate and implement median filters for tasks related to image processing [103].
For the generation of graphs and data visualization, the Matplotlib library serves as an invaluable tool. Researchers often harness the power of toolkits within Matplotlib to develop 3D graphs, thereby gaining a holistic understanding of image data [24]. Scikit-learn is another Python library that provides efficient implementations of advanced algorithms for various applications, including data preprocessing, feature extraction, model selection, and performance evaluation [104]. Scikit-learn presents a broad array of tools and techniques, thus fostering a detailed analysis and interpretation of image data. In addition, Scikit-learn integrates numerous algorithms that employ statistical inference techniques like hypothesis testing and confidence intervals to ensure the reliability of the results [104,105].
The process of surface product defect image processing has also seen the utilization of Scilab, a free and open-source platform for numerical computation and scientific programming. Scilab provides several modules for image processing and computer vision, including IPCV2 and SciCV. These modules offer functions for tasks such as image analysis, enhancement, transformation, filtering, registration, stitching, object detection, tracking, and deep learning [87,106,107]. Scilab has also been adopted for analyzing images of defects during the inspection process based on image processing, as detailed in [88].

5.3. Expanding the Boundaries of Image Analysis: An Exploration of Computer Vision Techniques

Computer vision techniques constitute an extensive array of methodologies, central to which is the procurement of images via camera technologies, followed by their processing and dissection through computerized algorithms. Notably, these techniques are non-invasive, swift in their operation, and characterized by their ease of implementation, thereby making them an optimal choice for in situ applications [36,108]. As a subdomain of artificial intelligence (AI), computer vision is dedicated to devising methodologies to equip computer systems with the capacity to interpret single or multiple images and consequently generate numerical or symbolic outputs [109]. This expansive field features a diverse array of technologies, such as digital image processing, electromagnetic instrumentation, and the integration of mechanical and optical sensors, contributing to its multidisciplinary nature [37,70,110]. The standard workflow within computer vision encompasses multiple stages, including image acquisition, processing, analysis, and subsequent conversion into numerical representations, facilitating a comprehensive transformation of visual data into actionable insights, as detailed in Figure 9 [111].
The core strength of computer vision lies in its powerful capabilities for image processing and analysis, enabling the objective measurement and evaluation of intensity values through a plethora of algorithms and methods [37]. Computer vision tasks can be systematically classified into four distinct categories [112]. The first category, classification, involves attributing a label to an input image. These labels can be numerical (such as 1, 2) or textual (such as cracked or not cracked) [113], typically employing deep neural networks trained on voluminous labeled datasets such as ImageNet. The second category, detection, focuses on recognizing an object within an image using a bounding box, thereby providing supplementary information by locating the object and defining its position using coordinates in the input image [114]. Object detection, a widely applicable technique, is used in surveillance systems [115] and vehicle detection [116], inspection, and classification of local defects in float glass [19]. Segmentation, the third category, provides the most granular data about an image. With the aid of deep learning algorithms, every pixel in an image can be categorized and labeled, facilitating the identification of pixel clusters belonging to different classes [117]. Semantic segmentation endeavors to attribute a class label to every pixel in an image [64]. The final category, instance segmentation, is a specialized form of image segmentation focusing on the detection of instances of objects and delineating their boundaries. This requires the identification and segregation of individual objects within an image, incorporating the detection of each object’s boundaries and the assignment of a distinct label to each object. As opposed to semantic segmentation, which only offers pixel-level segmentation of objects based on their category, instance segmentation imparts a more exhaustive understanding of an image by identifying individual objects’ boundaries and precise locations [118,119].
An exemplification of the practical application of computer vision methodologies can be found in the research conducted by [84], who proposed methodologies for the detection of surface defects in ceramic glass based on digitized images. The researchers employed threshold methods to obtain binary images, followed by the fitting of Markov random field models to these binary textures, thereby completing the computer vision-based detection process. Their work demonstrates the invaluable role of computer vision in enhancing our understanding of complex visual data.

5.4. Image Segmentation Overview

The challenge of delineating transparent objects within visual scenes represents a notable research frontier within the field of computer vision, as transparent objects demonstrate a propensity for blending into their surroundings due to their inherent reliance on background textures and colors for their visible characteristics [37,120]. This task necessitates the application of image segmentation, a foundational procedure within computer vision that subdivides an input image into discrete, subpixel-level regions or outlines [121] as discussed in Table 3. The objective of this operation is to transform or simplify the original image representation into a more accessible and analytically meaningful format, facilitating subsequent image analysis and interpretation [122].
As discussed earlier, instance segmentation, a somewhat unexplored field, involves detecting and demarcating each unique object in an image, with potential for practical applications in areas such as item retrieval and scenario inference. In contrast, semantic segmentation classifies images at a pixel level, focusing on each pixel’s category, without distinguishing between instances of the same class [119,123,124,125].
Histogram-based threshold algorithms have consistently been acknowledged as potent tools for image segmentation. Such techniques involve the conversion of the image to grayscale, followed by the computation of a histogram derived from all the image pixels. Image clusters can then be identified by examining the peaks and troughs in the histogram. As an example, ref. [126] employed a bimodal thresholding technique, a histogram-based approach, to fine-tune the global threshold value of scattering images [124,125]. This approach facilitated the determination of an optimal cut-off point corresponding to the first inflection in the image’s histogram.
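Otsu's method, listed in Table 3 as a refinement of histogram-based thresholding, automates the choice of such a cut-off for a bimodal histogram. A minimal OpenCV sketch follows; the input file name is a placeholder.

```python
# Sketch: Otsu's method selects the global threshold separating the two modes
# of a bimodal grayscale histogram.
import cv2

gray = cv2.imread("scatter_image.png", cv2.IMREAD_GRAYSCALE)

otsu_value, binary = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("automatically selected threshold:", otsu_value)
cv2.imwrite("binary.png", binary)
```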
Table 3. Analysis of Various Image Segmentation Techniques.

Techniques | Advantages | Disadvantages | Authors | New Proposed Technique
Histogram-based threshold | Simple and fast | Sensitive to noise and illumination changes | [127,128] | Otsu thresholding [129]
K-means segmentation | Easy to implement and interpret; tighter clusters than hierarchical methods | Requires prior knowledge of the number of clusters and initial centroids | [130] | FCM with advanced optimization techniques [129]
Watershed segmentation | Effective for separating touching objects | Prone to over-segmentation and noise sensitivity | [128] | Marker-controlled watershed segmentation [129]
Neural network approaches | High accuracy and flexibility | Require large amounts of training data and computational resources | [128,130] | Deep learning techniques [129]
Region-based segmentation | Robust to noise and intensity variations | May fail to detect boundaries or merge regions incorrectly | [130] | Hemitropic region-growing algorithm [129]
Region-based segmentation techniques rely on pixel characteristics such as color or intensity to group adjacent pixels into regions. These techniques, which include thresholding, region-growing, and split-and-merge methods, are particularly effective for noisy images, where they tend to outperform edge-based approaches [131,132]. Watershed segmentation is another extensively used method for image segmentation [133]. This technique aims to identify pixel and region similarity. For each pixel detected, the algorithm calculates the region to which the pixel should belong. The watershed transform function is initially defined, and the algorithm proceeds from there. However, one drawback of this approach is that it often results in over-segmentation of the image [129].
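Marker-controlled watershed segmentation, the refinement noted in Table 3, curbs this over-segmentation by seeding the transform with markers derived from a distance transform. The OpenCV sketch below is illustrative only; the input file and the 0.5 scaling of the distance map are assumptions.

```python
# Sketch: marker-controlled watershed segmentation for separating touching objects.
import cv2
import numpy as np

img = cv2.imread("touching_objects.png")                 # placeholder 3-channel image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)     # distance to the background
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

_, markers = cv2.connectedComponents(sure_fg)            # one label per object core
markers = markers + 1                                    # reserve 0 for unknown pixels
markers[cv2.subtract(binary, sure_fg) == 255] = 0

labels = cv2.watershed(img, markers)                     # -1 marks watershed boundaries
print("objects found:", labels.max() - 1)
```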
The advent of neural network-based segmentation methodologies has revolutionized the field, with techniques inspired by the human brain’s learning and decision-making capabilities [134]. These methods transform the segmentation task into a problem that can be addressed using neural networks, which can classify each pixel into one or more classes based on predefined categories [135]. By extracting vital features and subsequently segmenting the image, researchers can effectively analyze images, including UV bandpass-filtered images [129].
Lastly, recent research has demonstrated promising results with innovative segmentation techniques in the glazing industry. Peng et al. introduced a distributed online defect inspection system for float glass fabrication based on a downward threshold reliant on an adaptive surface and the OTSU algorithm [18]. This distributed online defect inspection system demonstrated proficiency in rapidly detecting glass defects such as bubbles. Perng et al. proposed an algorithm consisting of two distinct phases, pre-training and testing, for scrutinizing defects in LEDs [136]. Their approach was based on wavelet analysis and artificial neural networks, in conjunction with a fuzzy k-nearest neighbor approach for the identification of glass defects [17,137].

6. Existing Pixel Transformation Equations

The digital value of a pixel in a digital camera at a specific wavelength depends on spectral exposure, which is the amount of energy that reaches the sensor area corresponding to that pixel [138]. However, this relationship is not necessarily linear, contradicting the assumptions often made by many vision algorithms. It is often the case that a non-linear mapping exists between the brightness registered by the camera and the scene’s radiance [139]. This spectral exposure is influenced by the number of photons of a certain wavelength that impinge on the detector area, which in turn reflects the scene’s radiance that the camera captures [140]. To estimate the sensor irradiance Iλ from the digital values (y) and the exposure time (Δt) of the camera, ref. [141] devised an equation that incorporates both linear and nonlinear image processes of a camera:
f(y) = ln(Iλ) + ln(Δt)
They determined the camera response function (f) by plotting the digital values (y) against the natural logarithm of the product of irradiance and exposure time, ln(Iλ) + ln(Δt).
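Rearranged, the relation gives ln(Iλ) = f(y) − ln(Δt), so the log-irradiance follows directly once the response function is known. The numeric sketch below uses a made-up placeholder response function purely to illustrate the arithmetic; it is not a calibrated camera response.

```python
# Illustrative sketch of f(y) = ln(I_lambda) + ln(dt): with a known response
# function f, the log-irradiance is f(y) - ln(dt) for each exposure.
import numpy as np

def response(y):
    # hypothetical placeholder response mapping digital values (0-255) to log exposure
    return np.log(y / 255.0 + 1e-6) + 5.0

digital_values = np.array([40.0, 80.0, 160.0])       # same scene at three exposures
exposure_times = np.array([0.01, 0.02, 0.04])        # seconds

log_irradiance = response(digital_values) - np.log(exposure_times)
print("estimated ln(I_lambda) per exposure:", log_irradiance)
```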
The research by Dawes et al. [2] developed a program to analyze images taken by smartphones and calculate, for each image, the average pixel intensity value and standard deviation for each color channel: red, green, and blue. These values were then transformed using the following function:
K′ = ln(K · J · cos⁴(szA))
In this equation, K′ is the transformed pixel value, calculated from the measured pixel value (K), the solar zenith angle (szA), and the Sun-Earth distance correction factor (J) [142], compensating for variations in solar illumination caused by changes in szA and Earth’s orbit.
Turner et al. [60] advanced a method for UV irradiance estimation from images captured by a smartphone under varied solar zenith angles and air masses. Utilizing a Python script to analyze images, they estimated the direct irradiance (Ismartphone) with a function that connects the digital information to solar irradiance. The function is:
ln(Ismartphone) = ln(R · E² · cos⁴(szA))
Here R is the signal of the red channel, E² is the factor that corrects for the Earth-Sun distance, and szA is the solar zenith angle. Concurrently, ref. [139] also proposed a formula to relate image irradiance ε and scene radiance L in a typical image formation system.
ε = L · (π/4) · (d/h)² · cos⁴(∅)
where h is the imaging lens’s focal length, d is the aperture’s diameter, and ∅ is the angle that the principal ray makes with the optical axis. However, each sensor has its own response to irradiances, and they all follow a general logarithmic relationship in the UVA range, as observed in laboratory experiments [21,25,43]. The relationship is expressed by this equation:
ln(Iλ) = m · ln(K · 1.5 · cos(θSZA)) + o
In this equation, Iλ is either the direct UV irradiance measured by a sun photometer or the global UV irradiance measured with a radiometer, In is the incident irradiance, K is the average grayscale pixel value after applying an adaptive threshold to separate the light source from background noise, θSZA is the solar zenith angle, o is a constant offset added to each pixel in the image, and m is a scaling factor.
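In practice, calibrating this relationship amounts to a straight-line fit in log space: plotting ln(Iλ) against ln(K · 1.5 · cos θSZA) and solving for m and o by least squares. The sketch below uses placeholder numbers standing in for co-located reference irradiances and thresholded grayscale means.

```python
# Sketch: least-squares fit of the logarithmic sensor-response relationship.
import numpy as np

irradiance = np.array([0.12, 0.25, 0.48, 0.90])       # reference UV irradiance (placeholder units)
gray_mean = np.array([35.0, 60.0, 110.0, 200.0])      # thresholded grayscale means K (placeholders)
sza = np.radians(np.array([60.0, 50.0, 40.0, 30.0]))  # solar zenith angles (placeholders)

x = np.log(gray_mean * 1.5 * np.cos(sza))
y = np.log(irradiance)

m, o = np.polyfit(x, y, 1)                            # scaling factor and offset
print(f"m = {m:.3f}, o = {o:.3f}")
```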
Mei et al. [28] measured the UV from sunlight using smart devices in an environment where sunlight was the only light source. They found that the luminous efficacy of the sun was 93 lm W⁻¹. Laeuv was defined as the scene radiation for the UV part, measured in watts per steradian per square meter (W sr⁻¹ m⁻²). They showed that if they knew the aperture, ISO speed, and exposure time of the camera, they could calculate the scene radiation, as shown in Table 4. The formula they used was:
Laeuv = 6T² / (155 · x · t)
where T is the f-number of the aperture, t is the exposure time in seconds (s), and x is the ISO speed. The advantage of this technique lies in its lack of dependence on computer vision technology, and hence it does not require significant CPU power. This makes it compatible with a broad spectrum of smartphone cameras, irrespective of their grade, thereby suggesting its widespread market applicability [28].
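Applying the relation above requires only the aperture f-number, ISO speed, and exposure time, all of which are normally available from a photo's EXIF metadata. The sketch below simply evaluates the reconstructed formula with illustrative numbers.

```python
# Sketch: scene radiation L_aeuv (W sr^-1 m^-2) from camera exposure parameters,
# following the relation L_aeuv = 6*T^2 / (155*x*t) quoted above.
def scene_radiation_uv(f_number: float, iso: float, exposure_s: float) -> float:
    return 6.0 * f_number ** 2 / (155.0 * iso * exposure_s)

# Illustrative values: f/2.2, ISO 100, 1/500 s exposure.
print(scene_radiation_uv(f_number=2.2, iso=100.0, exposure_s=1 / 500))
```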

7. Discussion

This comprehensive assessment of ultraviolet (UV) imaging underscores its critical contributions across a multitude of sectors, with a special emphasis on the glazing industry. The fenestration and glazing industry has so far seen virtually no advancement in the application of UV digital imaging and analysis, which could help improve the energy efficiency and performance of buildings. UV imaging involves capturing images using light that lies beyond the human visual spectrum. The complexity of UV photography lies in its requirement for specific equipment modifications, either to the camera sensor or through the use of a UV bandpass filter that blocks radiation from other parts of the spectrum [54]. The technical side of UV imaging also requires pre- and post-processing adjustments [71,72,75,76,77].
A principal element in the analysis of these images is the process of image segmentation. A detailed exploration of diverse techniques elucidates the profound implications of this process [119,123,124,125]. Methods such as region-based segmentation, which groups pixels into regions based on certain attributes, and watershed segmentation, focusing on discerning pixel and regional similarities, are discussed at length in [131,132,133]. While these techniques harbor considerable advantages, they are not exempt from limitations. The propensity of the watershed method to result in over-segmentation, for instance, is a notable drawback [129]. However, the advent of neural network-based segmentation methodologies introduces a transformative approach by reimagining image segmentation as a solvable problem through neural networks [135]. The glazing industry has particularly benefited from the evolution of these segmentation methodologies, with novel techniques demonstrating substantial potential. Noteworthy advancements include a distributed online defect inspection system for float glass fabrication [18]. The method uses several image processing algorithms to detect defects such as bubbles, lards, and optical distortion in real-time images of glass. UV imaging is also being used to detect surface defects, such as scratches and cracks, that are not easily visible under normal lighting conditions [17]. As glass is a transparent medium, defects can be detected using transmission-type lighting. This technology can improve the quality control process in the manufacturing of windows and glazing solutions, ensuring better performance and durability [17]. In [143], researchers proposed an intelligent vision system for the automatic detection of glass fragments in cups for food packaging and domestic use and for the detection of defect deformation in glass plates. These cutting-edge techniques, powered by advanced analytical tools such as wavelet analysis and artificial neural networks, underscore the progressive integration of machine vision within the domain of image analysis.
The review extends into the transformation of pixel values, specifically focusing on the interplay between spectral exposure and sensor irradiance [138]. The non-linearity that often exists between the brightness recorded by a camera sensor and the scene's radiance is a vital consideration in image analysis. The study also sheds light on the role of several mathematical constructs that facilitate these transformations, adjusting for illumination variations and camera characteristics [28,60,142]. The theoretical framework surrounding these transformation equations finds practical application in diverse scenarios, such as estimating UV irradiance from smartphone-generated images under varying solar zenith angles and air masses. Existing research also posits a general logarithmic correlation between sensor responses and irradiances, particularly within the UVA range [21,25,43]. The use of common smartphone devices for UV measurement under sunlight-dominant conditions highlights the pragmatic adaptability of these techniques. This approach does not require sophisticated computer vision technology, hence broadening its compatibility with a wide range of smartphones and indicating potential for extensive application [28].
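A minimal sketch of such a pixel-to-irradiance transformation is shown below. It assumes paired observations of mean pixel value and reference UVA irradiance (e.g., from a radiometer) are available; the numerical values are invented for illustration, and only the logarithmic form of the fit reflects the relationship reported in [21,25,43]:

```python
import numpy as np

# Hypothetical paired observations: mean pixel value of the UV-filtered image
# region vs. reference UVA irradiance measured with a radiometer (W m^-2).
pixel_value = np.array([12.0, 25.0, 41.0, 60.0, 84.0])
irradiance = np.array([0.5, 1.1, 2.3, 4.8, 10.2])

# Fit pixel_value = a*ln(irradiance) + b, i.e., a logarithmic sensor response.
a, b = np.polyfit(np.log(irradiance), pixel_value, 1)

def predict_irradiance(mean_pixel_value: float) -> float:
    """Invert the fitted response to estimate irradiance from a pixel value."""
    return float(np.exp((mean_pixel_value - b) / a))

print(predict_irradiance(50.0))
```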
To date, no published research exists on detecting UV radiation transmission in fenestration using digital UV imaging. It is reasonable to expect, however, that advancements in digital imaging and UV imaging can lead to efficient methods for detecting and measuring UV radiation in fenestration glazing. These techniques can offer numerous advantages over traditional methods, such as spectrophotometers and radiometers, in the respects listed below.
  • Cost-effective and accessible UV detection: Digital imaging methods, particularly those utilizing cameras, have made UV detection more cost-effective and accessible [22,28]. High-resolution cameras show potential for measuring UV radiation transmitted through fenestration glazing, eliminating the need for expensive spectrophotometers or radiometers.
  • Non-invasive and real-time analysis: Digital imaging techniques provide a non-invasive approach to analysis and allow for real-time assessment of images [36,108]. This non-destructive approach enables more efficient monitoring and decision-making without damaging the glazing materials and also increases the attainable sample size of measurements [37].
  • Advanced image processing techniques: The development of advanced image processing algorithms and techniques will improve the accuracy and precision of UV radiation detection [121,133,142]. Techniques such as segmentation and pixel transformation equations provide better insight into the complex relationship between pixel values and UV radiation [2,139]; a compact sketch combining these elements follows this list.
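The end-to-end sketch below shows how these elements might be combined. It assumes a UV bandpass-filtered image of a glazing sample, uses Otsu thresholding as a stand-in for more sophisticated segmentation, and applies a hypothetical logarithmic calibration; the coefficients CAL_A and CAL_B are placeholders, not published values:

```python
import cv2
import numpy as np

# Placeholder calibration coefficients for pixel_value = CAL_A*ln(E) + CAL_B;
# they are NOT published values and must be fitted against reference data.
CAL_A, CAL_B = 20.0, 5.0

def estimate_uv_irradiance(image_path: str) -> float:
    """Estimate transmitted UV irradiance (W m^-2) from a UV-filtered image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Segment the glazing region with Otsu's threshold to form a mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mean_pixel = float(cv2.mean(gray, mask=mask)[0])
    # Invert the assumed logarithmic calibration to obtain irradiance.
    return float(np.exp((mean_pixel - CAL_B) / CAL_A))

print(estimate_uv_irradiance("glazing_uv.png"))   # hypothetical file name
```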

8. Conclusions

The detection of UV transmission in fenestration using digital imaging, UV photography, image processing, and computer vision techniques holds significant potential for improving the performance and energy efficiency of buildings. As the demand for sustainable and energy-efficient buildings continues to grow, the need for reliable and accurate methods of detecting UV radiation transmission through fenestration becomes increasingly important. In this context, the utilization of digital imaging and UV photography presents a promising avenue for the development of advanced inspection and analysis techniques in the fenestration and glazing industry. The application of image processing and computer vision methods, such as segmentation algorithms and neural network-based approaches, has been shown to be effective in various fields, including detecting defects in transparent materials such as glass. These techniques have the potential to greatly enhance the quality control processes in the manufacturing of fenestration and glazing solutions.
It is also essential to underscore that examples of image acquisition processes in this literature collectively illuminate the intricate interplay between the choice of camera, lens, processing software, and lighting conditions, all of which are critical to the success of digital imaging in academic research. The diversity of camera and software configurations adopted in these studies underlines the importance of the meticulous, domain-specific selection of imaging tools. It also points to the growing need for robust, reproducible methods that can overcome the inherent challenges posed by image acquisition, including unwanted modifications to the captured images and the consequential difficulties in image comparison and interpretation.
The use of digital imaging and UV photography in the fenestration industry has the potential to replace traditional spectrophotometer- and radiometer-based approaches. This transition could lead to more efficient, cost-effective, and accurate methods for detecting UV radiation transmission in fenestration systems. However, it is important to acknowledge that there are still challenges to be addressed in the implementation of these technologies in the fenestration and glazing industry. Further research is required to optimize and tailor the detection methods and approaches using digital imaging, UV photography, image processing, and computer vision for specific applications in the fenestration industry. This includes the development of algorithms and models that can accurately and efficiently detect UV radiation transmission in various types of fenestration systems. The broad spectrum of image segmentation techniques discussed, including region-based, watershed, and neural network-based methods, will prove instrumental in processing UV bandpass-filtered images of fenestration glazing. Further refinement of these methods will be crucial to address issues such as over-segmentation and to develop bespoke algorithms tailored to the unique characteristics and challenges posed by fenestration glazing. Further research will also be required to integrate these varied elements into a cohesive system, combining theoretical constructs, empirical investigation, and practical implementation. New experiments will need to be designed to assess the performance and accuracy of these methodologies, specifically in the context of fenestration glazing. This future direction, therefore, promises to usher in a new era in the application of UV bandpass-filtered imaging, offering profound implications for building design and energy efficiency.

Author Contributions

D.A.O.: Conceptualization, Visualization, Writing—original draft, Writing—review and editing. R.S.S.: Supervision, Resources, Methodology, Writing—review and editing. B.S.: Conceptualization, Resources, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the University of Florida Rinker School of Construction Management and the University of Florida Graduate School.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tuchinda, C.; Srivannaboon, S.; Lim, H.W. Photoprotection by Window Glass, Automobile Glass, and Sunglasses. J. Am. Acad. Dermatol. 2006, 54, 845–854. [Google Scholar] [CrossRef] [PubMed]
  2. Dawes, A.J.; Igoe, D.P.; Rummenie, K.J.; Parisi, A.V. Glass Transmitted Solar Irradiances on Horizontal and Sun-Normal Planes Evaluated with a Smartphone Camera. Measurement 2020, 153, 107410. [Google Scholar] [CrossRef] [Green Version]
  3. Parisi, A.V.; Sabburg, J.; Kimlin, M.G. Scattered and Filtered Solar UV Measurements; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; Volume 17. [Google Scholar] [CrossRef] [Green Version]
  4. Starzyńska-Grześ, M.B.; Roussel, R.; Jacoby, S.; Asadipour, A. Computer Vision-Based Analysis of Buildings and Built Environments: A Systematic Review of Current Approaches. ACM Comput. Surv. 2022, 55, 284. [Google Scholar] [CrossRef]
  5. Almutawa, F.; Buabbas, H. Photoprotection: Clothing and Glass. Dermatol. Clin. 2014, 32, 439–448. [Google Scholar] [CrossRef]
  6. Duarte, I.; Rotter, A.; Malvestiti, A.; Silva, M. The Role of Glass as a Barrier against the Transmission of Ultraviolet Radiation: An Experimental Study. Photodermatol. Photoimmunol. Photomed. 2009, 25, 181–184. [Google Scholar] [CrossRef] [PubMed]
  7. Reule, A.G. Errors in Spectrophotometry and Calibration Procedures to Avoid Them. J. Res. Natl. Bur. Stand A Phys. Chem. 1976, 80A, 609. [Google Scholar] [CrossRef]
  8. Heo, S.; Hwang, H.S.; Jeong, Y.; Na, K. Skin Protection Efficacy from UV Irradiation and Skin Penetration Property of Polysaccharide-Benzophenone Conjugates as a Sunscreen Agent. Carbohydr. Polym. 2018, 195, 534–541. [Google Scholar] [CrossRef]
  9. Diffey, B.L. Physics in Medicine & Biology Ultraviolet Radiation Physics and the Skin Ultraviolet Radiation Physics and the Skin. Phys. Med. Biol. 1980, 25, 405–426. [Google Scholar]
  10. Inanici, M.N. Evaluation of High Dynamic Range Photography as a Luminance Data Acquisition System. Light. Res. Technol. 2006, 38, 123–136. [Google Scholar] [CrossRef]
  11. Prabaharan, T.; Periasamy, P.; Mugendiran, V. Studies on Application of Image Processing in Various Fields: An Overview. IOP Conf. Ser. Mater. Sci. Eng. 2020, 961, 012006. [Google Scholar] [CrossRef]
  12. Valença, J.; Puente, I.; Júlio, E.; González-Jorge, H.; Arias-Sánchez, P. Assessment of Cracks on Concrete Bridges Using Image Processing Supported by Laser Scanning Survey. Constr. Build. Mater. 2017, 146, 668–678. [Google Scholar] [CrossRef]
  13. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  14. Giger, M.L.; Karssemeijer, N.; Armato, S.G. Computer-Aided Diagnosis in Medical Imaging. IEEE Trans. Med. Imaging 2001, 20, 1205–1207. [Google Scholar] [CrossRef] [Green Version]
  15. Farhang, S.H.; Rezaifar, O.; Sharbatdar, M.K.; Ahmadyfard, A. Evaluation of Different Methods of Machine Vision in Health Monitoring and Damage Detection of Structures. J. Rehabil. Civ. Eng. 2021, 9, 93–132. [Google Scholar] [CrossRef]
  16. Lecun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  17. Liu, H.G.; Chen, Y.P.; Peng, X.Q.; Xie, J.M. A Classification Method of Glass Defect Based on Multiresolution and Information Fusion. Int. J. Adv. Manuf. Technol. 2011, 56, 1079–1090. [Google Scholar] [CrossRef]
  18. Peng, X.; Chen, Y.; Yu, W.; Zhou, Z.; Sun, G. An Online Defects Inspection Method for Float Glass Fabrication Based on Machine Vision. Int. J. Adv. Manuf. Technol. 2008, 39, 1180–1189. [Google Scholar] [CrossRef]
  19. Yang, J.; Wang, W.; Lin, G.; Li, Q.; Sun, Y.; Sun, Y. Infrared Thermal Imaging-Based Crack Detection Using Deep Learning. IEEE Access 2019, 7, 182060–182077. [Google Scholar] [CrossRef]
  20. Igoe, D.; Parisi, A.V. Evaluation of a Smartphone Sensor to Broadband and Narrowband Ultraviolet A Radiation. Instrum. Sci. Technol. 2015, 43, 283–289. [Google Scholar] [CrossRef]
  21. Igoe, D.; Parisi, A.; Carter, B. Characterization of a Smartphone Camera’s Response to Ultraviolet A Radiation. Photochem. Photobiol. 2013, 89, 215–218. [Google Scholar] [CrossRef] [Green Version]
  22. Turner, J.; Parisi, A.V.; Igoe, D.P.; Amar, A. Detection of Ultraviolet B Radiation with Internal Smartphone Sensors. Instrum. Sci. Technol. 2017, 45, 618–638. [Google Scholar] [CrossRef]
  23. Igoe, D.P.; Parisi, A.; Carter, B. Smartphone Based Android App for Determining UVA Aerosol Optical Depth and Direct Solar Irradiances. Photochem. Photobiol. 2014, 90, 233–237. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Igoe, D.P.; Parisi, A.V.; Amar, A.; Rummenie, K.J. Median Filters as a Tool to Determine Dark Noise Thresholds in High Resolution Smartphone Image Sensors for Scientific Imaging. Rev. Sci. Instrum. 2018, 89, 015003. [Google Scholar] [CrossRef] [PubMed]
  25. Fung, C.H.; Wong, M.S. Improved Mobile Application for Measuring Aerosol Optical Thickness in the Ultraviolet—A Wavelength. IEEE Sens. J. 2016, 16, 2055–2059. [Google Scholar] [CrossRef]
  26. Tetley, C.; Young, S. Digital Infrared and Ultraviolet Imaging Part 2: Ultraviolet. J. Vis. Commun. Med. 2008, 31, 51–60. [Google Scholar] [CrossRef] [PubMed]
  27. Davies, A. Digital Ultraviolet and Infrared Photography; Taylor & Francis: Abingdon, UK, 2017. [Google Scholar]
  28. Mei, B.; Li, R.; Cheng, W.; Yu, J.; Cheng, X. Ultraviolet Radiation Measurement via Smart Devices. IEEE Internet Things J. 2017, 4, 934–944. [Google Scholar] [CrossRef]
  29. Prutchi, D. Exploring Ultraviolet Photography: Bee Vision, Forensic Imaging, and Other Near-Ultraviolet Adventures with Your DSLR; Amherst Media: Amherst, MA, USA, 2016; p. 127. [Google Scholar]
  30. Cucci, C.; Pillay, R.; Herkommer, A.; Crowther, J. Ultraviolet Fluorescence Photography—Choosing the Correct Filters for Imaging. J. Imaging 2022, 8, 162. [Google Scholar] [CrossRef]
  31. Chapman, G.H.; Thomas, R.; Thomas, R.; Koren, I.; Koren, Z. Improved Correction for Hot Pixels in Digital Imagers. In Proceedings of the 2014 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), Amsterdam, The Netherlands, 1–3 October 2014; pp. 116–121. [Google Scholar] [CrossRef]
  32. Cheng, Y.; Fang, C.; Yuan, J.; Zhu, L. Design and Application of a Smart Lighting System Based on Distributed Wireless Sensor Networks. Appl. Sci. 2020, 10, 8545. [Google Scholar] [CrossRef]
  33. Bigas, M.; Cabruja, E.; Forest, J.; Salvi, J. Review of CMOS Image Sensors. Microelectron. J. 2006, 37, 433–451. [Google Scholar] [CrossRef] [Green Version]
  34. Al-Mallahi, A.; Kataoka, T.; Okamoto, H.; Shibata, Y. Detection of Potato Tubers Using an Ultraviolet Imaging-Based Machine Vision System. Biosyst. Eng. 2010, 105, 257–265. [Google Scholar] [CrossRef]
  35. Maharlooei, M.; Sivarajan, S.; Bajwa, S.G.; Harmon, J.P.; Nowatzki, J. Detection of Soybean Aphids in a Greenhouse Using an Image Processing Technique. Comput. Electron. Agric. 2017, 132, 63–70. [Google Scholar] [CrossRef]
  36. Ramil, A.; López, A.J.; Pozo-Antonio, J.S.; Rivas, T. A Computer Vision System for Identification of Granite-Forming Minerals Based on RGB Data and Artificial Neural Networks. Measurement 2018, 117, 90–95. [Google Scholar] [CrossRef]
  37. Pedreschi, F.; León, J.; Mery, D.; Moyano, P. Development of a Computer Vision System to Measure the Color of Potato Chips. Food Res. Int. 2006, 39, 1092–1098. [Google Scholar] [CrossRef]
  38. Sena, D.G.; Pinto, F.A.C.; Queiroz, D.M.; Viana, P.A. Fall Armyworm Damaged Maize Plant Identification Using Digital Images. Biosyst. Eng. 2003, 85, 449–454. [Google Scholar] [CrossRef]
  39. Dyer, J.; Verri, G.; Cupitt, J. Multispectral Imaging in Reflectance and Photo-Induced Luminescence Modes: A User Manual; Academia: San Francisco, CA, USA, 2013. [Google Scholar]
  40. Zhang, L.; Zhang, L.; Wang, Y. Shape Optimization of Free-Form Buildings Based on Solar Radiation Gain and Space Efficiency Using a Multi-Objective Genetic Algorithm in the Severe Cold Zones of China. Sol. Energy 2016, 132, 38–50. [Google Scholar] [CrossRef]
  41. Girolami, A.; Napolitano, F.; Faraone, D.; Braghieri, A. Measurement of Meat Color Using a Computer Vision System. Meat Sci. 2013, 93, 111–118. [Google Scholar] [CrossRef]
  42. Mathai, A.; Guo, N.; Liu, D.; Wang, X. 3D Transparent Object Detection and Reconstruction Based on Passive Mode Single-Pixel Imaging. Sensors 2020, 20, 4211. [Google Scholar] [CrossRef]
  43. Turner, J.; Igoe, D.; Parisi, A.V.; McGonigle, A.J.; Amar, A.; Wainwright, L. A Review on the Ability of Smartphones to Detect Ultraviolet (UV) Radiation and Their Potential to Be Used in UV Research and for Public Education Purposes. Sci. Total Environ. 2020, 706, 135873. [Google Scholar] [CrossRef]
  44. De Oliveira, H.J.S.; de Almeida, P.L.; Sampaio, B.A.; Fernandes, J.P.A.; Pessoa-Neto, O.D.; de Lima, E.A.; de Almeida, L.F. A Handheld Smartphone-Controlled Spectrophotometer Based on Hue to Wavelength Conversion for Molecular Absorption and Emission Measurements. Sens. Actuators B Chem. 2017, 238, 1084–1091. [Google Scholar] [CrossRef]
  45. Azzazy, H.M.E.; Elbehery, A.H.A. Clinical Laboratory Data: Acquire, Analyze, Communicate, Liberate. Clin. Chim. Acta 2014, 438, 186–194. [Google Scholar] [CrossRef] [PubMed]
  46. Hiscocks, P.D. Measuring Camera Shutter Speed; Toronto Centre: Toronto, ON, Canada, 2010. [Google Scholar]
  47. Chapman, G.H.; Thomas, R.; Thomas, R.; Coelho, K.J.; Meneses, S.; Yang, T.Q.; Koren, I.; Koren, Z. Increases in Hot Pixel Development Rates for Small Digital Pixel Sizes. Electron. Imaging 2016, art00013. [Google Scholar] [CrossRef] [Green Version]
  48. Zhang, L.; Li, J.; Lin, L.; Du, Y.; Jin, Y. The Key Technology and Research Progress of CMOS Image Sensor. In Proceedings of the 2008 International Conference on Optical Instruments and Technology: Advanced Sensor Technologies and Applications, Beijing, China, 16–19 November 2008; Volume 7157, p. 71571B. [Google Scholar] [CrossRef]
  49. Nehir, M.; Frank, C.; Aßmann, S.; Achterberg, E.P. Improving Optical Measurements: Non-Linearity Compensation of Compact Charge-Coupled Device (CCD) Spectrometers. Sensors 2019, 19, 2833. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Jurkovic, J.; Korosec, M.; Kopac, J. New Approach in Tool Wear Measuring Technique Using CCD Vision System. Int. J. Mach. Tools Manuf. 2005, 45, 1023–1030. [Google Scholar] [CrossRef]
  51. Grimes, D.R. Ultraviolet Radiation Therapy and UVR Dose Models. Med. Phys. 2015, 42, 440–455. [Google Scholar] [CrossRef] [PubMed]
  52. Alala, B.; Mwangi, W.; Okeyo, G. Image Representation Using RGB Color Space. Int. J. Innov. Res. Dev. 2014, 3. [Google Scholar]
  53. Igoe, D.; Parisi, A.V. Broadband Direct UVA Irradiance Measurement for Clear Skies Evaluated Using a Smartphone. Radiat. Prot. Dosim. 2015, 167, 485–489. [Google Scholar] [CrossRef] [Green Version]
  54. Parisi, A.V.; Turnbull, D.J.; Kimlin, M.G. Dosimetric and Spectroradiometric Investigations of Glass-Filtered Solar UV. Photochem. Photobiol. 2007, 83, 777–781. [Google Scholar] [CrossRef]
  55. Wilkes, T.C.; McGonigle, A.J.S.; Pering, T.D.; Taggart, A.J.; White, B.S.; Bryant, R.G.; Willmott, J.R. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera. Sensors 2016, 16, 1649. [Google Scholar] [CrossRef] [Green Version]
  56. Tetley, C. The Photography of Bruises. J. Vis. Commun. Med. 2005, 28, 72–77. [Google Scholar] [CrossRef]
  57. Salman, J.; Gangishetty, M.K.; Rubio-Perez, B.E.; Feng, D.; Yu, Z.; Yang, Z.; Wan, C.; Frising, M.; Shahsafi, A.; Congreve, D.N.; et al. Passive Frequency Conversion of Ultraviolet Images into the Visible Using Perovskite Nanocrystals. J. Opt. 2021, 23, 054001. [Google Scholar] [CrossRef]
  58. Igoe, D.P.; Amar, A.; Parisi, A.V.; Turner, J. Characterisation of a Smartphone Image Sensor Response to Direct Solar 305 Nm Irradiation at High Air Masses. Sci. Total Environ. 2017, 587–588, 407–413. [Google Scholar] [CrossRef]
  59. Garcia, J.E.; Dyer, A.G.; Greentree, A.D.; Spring, G.; Wilksch, P.A. Linearisation of RGB Camera Responses for Quantitative Image Analysis of Visible and UV Photography: A Comparison of Two Techniques. PLoS ONE 2013, 8, e79534. [Google Scholar] [CrossRef] [Green Version]
  60. Turner, J.; Igoe, D.P.; Parisi, A.V.; Downs, N.J.; Amar, A. Beyond the Current Smartphone Application: Using Smartphone Hardware to Measure UV Radiation. In Proceedings of the UV Radiation: Effects on Human Health and the Environment, Wellington, New Zealand, 4–6 April 2018. [Google Scholar]
  61. Pratt, H.; Hassanin, K.; Troughton, L.D.; Czanner, G.; Zheng, Y.; McCormick, A.G.; Hamill, K.J. UV Imaging Reveals Facial Areas That Are Prone to Skin Cancer Are Disproportionately Missed during Sunscreen Application. PLoS ONE 2017, 12, e0185297. [Google Scholar] [CrossRef] [Green Version]
  62. Tamburello, G.; Aiuppa, A.; Kantzas, E.P.; McGonigle, A.J.S.; Ripepe, M. Passive vs. Active Degassing Modes at an Open-Vent Volcano (Stromboli, Italy). Earth Planet Sci. Lett. 2012, 359–360, 106–116. [Google Scholar] [CrossRef] [Green Version]
  63. Gibbons, F.X.; Gerrard, M.; Lane, D.J.; Mahler, H.I.M.; Kulik, J.A. Using UV Photography to Reduce Use of Tanning Booths: A Test of Cognitive Mediation. Health Psychol. 2005, 24, 358–363. [Google Scholar] [CrossRef]
  64. Wilkes, T.C.; Pering, T.D.; McGonigle, A.J.S. Semantic Segmentation of Explosive Volcanic Plumes through Deep Learning. Comput. Geosci. 2022, 168, 105216. [Google Scholar] [CrossRef]
  65. Salunkhe, A.A.; Gobinath, R.; Vinay, S.; Joseph, L. Progress and Trends in Image Processing Applications in Civil Engineering: Opportunities and Challenges. Adv. Civ. Eng. 2022, 2022, 6400254. [Google Scholar] [CrossRef]
  66. Sai, V.; Kothala, K. Use of Image Analysis as a Tool for Evaluating Various Construction Materials. Doctoral dissertation, Clemson University, Clemson, SC, USA, 2018. [Google Scholar]
  67. Masad, E.; Sivakumar, K. Advances in the Characterization and Modeling of Civil Engineering Materials Using Imaging Techniques. J. Comput. Civ. Eng. 2004, 18, 1. [Google Scholar] [CrossRef]
  68. Zhang, B.; Huang, W.; Li, J.; Zhao, C.; Fan, S.; Wu, J.; Liu, C. Principles, Developments and Applications of Computer Vision for External Quality Inspection of Fruits and Vegetables: A Review. Food Res. Int. 2014, 62, 326–343. [Google Scholar] [CrossRef]
  69. Burger, W.; Burge, M.J. Digital Image Processing; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar] [CrossRef]
  70. Lukinac, J.; Jukić, M.; Mastanjević, K.; Lučan, M. Application of Computer Vision and Image Analysis Method in Cheese-Quality Evaluation: A Review. Ukr. Food J. 2018, 7, 192–214. [Google Scholar] [CrossRef]
  71. Kheradmand, A.; Milanfar, P. Non-Linear Structure-Aware Image Sharpening with Difference of Smoothing Operators. Front. ICT 2015, 2, 22. [Google Scholar] [CrossRef] [Green Version]
  72. Polesel, A.; Ramponi, G.; Mathews, V.J. Image Enhancement via Adaptive Unsharp Masking. IEEE Trans. Image Process. 2000, 9, 505–510. [Google Scholar] [CrossRef] [Green Version]
  73. Atherton, T.J.; Kerbyson, D.J. Size Invariant Circle Detection. Image Vis. Comput. 1999, 17, 795–803. [Google Scholar] [CrossRef]
  74. Nakagomi, K.; Shimizu, A.; Kobatake, H.; Yakami, M.; Fujimoto, K.; Togashi, K. Multi-Shape Graph Cuts with Neighbor Prior Constraints and Its Application to Lung Segmentation from a Chest CT Volume. Med. Image Anal. 2013, 17, 62–77. [Google Scholar] [CrossRef] [PubMed]
  75. Bai, J.; Feng, X.C. Fractional-Order Anisotropic Diffusion for Image Denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502. [Google Scholar] [CrossRef]
  76. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision; Cengage Learning: Boston, MA, USA, 2008; p. 829. [Google Scholar]
  77. Crognale, M.; De Iuliis, M.; Rinaldi, C.; Gattulli, V. Damage Detection with Image Processing: A Comparative Study. Earthq. Eng. Eng. Vib. 2023, 22, 333–345. [Google Scholar] [CrossRef]
  78. Gijsenij, A.; Lu, R.; Gevers, T. Color Constancy for Multiple Light Sources. IEEE Trans. Image Process. 2012, 21, 697–707. [Google Scholar] [CrossRef] [PubMed]
  79. Zhang, Z.; Wang, Y.; Wang, K. Fault Diagnosis and Prognosis Using Wavelet Packet Decomposition, Fourier Transform and Artificial Neural Network. J. Intell. Manuf. 2013, 24, 1213–1227. [Google Scholar] [CrossRef]
  80. Luo, C.; Hao, Y.; Tong, Z. Research on Digital Image Processing Technology and Its Application. In Proceedings of the 2018 8th International Conference on Management, Education and Information (MEICI 2018), Shenyang, China, 21–23 September 2018; pp. 587–592. [Google Scholar] [CrossRef] [Green Version]
  81. Chang, L.-M.; Abdelrazig, Y.; Chen, P.-H. Optical Imaging Method for Bridge Painting Maintenance and Inspection; U.S. Department of Transportation: Washington, DC, USA, 2000.
  82. Kamboj, A.; Grewal, K.; Mittal, R. Color Edge Detection in RGB Color Space Using Automatic Threshold Detection. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 2012, 1. [Google Scholar]
  83. Eftekhar, P. Comparative Study of Edge Detection Algorithm. Doctoral dissertation, California State University, Northridge, Los Angeles, CA, USA, 2020. [Google Scholar]
  84. Ai, J.; Zhu, X. Analysis and Detection of Ceramic-Glass Surface Defects Based on Computer Vision. In Proceedings of the World Congress on Intelligent Control and Automation (WCICA), Shanghai, China, 10–14 June 2002; Volume 4, pp. 3014–3018. [Google Scholar] [CrossRef]
  85. Agrawal, S. Glass Defect Detection Techniques Using Digital Image Processing—A Review. Spec. Issues IP Multimed. Commun. 2011, 1, 65–67. [Google Scholar]
  86. Adamo, F.; Attivissimo, F.; Di Nisio, A.; Savino, M. A Low-Cost Inspection System for Online Defects Assessment in Satin Glass. Measurement 2009, 42, 1304–1311. [Google Scholar] [CrossRef]
  87. Awang, N.; Fauadi, M.H.F.M.; Rosli, N.S. Image Processing of Product Surface Defect Using Scilab. Appl. Mech. Mater. 2015, 789–790, 1223–1226. [Google Scholar] [CrossRef]
  88. Rosli, N.S.; Fauadi, M.H.F.M.; Awang, N. Some Technique for an Image of Defect in Inspection Process Based on Image Processing. J. Image Graph. 2016, 4, 55–58. [Google Scholar] [CrossRef] [Green Version]
  89. Kmec, J.; Pavlíček, P.; Šmíd, P. Optical Noncontact Method to Detect Amplitude Defects of Polymeric Objects. Polym. Test. 2022, 116, 107802. [Google Scholar] [CrossRef]
  90. Bandyopadhyay, Y. Glass Defect Detection and Sorting Using Computational Image Processing. Int. J. Emerg. Technol. Innov. Res. 2015, 2, 73–75. [Google Scholar]
  91. Zhu, Y.; Huang, C. An Improved Median Filtering Algorithm for Image Noise Reduction. Phys. Procedia 2012, 25, 609–616. [Google Scholar] [CrossRef] [Green Version]
  92. Shanmugavadivu, P.; Eliahim Jeevaraj, P.S. Laplace Equation Based Adaptive Median Filter for Highly Corrupted Images. In Proceedings of the 2012 International Conference on Computer Communication and Informatics, Coimbatore, India, 10–12 January 2012. [Google Scholar] [CrossRef]
  93. George, J. Automatic Inspection of Potential Flaws in Glass Based on Image Segmentation. IOSR J. Eng. 2013, 3, 20–24. [Google Scholar] [CrossRef]
  94. Baer, R.L. A Model for Dark Current Characterization and Simulation. In Proceedings of the Sensors, Cameras, and Systems for Scientific/Industrial Applications VII, San Jose, CA, USA, 15–19 January 2006; Volume 6068, pp. 37–48. [Google Scholar] [CrossRef]
  95. Pereira, E. dos S. Determining the Fixed Pattern Noise of a CMOS Sensor: Improving the Sensibility of Autonomous Star Trackers. J. Aerosp. Technol. Manag. 2013, 5, 217–222. [Google Scholar] [CrossRef] [Green Version]
  96. Chapman, G.H.; Thomas, R.; Koren, I.; Koren, Z. Hot Pixel Behavior as Pixel Size Reduces to 1 Micron. In Proceedings of the IS and T International Symposium on Electronic Imaging Science and Technology, Burlingame, CA, USA, 29 January–2 February 2017; pp. 39–45. [Google Scholar] [CrossRef]
  97. Guo, Z.Y.; Le, Z. Improved Adaptive Median Filter. In Proceedings of the 2014 10th International Conference on Computational Intelligence and Security, Kunming, China, 15–16 November 2014; pp. 44–46. [Google Scholar] [CrossRef]
  98. Patidar, P. Image De-Noising by Various Filters for Different Noise Sumit Srivastava. Int. J. Comput. Appl. 2010, 9, 975–8887. [Google Scholar]
  99. Gomez-Perez, S.L.; Haus, J.M.; Sheean, P.; Patel, B.; Mar, W.; Chaudhry, V.; McKeever, L.; Braunschweig, C. Measuring Abdominal Circumference and Skeletal Muscle from a Single Cross-Sectional Computed Tomography Image: A Step-by-Step Guide for Clinicians Using National Institutes of Health ImageJ. J. Parenter. Enter. Nutr. 2016, 40, 308–318. [Google Scholar] [CrossRef] [Green Version]
  100. Limpert, E.; Stahel, W.A.; Abbt, M. Log-Normal Distributions across the Sciences: Keys and Clues. Bioscience 2001, 51, 341–352. [Google Scholar]
  101. Andersson, A. Mechanisms for Log Normal Concentration Distributions in the Environment. Sci. Rep. 2021, 11, 16418. [Google Scholar] [CrossRef]
  102. Agam, G. Introduction to Programming with OpenCV. 2006. Available online: https://www.cs.cornell.edu/courses/cs4670/2010fa/projects/Introduction%20to%20Programming%20With%20OpenCV.pdf (accessed on 13 July 2023).
  103. Oliphant, T.E. Python for Scientific Computing. Comput. Sci. Eng. 2007, 9, 10–20. [Google Scholar] [CrossRef] [Green Version]
  104. Abraham, A.; Pedregosa, F.; Eickenberg, M.; Gervais, P.; Mueller, A.; Kossaifi, J.; Gramfort, A.; Thirion, B.; Varoquaux, G. Machine Learning for Neuroimaging with Scikit-Learn. Front. Neuroinform. 2014, 8, 14. [Google Scholar] [CrossRef] [Green Version]
  105. Géron, A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow. In Hands-On Machine Learning with R; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2017; p. 510. [Google Scholar]
  106. ATOMS: Image Processing and Computer Vision Toolbox Details. Available online: https://atoms.scilab.org/toolboxes/IPCV (accessed on 15 June 2023).
  107. Image Processing & Computer Vision|Scilab. Available online: https://www.scilab.org/software/atoms/image-processing-computer-vision (accessed on 15 June 2023).
  108. Wu, D.; Sun, D.W. Colour Measurements by Computer Vision for Food Quality Control—A Review. Trends Food Sci. Technol. 2013, 29, 5–20. [Google Scholar] [CrossRef]
  109. Cossio, M.; Cossio, M. The New Landscape of Diagnostic Imaging with the Incorporation of Computer Vision; IntechOpen: London, UK, 2023. [Google Scholar] [CrossRef]
  110. Mendoza, F.; Aguilera, J.M. Application of Image Analysis for Classification of Ripening Bananas. J. Food Sci. 2004, 69, E471–E477. [Google Scholar] [CrossRef]
  111. Brosnan, T.; Sun, D.W. Improving Quality Inspection of Food Products by Computer Vision––A Review. J. Food Eng. 2004, 61, 3–16. [Google Scholar] [CrossRef]
  112. Huo, Y.; Deng, R.; Liu, Q.; Fogo, A.B.; Yang, H. AI Applications in Renal Pathology. Kidney Int. 2021, 99, 1309–1320. [Google Scholar] [CrossRef]
  113. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; ISBN 9780262035613. [Google Scholar]
  114. Ramachandran, S.S.; George, J.; Skaria, S.; Varun, V.V. Using YOLO Based Deep Learning Network for Real Time Detection and Localization of Lung Nodules from Low Dose CT Scans. SPIE 2018, 10575, 105751I. [Google Scholar]
  115. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer Vision–ECCV 2016: Proceedings of the 14th European Conference; Amsterdam, The Netherlands, 11–14 October 2016, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar] [CrossRef] [Green Version]
  116. Dikbayir, H.S.; Ibrahim Bulbul, H. Deep Learning Based Vehicle Detection from Aerial Images. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020; pp. 956–960. [Google Scholar] [CrossRef]
  117. Chen, C.; Lu, J.; Zhou, M.; Yi, J.; Liao, M.; Gao, Z. A YOLOv3-Based Computer Vision System for Identification of Tea Buds and the Picking Point. Comput. Electron. Agric. 2022, 198, 107116. [Google Scholar] [CrossRef]
  118. Tian, D.; Han, Y.; Wang, B.; Guan, T.; Gu, H.; Wei, W. Review of Object Instance Segmentation Based on Deep Learning. J. Electron. Imaging 2021, 31, 041205. [Google Scholar] [CrossRef]
  119. Hafiz, A.M.; Bhat, G.M. A Survey on Instance Segmentation: State of the Art. Int. J. Multimed. Inf. Retr. 2020, 9, 171–189. [Google Scholar] [CrossRef]
  120. Xu, Y.; Nagahara, H.; Shimada, A.; Taniguchi, R.-I. TransCut: Transparent Object Segmentation from a Light-Field Image. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 13–16 December 2015. [Google Scholar]
  121. Mollazade, K.; Omid, M.; Tab, F.A.; Mohtasebi, S.S. Principles and Applications of Light Backscattering Imaging in Quality Evaluation of Agro-Food Products: A Review. Food Bioprocess Technol. 2012, 5, 1465–1485. [Google Scholar] [CrossRef]
  122. Hornberg, A. Handbook of Machine Vision; Wiley: Hoboken, NJ, USA, 2007; pp. 1–798. [Google Scholar] [CrossRef]
  123. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768. [Google Scholar] [CrossRef] [Green Version]
  124. Zhu, J.; Wu, A.; Wang, X.; Zhang, H. Identification of Grape Diseases Using Image Analysis and BP Neural Networks. Multimed. Tools Appl. 2020, 79, 14539–14551. [Google Scholar] [CrossRef]
  125. Sakshi; Kukreja, V. Image Segmentation Techniques: Statistical, Comprehensive, Semi-Automated Analysis and an Application Perspective Analysis of Mathematical Expressions. Arch. Comput. Methods Eng. 2022, 30, 457–495. [Google Scholar] [CrossRef]
  126. Qing, Z.; Ji, B.; Zude, M. Predicting Soluble Solid Content and Firmness in Apple Fruit by Means of Laser Light Backscattering Image Analysis. J. Food Eng. 2007, 82, 58–67. [Google Scholar] [CrossRef]
  127. Venkata Ravi Kumar, D.; Naga Satish, G.; Raghavendran, C.V. A Literature Study of Image Segmentation Techniques for Images. Int. J. Eng. Res. Technol. 2018, 4, 1–3. [Google Scholar]
  128. Dhingra, G.; Kumar, V.; Joshi, H.D. Study of Digital Image Processing Techniques for Leaf Disease Detection and Classification. Multimed. Tools Appl. 2018, 77, 19951–20000. [Google Scholar] [CrossRef]
  129. Sarma, R.; Gupta, Y.K. A Comparative Study of New and Existing Segmentation Techniques. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1022, 012027. [Google Scholar] [CrossRef]
  130. Song, Y.; Yan, H. Image Segmentation Techniques Overview. In Proceedings of the AMS 2017—Asia Modelling Symposium 2017 and 11th International Conference on Mathematical Modelling and Computer Simulation, Kota Kinabalu, Malaysia, 4–6 December 2018; pp. 103–107. [Google Scholar] [CrossRef]
  131. Mohammad, N.; Yusof, M.Y.P.M.; Ahmad, R.; Muad, A.M. Region-Based Segmentation and Classification of Mandibular First Molar Tooth Based on Demirjian’s Method. J. Phys. Conf. Ser. 2020, 1502, 012046. [Google Scholar] [CrossRef]
  132. Fan, Y.Y.; Li, W.J.; Wang, F. A Survey on Solar Image Segmentation Techniques. Adv. Mater. Res. 2014, 945–949, 1899–1902. [Google Scholar] [CrossRef]
  133. Stawiaski, J.; Decencière, E.; Bidault, F. Interactive Liver Tumor Segmentation Using Graph-Cuts and Watershed; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  134. Kapoor, L.; Thakur, S. A Survey on Brain Tumor Detection Using Image Processing Techniques. In Proceedings of the 2017 7th International Conference Confluence on Cloud Computing, Data Science and Engineering, Noida, India, 12–13 January 2017; pp. 582–585. [Google Scholar] [CrossRef]
  135. Wei, Y.; Zhang, K.; Ji, S. Simultaneous Road Surface and Centerline Extraction from Large-Scale Remote Sensing Images Using CNN-Based Segmentation and Tracing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8919–8931. [Google Scholar] [CrossRef]
  136. Perng, D.B.; Liu, H.W.; Chang, C.C. Automated SMD LED Inspection Using Machine Vision. Int. J. Adv. Manuf. Technol. 2011, 57, 1065–1077. [Google Scholar] [CrossRef]
  137. Li, D.; Liang, L.Q.; Zhang, W.J. Defect Inspection and Extraction of the Mobile Phone Cover Glass Based on the Principal Components Analysis. Int. J. Adv. Manuf. Technol. 2014, 73, 1605–1614. [Google Scholar] [CrossRef]
  138. Gayeski, N.; Stokes, E.; Andersen, M. Using Digital Cameras as Quasi-Spectral Radiometers to Study Complex Fenestration Systems. Light. Res. Technol. 2009, 41, 7–23. [Google Scholar] [CrossRef]
  139. Asada, N.; Amano, A.; Baba, M. Photometric Calibration of Zoom Lens Systems. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 1, pp. 186–190. [Google Scholar] [CrossRef]
  140. Holst, G. CCD Arrays, Cameras, and Displays, 2nd ed.; JCD Publishing: Winter Park, FL, USA, 1998. [Google Scholar]
  141. Debevec, P.E.; Malik, J. Recovering High Dynamic Range Radiance Maps from Photographs; Association for Computing Machinery: New York, NY, USA, 1997. [Google Scholar]
  142. Porter, J.N.; Miller, M.; Pietras, C.; Motell, C. Ship-Based Sun Photometer Measurements Using Microtops Sun Photometers. J. Atmos. Ocean. Technol. 2001, 18, 765–774. [Google Scholar] [CrossRef]
  143. Cabral, J.D.D.; de Araújo, S.A. An Intelligent Vision System for Detecting Defects in Glass Products for Packaging and Domestic Use. Int. J. Adv. Manuf. Technol. 2015, 77, 485–494. [Google Scholar] [CrossRef]
Figure 1. The co-occurrence of authors.
Figure 2. Network Clustering of keywords associated.
Figure 3. Increasing interest in digital imaging and the growth of research on ultraviolet photography based on keywords search on Scopus.
Figure 4. Components of UV imaging camera.
Figure 5. Level of application of cameras in UV detection.
Figure 6. Description of image analysis. Adapted from [66].
Figure 7. Prevalent image pre-processing techniques employed in the built environment.
Figure 9. General computer vision methods and corresponding processing levels.
Table 1. Digital Image Capture feature.

| Camera Type | Sensitivity | Lens | Saved Format | External Source | UV | Processing Software | Analysis File Format | Domain | Author |
|---|---|---|---|---|---|---|---|---|---|
| Nikon Coolpix 5400 | - | Fisheye lens | RadianceRGBE and LogLuv TIFF | Natural sunlight, warm yellow light, and MH and HPS lamp | N | Photosphere | RadianceRGBE | HDR imaging | [10] |
| Canon EOS 6D | ISO 6400 | Rayfact 105 mm f4.5 UV lens and a Micro Nikkor 105 mm f4 | RAW & JPEG | Y (xenon lamp) | Y | RawDigger x64 and Excel 2010 | RAW | - | [30] |
| Ultraviolet (UV) 1-CCD camera (Sony XC-EU50) | - | Pentax B2528-UV | RAW | N and National, PRF-500WB | Y | Pixel-segmented algorithms | RAW | Ultraviolet imaging-based machine vision system | [34] |
| Canon PowerShot A-80 | - | - | RAW | N and National, PRF-500WB | N | Pixel-segmented algorithms | RAW | Imaging-based machine vision system | [34] |
| Canon EOS Rebel T2i DSLR | - | - | - | 1000 W tungsten halogen lamp | N | MATLAB R2014a (MathWorks, Natick, MA, USA) | - | Agricultural | [35] |
| Nikon D100 DSLR | - | F Micro-Nikkor 60 mm f/2.8D lens | - | Two tungsten lamps | N | Customized MATLAB software | - | Granite-forming minerals | [36] |
| Canon PowerShot G3 (Canon, Japan) | - | - | TIFF | Four fluorescent lamps (Philips, Natural Daylight) | N | Canon Remote Capture Software (version 2.7.0) | TIFF | Agricultural | [37] |
| MS3100 (Duncan Technologies, Inc., CA, USA) | - | 7 mm focal-length lens with f-stop of 3.5 | - | Halogen lamp | - | MATLAB toolbox (MathWorks Inc., MA, USA) | - | Agricultural | [38] |
| Canon EOS 450D | 100 | EF-S 18–55 mm f/3.5–5.6 IS | RAW | Four fluorescent lamps (Philips Master Graphica TLD 965) | N | Adobe Photoshop CS3 for image analysis | RAW | Agricultural | [41] |
Table 2. Present use of a camera to detect UV Radiation.

| Device | Special Band Pass | Band Pass Type | Camera Settings | Application | Reference | Wavelength | Price ($) |
|---|---|---|---|---|---|---|---|
| Sony Xperia Z1 | Y | UG11 broadband transmission filter with the KG05 infrared blocking filter | 5248 × 3936 pixels (20.7 MP), exposure time of 0.125 s | Determine how thick optical materials affect the camera's ability to measure and monitor UVA light in places where direct illumination is blocked | [2] | - | 530 |
| LG L3 smartphone | Y | CVI Melles Griot | - | Evaluated the direct sun clear sky irradiances from narrowband direct sun smartphone-derived images | [20] | 340 and 380 nm | 123.58 |
| Samsung Galaxy SII (camera model GT-19100), iPhone 5, and Nokia Lumia 800 | - | - | Samsung f/2.6, exposure time 1/17; iPhone f/2.4, exposure time 1/15 or 1/16; Nokia f/2.2, exposure time 1/14 | Described how smartphone cameras react to ultraviolet B radiation and showed that they can sense this radiation without extra equipment | [22] | 280 to 320 nm | - |
| Canon EOS 6D | Y | LaLaU UV pass filter | Rayfact 105 mm f4.5 UV lens | Imaging of a vase under UV light | [30] | 320–400 nm | - |
| Samsung Galaxy 5 | Y | CVI Melles Griot | - | To characterize the ultraviolet A (UVA; 320–400 nm) response of a consumer complementary metal oxide semiconductor (CMOS)-based smartphone image sensor in a controlled laboratory environment | [21] | 380 and 340 nm | 70 |
| Sony Xperia Z1 | Y | Solar Light Inc. | 7.487 mm lens, 21 MP | To characterize the photobiologically important direct UVB solar irradiances at 305 nm in clear sky conditions at high air masses | [58] | 305 nm | 530 |
| DSLR (Canon EOS Rebel XTi 400D) | Y | Lifepixel | f/2.8, ISO 1800, shutter speed 1.2 s | To determine if skin cancer-prone facial regions are ineffectively covered | [61] | - | 899 |
| Alta U260 cameras | Y | Asahi Spectra 10 nm FWHM XBPA310 and XBPA330 | f25 mm, 16-bit, 512 × 512 pixels | Imaging the sulphur dioxide flux distribution of the fumarolic field of La Fossa crater | [62] | - | 5800 |
| Polaroid CU5, Faraghan Medical Camera Systems | Y | N/A | 35 mm single lens | Used UV photography to highlight the damage to facial skin caused by previous UV exposure | [63] | - | 39 |
| Raspberry Pi camera module | Y | UV transmissive AR-coated plano-convex lens | F9–12 mm, 10-bit images, at an initial resolution of 1392 × 1040 | Using low-cost UV cameras to measure how much sulphur dioxide comes out of volcanoes with UV light | [64] | 320 and 330 nm | 500 |
Table 4. Pixel Transformation Equation.

| Wavelengths | Camera Instrument | Irradiance Transformation Function | Illumination | Illuminance Transformation Function | Scene Radiation Function | Author |
|---|---|---|---|---|---|---|
| - | Nikon camera | - | Sun | L = k·((0.2127·R) + (0.7151·G) + (0.0722·B)) (cd/m²) | - | [10] |
| - | Smartphone | - | Sun | - | L_aeuv = 6T²/(155·x·t) | [28] |
| 305 nm, 312 nm, and 320 nm | Smartphone | ln I(smartphone) = ln(R·E²·cos⁴(SZA)) | Sun | - | - | [60] |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
