Communication

Automatic Counterfeit Currency Detection Using a Novel Snapshot Hyperspectral Imaging Algorithm

1
Department of Mechanical Engineering, Advanced Institute of Manufacturing with High Tech Innovations (AIM-HI) and Center for Innovative Research on Aging Society (CIRAS), National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
2
College of Management, National Sun Yat-sen University, 70 Lienhai Rd., Kaohsiung 80424, Taiwan
3
Medical Devices R&D Service Development, Metal Industries Research & Development Centre, 1001 Kaonan Highway, Nanzi District, Kaohsiung 80284, Taiwan
4
Ophthalmology, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st Rd., Lingya District, Kaohsiung 80284, Taiwan
5
Director of Technology Development, Hitspectra Intelligent Technology Co., Ltd., 4F, No.2, Fuxing 4th Rd., Qianzhen District, Kaohsiung 80661, Taiwan
*
Authors to whom correspondence should be addressed.
Sensors 2023, 23(4), 2026; https://doi.org/10.3390/s23042026
Submission received: 9 January 2023 / Revised: 7 February 2023 / Accepted: 8 February 2023 / Published: 10 February 2023
(This article belongs to the Special Issue Hyperspectral Imaging and Sensing)

Abstract

In this study, a snapshot-based hyperspectral imaging (HSI) algorithm that converts RGB images to HSI images is designed in the Raspberry Pi environment. A Windows-based Python application is also developed to control the Raspberry Pi camera and processor. The mean gray values (MGVs) of two distinct regions of interest (ROIs) are selected from three samples of 100 NTD Taiwanese currency notes and compared with those of three counterfeit 100 NTD notes. The results suggest that the currency notes can be easily differentiated on the basis of their MGVs at shorter wavelengths, between 400 nm and 500 nm, whereas the MGVs are similar at longer wavelengths. Moreover, if an ROI contains a security feature, the classification method is considerably more efficient. The key features of the module are its portability, low cost, lack of moving parts, and the fact that no image preprocessing is required.

1. Introduction

Counterfeit currency notes have been circulating rapidly in every economy in recent years [1,2]. In the last decade, more than 3.53 lakh (353,000) cases of banknote counterfeiting were reported [3]. In the USA alone, losses of more than 20 billion USD have been projected due to counterfeit checks [4]. With recent developments in counterfeiting technology, the human eye can no longer reliably differentiate between original and counterfeit banknotes [5].
To prevent the counterfeiting of banknotes, every monetary authority has its own security features. For example, the national central banks of the Eurosystem use micro-perforations [6]: in the hologram patch of the euro, the "€" symbol is formed by micro-perforations, or micro-holes. The Reserve Bank of India (RBI) uses the Omron anti-photocopying feature, which appears as yellow circles on either side of the text "Reserve Bank of India" [7]. In 1996, the USA added several security features, including a 3D security ribbon and a color-shifting bell in the ink [8]; when the note is tilted, the bell and the numeral "100" change their positions.
Even with many anti-counterfeiting techniques, the amount of counterfeit currency has still increased in recent years [5,9] due to technological advances in color printing, duplication, and scanning [10]. Current commercially available techniques use ultraviolet (UV) light to detect ink marks that are invisible to the human eye [5,11]. However, this process is slow and taxing. Unlike other techniques, it requires a human operator who knows exactly where each security feature is located on a banknote to verify authenticity, and it is thus prone to human error.
Although hyperspectral imaging (HSI) has previously been used in many other counterfeit detection protocols, it has not been widely studied for counterfeit currency detection [12,13,14,15]. In one study, non-destructive analysis and authentication of written and printed documents was performed with a VIS-HSI technique [16]. In another study, near-infrared hyperspectral imaging (NIR-HSI) was used to detect fraudulent documents [17]. HSI acquires the spectrum for each pixel in an image [18,19,20,21]. It has been used in many applications, such as cancer detection [22,23,24,25], air pollution monitoring [26,27], nanostructure identification [28,29,30,31], aerospace [32,33,34], food quality maintenance [35], verification [36,37,38], military [39], remote sensing [40,41,42], and agriculture [43].
Automatic recognition is a prerequisite for any counterfeit-currency detection method. In the past, many counterfeit-note detection techniques have been developed, including UV light detection [44], image processing [11,45,46], deep learning [47,48], bit-plane slicing [49], and artificial neural networks [50]. However, most of these techniques have remained within laboratories because they require complex preprocessing. Moreover, HSI is not yet employed in practical counterfeit currency detection. Therefore, in this study, an automatic counterfeit-currency detection application based on HSI is designed and developed.
In the current study, a portable, low-cost HSI module is designed to capture an image of a Republic of China (ROC) banknote and classify it by extracting regions of interest (ROIs) from the banknote and measuring their mean gray values (MGVs). Section 2 explains the security features of a Taiwanese banknote, the module designed in this study, and the HSI algorithm built to convert an RGB image into a spectral image. Section 3 describes the extracted ROIs and compares the MGVs of counterfeit and original banknotes. Section 4 discusses the advantages, limitations, and lessons learned from this study.

2. Materials and Methods

2.1. Security Features of a Taiwanese Banknote

The national currency of the ROC (also known as Taiwan), called the new Taiwan dollar (NTD), has eight security features, including intaglio printing, optically variable ink (OVI), watermarks, and latent texts, as shown in Figure 1 [51,52,53]. The left side of the Taiwanese 100 NTD banknote has watermarks of orchids and the Arabic numeral of the denomination [54]. In addition, when the note is tilted at an angle of 15°, the Arabic numerical value of the denomination appears to have two different shades. OVI is also used on the bottom left corner of the note, wherein the Arabic numerical value of the denomination appears in two distinct colors (green and purple) when viewed from different angles. Apart from these features, intaglio printing is also added at five separate locations. However, all the aforementioned security features are implemented on the same side of the note.

2.2. Experimental Setup

One of the challenges in designing a machine that can detect counterfeit banknotes by using HSI is that it must be portable and low-cost. Most hyperspectral and multispectral machines built thus far are fixed and expensive because they require components such as a spectrometer, an optical head, and multispectral or hyperspectral lighting systems [55,56,57,58,59]. The novelty of the current study is that all the expensive and heavy components are replaced with low-cost and portable units by developing an HSI algorithm that can convert an RGB image, captured by a digital camera, into a hyperspectral image.
In any optical detection and classification technique, lighting conditions must remain constant, specifically when measuring MGV. Therefore, the second challenge is to design a module such that lighting conditions remain constant while reducing all surrounding light. In such a case, the module must have its own lighting and processing units.
The schematic of the module designed to capture the image is presented in Figure 2. The whole module comprises only six components. A Raspberry Pi 4 Model B is used as the microprocessor, and a Raspberry Pi camera version 2 captures the image of the currency. The camera is connected to the Raspberry Pi through the mobile industry processor interface, camera serial interface, and camera pinout (see Supplementary Materials, Section S1 for all the components used in this study). An Adafruit thin-film transistor (TFT) touch-screen display, measuring 2.8 inches with 320 × 240 16-bit color pixels, is added to the module to control the Raspberry Pi. In addition, a 3000 K chip-on-board (COB) light-emitting diode (LED) strip is fixed along the inside border of the module to provide lighting. The COB LED strip has a uniform spectral response (no cyan dip) across the blue, green, and red spectra. However, the strip emits from discrete points rather than as an even area source; therefore, a white opal profile diffuser is used to distribute the light uniformly. The COB LED strip is also connected to an LED dimmer switch to adjust brightness. The final module is shown in Figure 3.

2.3. Visible Snapshot-Based RGB to HSI Conversion Algorithm

The goal of this research is to develop a snapshot-based VIS-HSI algorithm that can convert an RGB image, captured by a point-and-shoot image capturing system, into a VIS spectral image. To attain this goal, the relationship between the colors in an RGB image and their spectrometer measurements must be established. A Macbeth chart with 24 basic colors (red, green, blue, cyan, magenta, yellow, medium light human skin, medium dark human skin, blue sky, foliage, blue flower, bluish green, orange, purplish blue, moderate red, purple, yellow green, orange yellow, white, black, and four different shades of gray) is used as the target. The Macbeth chart was selected because it contains the colors most frequently and abundantly found in natural scenes. The Raspberry Pi camera was chosen for the calibration process because it has been widely used for target color calibration in recent years. The camera settings are fixed so that every captured image is saved in the sRGB (0–255) color space as an 8-bit JPG file. The values are first normalized to the compact range 0–1 and then translated into linearized RGB values by the gamma function. The linearized values are then converted into the CIE 1931 XYZ color space using the conversion matrix (M). The camera may also be subject to a variety of errors, including color shift, nonlinear response, dark current, and imperfect color separation of the filter (see Supplementary Materials, Section S3 for all the conversion formulas used in this study).
The individual conversion formulas to convert the 24-color patch image and 24-color patch reflectance spectrum data to XYZ color space on the Raspberry Pi camera side are shown in Equations (1)–(4).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = [M_A][T]\begin{bmatrix} f(R_{sRGB}) \\ f(G_{sRGB}) \\ f(B_{sRGB}) \end{bmatrix} \times 100, \quad 0 \le R_{sRGB}, G_{sRGB}, B_{sRGB} \le 1 \tag{1}$$

$$T = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \tag{2}$$

$$f(n) = \begin{cases} \left(\dfrac{n+0.055}{1.055}\right)^{2.4}, & n > 0.04045 \\ \dfrac{n}{12.92}, & \text{otherwise} \end{cases} \tag{3}$$

$$M_A = \begin{bmatrix} X_{SW}/X_{CW} & 0 & 0 \\ 0 & Y_{SW}/Y_{CW} & 0 \\ 0 & 0 & Z_{SW}/Z_{CW} \end{bmatrix} \tag{4}$$
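Equations (1)–(4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: it uses the standard sRGB-to-XYZ matrix, and the white points passed to the adaptation matrix M_A are caller-supplied assumptions (the identity is used when they are omitted).

```python
import numpy as np

# Standard sRGB-to-XYZ conversion matrix T of Equation (2) (D65 white).
T = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def gamma_expand(n):
    """Equation (3): sRGB gamma expansion, applied element-wise."""
    n = np.asarray(n, dtype=float)
    return np.where(n > 0.04045, ((n + 0.055) / 1.055) ** 2.4, n / 12.92)

def srgb_to_xyz(rgb8, white_src=None, white_cam=None):
    """Equations (1)-(4): 8-bit sRGB values -> CIE 1931 XYZ on a 0-100 scale.

    white_src/white_cam are the XYZ white points used to form the diagonal
    adaptation matrix M_A; when omitted, M_A is taken as the identity.
    """
    rgb = np.asarray(rgb8, dtype=float) / 255.0      # compact 0-1 range
    xyz = T @ gamma_expand(rgb) * 100.0              # linearize, then project
    if white_src is not None and white_cam is not None:
        M_A = np.diag(np.asarray(white_src, float) / np.asarray(white_cam, float))
        xyz = M_A @ xyz                              # Equation (4)
    return xyz

# A pure white pixel maps to the D65 white point (approx. 95.05, 100, 108.9).
print(srgb_to_xyz([255, 255, 255]))
```

As a sanity check, a saturated white pixel reproduces the D65 white point, since the rows of T sum to the white-point XYZ values.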
On the spectrometer side, to convert the reflection spectral data to XYZ color gamut space, Equations (5)–(8) are used.
$$X = k\int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} S(\lambda)\,R(\lambda)\,\bar{x}(\lambda)\,d\lambda \tag{5}$$

$$Y = k\int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} S(\lambda)\,R(\lambda)\,\bar{y}(\lambda)\,d\lambda \tag{6}$$

$$Z = k\int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} S(\lambda)\,R(\lambda)\,\bar{z}(\lambda)\,d\lambda \tag{7}$$

$$k = 100 \Big/ \int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} S(\lambda)\,\bar{y}(\lambda)\,d\lambda \tag{8}$$
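The spectrometer-side conversion of Equations (5)–(8) amounts to three weighted integrals normalized by the brightness ratio k. A minimal sketch using a simple trapezoidal rule and synthetic stand-in spectra (the real computation uses the measured illuminant, reflectances, and the tabulated CIE color-matching functions):

```python
import numpy as np

def _integrate(y, wavelengths):
    """Trapezoidal rule over an arbitrary wavelength grid."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(wavelengths, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(w)))

def spectrum_to_xyz(wavelengths, S, R, xbar, ybar, zbar):
    """Equations (5)-(8): XYZ from illuminant S(λ), reflectance R(λ),
    and the colour-matching functions, integrated over 400-700 nm."""
    k = 100.0 / _integrate(S * ybar, wavelengths)          # Equation (8)
    X = k * _integrate(S * R * xbar, wavelengths)          # Equation (5)
    Y = k * _integrate(S * R * ybar, wavelengths)          # Equation (6)
    Z = k * _integrate(S * R * zbar, wavelengths)          # Equation (7)
    return np.array([X, Y, Z])

# Toy data: flat illuminant and Gaussian stand-ins for the CMFs.
lam = np.arange(400.0, 701.0, 10.0)
S = np.ones_like(lam)
xbar = np.exp(-((lam - 600) / 50.0) ** 2)
ybar = np.exp(-((lam - 550) / 50.0) ** 2)
zbar = np.exp(-((lam - 450) / 50.0) ** 2)
print(spectrum_to_xyz(lam, S, np.ones_like(lam), xbar, ybar, zbar))
```

By construction of k, a perfect white reflector (R ≡ 1) always yields Y = 100, which is a convenient check on the implementation.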
The nonlinear response of the camera can be corrected by a third-order equation, and the nonlinear response correction variable is defined as VNon-linear, as shown in Equation (9).
$$V_{\mathrm{Non\text{-}linear}} = \begin{bmatrix} X^3 & Y^3 & Z^3 & X^2 & Y^2 & Z^2 & X & Y & Z & 1 \end{bmatrix}^T \tag{9}$$
In the dark current part of the camera, the dark current is usually a fixed value and does not change with the amount of incoming light, so a constant is given as the contribution of the dark current, and the dark current correction variable is defined as VDark, as shown in Equation (10).
$$V_{\mathrm{Dark}} = a \tag{10}$$
Finally, VColor is used as the base, as shown in Equation (11), and is multiplied by the nonlinear response correction of VNon-linear; the result is standardized within the third order to avoid excessive correction, and finally, VDark is added to obtain the variable matrix V as shown in Equation (12).
$$V_{\mathrm{Color}} = \begin{bmatrix} XYZ & XY & XZ & YZ & X & Y & Z \end{bmatrix}^T \tag{11}$$

$$V = \begin{bmatrix} X^3 & Y^3 & Z^3 & X^2Y & X^2Z & Y^2Z & XY^2 & XZ^2 & YZ^2 & XYZ & X^2 & Y^2 & Z^2 & XY & XZ & YZ & X & Y & Z & a \end{bmatrix}^T \tag{12}$$
As stated in Equations (13) and (14), these errors can be corrected using the variable matrix V, which yields the corrected X, Y, and Z values (XYZCorrect).
$$C = [XYZ_{\mathrm{Spectrum}}] \times \mathrm{pinv}(V) \tag{13}$$

$$[XYZ_{\mathrm{Correct}}] = C \times V \tag{14}$$
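Equations (9)–(14) amount to stacking the 20-term variable matrix V for the 24 patches and solving for the correction matrix C by pseudoinverse regression. A minimal structural sketch with synthetic stand-in data (in the actual study, the XYZ values come from the camera and the spectrometer):

```python
import numpy as np

def variable_matrix(xyz, a=1.0):
    """Equation (12): 20-term column for one colour patch, combining
    third-order, cross, square, linear, and dark-current (a) terms."""
    X, Y, Z = xyz
    return np.array([X**3, Y**3, Z**3,
                     X**2*Y, X**2*Z, Y**2*Z, X*Y**2, X*Z**2, Y*Z**2, X*Y*Z,
                     X**2, Y**2, Z**2,
                     X*Y, X*Z, Y*Z,
                     X, Y, Z, a])

# Synthetic stand-ins for the 24 Macbeth patches.
rng = np.random.default_rng(0)
xyz_camera = rng.uniform(0.1, 1.0, size=(24, 3))
V = np.column_stack([variable_matrix(p) for p in xyz_camera])   # 20 x 24
# Stand-in spectrometer ground truth: an affine distortion of the camera XYZs.
xyz_spectrum = (xyz_camera * 1.05 + 0.01).T                     # 3 x 24

C = xyz_spectrum @ np.linalg.pinv(V)                            # Equation (13)
xyz_correct = C @ V                                             # Equation (14)
rmse = np.sqrt(np.mean((xyz_correct - xyz_spectrum) ** 2))
print(rmse)  # near zero here, since the affine distortion lies in V's span
```

Because the synthetic distortion is affine and V contains the linear and constant terms, the regression recovers it essentially exactly; with real camera data, the residual RMSE (0.19 in this study) quantifies the remaining error.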
The same Macbeth chart, under a controlled lighting environment, is also measured with an Ocean Optics QE65000 spectrometer to obtain the reflectance spectra of the 24 colors. The brightness ratio (k) is obtained by standardizing the brightness directly from the Y value of the XYZ color spectrum (XYZSpectrum). In addition, XYZSpectrum is converted into the CIE 1931 XYZ color space. The construction process of this algorithm is illustrated in Figure 4.
The root-mean-square error (RMSE) between XYZSpectrum and XYZCorrect across all 24 colors is only 0.19, which is essentially negligible. In addition to the multiple regression analysis, a principal component analysis (PCA) is performed on the reflectance spectrum (RSpectrum) data acquired from the spectrometer to reduce its dimensionality and establish the relationship between XYZCorrect and RSpectrum. The first six principal components account for 99.64% of the variance in the data. In the regression analysis on XYZCorrect, VColor was selected as the independent variable because it contains all the feasible combinations of XYZ values. Using Equation (15), the analog spectrum (SSpectrum) of the 24-color chart is extracted and compared with RSpectrum. Before the color difference is calculated with CIE DE2000, XYZCorrect and XYZSpectrum must be converted from the XYZ color space to the CIELAB color space with the following formulas:
$$\begin{aligned} L^* &= 116\,f\!\left(\frac{Y}{Y_n}\right) - 16 \\ a^* &= 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right] \\ b^* &= 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right] \end{aligned}$$

$$f(n) = \begin{cases} n^{1/3}, & n > 0.008856 \\ 7.787\,n + 0.137931, & \text{otherwise} \end{cases}$$
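The XYZ-to-CIELAB conversion above translates directly into code. A small sketch, assuming a D65 reference white (the constant 0.137931 is 16/116):

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """XYZ -> CIELAB under a given reference white (D65 by default),
    following the two conversion formulas above."""
    def f(n):
        n = np.asarray(n, dtype=float)
        # Cube root above the 0.008856 threshold; linear segment below it.
        return np.where(n > 0.008856, np.cbrt(n), 7.787 * n + 16.0 / 116.0)
    X, Y, Z = np.asarray(xyz, dtype=float)
    Xn, Yn, Zn = white
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return float(L), float(a), float(b)

# The reference white itself maps to L* = 100, a* = 0, b* = 0.
print(xyz_to_lab((95.047, 100.0, 108.883)))
```

Both XYZCorrect and XYZSpectrum would be passed through such a function before the CIE DE2000 color difference is computed.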
The RMSE of each of the 24 colors in the Macbeth chart is calculated individually, and the average deviation is 0.75. This value indicates that the reproduced reflectance spectra are nearly indistinguishable from the measured spectra and that the colors are replicated accurately.
$$S_{\mathrm{Spectrum}}(380\text{–}700\,\mathrm{nm}) = [E_V][M][V_{\mathrm{Color}}] \tag{15}$$
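Equation (15) can be illustrated as follows, interpreting E_V as the matrix of the first six principal-component eigenvectors of the reflectance data and M as the regression matrix mapping V_Color to the PC scores. All data here are synthetic stand-ins and the spectra are mean-centered before PCA, so this is a structural sketch of the pipeline rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches, n_bands, n_pcs = 24, 321, 6      # 24 colours, 380-700 nm at 1 nm

# Stand-in reflectance spectra R_Spectrum (rows = patches).
R = rng.uniform(0.0, 1.0, size=(n_patches, n_bands))
mean = R.mean(axis=0)

# PCA via SVD of the mean-centred spectra.
U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
EV = Vt[:n_pcs].T                            # n_bands x 6 eigenvector matrix E_V
scores = (R - mean) @ EV                     # 24 x 6 principal-component scores
explained = (s[:n_pcs] ** 2).sum() / (s ** 2).sum()   # variance explained

# Regress the PC scores on a stand-in V_Color matrix (one column per patch),
# then apply Equation (15): S_Spectrum = E_V · M · V_Color (mean added back).
V_color = rng.uniform(0.0, 1.0, size=(10, n_patches))
M = scores.T @ np.linalg.pinv(V_color)       # 6 x 10 regression matrix
S = (EV @ M @ V_color).T + mean              # reconstructed analog spectra
```

With real data, the first six components capture 99.64% of the variance, so this six-dimensional reconstruction loses very little spectral information.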
The aforementioned method can be used to accurately reproduce a hyperspectral image from an RGB image captured by a Raspberry Pi camera, eliminating the need for expensive and heavy components, and making the module low-cost and portable.

2.4. Classification Method

In this study, the currency notes are classified as follows. First, the Raspberry Pi module, controlled through the Windows-based Python application, captures an image of the currency note with the Raspberry Pi camera, and an ROI is cropped from the note. Second, the wavelength at which the note should be analyzed is selected, and the MGV of the ROI at that wavelength is measured. Because the MGV is the average intensity of all the pixels in the ROI, the orientation or alignment of the ROI does not affect the output value. Based on the MGV, the currency note is classified as either counterfeit or original.
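The classification step reduces to computing the MGV of the cropped ROI and checking it against an interval. A minimal sketch (the interval bounds and pixel values here are hypothetical; in the study they come from the confidence intervals computed over genuine and counterfeit samples):

```python
import numpy as np

def mean_gray_value(roi):
    """MGV: average intensity over all pixels in the ROI; because it is a
    plain average, ROI orientation or alignment does not change it."""
    return float(np.mean(roi))

def classify(roi, lower, upper):
    """Label a note 'original' if its MGV falls in the interval learned
    from genuine samples at the chosen wavelength, else 'counterfeit'."""
    return "original" if lower <= mean_gray_value(roi) <= upper else "counterfeit"

roi = np.full((60, 100), 120, dtype=np.uint8)   # stand-in ROI at one wavelength
print(classify(roi, 110, 130))                  # -> original
# Rotating the ROI leaves the MGV, and hence the decision, unchanged.
assert mean_gray_value(np.rot90(roi)) == mean_gray_value(roi)
```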

3. Results

In this study, two ROIs are selected for further analysis. These ROIs are cropped from the original image by using a template-matching function. The first ROI is a symbol of a cherry blossom with a surface area of 1 cm². The second ROI is the number "1" found in the bottom left corner, as shown in Figure 5; it has a surface area of 0.336 cm². ROI 1 does not have any specific security feature, while ROI 2 is printed using OVI. Therefore, the sensitivity of the method developed in this study can be compared against the security features of the banknote (see Supplementary Materials, Section S2 for the entropy measurement of ROI 1).
Since this study represents a pilot study, three counterfeit and three original 100 NTD banknotes are selected for analysis. For both ROIs, the MGV is found in the visible band between 400 nm and 500 nm. The MGV is the average measure of brightness of every pixel in an image; it has been used in various image classification and measurement methods in the past [60,61,62,63,64]. Figure 6 shows the MGV of ROI 1, while Figure 7 shows the MGV of ROI 2 (see Supplementary Materials, Section S5 for the MGV mean of the duplicate and original samples).
Notably, even though the VIS-HSI algorithm can calculate MGVs up to 700 nm, all the original and counterfeit banknotes have similar MGVs beyond 500 nm (see Supplementary Materials, Figures S12 and S13 for MGVs in the range of 400–700 nm). Hence, the most suitable wavelength region for this application is between 400 nm and 500 nm. Moreover, the near-UV region is the most sensitive for detecting counterfeit banknotes, and since 400–500 nm corresponds to the violet and blue spectra, these bands are the most efficient. The average RMSE in this band is 8.008 in ROI 1 and 14.079 in ROI 2. Therefore, ROI 2 exhibits better classification sensitivity, because it carries security features such as OVI and intaglio printing, whereas ROI 1 does not have any specific security feature.
The 90% confidence interval (CI) is calculated around the average MGV of the original and the counterfeit banknotes in ROI 1, and the two groups are classified into two different classes. The 90% CI around the average of each class represents the range within which the MGV of a banknote belonging to that class will fall at that specific wavelength. Notably, in ROI 2, a 95% CI is calculated, and the original and counterfeit banknotes are still classified into two distinct classes at this confidence level.
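The per-class CI can be sketched with a normal approximation (the MGVs below are hypothetical illustrations; with only three samples per class, a t-based interval would be more conservative than the z-based one shown here):

```python
import numpy as np
from statistics import NormalDist

def confidence_interval(mgvs, level=0.90):
    """Normal-approximation CI around the class-average MGV at one
    wavelength (the study uses 90% for ROI 1 and 95% for ROI 2)."""
    mgvs = np.asarray(mgvs, dtype=float)
    z = NormalDist().inv_cdf(0.5 + level / 2.0)          # two-sided quantile
    half = z * mgvs.std(ddof=1) / np.sqrt(len(mgvs))     # half-width
    return mgvs.mean() - half, mgvs.mean() + half

# Hypothetical MGVs of three original notes at one wavelength:
lo, hi = confidence_interval([118.2, 121.5, 119.8], level=0.90)
# A test note is then assigned to the "original" class if lo <= MGV <= hi.
```

Raising the confidence level widens the interval, which is why the 95% CI used for ROI 2 is a stricter demonstration that the two classes remain separated.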
To capture the image of a banknote through the designed module, the Raspberry Pi web-camera interface is installed on the Raspberry Pi operating system. A Python-based Windows application is also developed to capture and analyze images. This application directly controls the Raspberry Pi camera: the "Start" and "Stop" buttons start and stop the live feed, and the "Capture" button captures an image. The narrowband wavelengths at which the banknote must be analyzed, along with the gain value specified for each wavelength, are given as inputs. Once the "Analyze" button is clicked, the template-matching function crops the selected ROI and converts it into the specified wavelength; the snapshot-based VIS-HSI algorithm developed in this study is applied to the image, and the MGV is measured. Finally, the banknote is classified as either original or counterfeit based on the CI value, and the results are displayed at the bottom of the application, as shown in Figure 8.

4. Conclusions

This study used three authentic polymer 100 NTD banknotes and three counterfeit versions of the same note as references for identifying and categorizing counterfeit banknotes. A module was designed and 3D printed to house a Raspberry Pi 4 Model B, a Raspberry Pi camera, a 2.8-inch TFT touch screen, and an LED strip. A snapshot-based VIS-HSI algorithm was developed to transform an RGB image, taken by the Raspberry Pi camera inside the module, into a hyperspectral image. Two ROIs were chosen from the banknotes, and their mean gray values (MGVs) were determined. The 90% confidence interval (CI) was calculated around the MGVs of both classes, and the samples were categorized based on the range provided by this CI. In addition, a Python-based Windows application, which manages the Raspberry Pi web-camera interface, was developed to capture the image of the banknote and analyze the sample. The findings indicated that the MGVs of the different classes fell within the ranges of their respective 90% CIs. The key novelty of this study is the use of HSI to detect counterfeit currency by converting an RGB image to an HSI image, without any moving components or an expensive spectral imager. This work can be expanded by considering additional polymer banknotes as well as different ROIs. In addition, the reflectance spectra collected in this way allow for the construction of a spectral library.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/s23042026/s1: Figures S1–S7. Entropy of two original and two duplicate samples in ROI 1 at wavelengths of 400, 450, 500, 550, 600, 650, and 700 nm, respectively; in each figure: Duplicate sample 1 (top left), Duplicate sample 2 (top right), Original sample 1 (bottom left), Original sample 2 (bottom right); Figure S8. Spectral reflectance of the three original and three duplicate banknotes between 400 nm and 700 nm in ROI 1; Figure S9. Spectral reflectance of the three original and three duplicate banknotes between 400 nm and 700 nm in ROI 2; Figure S10. Mean MGVs of the original and duplicate samples in ROI 1; Figure S11. Mean MGVs of the original and duplicate samples in ROI 2; Figure S12. Mean MGVs of the original and duplicate samples in ROI 1 (400–700 nm); Figure S13. Mean MGVs of the original and duplicate samples in ROI 2 (400–700 nm); Table S1. Components used to build the module; Table S2. Instrument specifications; Table S3. MGVs of the RGB image.

Author Contributions

Conceptualization, H.-C.W. and A.M.; data curation, Y.-M.T., W.-M.C. and A.M.; formal analysis, A.M.; funding acquisition, F.-C.L. and H.-C.W.; investigation, Y.-M.T., A.M. and W.-M.C.; methodology, H.-C.W. and A.M.; project administration, Y.-M.T., A.M. and H.-C.W.; resources, H.-C.W. and A.M.; software, Y.-M.T., W.-M.C. and A.M.; supervision, Y.-M.T. and H.-C.W.; validation, Y.-M.T. and A.M.; writing—original draft, A.M.; writing—review and editing, A.M. and H.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and Technology, the Republic of China under the grants MOST 111-2221-E-194-007. This work was partially financially supported by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) and the Center for Innovative Research on Aging Society (CIRAS) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE), and Kaohsiung Armed Forces General Hospital research project MAB109-097 in Taiwan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arya, S.; Sasikumar, M. Fake currency detection. In Proceedings of the 2019 International Conference on Recent Advances in Energy-Efficient Computing and Communication (ICRAECC), Nagercoil, India, 7–8 March 2019; pp. 1–4. [Google Scholar]
  2. Alnowaini, G.; Alttal, A.; Alabsi, A. Design and development SST Prototype for Yemeni paper currency. In Proceedings of the 2021 International Conference of Technology, Science and Administration (ICTSA), Taiz, Yemen, 22–24 March 2021; pp. 1–7. [Google Scholar]
  3. Kamble, K.; Bhansali, A.; Satalgaonkar, P.; Alagundgi, S. Counterfeit Currency Detection using Deep Convolutional Neural Network. In Proceedings of the 2019 IEEE Pune Section International Conference (PuneCon), Pune, India, 18–20 December 2019; pp. 1–4. [Google Scholar]
  4. Quercioli, E.; Smith, L. The economics of counterfeiting. Econometrica 2015, 83, 1211–1236. [Google Scholar] [CrossRef]
  5. Zhang, Q.; Yan, W.Q. Currency detection and recognition based on deep learning. In Proceedings of the 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 27–30 November 2018; pp. 1–6. [Google Scholar]
  6. Guedes, A.; Algarra, M.; Prieto, A.C.; Valentim, B.; Hortelano, V.; Neto, S.; Algarra, R.; Noronha, F. Raman microspectroscopy of genuine and fake euro banknotes. Spectrosc. Lett. 2013, 46, 569–576. [Google Scholar] [CrossRef]
  7. Upadhyaya, A.; Shokeen, V.; Srivastava, G. Counterfeit Currency Detection Techniques. In Proceedings of the 2018 8th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 11–12 January 2018; pp. 394–398. [Google Scholar]
  8. Baldwin, C.D. Til Death Do Us Part: Filthy Rich; Strategic Book Publishing & Rights Agency: Vero Beach, FL, USA, 2016. [Google Scholar]
  9. Upadhyaya, A.; Shokeen, V.; Srivastava, G. Analysis of Counterfeit Currency Detection Techniques for Classification Model. In Proceedings of the 2018 4th International Conference on Computing Communication and Automation (ICCCA), Noida, India, 14–15 December 2018; pp. 1–6. [Google Scholar]
  10. Alnowaini, G.; Alabsi, A.; Ali, H. Yemeni Paper Currency Detection System. In Proceedings of the 2019 First International Conference of Intelligent Computing and Engineering (ICOICE), Hadhramout, Yemen, 15–16 December 2019; pp. 1–7. [Google Scholar]
  11. Alekhya, D.; Prabha, G.D.S.; Rao, G.V.D. Fake currency detection using image processing and other standard methods. Int. J. Res. Comput. Commun. Technol. 2014, 3, 128–131. [Google Scholar]
  12. Lim, H.-T.; Matham, M.V. Instrumentation challenges of a pushbroom hyperspectral imaging system for currency counterfeit applications. In Proceedings of the International Conference on Optical and Photonic Engineering (icOPEN 2015), Singapore, 14–16 April 2015; pp. 658–664. [Google Scholar]
  13. Rathee, N.; Kadian, A.; Sachdeva, R.; Dalel, V.; Jaie, Y. Feature fusion for fake Indian currency detection. In Proceedings of the 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; pp. 1265–1270. [Google Scholar]
Figure 1. Security features of the 100 NTD banknote.
Figure 2. Schematic of the HSI system.
Figure 3. (a) Front view and (b) side view of the 3D printed design.
Figure 4. Snapshot-based VIS-HSI algorithm.
Figure 5. ROI selection: (a) cherry blossom symbol and (b) Arabic numeral “1”.
Figure 6. MGVs of the duplicate and original samples in ROI 1.
Figure 7. MGVs of the duplicate and original samples in ROI 2.
Figure 8. Python-based Windows application for detecting counterfeit currency.
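The per-band MGV comparison shown in Figures 6 and 7 can be sketched as follows. This is a minimal illustration, not the paper's actual code: it assumes the converted hyperspectral cube is a NumPy array shaped (bands, rows, cols) and that ROIs are axis-aligned rectangles; the function names are hypothetical.

```python
import numpy as np

def mean_gray_value(band_image, roi):
    """Mean gray value (MGV) of a rectangular ROI in one spectral band.

    band_image: 2-D array of gray levels for a single wavelength band.
    roi: (row0, row1, col0, col1) bounds, half-open intervals.
    """
    r0, r1, c0, c1 = roi
    return float(band_image[r0:r1, c0:c1].mean())

def mgv_spectrum(cube, roi):
    """MGV of the same ROI in every band of a cube shaped
    (bands, rows, cols); returns one value per wavelength."""
    return [mean_gray_value(band, roi) for band in cube]

# Toy cube: three 8x8 bands with constant gray levels 10, 20, 30.
cube = np.stack([np.full((8, 8), 10.0 * k) for k in range(1, 4)])
print(mgv_spectrum(cube, (2, 6, 2, 6)))  # [10.0, 20.0, 30.0]
```

In the study's workflow, two such MGV spectra (one from a genuine note, one from a suspect note) would be compared band by band; per the results, the curves diverge most clearly in the 400–500 nm range.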

Mukundan, A.; Tsao, Y.-M.; Cheng, W.-M.; Lin, F.-C.; Wang, H.-C. Automatic Counterfeit Currency Detection Using a Novel Snapshot Hyperspectral Imaging Algorithm. Sensors 2023, 23, 2026. https://doi.org/10.3390/s23042026