
Performance Evaluation of a Biometric System Based on Acoustic Images

by Alberto Izquierdo-Fuente *, Lara Del Val, María I. Jiménez and Juan J. Villacorta

Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática, Universidad de Valladolid, Paseo Belén 15, 47011 Valladolid, Spain

* Author to whom correspondence should be addressed.
Sensors 2011, 11(10), 9499-9519; https://doi.org/10.3390/s111009499
Submission received: 16 September 2011 / Accepted: 26 September 2011 / Published: 10 October 2011
(This article belongs to the Section Physical Sensors)

Abstract

An acoustic electronic scanning array for acquiring images of a person for use in a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. The working frequencies have been chosen on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all the images acquired, are evaluated and compared in a mean square error sense. Finally, system performance is evaluated using the False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve. On the basis of the obtained results, this system could be used for biometric applications.

1. Introduction

Biometric identification [1–3] is a subject of active research, where new algorithms and sensors are continually being developed. The most widely used identification systems are based on fingerprints, hand geometry, the retina, the face, the voice, veins, the signature, etc. The fusion of information from multiple biometric systems is also improving the performance of identification and verification systems [4].

Radar-based systems require expensive hardware and can be unreliable due to the very low intensity of reflections from humans. Acoustic imaging provides a simple and cheap sensor alternative that yields precise range and angular information. Specifically, in the acoustic field there are two accurate and reliable approaches to target classification:

  • Animal echolocation, performed by mammals such as bats, whales and dolphins, where Nature has developed specific waveforms for each type of task [5,6], such as the classification of different types of flowers [7].

  • Acoustic signatures used in passive sonar systems [8,9], which analyze the signal received from a target in the time-frequency domain.

Few papers address acoustic imaging in air for the detection of human beings. Moebus and Zoubir [10,11] worked in the ultrasonic band (50 kHz) using a 2D array and beamforming in reception. They analyzed solid objects (poles and a cuboid on a pedestal) in their first work and human images more recently. They showed that humans have a distinct acoustic signature and proposed modeling the echoes from the reflecting parts of objects in the scene by a Gaussian mixture model. Based on the parameters of this model, a detector can be designed to discriminate between persons and non-person objects.

In previous works, the authors of this paper developed multisensor surveillance and tracking systems based on acoustic arrays and image sensors [12,13]. After an exhaustive search of the literature, we have not found any papers on acoustic imaging in air for biometric verification of humans. Consequently, we launched a new line of research to develop a novel biometric system based on acoustic images acquired with electronic scanning arrays. Humans are acoustically scanned by an active system working from 6 to 12 kHz (audio band), which registers acoustic images. Based on these images, the system can identify people using a previously acquired database of acoustic images.

Assuming a plane wave x(t) with direction of arrival θ, and an array with N sensors separated by a distance d, the signal xn received at each sensor is a phase-shifted replica of x(t). A beamformer linearly combines the signals xn, previously multiplied by complex weights wn, to obtain an output signal y(t). Figure 1 shows the structure of a beamformer.

By appropriately selecting the weights, it is possible to generate a narrow beam steered towards a given direction, called the steering angle, and therefore to implement an electronic scanning array [14,15]. The spatial response of a beamformer is called the array factor, and its graphical representation is the beampattern. Figure 2 shows the beampattern of an array with 8 λ/2-equispaced sensors, for a steering angle of 0°.
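As a minimal illustration, the following numpy sketch computes the beampattern of such a phase-steered ULA. The 8 kHz frequency is an arbitrary assumption: with λ/2 spacing the pattern shape is the same at any frequency, since the spacing scales with the wavelength.

```python
import numpy as np

def beampattern_db(n_sensors, spacing, freq, steer_deg, c=340.0):
    """Beampattern (dB) of a ULA with phase-only steering weights.

    Each sensor is weighted by the conjugate of the steering-vector
    phase, so a plane wave arriving from steer_deg adds coherently.
    """
    angles = np.linspace(-90.0, 90.0, 721)
    k = 2 * np.pi * freq / c                       # wavenumber [rad/m]
    pos = np.arange(n_sensors) * spacing           # sensor positions [m]
    w = np.exp(-1j * k * pos * np.sin(np.radians(steer_deg)))
    sv = np.exp(1j * k * np.outer(pos, np.sin(np.radians(angles))))
    af = np.abs(w @ sv) / n_sensors                # normalized array factor
    return angles, 20 * np.log10(np.maximum(af, 1e-6))

# 8 sensors at lambda/2 spacing, steered to 0 deg, as in Figure 2
freq = 8000.0                                      # assumed frequency
angles, bp = beampattern_db(8, 0.5 * 340.0 / freq, freq, steer_deg=0.0)
```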

The proposed system uses beamforming with a linear tweeter array in transmission and a linear microphone array in reception. In this paper, Section 2 describes the system, including the hardware architecture and a functional description. Section 3 derives the system parameters and characterizes the acoustic array sensor for these parameters. Section 4 describes the definition and extraction of acoustic profiles, and Section 5 tests these images for biometric applications, defining a metric based on the mean square error and presenting the obtained FMR/FNMR parameters and ROC curve. Finally, Section 6 presents our conclusions.

2. Description of the System

2.1. Functional Description

Based on basic radar/sonar principles [17,18], an acoustic sound detection and ranging system for biometric identification is proposed, according to the block diagram in Figure 3.

The manager controls all subsystems, performing three main tasks: (i) person scanning and detection, (ii) acoustic image acquisition and (iii) person identification based on a database of acoustic images.

The following system parameters can be defined:

  • A scanning area in azimuth: [θmin – θmax]

  • A scanning area in range: [R1 – R2]

  • A collection of steering angles: θ1, θ2 … θM

  • Frequency f and pulse length T

For each steering angle, the system performs the following tasks:

  • Transmission

    • For each sensor of the array, a sinusoidal pulse sequence with frequency f, phase ϕi and length T is generated.

    • Transmission beamforming for steering angle θi is done.

    • Sequences are sent to the D/A converter.

    • Signals are amplified and drive the tweeters of the TX array.

  • Reception

    • Signals from microphones of RX array are preamplified.

    • The A/D converter samples the preamplified signals.

    • A digital bandpass filter with central frequency f is implemented.

    • Phase and quadrature components are obtained.

    • Reception beamforming for steering angle θi is done.

    • Signal envelope is obtained.

    • Signal is filtered with a matched filter.

    • Signal is assembled in a two-dimensional array.

After processing M steering angles, there is a two-dimensional array that represents the acoustic image, as shown in Figure 4.
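A compact sketch of the transmission and reception steps listed above, for one steering angle, might look as follows. The filter orders and bandwidths, the rectangular matched filter and the data layout are assumptions not specified in the text, so this is an illustration of the chain rather than the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f0, T, c = 32_000, 8_000, 2e-3, 340.0  # sample rate, tone, pulse, sound speed

def tx_signals(steer_rad, pos):
    """TX beamforming: the same tone burst per tweeter, phase-shifted
    so the transmitted beam points at steer_rad."""
    t = np.arange(int(T * fs)) / fs
    k = 2 * np.pi * f0 / c
    phases = -k * pos * np.sin(steer_rad)
    return np.sin(2 * np.pi * f0 * t[None, :] + phases[:, None])

def rx_chain(x, steer_rad, pos):
    """Reception steps for one steering angle.

    x:   (n_sensors, n_samples) raw microphone signals
    pos: (n_sensors,) sensor positions [m]
    Returns the matched-filtered envelope (one azimuth column of the image).
    """
    # 1. Bandpass filter centered on f0 (bandwidth is an assumption)
    b, a = butter(4, [(f0 - 1500) / (fs / 2), (f0 + 1500) / (fs / 2)], "band")
    x = filtfilt(b, a, x, axis=1)
    # 2. Phase/quadrature components: mix to complex baseband, then lowpass
    t = np.arange(x.shape[1]) / fs
    xb = x * np.exp(-2j * np.pi * f0 * t)
    bl, al = butter(4, 2000 / (fs / 2), "low")
    xb = filtfilt(bl, al, xb, axis=1)
    # 3. Reception beamforming: phase-align sensors toward steer_rad
    k = 2 * np.pi * f0 / c
    w = np.exp(-1j * k * pos * np.sin(steer_rad))
    y = w @ xb
    # 4. Envelope, then matched filter (rectangular pulse of length T)
    env = np.abs(y)
    return np.convolve(env, np.ones(int(T * fs)), mode="same")

# Repeating rx_chain over the M steering angles and stacking the outputs
# column-by-column yields the two-dimensional acoustic image of Figure 4.
```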

An application has been developed that runs in a distributed way on two processing hardware platforms: a PC and a DSP. The PC software has been developed in Visual C++, and the DSP software in C++, using the “Malibu” DSP library from Innovative Integration.

Acquisition, filtering and beamforming are implemented on the DSP, and management, storage of images, biometric algorithms and user interface are implemented on the PC. Figure 5 shows, in light gray, the functions implemented on the DSP and, in dark gray, the functions implemented on the PC.

The application software has four operation modes: channel calibration, surveillance, image acquisition and biometric identification.

  • (1) Channel calibration

Experimental measurements have revealed that phase-shift and gain errors appear due to small differences in the electronic components of the analog processing chain. Beamforming requires all channels to have the same gain and phase, so a calibration procedure has been established to compensate the gain and phase-shift errors in each channel [19]. The acoustic array uses a reference microphone and a reference speaker placed in front of the arrays. The calibration algorithm uses the microphone to calibrate the speakers of the TX array and the speaker to calibrate the microphones of the RX array. Using the reciprocal sensor as a common reference, gain and phase-shift errors are calculated and applied in beamforming.
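The published procedure is detailed in [19]. Purely as an illustration of the idea (not the authors' algorithm), per-channel complex gains can be estimated from recordings of a single reference tone, as in this sketch:

```python
import numpy as np

def correction_weights(measured, reference, fs, f0):
    """Estimate per-channel gain/phase errors relative to a reference chain.

    measured:  (n_channels, n_samples) recordings of the same f0 tone,
               one row per channel under calibration
    reference: (n_samples,) recording of the tone by the reference sensor
    Returns complex weights that equalize gain and phase across channels.
    """
    t = np.arange(measured.shape[1]) / fs
    lo = np.exp(-2j * np.pi * f0 * t)       # complex local oscillator at f0
    resp = (measured * lo).mean(axis=1)     # complex response of each channel
    ref = (reference * lo).mean()           # response of the reference chain
    errors = resp / ref                     # gain and phase-shift error
    return 1.0 / errors                     # apply these in the beamformer
```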

  • (2) Surveillance

In this mode, the system can detect and estimate the position of the targets present in the chamber, visualizing an acoustic image.

  • (3) Image acquisition

In this mode, the system captures the acoustic image of a person for a predefined set of frequencies and positions.

  • (4) Biometric identification

First, the system acquires the acoustic images for a set of N individuals, which are stored in a database.

Then, for the person under analysis, the system acquires the acoustic images and compares them with the images stored in the database, performing the biometric identification.

2.2. Hardware Architecture

The biometric system has four elements:

  • A computer with a real-time acquisition system for 16 channels.

  • A preamplifier and amplifier system.

  • A transmission array (TX array) and a reception array (RX array).

  • An acoustic anechoic chamber.

Figure 6 shows a block diagram of the system and the interconnection between its elements.

2.2.1. Personal Computer with Data Acquisition and Signal Processing Subsystems

These subsystems are based on a PC with a Pentium processor, which houses an Innovative Integration M6713 card, shown in Figure 7, with a C6713 DSP running at 300 MHz, a 1.5M-gate Xilinx Spartan-3 FPGA and two Omnibus I/O daughter-card sites. This card is designed for radar, sonar and sodar applications.

The M6713 hosts an Omnibus SD16 module, shown in Figure 8, which provides 16 channels of 18-bit, 48 kHz sigma-delta A/D and D/A converters. Each input channel employs sigma-delta modulation with 384× oversampling, providing highly effective digital anti-alias filtering. The output channels include an 8× interpolation filter, digital attenuation and de-emphasis.

2.2.2. Preamplifier and Amplifier Systems

This subsystem is formed by two SM PRO AUDIO EP84 multichannel microphone preamplifiers and eight T.AMP S75 amplifiers, installed in a 48U 19″ rack, as shown in Figure 9.

The SM PRO AUDIO EP84 preamplifier features eight channels, independent variable gain control, −60 dB microphone sensitivity, XLR and TRS inputs and outputs, 48 V phantom power per channel, 20 Hz–20 kHz bandwidth, a −20 dB pad, phase reverse and low-cut filters. The EP84 is shown in Figure 10.

The T.AMP S75 amplifier, shown in Figure 11, features two channels, 75 W into 4 Ω, soft start, and balanced XLR and jack inputs.

2.2.3. Transmitter and Receiver Arrays

This subsystem consists of a receiver uniform linear array (ULA) with 15 omnidirectional BEHRINGER C2 studio microphones and a transmitter ULA with 15 HPC tweeters, as shown in Figure 12.

2.2.4. Acoustic Anechoic Chamber

The acoustic anechoic chamber has a 5 × 3 × 2.5 m working area. To eliminate reflections, every surface of the chamber is covered with absorbent material: 50 × 50 × 170 mm pyramidal foam wedges. The chamber has been designed for a 500 Hz cut-off frequency and is shown in Figure 13.

3. Acoustic Array Sensor

3.1. Spatial Aperture Selection

In the design process of the spatial aperture (length) of the transmission and reception ULAs, the following parameters must be considered: angular resolution, frequency band, angular excursion and transducer diameter. If the array spatial aperture is increased, the angular resolution improves; however, grating lobes may appear [16].

Two ULAs with 15 λ/2-equispaced sensors have been employed. These arrays have different spatial apertures in order to reduce sidelobe levels on the final beampattern (Tx + Rx). Note that sidelobe positions on each beampattern are different, while the mainlobe keeps its position.

A transmission array with a 50 cm spatial aperture and a reception array with a 40 cm spatial aperture have been used. On the transmission array, the tweeters are placed so as to occupy the minimum space.
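The sidelobe-reduction effect of combining different TX and RX apertures can be checked numerically. The sketch below multiplies the two one-way array factors to obtain the combined Tx+Rx pattern; the 8 kHz test frequency is an assumption:

```python
import numpy as np

def array_factor(aperture, n=15, freq=8000.0, c=340.0):
    """One-way array factor (linear scale) of an n-element ULA
    of total length `aperture`, steered to broadside."""
    angles = np.linspace(-90.0, 90.0, 721)
    d = aperture / (n - 1)                        # inter-sensor spacing
    pos = np.arange(n) * d
    k = 2 * np.pi * freq / c
    sv = np.exp(1j * k * np.outer(pos, np.sin(np.radians(angles))))
    return angles, np.abs(sv.sum(axis=0)) / n

angles, tx = array_factor(0.50)                   # 50 cm transmission aperture
_, rx = array_factor(0.40)                        # 40 cm reception aperture
combined_db = 20 * np.log10(np.maximum(tx * rx, 1e-6))
# The sidelobe peaks of the two patterns fall at different angles, so their
# product lowers the sidelobes while the mainlobe stays at broadside.
```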

3.2. Frequency Band Selection

After defining the array spatial apertures, it is necessary to evaluate the range of frequencies where the array works properly. This evaluation is based on:

  • The angular resolution: 3-dB beamwidth of the mainlobe.

  • Non-appearance of grating lobes.

  • Frequency response of the microphone-tweeter pair.

  • Frequency response of a person.

Working at low frequencies increases the main beam width and therefore degrades the angular resolution. Working at high frequencies decreases the main beam width, but grating lobes appear, which degrade the beampattern. Moreover, both the main lobe width and the grating lobe level increase as the steering angle rises. Therefore, the maximum steering angle should be determined by the size of the person and his/her distance from the array.

Based on these considerations, the following parameters have been selected:

  • The positioning area is located 3 m from the array

  • The maximum width of a person with outstretched arms is 2 m

  • The range is 2.5 m

For these parameters, the angle excursion is ±15°, as shown in Figure 14.

Then, the beampatterns for broadside (θ = 0°) and the maximum steering angle (θ = 15°) have been analyzed, sweeping the frequency from 4 to 16 kHz in 1 kHz steps. The lower and upper frequencies where the main lobe width and the grating lobe level are appropriate for the system have been determined, obtaining the values 4 kHz and 14 kHz, respectively.

Figure 15 shows the beampattern of the proposed array for θ = 0° and θ = 15° at the lower working frequency, i.e., 4 kHz. It can be observed that there are no grating lobes and that the main lobe width ranges from 6.4° at broadside to 6.7° at the maximum steering angle.

Finally, Figure 16 shows the beampattern of the proposed array for θ = 0° and θ = 15° at the upper working frequency, i.e., 14 kHz. It can be observed that there are no grating lobes and that the main lobe width ranges from 1.80° at broadside to 1.85° at the maximum steering angle.

For frequencies below 4 kHz or above 14 kHz, the beampattern degrades significantly and cannot be used. Analyzing the frequency response of the microphone-tweeter pair used (Figure 17), the following results have been obtained.

For frequencies below 6 kHz and above 12 kHz, the system sensitivity is very low, due to the pass-band response of the tweeter. Therefore, a frequency band between 6 kHz and 12 kHz has been selected.

At this point, the number of frequencies and their values must be determined. A larger number of frequencies characterizes people better, but increases acquisition and processing times. Moreover, beyond a certain point additional frequencies do not improve system performance, since they must lie closer together and the resulting images are no longer independent.

After several tests, four different frequencies that guarantee the independence of the obtained images were selected. Determining the optimal values would be a very complex process, because it would depend on what people are wearing, and an exhaustive study would be required. Finally, the selected frequencies are 6 kHz, 8 kHz, 10 kHz and 12 kHz, with the maximum frequency spacing in order to obtain independent images.

3.3. Angle Resolution Cells and Number of Beams

Given a ULA, Δu is defined as the 3-dB beamwidth of the mainlobe in sin(θ) space. This beamwidth does not depend on the steering angle; therefore, assuming that adjacent beams overlap at their 3-dB points, the number of beams necessary to cover the exploration zone is [16]:

$$M = \operatorname{round}\left(\frac{2\sin\theta_{\max}}{\Delta u}\right) \qquad (1)$$

where θmax is the angular excursion.

After evaluating the final beampattern of the transmission and reception arrays, Δθ, defined as the 3-dB beamwidth of the mainlobe in degrees, is obtained. The beamwidth in sin(θ) space then follows from the expression:

$$\Delta u = \sin \Delta\theta \qquad (2)$$

Finally, the number of beams for each frequency, Mk, is calculated using Expressions (1) and (2). These values are shown in Table 1 and can be reproduced with the sketch below. A value of θmax = 15° has been assumed for the angular excursion.
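In the following sketch, the beam count is forced to the nearest odd integer. This is an assumption: plain rounding of Expression (1) gives 12 and 14 beams at 10 and 12 kHz rather than the published 11 and 13, while an odd count keeps one beam exactly at broadside and reproduces Table 1 at all four frequencies:

```python
import numpy as np

theta_max = np.radians(15.0)                        # angular excursion
beamwidth_deg = {6000: 4.20, 8000: 3.20, 10000: 2.56, 12000: 2.12}  # Table 1

for f, bw in beamwidth_deg.items():
    du = np.sin(np.radians(bw))                     # Expression (2)
    m_raw = 2 * np.sin(theta_max) / du              # Expression (1), unrounded
    m_k = 2 * round((m_raw - 1) / 2) + 1            # nearest odd integer
    print(f"{f} Hz: du = {du:.4f}, Mk = {m_k}")     # -> 7, 9, 11, 13
```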

Steering angles for each frequency are shown in Figure 18.

Finally, the collection of beampatterns necessary to cover the exploration zone for f = 10 kHz is shown in Figure 19.

4. Definition and Extraction of Acoustic Profiles

This section presents a collection of sample acoustic images, together with the procedures and parameter values used for image acquisition and identification.

4.1. Image Parameters

Following the design considerations of Section 3, the system retrieves the acoustic image associated with a 2 m × 2.5 m (width × depth) rectangle, where the person under analysis must be located 3 meters away from the line array, as described in Figure 14. As justified in the previous section, four frequencies were selected: f1 = 6 kHz, f2 = 8 kHz, f3 = 10 kHz and f4 = 12 kHz.

A 2 ms pulse width has been selected as a trade-off between range resolution, which is inversely proportional to the pulse length (a 2 ms pulse spans vT/2 = 0.34 m in range), and the received energy, which is directly proportional to it. The acoustic images are collected from 2.0 m to 4.5 m in the range coordinate and from −15° to 15° in the azimuth coordinate, using Mk steering angles. The acoustic images are stored in a matrix I:

$$I(n,m), \qquad 1 \le n \le N \;\text{(range)}, \quad 1 \le m \le M_k \;\text{(azimuth)}$$

Assuming a sampling frequency fs = 32 kHz and a sound velocity v = 340 m/s, the matrix dimension N will be:

$$N = \frac{2.5\ \text{m} \times 32\ \text{kHz}}{340\ \text{m/s}} \approx 235$$

The matrix dimension Mk is the number of steering angles necessary to cover the exploration area for each frequency.

4.2. Positions

After analyzing multiple positions, it was determined that the best results are obtained for the following positions: front view with arms folded at both sides (p1), front view with arms outstretched (p2), back view (p3) and side view (p4). Figure 20 shows the four positions using a test subject.

These positions are associated with different body sections and are therefore clearly differentiated. They are highly independent, which improves the biometric performance of the system. Only these four positions have been used, because a higher number would increase acquisition and processing times without a noticeable improvement in biometric performance.

4.3. Acoustic Profile

The acoustic profile Pi, associated with person i, includes the acoustic images obtained for the positions (p1, p2, p3, p4), evaluated at the frequencies (f1, f2, f3, f4).

Figure 21 shows the acoustic images for: (i) the front view position (p1), where the head and trunk of the subject can be clearly identified; (ii) the front view position with arms outstretched (p2), where the head and arms can be clearly identified; (iii) the back view position (p3), where the back of the head can be identified; and (iv) the side view position (p4), where the closest shoulder and the side of the head can be identified.

4.4. Image Normalization

Each acoustic image is normalized by its energy, according to the expression:

$$\bar{I}(i,j) = \frac{I(i,j)}{\sqrt{\displaystyle\sum_{n=1}^{N}\sum_{m=1}^{M_k} I(n,m)^2}} \qquad (3)$$
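In code, this normalization is a one-liner. The square root in the denominator, which makes each image unit-energy, is a reconstruction assumption from the garbled printed formula:

```python
import numpy as np

def normalize(img):
    """Energy normalization of an acoustic image, Expression (3):
    divide by the square root of the total energy so the result
    has unit energy."""
    return img / np.sqrt(np.sum(img ** 2))
```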

5. Test Acoustic Profiles for Biometric Applications

5.1. Metric Based on Mean Square Error

An algorithm for biometric identification has been implemented based on the mean square error between acoustic images from profile Pi and profile Pj [20]. First, the function $E_p^f[i,j]$ is defined as the mean square error between an acoustic image from profile Pi and an acoustic image from profile Pj, for a specific frequency f and position p:

$$E_p^f[i,j] = \sum_{n=1}^{N}\sum_{m=1}^{M_k}\left(\bar{I}_i(n,m) - \bar{I}_j(n,m)\right)^2, \qquad i,j = 1,\ldots,N_F \qquad (4)$$

where NF is the number of acoustic profiles stored in the database.

Then, the function Ep[i,j], called multifrequency error, is defined as the sum of the errors at each frequency for the position p:

$$E_p[i,j] = E_p^{6\,\text{kHz}}[i,j] + E_p^{8\,\text{kHz}}[i,j] + E_p^{10\,\text{kHz}}[i,j] + E_p^{12\,\text{kHz}}[i,j] \qquad (5)$$

Finally, the function E(i,j), called global error, is defined as the sum of the multifrequency error at each position p:

$$E[i,j] = E_{p1}[i,j] + E_{p2}[i,j] + E_{p3}[i,j] + E_{p4}[i,j] \qquad (6)$$

If Pk is an unknown profile to be identified, the algorithm associates the profile Pk with the person i in the database whose profile Pi has the minimum E[k,i] value. The normalized global error is the distance, or metric, used by the biometric system.
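A direct transcription of Expressions (4)-(6) and the minimum-error decision rule might look as follows; the dictionary layout of a profile is an assumed data structure, not the authors' implementation:

```python
import numpy as np

# A profile maps (position, frequency) keys to energy-normalized images,
# e.g. profile[("p1", 6000)] is an N x Mk numpy array.

def global_error(profile_i, profile_j):
    """Global error E[i,j]: sum of squared image differences over the
    four positions and four frequencies, Expressions (4)-(6)."""
    return sum(np.sum((profile_i[key] - profile_j[key]) ** 2)
               for key in profile_i)

def identify(unknown, database):
    """Associate an unknown profile with the database entry of minimum
    global error (minimum-distance classification)."""
    return min(database, key=lambda pid: global_error(unknown, database[pid]))
```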

5.2. Scenario Definition

The case-study involved 10 people (5 men and 5 women) in order to analyze the behaviour of the system. The selected people have distinct morphological features, as shown in Table 2. In this analysis, all participants wore overalls as common reference clothing, in order to eliminate clothing as a distinguishing factor. The biometric system uses a metric, or distance, based on the mean square error, according to Expression (6).

To evaluate the system, acoustic profiles were captured 10 times for each of the 10 people under test. These captures were carried out over 10 days, one capture per person per day, so that people did not remember their position from the previous capture and there was no “memory effect”. Participants were placed at the center of the measurement area (a cross marked on the ground). Every 60 seconds, a multifrequency capture was made for each position, in the following sequence: front view with arms folded at both sides (p1), front view with arms outstretched (p2), back view (p3) and side view (p4).

One hundred profiles were acquired, each with 16 acoustic images (four frequencies by four positions). These captures were stored with a unique identifier formed by sub-identifiers for the person ID, the capture number, the position and the frequency. Finally, the normalized global error between all acquired profiles was calculated.

5.3. False Match Rate (FMR) and False Non-Match Rate (FNMR)

Following the standard methodology for characterizing a biometric system [21], the FNMR and FMR parameters have been calculated. It is assumed that there are no acquisition errors; therefore, the FAR/FMR and FRR/FNMR pairs are equivalent.

The false match rate (FMR) is the probability of the system incorrectly matching the input acoustic profile to a non-matching template in the database, i.e., the percentage of impostors incorrectly matched to a valid user’s biometric. It measures the percentage of invalid inputs that are incorrectly accepted. The FMR is obtained by matching acoustic profiles of different people. The global error E(i,j) is calculated for all these cases; the FMR parameter is then the percentage of matches whose error value is less than or equal to the distance d:

$$E(i,j) \le d$$

where d ranges over the possible values of the global error. The false non-match rate (FNMR) is the probability of the system not matching the input acoustic profile to a matching template in the database, i.e., the percentage of valid users incorrectly rejected. It measures the percentage of valid inputs that are incorrectly rejected. The FNMR is obtained by matching acoustic profiles of the same person. Again, the normalized global error is calculated for all these cases; the FNMR parameter is then the percentage of matches whose error value is greater than or equal to the distance d:

$$E(i,j) \ge d$$

Figure 22 shows the FMR and the FNMR functions versus the normalized distance d.
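Both curves reduce to threshold counting over the two sets of comparison errors. The sketch below illustrates this with synthetic, illustrative distances (not the case-study data); the crossing point it prints anticipates the parameter discussed in the next subsection:

```python
import numpy as np

def fmr_fnmr(genuine, impostor, thresholds):
    """FMR(d): fraction of impostor (different-person) errors <= d.
    FNMR(d): fraction of genuine (same-person) errors >= d."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    fmr = np.array([(impostor <= d).mean() for d in thresholds])
    fnmr = np.array([(genuine >= d).mean() for d in thresholds])
    return fmr, fnmr

# Illustrative synthetic distances (NOT the paper's measurements)
rng = np.random.default_rng(0)
genuine = rng.normal(0.30, 0.05, 450)     # same-person comparisons
impostor = rng.normal(0.55, 0.10, 4500)   # different-person comparisons
d = np.linspace(0.0, 1.0, 1001)
fmr, fnmr = fmr_fnmr(genuine, impostor, d)
i = int(np.argmin(np.abs(fmr - fnmr)))    # crossing point of the two curves
print(f"EER ~ {100 * (fmr[i] + fnmr[i]) / 2:.2f}% at d = {d[i]:.2f}")
```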

5.4. Equal Error Rate (EER)

The Equal Error Rate (EER) is the crossing point of the FMR and FNMR functions. The EER is a global parameter that allows the evaluation of system performance: the lower the EER value, the better the biometric system performs [21]. For this case-study, a value of EER = 6.53% at a distance d = 0.35 has been obtained. Figure 23 shows an enlarged view of the FMR and FNMR functions around the crossing point.

5.5. Receiver Operating Characteristic (ROC)

Finally, the ROC curve, a visual characterization of the trade-off between the FNMR and the FMR, is shown in Figure 24.

Besides this system, many other novel biometric systems aim to provide an alternative to the well-known systems based on fingerprints. Analyses exist of systems based on hand geometry [22], finger geometry [23], finger veins [24] and even mouse dynamics [25], amongst others.

Comparing the results, particularly the EER value, obtained in the case-study presented in this work with those reported for other novel biometric systems, it can be observed that the EER of the system based on acoustic images is comparable. Table 3 shows the EER values of different novel biometric systems.

The results in Table 3 are quite promising and confirm the feasibility of using acoustic images in biometric systems. The results obtained in this work are based on the mean square error, which is a robust and simple metric; as more sophisticated metrics are developed, system performance is expected to improve.

6. Conclusions

An acoustic biometric system based on an electronic scanning array using sound detection and ranging techniques has been developed. People are scanned with a narrow acoustic beam in an anechoic chamber, and an acoustic image is created by collecting their response to the transmitted signal.

This work focuses on analyzing the feasibility of employing acoustic images of a person as a biometric feature. Specifically, four pulsed tone signals at 6 kHz, 8 kHz, 10 kHz and 12 kHz and four positions of the person (front, front with arms outstretched, back and side) have been used, obtaining a representative set of acoustic images.

The FNMR and FMR parameters and the ROC curve have been obtained, and they are comparable to those of commercial biometric systems. These facts confirm the feasibility of using acoustic images in biometric systems. Work is currently being carried out on improving the algorithms and extending the case-study to a broader set of users. The weights of the different acoustic images (by frequency and position) in the error function are also being optimized.

Acknowledgments

This work has been supported by the regional Ministry of Education from the Regional Government of Castilla y León (VA056A05 and VA023A07).

References and Notes

  1. Jain, AK; Ross, A; Prabhakar, S. An Introduction to Biometric Recognition. IEEE Trans. Circuits Syst. Video Technol 2004, 14, 4–20. [Google Scholar]
  2. Delac, K; Grgic, M. A survey of Biometric Recognition Methods. Proceedings of 46th International Symposium Electronics in Marine, ELMAR-2004, Zadar, Croatia, 16–18 June 2004; pp. 184–193.
  3. International Biometric Group. Comparative Biometric Testing. Round 7 Public report V1.2, 2009. Available online: http://l.b5z.net/i/u/6084428/i/CBT7_IBGReport.pdf (accessed on 1 June 2011).
  4. Tejas, J; Sommath, D. Multimodal Biometrics: State of the art in Fusion Techniques. Int. J. Biometrics 2009, 4, 393–417. [Google Scholar]
  5. Baker, C; Vespe, M; Jones, G. 50 million years of waveform design. Waveform Diversity and Design in Communications, Radar and Sonar. Proceedings of Forum on Engineering and Technology, London, UK, 22 November 2006; pp. 7–21.
  6. Balleri, A; Woodbridge, K; Baker, CJ; Holderied, MW. Flower Classification by bats: Radar comparisons. IEEE Aero. El. Sys. Mag 2009, 5, 4–7. [Google Scholar]
  7. Helversen, D; Holderied, MW; Helversen, O. Echoes of bat-pollinated bell-shaped flowers: conspicuous for nectar-feeding bats. J. Exp. Biol 2003, 6, 1025–1034. [Google Scholar]
  8. Chevalier, LF. Principles of Radar and Sonar Signal Processing, 1st ed; Artech House: Boston, MA, USA, 2002. [Google Scholar]
  9. Ricker, DW. Echo Signal Processing, 1st ed; Kluwer: Dordrecht, The Netherlands, 2003. [Google Scholar]
  10. Moebus, M; Zoubir, AM. Three-dimensional ultrasound imaging in air using a 2D array on a fixed platform. Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing, Honolulu, HI, USA, 15–20 April 2007; pp. 961–964.
  11. Moebus, M; Zoubir, AM. Parameterization of acoustic images for the detection of human presence by mobile platforms. Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 3538–3541.
  12. Duran, JD; Fuente, AI; Calvo, JJV. Multisensorial modular system of monitoring and tracking with information fusion techniques and neural networks. Proceedings of IEEE International Carnahan Conference on Security Technology, Madrid, Spain, 5–7 October 1999; pp. 59–66.
  13. Izquierdo-Fuente, A; Villacorta-Calvo, J; Raboso-Mateos, M; Martinez-Arribas, A; Rodriguez-Merino, D; del Val-Puente, L. A human classification system for a video-acoustic detection platform. Proceedings of International Carnahan Conference on Security Technology, Albuquerque, NM, USA, 12–15 October 2004; pp. 145–152.
  14. Naidu, PS. Sensor Array Signal Processing, 1st ed; CRC Press: Boca Raton, FL, USA, 2001. [Google Scholar]
  15. van Veen, BD; Buckley, KM. Beamforming: a versatile approach to spatial filtering. IEEE ASSP Magazine 1988, 2, 4–24. [Google Scholar]
  16. Wirth, WD. Radar Techniques Using Array Antennas; IEE Radar, Sonar, Navigation and Avionics Series 10; The Institution of Electrical Engineers: London, UK, 2001. [Google Scholar]
  17. Barton, DK. Radar System Analysis and Modeling; Artech House: Norwood, MA, USA, 2005. [Google Scholar]
  18. Skolnik, MI. Introduction to Radar Systems, 3rd ed; McGraw Hill: New York, NY, USA, 2001. [Google Scholar]
  19. Izquierdo-Fuente, A; Villacorta-Calvo, JJ; Val-Puente, L; Jiménez-Gomez, MI. A simple methodology of calibration for sensor arrays for acoustical radar system. Proceedings in 118 Convention Audio Engineering Society, Barcelona, Spain, 28–31 May 2005.
  20. Duda, R; Hart, P; Stork, D. Pattern Classification, 2nd ed; John Wiley and Sons: San Diego, CA, USA, 2001. [Google Scholar]
  21. Jain, AK; Flynn, P; Ross, A. Handbook of Biometrics, 1st ed; Springer: New York, NY, USA, 2008. [Google Scholar]
  22. Jain, AK; Nandakumar, K; Ross, A. Score normalization in multimodal biometric systems. Pattern Recogn 2005, 38, 2270–2285. [Google Scholar]
  23. Lee, EC; Jung, H; Kim, D. New finger biometric method using near infrared imaging. Sensors 2011, 11, 2319–2333. [Google Scholar]
  24. Lee, EC; Park, KR. Image restoration of skin scattering and optical blurring for finger vein recognition. Opt. Laser Eng 2011, 49, 816–828. [Google Scholar]
  25. Hamdy, O; Traoré, I. Cognitive-based biometrics system for static user authentication. Proceedings of 4th International Conference on Internet Monitoring and Protection, Venice, Italy, 24–28 May 2009; pp. 90–97.
Figure 1. Structure of a beamformer.
Figure 2. Beampattern for θ = 0°.
Figure 3. Block diagram.
Figure 4. Acoustic image.
Figure 5. Functional distribution.
Figure 6. Block diagram.
Figure 7. M6713 card.
Figure 8. SD16 module.
Figure 9. Preamplifiers and amplifiers.
Figure 10. Microphone preamplifier SM PRO AUDIO EP84.
Figure 11. Amplifier T.AMP S75.
Figure 12. Transmitter and receiver arrays.
Figure 13. Acoustic anechoic chamber.
Figure 14. Scanning and positioning area.
Figure 15. Beampattern for 0° and 15° for f = 4 kHz.
Figure 16. Beampattern for 0° and 15° for f = 14 kHz.
Figure 17. Frequency response of microphone-tweeter.
Figure 18. Steering angles vs. frequency.
Figure 19. Collection of beampatterns to cover the exploration zone for f = 10 kHz.
Figure 20. Person positions.
Figure 21. Acoustic images. x-axis: angle (degrees), y-axis: range (m).
Figure 22. Functions FMR and FNMR vs. distance d.
Figure 23. EER.
Figure 24. ROC.
Table 1. Number of beams vs. frequency.

F (Hz)    Δθ (degrees)    Δu        Mk
6000      4.20            0.0732    7
8000      3.20            0.0558    9
10000     2.56            0.0447    11
12000     2.12            0.0370    13
Table 2. Morphological features.

ID    Gender    Constitution    Height
00    Male      Very strong     Tall
01    Male      Strong          Average
02    Male      Strong          Average
03    Male      Thin            Tall
04    Male      Normal          Tall
05    Female    Thin            Tall
06    Female    Strong          Small
07    Female    Thin            Average
08    Female    Strong          Average
09    Female    Normal          Small
Table 3. EER of novel biometric systems.

Novel biometric system    EER
Acoustic images           6.53%
Hand geometry             7.7–13.2%
Finger geometry           17.86–23.16%
Finger vein               1.91–14.8%
Mouse dynamics            3.8%
