Article

PerFication: A Person Identifying Technique by Evaluating Gait with 2D LiDAR Data

by Mahmudul Hasan 1,2,*, Md. Kamal Uddin 3, Ryota Suzuki 1, Yoshinori Kuno 1 and Yoshinori Kobayashi 1

1 Graduate School of Science and Engineering, Saitama University, Saitama 338-0825, Japan
2 Department of Computer Science and Engineering, Comilla University, Cumilla 3506, Bangladesh
3 Department of Computer Science and Telecommunication Engineering, Noakhali Science and Technology University, Noakhali 3814, Bangladesh
* Author to whom correspondence should be addressed.
Electronics 2024, 13(16), 3137; https://doi.org/10.3390/electronics13163137
Submission received: 26 June 2024 / Revised: 30 July 2024 / Accepted: 2 August 2024 / Published: 8 August 2024

Abstract
PerFication is a person identification technique that uses a 2D LiDAR sensor with a customized dataset, KoLaSU (Kobayashi Laboratory of Saitama University). Video-based recognition systems are highly effective and sit at the forefront of current research, but they face bottlenecks: privacy concerns, poor illumination, occlusion, and environments that defeat cameras altogether. Addressing the limitations of one technology requires introducing another to complement it. Biometric characteristics are highly reliable and valuable for identifying individuals, yet most approaches depend on close interaction with the subject. A gait is the walking pattern of an individual, and it can be observed from a distance. Most research on identifying individuals by their walking patterns uses RGB or RGB-D cameras; only a limited number of studies have utilized LiDAR data. Two-dimensional LiDAR imagery for individual tracking and identification excels in situations where video monitoring is ineffective owing to environmental challenges such as disasters, smoke, and occlusion, or to economic constraints. This study presents an extensive analysis of 2D LiDAR data using a meticulously created dataset and a modified residual neural network. We propose an alternative method of person identification that circumvents the capture limitations of video cameras: the system precisely identifies an individual from ankle-level 2D LiDAR data. With a painstakingly assembled dataset, remarkable results, and a break from traditional camera setups, our LiDAR-based detection system offers a unique approach to person identification for modern surveillance. By employing 2D sensors, we also demonstrate the cost-effectiveness and durability of LiDAR.

1. Introduction

Identifying individuals is a vast and well-established field of research in which various techniques have been developed, and a variety of biometric parameters have improved its accuracy [1,2,3]. This research was driven by advancements in camera-based applications; with constant advances in technology and the ever-evolving capabilities of modern devices, each day brings new developments. However, video-based processing raises several concerns, including privacy, camera illumination, and vulnerability to disasters. Novel biometric characteristics have strengthened the veracity of human identification in several sensitive applications. Even so, the performance of biometric identification may be compromised where close proximity to the devices is required. Gait recognition therefore serves as a suitable substitute for person identification in situations where subjects cannot be expected to be near the devices. Throughout all these advancements, video cameras [2] have been utilized as crucial identifiers to capture people's states and their unique characteristics.
In the context of human movement, the term “gait” refers to the unique way each person walks, and gait recognition involves identifying a person by studying their distinct gait pattern. The useful features and substantial benefits of gait over traditional camera-based applications have led to its recent rise in popularity; the idea of capturing gait remotely with a nearby camera has become widely accepted. It is also feasible to recognize gait when the person is not cooperating. Even in the absence of other identifiable biometric features, such as a person's face, fingerprints, or iris, gait recognition remains an important component of biometric identification, and the difficulty of imitating gait characteristics makes it critically important in criminal analysis. Low-quality capture, disaster susceptibility, and computational complexity are only a few of the obstacles that video-based recognition faces, despite its potential; taking quality pictures during disasters is a major difficulty for traditional video-based applications, which may also fail in environments with poor lighting or physical obstructions. None of these issues arise with LiDAR sensors. Our proposed ankle-level 2D LiDAR-based person identification is a new modality that uses advanced gait data analysis algorithms. A deep-learning methodology and recent advances in computing power have raised the accuracy of this research to levels not seen before. We sought an alternative to camera-based surveillance and here propose 2D LiDAR sensor-based identification. There have been initiatives using 3D LiDAR sensors, but they were expensive and sometimes implemented alongside camera-based identification. Lighting, deployment costs, intimate contact with the subject, and privacy concerns all degrade the performance of surveillance video cameras and 3D LiDAR sensors. This article reduces the 3D problem to 2D, presenting a solution that meets both tracking and confidentiality requirements.
Furthermore, 2D LiDAR systems [4] are appropriate for situations that require rapid decision making because they are cost-effective [5], ideal for budget applications, and less demanding for real-time data analysis [6]. They have a long history of reliability and performance, making them an affordable option for real-time data processing given their versatility [7]. In general, a 3D LiDAR offers a detailed, three-dimensional representation of the environment, enabling flexible obstacle avoidance that a 2D LiDAR is unable to provide due to its limited 3D information. In our proposed dataset, we primarily utilized individual LiDAR data to assess the system’s performance. We also attempted to integrate two layers of distance data from 2D LiDAR sensors into a motion history image, which enables us to obtain continuous ankle movement trajectories solely in the 2D plane. In contrast, 3D LiDAR provides 3D multibeam data, which can be quite costly [8]. Moreover, a 3D image may reveal a person’s individual identity, while 2D LiDAR provides ankle positioning data where a person’s identity cannot be disclosed.
Two-dimensional LiDAR was used at the ankle level in an earlier study [9] to show a person tracking system that could analyze gait. It has proven to be a difficult task to identify moving things, like people, in front of a LiDAR sensor. The tracking method was enhanced by using density-based clustering instead of traditional methods. The research first used multivariate density-based techniques to obtain the most precise model fit. Visualizing tracking with LiDAR data is a challenging issue that we handled carefully. Improving the tracking accuracy was a hurdle in this experiment, which necessitated the creation of density-based algorithms. The two new algorithms we propose, EDBSCAN and EOPTICS [10], aim to accurately determine individuals’ ankle positions and identify their way of walking through clustering analysis. This technique substantially improved the performance of our earlier tracking system by implementing a novel person tracking system that entirely depends on 2D LiDAR data. Here, the ankle occlusion problem could be accurately handled as well. Therefore, ankle movement data, particularly tracking data, can be used to effectively measure influential features such as age, height, and sex. We have expanded our research on person property estimation using a 2D LiDAR sensor [11]. A deep neural network was utilized for training and testing the model. A comprehensive dataset was prepared to conduct experiments, considering various factors, such as ethnicity, sex, and height. The experiments were conducted using a parametric formulation, and the results were clearly identified. The results of the trials were impressive and reliable compared to the real results.
The practical performance of RGB/RGB-D cameras can be degraded by various factors, such as occlusion, illumination, smoky or foggy conditions, and real-time computational inaccuracies. Our new research objective is LiDAR-based person identification, which comprehensively addresses these drawbacks of visual imaging. Positioning multiple 2D LiDAR sensors at the ankle level greatly enhances the level of detail in the acquired data, allowing more comprehensive analyses in further studies. The LiDARs emit pulsed light onto the objects in their surroundings, and the distance traveled before the light returns to the sensor is calculated. A model was set up for experimentation with LiDAR sensors placed on grounded tripods, and people walked in front of the sensors. The time-series data were captured in a bag file using the Robot Operating System (ROS). The distance data were plotted at a specific rate to generate motion history images (MHIs), which were crucial inputs for our DNN model to accurately identify an individual. The walking path is efficiently reconstructed from continuous ankle movements on a surface, which enhances the tracking system. The uniqueness of individuals' gestures and walking styles, particularly the movements of their ankles, led us to develop a person recognition system based on gait.
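As a rough illustration of this capture step, the sketch below reads laser scans back out of a recorded bag file and converts each scan's range readings into 2D points. The bag file name, topic name, and use of the ROS 1 rosbag Python API are assumptions for illustration; the paper does not specify them.

```python
import math
import rosbag  # ROS 1 Python API (assumption: data recorded with ROS 1)

def scan_to_points(msg):
    """Convert one sensor_msgs/LaserScan message to (x, y) points in meters."""
    points, angle = [], msg.angle_min
    for r in msg.ranges:
        if msg.range_min < r < msg.range_max:  # keep valid returns only
            points.append((r * math.cos(angle), r * math.sin(angle)))
        angle += msg.angle_increment
    return points

# Hypothetical bag file and topic name; yields one list of points per frame.
with rosbag.Bag("walk_session.bag") as bag:
    frames = [scan_to_points(msg)
              for _, msg, _ in bag.read_messages(topics=["/scan"])]
```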
Figure 1 displays a block diagram that provides a concise overview of the system. The sensing module gathers data from the LiDAR sensor and uses this information to generate motion history images; the figure also shows our input datasets. The person's properties were estimated using a residual neural network fed with these image datasets [11]. This study used a tracking system [9] based on the modified density-based clustering methods EDBSCAN and EOPTICS [10], together with a property estimation technique. The recognition system can now identify individuals through gait analysis.

2. Related Works

This section offers a thorough examination of recent developments in person identification. We examined a wide range of current articles on the subject; most of the research relies on video-based analysis techniques. Facial recognition has become especially popular and widely used in recent years, and this biometric feature has served numerous person detection studies; other biometric innovations have likewise broadened the field. We also surveyed gait-based identification. Resources on LiDAR-based identification remain scarce because the topic has received limited attention.

2.1. Recognition of Individuals through Facial Features

Research on human identification began long before the deep learning era. Early performance was hampered by misleading contextual information [12], and recognition can still be misled by prior knowledge; nevertheless, research has persisted and several innovative approaches have been implemented. Person recognition has used a multimodal audio-video approach [13], merging facial recognition and speaker identification to enhance accuracy. Facial video information was likewise utilized for person recognition [14], exploiting temporal information through eigenfaces, fisherfaces, and elastic graph matching. The Grassmann manifold technique is a modern approach for identifying individuals from video footage [15]: geometry-aware dimension reduction on the original Grassmann manifold yielded favorable classifications. The Neural Person Search Machine [16] enables efficient person search through recursive localization, though optimizing the local search on the image left room for improvement. Establishing standard normalization in face recognition research is also crucial; the need for dataset-independent, automated standardization was evident, as it reduces information loss. Xiangyu Zhu et al. [17] presented high-fidelity pose and expression normalization (HPEN) for generating a frontal view of a typical facial image, which over time led to significant improvements in recognition performance. Advances in deep learning [18] have pushed face recognition research to higher levels of accuracy. A significant breakthrough came with DeepFace [19], which utilized a nine-layer deep neural network with piecewise transformations to represent a face accurately; it achieved approximately 97% accuracy, a milestone in the field. FaceNet [20] introduced automatic learning of a mapping from facial images to a space in which distances measure facial similarity; trained with a deep convolutional network, it set a new record accuracy of 99% on the same dataset. MagFace [21] is a widely recognized universal representation for face recognition; its losses categorize the general feature embedding of an input image, though it did not achieve the leading benchmark.

2.2. Gait-Based Person Recognition

Each individual possesses unique characteristics that distinguish them from one another. Even from a distance, people can identify others by their walking patterns without seeing their faces. This walking pattern, unique to each person, is known as gait, and it has been the subject of extensive scientific investigation, which we briefly review here.
Gait has been a long-standing focus of research; the idea dates back more than 400 years. Gaits can be categorized in various ways, and representations of human gait are typically divided into two categories: model-based and model-free. Gait can vary with factors such as body size, age, clothing, religion, footwear, ethnicity, accidents, and other causes. The development of information technology, particularly computer vision, has rapidly improved automated gait recognition. Gait recognition through silhouette analysis [22] is a complex yet intuitive method for identifying individuals based on their walking patterns; the technique was implemented with standard statistical tools, such as PCA. A later study proposed gait recognition without the subject's cooperation [23]; driven by advances in gait research, this idea unexpectedly outperformed the alternatives. Dynamic normalization [24] was also utilized to identify individuals, with performance improved through a thorough implementation of the hidden Markov model. Automatic gait recognition is heavily influenced by both static body shape and dynamic arm and leg movements; recognition based on dynamic body features, specifically for individual detection, was demonstrated in [25].
Walking speed can differ with circumstances, and this affects gait analysis; speed transitions can cause a traditional gait recognition system to underfit. An approach for gait identification was proposed to address changes in speed [26]. The coordination of body structure during gait can be chaotic and confounding; two-point gait [27] decouples these concerns by focusing primarily on limb motion rather than body structure. Dynamic parameters are crucial for a successful gait recognition system, allowing real-time calculations that can disregard variations in body shape; this line of work motivated incorporating ankle-level sensors into our study. Deep convolutional networks (CNNs) [28] have emerged as powerful learning techniques that significantly enhance recognition tasks. Early CNN work focused on similarity learning for gait-based identification [29], with experimental results that surpassed existing methods by a commendable margin. A recent application of Koopman operator theory is cross-view gait recognition [30]; universal deep linear embeddings applied to a large public dataset yielded impressive performance. The GaitPart model [31], known for its effective performance, focuses on individual body parts instead of the entire body. All of the research mentioned above was conducted using RGB/RGB-D cameras. In the following section, we turn to research using LiDAR data, including a proposed 4D visualization and tracking system.

2.3. Person Recognition Using LiDAR Data

Person identification has been extensively researched with video cameras and images as the primary inputs. LiDAR sensors, although rarely utilized for recognition, serve as an alternative to cameras (RGB, depth, or infrared); research on this topic is sparse and would benefit from further investigation. Yamada et al. recently conducted a gait-based recognition study using a 3D LiDAR sensor [32] and an LSTM network; its modest accuracy (60%) and the burden of processing high-volume 3D LiDAR data prompted us to explore new directions. The work of Benedek et al. [33] was among the pioneering initiatives utilizing LiDAR sensors to analyze gait and identify individuals; the rotating multibeam (RMB) LiDAR sensor added processing complexity, and the accuracy left room for improvement. This study addresses these problems by using a 2D LiDAR sensor to compile data efficiently and cost-effectively, and the resulting system is impressively accurate and reliable.
A promising early attempt was made to use rotating multibeam (RMB) LiDAR [34] for gait analysis. A 2D-LiDAR-based gait analysis [35] system was recently proposed. The study focused solely on capturing the motion of walking using the sensor, without tracking or identifying individuals. Three-dimensional LiDAR is a popular technique utilized for human detection [36]. This purpose was achieved through the application of point cloud clusters and classifications. The accuracy of the sensor was not particularly impressive due to the variation in distance. Person behavior measurements were also analyzed using 3D LiDAR sensors [37]. This system enhances the accuracy of a service robot’s interaction with people in a variety of applications. Our proposed 2D LiDAR-based person tracking and identification approach is a pioneering initiative in this field. This system has a wide range of applications and produces reliable results.

3. Proposed Method

The underlying principle of this study is to propose 2D LiDAR sensors as a substitute for 3D LiDAR sensors, significantly improving system integrity while drastically reducing hardware and computational costs. Our earlier methods incrementally improved LiDAR-based person tracking and property estimation. Video cameras were problematic for us because they are frequently susceptible to privacy breaches; in addition, environmental and natural deficiencies impede RGB/RGB-D camera performance.

3.1. Overview of the Integrated System

Figure 2 shows a comprehensive system diagram. A 2D LiDAR sensor is placed at the ankle level to collect the required data. The motion history images (MHIs) are created by plotting all time-series data on blank images at a rate of 40 frames per second. MHI offers the advantage of encoding a variety of time data within a single frame, and MHI spans can represent human gestures and movements. An update function $\mu(x, y, t_i)$ can be used to calculate the MHI $M_{\pounds}(x, y, t_i)$ [38] as follows:

$$M_{\pounds}(x, y, t_i) = \begin{cases} \pounds, & \text{if } \mu(x, y, t_i) = 1 \\ \max\!\left(0,\; M_{\pounds}(x, y, t_{i-1}) - \varphi\right), & \text{otherwise} \end{cases}$$
The variables are defined as follows: $(x, y)$ represents the position, $t_i$ represents the time, and $\mu(x, y, t_i)$ is the ankle position or motion in the present frame. The duration $\pounds$ determines the temporal extent of the movement, with $\varphi$ representing degradation in the images. Retaining previous images as afterimages facilitates the comprehension of time-series data, resulting in the creation of the motion history image depicted in Figure 3.
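A minimal NumPy sketch of this update rule follows: pixels touched by ankle motion in the current frame are set to the duration value, and all other pixels decay toward zero. The duration and decay values, image size, and synthetic masks are illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np

def update_mhi(mhi, motion_mask, duration=255.0, decay=6.0):
    """One update step: duration (£) where motion occurred, decay elsewhere."""
    decayed = np.maximum(0.0, mhi - decay)           # max(0, M(x, y, t-1) - φ)
    return np.where(motion_mask, duration, decayed)  # £ where µ(x, y, t) = 1

# Accumulate a short synthetic 40 fps stream of binary ankle masks.
rng = np.random.default_rng(0)
mhi = np.zeros((480, 640), dtype=np.float32)
for _ in range(40):
    mask = rng.random((480, 640)) < 0.001  # stand-in for plotted ankle points
    mhi = update_mhi(mhi, mask)
```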
The MHI is regarded as our system’s input. We used modified density-based clustering techniques, in addition to classic clustering methods, to locate the ankle of an individual. Clustering methods were used to group individuals based on the proximity of their two ankles’ movements in a two-dimensional space. The heuristic approach was extensively examined until it achieved a satisfactory degree of accuracy. This study presents a gait-based identification system [39] that relies solely on 2D LiDAR technology. This innovative approach to identifying a person solely through a 2D LiDAR sensor can be effectively utilized to address privacy concerns.
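As a stand-in for the modified density-based step (the EDBSCAN and EOPTICS variants are detailed in [10]), the sketch below shows the general idea with scikit-learn's standard DBSCAN: clustering 2D returns into ankle groups whose centroids approximate ankle positions. The eps and min_samples values and the synthetic points are illustrative, not the tuned settings from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two synthetic ankle clusters standing in for one frame of 2D returns.
rng = np.random.default_rng(1)
points = np.vstack([rng.normal([0.9, 0.1], 0.02, size=(30, 2)),
                    rng.normal([1.0, -0.1], 0.02, size=(30, 2))])

# Illustrative parameters: returns within 8 cm of each other form a cluster.
labels = DBSCAN(eps=0.08, min_samples=5).fit_predict(points)

# Cluster centroids approximate ankle positions; label -1 marks noise.
ankles = [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]
```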

3.2. Person Identification Based on Gait Data

Extensive research has been conducted on person identification over many years, enhanced by identification methods based on other biometric features that added authenticity and accuracy. The work has utilized a wide range of cameras and sensors to achieve exceptional results, and it took on a new dimension with the advent of deep neural networks. A person's walking style does change from time to time, varying between slow and fast walking or between morning and evening; these are the challenges of gait-based recognition. We are collecting as much data as we can to make the system more accurate, and we aim to make a unique contribution to the fields of sensor technology and computational analysis.

3.2.1. Experimental Configuration

Figure 4 shows the experimental setup used in this study. We arranged two tripod stands carrying four distinct LiDAR sensors, each at a different height and angle, a setup that makes data collection straightforward for everyone involved. The two LiDAR stands are positioned two meters apart with a 90-degree angular gap between them. On each stand, one LiDAR sits six inches above the ground and the other ten inches above it; vertically stacked pairs are referred to as multilayer, and pairs on different stands as multiangle. Participants were free to walk naturally, showcasing their own style and movement. Experiments were carried out indoors during the same season. Participants moved in various directions at distances from the LiDARs ranging from 0.5 to 25 m; the LiDAR positions were stationary while the participants were in motion. The experiments covered both individual and group walking, and data from the various sensors were analyzed in different combinations. We utilized UTM-30LX 2D LiDAR sensors (Hokuyo, NC, USA) for this experiment; the scanning range is 30 m, and the scanner covers a 270-degree area. The sensor's lightweight design also suits outdoor use. The room's lighting conditions were varied, and participants were free to move at various speeds. Pedestrians were instructed to walk, run, or move for ten minutes to provide more comprehensive and detailed data for analysis.
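For reference, a scan from such a sensor can be projected onto the ankle-level plane as below, using the 270-degree field of view and 30 m range stated above. The 0.25-degree angular step is the UTM-30LX's nominal resolution and, like the lower validity cutoff, is an assumption here rather than a value from the text.

```python
import numpy as np

# Beam angles across the 270-degree field of view.
STEP = np.deg2rad(0.25)
angles = np.arange(np.deg2rad(-135.0), np.deg2rad(135.0) + STEP / 2, STEP)

def polar_to_xy(ranges, max_range=30.0):
    """Project valid range readings onto the ankle-level 2D plane."""
    r = np.asarray(ranges)
    valid = (r > 0.1) & (r < max_range)  # 0.1 m lower cutoff is illustrative
    return np.column_stack((r[valid] * np.cos(angles[valid]),
                            r[valid] * np.sin(angles[valid])))

xy = polar_to_xy(np.full(angles.shape, 2.0))  # dummy scan: surface at 2 m
```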

3.2.2. Preparing the Dataset

We used our newly created dataset in this study. Because no rigorous 2D LiDAR-based identification technique existed previously, no public dataset was generally accessible, and creating our image dataset proved to be quite a challenge: 2D LiDAR sensors offer distance measurements only for objects located in their forward direction. We generated several datasets covering a wide range of parameters and refer to them collectively as KoLaSU (Kobayashi Laboratory of Saitama University). Twenty-nine volunteers participated in the investigation, walking naturally in front of the LiDAR sensors. The dataset is meticulously organized with an international audience in mind, spanning diverse characteristics such as height, age, gender, and ethnicity; participants included males and females from Bangladesh, Japan, and the Philippines. Of the 29 participants, only 3 opted for sandals, while the majority wore shoes; we excluded the sandal data to maintain consistency. The data are also used in rigorous analysis to cross-validate performance. The dataset aggregates the data from all four LiDAR sensors as well as the data from each individual sensor, covering various conditions of layer and angle: multilayer-13, multilayer-24, multiangle-12, multiangle-34, and other combinations. LiDAR positions are indicated by the numbers 1, 2, 3, and 4 in Figure 4. We collected data using vertically shifted LiDARs (KoLaSU LiDAR 13 and KoLaSU LiDAR 24) and replicated the procedure with LiDAR 1 and LiDAR 2 (KoLaSU LiDAR 12) and with LiDAR 3 and LiDAR 4 (KoLaSU LiDAR 34), which are arranged horizontally on the plane (Figure 4). We also used the four LiDAR sensors to collect data separately, and for the experiments we combined different LiDAR data with one another, including all four at once (KoLaSU LiDAR 1234). Placing the sensors at the ankle level means we collect data from only four ankle-level viewpoints; had we used 3D LiDAR instead of 2D, we could have obtained a 3D view of a person, compromising their privacy and potentially increasing the setup budget.
Of the 26 retained participants, 12 measured 170 cm or taller, while the rest were shorter; 17 were 30 years old or younger, and the rest were older. Heights range from 156 cm to 190 cm, ages from 20 to 56 years, and weights from 49 kg to 92 kg. We are working to expand the dataset to cover further diversity. Our dataset is summarized in Figure 5, where a series of images depicts seven chronological movements of two individuals. The MHI uses different colors to represent data from different LiDAR sensors; the lines in the images, each in a distinct color, show the ankle locations recorded by the different sensors, and one of the images combines the data from three LiDARs. All records were generated at a steady walking pace and captured in the MHI at 40 frames per second; to handle faster movement, we used a 100 fps rate for running and fast-moving pedestrians. One may wonder why we use multiple sensor setups and combine their data. Obtaining substantial information from 2D LiDAR data alone is challenging: a single LiDAR scan of an individual may be inadequate under occlusion or fast movement, and differentiating individuals from a single LiDAR view can be difficult. Combining multiple LiDAR data points into a single image increases data variety and thus system accuracy. This study experimented with various combinations to determine the optimal configuration, and the results showed a significant improvement in system performance.
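A rough sketch of this color-coded fusion follows: each sensor's ankle points are drawn into a separate RGB channel of one canvas, so a single image encodes several LiDAR views. The channel mapping, image size, and meter-to-pixel scaling are hypothetical choices for illustration.

```python
import numpy as np

H, W = 480, 640
canvas = np.zeros((H, W, 3), dtype=np.uint8)
CHANNEL = {1: 0, 2: 1, 3: 2}  # hypothetical sensor-id -> RGB channel map

def draw(canvas, sensor_id, xy, scale=100.0, origin=(240, 50)):
    """Plot (x, y) meters as pixels into the channel assigned to one sensor."""
    for x, y in xy:
        row, col = int(origin[0] + y * scale), int(origin[1] + x * scale)
        if 0 <= row < H and 0 <= col < W:
            canvas[row, col, CHANNEL[sensor_id]] = 255
    return canvas

# Example: sensor 1's ankle trajectory lands in the red channel.
canvas = draw(canvas, 1, [(1.0, 0.10), (1.1, 0.05), (1.2, 0.00)])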

3.2.3. Identification with ResNet

A comprehensive overview of gait-based identification is given in Figure 6. Motion history images (MHIs) were created from LiDAR data and used as input for the neural network, as mentioned before. We trained our model for the experiments using a variety of techniques. The system was validated and tested using a residual neural network (ResNet-18) pretrained on more than one million images from the ImageNet dataset and subsequently fine-tuned on the KoLaSU dataset. A pretrained network has the benefit of broad feature representations acquired from a vast and varied collection of images. ResNet extracts convolutional features using identity mappings and residual learning; the residual of a ResNet architecture can be written as [40]:
Y = f(x) + x
Here, x is the input vector, Y denotes the output vector, and f(x) signifies the residual mapping function. The convolutional features of the KoLaSU dataset were extracted using ResNet variants with 18 and 50 layers. After the first convolutional layer, the network uses max pooling to combat overfitting; a fully connected (FC) layer, an average pooling layer, and a softmax layer together yield human detection features from the gait data. ResNet makes it possible to train a large number of layers without increasing the training error rate, and it is highly effective at addressing the vanishing gradient problem, which sets it apart from traditional deep networks. Ultimately, a highly accurate gait-based classification was carried out effectively.
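A minimal PyTorch sketch of this transfer-learning setup is shown below, assuming the torchvision ImageNet weights and a 26-class head to match the experiments reported later. It is an illustration under those assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 with its final layer replaced by a new
# 26-class head (one class per participant in the experiments below).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 26)

# The residual identity Y = f(x) + x is built into every BasicBlock, so
# only the new head starts from random weights.
logits = model(torch.randn(1, 3, 224, 224))  # MHI resized to 224 x 224 x 3
```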

4. Discussion and Technical Experiments

4.1. Identification of Individuals Based on Gait

Extensive analysis was conducted during acquisition to ensure that the application is comprehensive and reliable across all ranges. Both homogeneous and heterogeneous LiDAR setups were evaluated for data collection. Although a single LiDAR sensor can capture the data needed for pedestrian detection, system performance can be compromised when the data are critically distributed and overlapped. The primary focus was a LiDAR system positioned at the ankle level to efficiently monitor and distinguish pedestrians. We observed a diverse array of walking styles, each with its own characteristics. The main obstacle was overlaying the LiDAR data onto an image; we approached this with great care and carried out a series of experiments, achieving exceptional accuracy. The KoLaSU dataset includes fourteen carefully selected sequences documenting the movements of the twenty-nine participants, who walked for five to ten minutes in a controlled experimental setup. We recorded the data at a standard rate of forty frames per second (fps), and for cross-validation we also considered a 100 fps rate. For all experiments, the data were divided into three sets: training, testing, and validation. Sixty percent of the data was allocated to training, with the remaining forty percent evenly divided between test and validation. Multiple data segmentations were also tested: in a separate trial, 80 percent of the data was assigned to training, and the remaining twenty percent was evenly distributed between testing and validation to ensure a thorough evaluation. The data underwent cross-validation in multiple phases to ensure reliability and accuracy, and the training data were augmented to enhance overall system performance. The corresponding results are shown in Figure 7.
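The 60/20/20 partition described above could be realized as follows, assuming the MHIs are stored as an ImageFolder tree with one directory per person; the directory path is hypothetical.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])
full = datasets.ImageFolder("KoLaSU/LiDAR_1234", transform=tfm)  # hypothetical path

# 60% training; the remaining 40% split evenly between test and validation.
n = len(full)
n_train, n_test = int(0.6 * n), int(0.2 * n)
train_set, test_set, val_set = random_split(
    full, [n_train, n_test, n - n_train - n_test],
    generator=torch.Generator().manual_seed(0))  # reproducible partition
```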
Table 1 displays the comprehensive experimental findings for gait-based recognition involving 26 individuals. Nine of the fourteen conditions included in the KoLaSU person tracking dataset were investigated; the four single-LiDAR datasets occupy the upper four rows of Table 1. In Table 1, combined LiDAR data are indicated by concatenated numbers (for example, LiDAR 12 merges the data from LiDAR 1 and LiDAR 2 in an MHI at 40 fps). A dataset must be versatile enough to combine such diverse data effectively. In our previous studies, we used a single LiDAR for data acquisition, but occluded data and groups of people pose challenges to accurate outcomes with a single view. Incorporating data from various perspectives and levels enhances the dataset's credibility, and the wide angular coverage allows accurate tracking of individuals. This approach also acknowledges that deploying all four LiDAR sensors may not be possible in practical applications; we prioritized credibility over cost when creating the original dataset. A GIGABYTE BRIX GPU device was used to process the data. The number of epochs varied from 25 to 50 at regular intervals, with a batch size of 38. We trained the model with a deep neural network: a pretrained ResNet18 fine-tuned for the classification of LiDAR images. The required input image dimensions were 224 by 224 by 3, and the network comprised 71 layers.
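Continuing the sketches above, a training loop with the reported batch size of 38 and the lower end of the 25-50 epoch range might look like this; the optimizer choice and learning rate are assumptions, as the paper does not report them.

```python
import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
loader = DataLoader(train_set, batch_size=38, shuffle=True)  # batch size 38
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # assumed settings
criterion = torch.nn.CrossEntropyLoss()

model.to(device).train()
for epoch in range(25):  # lower end of the reported 25-50 epoch range
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```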
In addition, ResNet50 was utilized to verify the system's performance. Deeper models typically perform better, but computational time is a significant concern. During cross-validation, ResNet models of different depths were used to validate the credibility of the system; results are provided below for reference. The training, test, and validation sets were chosen from machine-generated data such that each segment was completely disjoint. First, we stored each individual's data in the three separate groups: training, testing, and validation. We also expanded the study to include unknown test data; faced with an unidentified individual, the system reacts accordingly, as it lacks the information needed for a proper identification. The data in Table 1 demonstrate an impressive level of accuracy, with nearly 99% of the data correctly identified in all three segments. Owing to congestion in the KoLaSU LiDAR 3 and KoLaSU LiDAR 4 data, those results were slightly lower; nevertheless, their test performance consistently surpassed 93%.
The issue here is that sensor height causes congestion and impacts performance: while the system performs excellently at a 6-inch height, the level of data detail decreases as the LiDAR height increases. The combined dataset (KoLaSU LiDAR 34) nonetheless demonstrated impressive performance, achieving a precision rate of 99%. This marked the first stage of implementing a 2D LiDAR system for person identification. This study focused on accuracy as the main measure of performance, while other precision metrics were considered in previous studies. We utilized an ImageNet-pretrained model, which classified our data with exceptional precision; in our analysis, an accuracy above 90% was deemed reliable.

4.2. Comparison across Various Data Types

The network was validated to address the issue of overfitting. Its accuracy is impressive, with none of the datasets falling below 93%, and the test accuracy closely matches the validation accuracy. Accuracy and loss are inversely related, and our system reflects this trend. The network design and performance establish a solid foundation for using two-dimensional LiDAR sensors for individual identification at scale.
We conducted extensive cross-testing using various datasets to evaluate the system's performance; all fourteen datasets were considered. The results revealed a clear pattern in the performance analysis that closely matched our theoretical expectations; Figure 7 provides a detailed representation of these findings. The four leftmost groups, involving KoLaSU LiDAR 24 and 13, show this clearly: the system was trained and validated on the same dataset, and only the test data were swapped in four cases. LiDAR 24 and LiDAR 13 combine sensors 2 and 4 and sensors 1 and 3, respectively, and we tested them against LiDAR 4 and 2, as well as LiDAR 3 and LiDAR 1, individually. The training and validation accuracy is extremely high, but the bar chart shows the test accuracy dropping below 20%. All cases used unbiased, disjoint data. The figure demonstrates that when the network is trained on one condition and tested under a different one, performance declines. All cases show consistent behavior, except when the combined dataset LiDAR 1234 is tested with data 24 and 13: performance reached 38%, which is still not particularly impressive. For optimal system performance, training and testing should use the same types of data, and avoiding bias is crucial.
In addition, we examined various neural networks to evaluate the performance and effectiveness of our data system; Table 2 displays one such analysis. We trained and validated the KoLaSU LiDAR 1234 dataset using ResNet18 and ResNet50 models, keeping all parameters unchanged except for the number of epochs. The system was then tested on a different dataset, KoLaSU LiDAR 24, to compare the two networks: ResNet18 achieved an accuracy of nearly 38%, while ResNet50 achieved 40%. The ResNet50 network (50 layers) is significantly larger than ResNet18 (18 layers), resulting in roughly four times the computation time in these experiments. This study therefore opted for ResNet18 over ResNet50, despite the latter's slightly better performance.
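The cross-testing protocol can be sketched as below, reusing the model, transforms, and device from the earlier snippets: a network trained on the combined LiDAR 1234 MHIs is evaluated on the LiDAR 24 images. The directory path is hypothetical.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets

# Hypothetical path; the cross-test set uses the training-time transforms.
cross = datasets.ImageFolder("KoLaSU/LiDAR_24", transform=tfm)
cross_loader = DataLoader(cross, batch_size=38)

model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in cross_loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
print(f"cross-test accuracy: {correct / total:.3f}")
```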
We ran various tests combining different datasets, thoroughly training and validating the system, and the accuracy was impressive in all cases. Figure 8 shows a collection of experiments conducted on combined datasets, plotting the system's accuracy and loss for six datasets alongside their corresponding aligned datasets. Combining LiDAR 24 data with LiDAR 2 and 4 data for training and validation can enhance the accuracy and reliability of the results; we also tested the system individually with LiDAR 24, 2, and 4 data, and used LiDAR 1234, 13, and 24 in the same scenarios. Regardless of the circumstances, the system consistently proved reliable through extensive training and testing with various datasets. The test accuracies in Figure 8 hover around 99%, with a relatively low loss of less than 20%, and the accuracy and loss curves mirror each other inversely, as expected of a well-behaved network.

4.3. Comparison with Other Modern Studies

To the best of our knowledge, no prior studies have employed 2D LiDAR sensors to analyze a person's stride for identification purposes. The initiatives that used a 3D LiDAR sensor were noteworthy, but accurate comparisons are difficult because the sensor setups, experimental complexity, methods, and datasets differ greatly; Figure 9 gives an abstract summary. Research on LiDAR-based gait analysis was initiated by Benedek et al. [33], who prepared the SZTAKI-LGA dataset with 28 participants; a multilayer perceptron (MLP) and a convolutional neural network (CNN) were used to train and test the system. The number of participants varied across their studies, and as more people joined, performance dropped from 92% (with five participants) to 75% (with 28 participants). Yamada et al. [32] conducted a thorough experiment on LiDAR-based gait analysis using a 3D LiDAR sensor; they assembled a 30-participant dataset dubbed PCG (point cloud gait) and trained and evaluated the system with a CNN and a long short-term memory (LSTM) network. While the accuracy varies with the input patterns, the l = 1:8 setting obtained an overall high of 72%. Because the datasets, scenarios, sensors, and methods are all distinct, their accuracies cannot be compared directly with ours. Our datasets were categorized into training, testing, and validation classes, selected randomly and with complete impartiality. The system consistently achieves performance above 98% on these splits, highlighting how broadly 2D sensors could be utilized across applications.

5. Conclusions

This article describes an innovative method named PerFication for human identification that utilizes a 2D LiDAR sensor. Limited research has been conducted in this field with such sensors in this setup. We explored the use of 2D LiDAR sensors and demonstrated their real-time computation capabilities and high accuracy. In gait recognition, ResNet, a pretrained deep neural network, proved extremely effective in both fitting the data and minimizing loss, and the accuracy of the system is impressive. One potential extension is to incorporate person tracking and recognition with this sensor to improve autonomous robot movements. All gait-based identification methods have difficulty handling changes in gait; in future investigations, we will consider environmental constraints (smoke, rain, fog, etc.) and changing-gait issues, since LiDAR-based tracking and recognition is still a new concept.

Author Contributions

Conceptualization, M.H. and Y.K. (Yoshinori Kobayashi); methodology, M.H.; software, M.H.; validation, M.H., Y.K. (Yoshinori Kobayashi), and R.S.; formal analysis, M.H. and M.K.U.; investigation, M.H.; resources, Y.K. (Yoshinori Kobayashi), Y.K. (Yoshinori Kuno) and R.S.; data curation, M.H. and M.K.U.; writing—original draft preparation, M.H.; writing—review and editing, M.H. and Y.K. (Yoshinori Kobayashi); visualization, M.H.; supervision, Y.K. (Yoshinori Kobayashi); project administration, Y.K. (Yoshinori Kobayashi); funding acquisition, Y.K. (Yoshinori Kobayashi). All authors have read and agreed to the published version of the manuscript.

Funding

There was no external funding for this research.

Data Availability Statement

We developed our dataset for experimentation, and any researcher can request it.

Acknowledgments

We are extremely grateful to all of our Kobayashi Laboratory members for their active cooperation during dataset preparation. We especially thank Junichi Hanawa, Riku Goto, and Hisato Fukuda for their great help in creating this dataset. We thank all volunteers from Bangladesh, the Philippines, and Japan for their participation in collecting the data. We also thank Saitama University and MEXT for providing us with the necessary support for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bolle, R.M.; Connell, J.; Pankanti, S.; Ratha, N.K.; Senior, A.W. Guide to Biometrics; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
  2. Wan, C.; Wang, L.; Phoha, V.V. A survey on gait recognition. ACM Comput. Surv. 2018, 51, 89. [Google Scholar] [CrossRef]
  3. Dargan, S.; Kumar, M. A comprehensive survey on the biometric recognition systems based on physiological and behavioral modalities. Expert. Syst. Appl. 2020, 143, 113114. [Google Scholar] [CrossRef]
  4. Bouazizi, M.; Ye, C.; Ohtsuki, T. 2-D LIDAR-Based Approach for Activity Identification and Fall Detection. IEEE Internet Things J. 2022, 9, 10872–10890. [Google Scholar] [CrossRef]
  5. Bi, S.; Yuan, C.; Liu, C.; Cheng, J.; Wang, W.; Cai, Y. A Survey of Low-Cost 3D Laser Scanning Technology. Appl. Sci. 2021, 11, 3938. [Google Scholar] [CrossRef]
  6. Yusuf, M.; Zaidi, A.; Haleem, A.; Bahl, S.; Javaid, M.; Garg, S.B.; Garg, J. IoT-based low-cost 3D mapping using 2D Lidar for different materials. Mater. Today Proc. 2022, 57, 942–947. [Google Scholar] [CrossRef]
  7. Raj, T.; Hashim, F.H.; Huddin, A.B.; Ibrahim, M.F.; Hussain, A. A Survey on LiDAR Scanning Mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  8. Kang, X.; Yin, S.; Fen, Y. 3D Reconstruction & Assessment Framework based on affordable 2D Lidar. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Auckland, New Zealand, 9–12 July 2018; pp. 292–297. [Google Scholar] [CrossRef]
  9. Hasan, M.; Hanawa, J.; Goto, R.; Fukuda, H.; Kuno, Y.; Kobayashi, Y. Tracking People Using Ankle-Level 2D LiDAR for Gait Analysis. In Advances in Artificial Intelligence, Software and Systems Engineering AHFE 2020; Advances in Artificial Intelligence; Ahram, T., Ed.; Springer: Cham, Switzerland, 2021; p. 1213. [Google Scholar] [CrossRef]
  10. Hasan, M.; Hanawa, J.; Goto, R.; Fukuda, H.; Kuno, Y.; Kobayashi, Y. Person Tracking Using Ankle-Level LiDAR Based on Enhanced DBSCAN and OPTICS. IEEJ Trans. Elec Electron. Eng. 2021, 16, 778–786. [Google Scholar] [CrossRef]
  11. Hasan, M.; Goto, R.; Hanawa, J.; Fukuda, H.; Kuno, Y.; Kobayashi, Y. Person Property Estimation Based on 2D LiDAR Data Using Deep Neural Network. In Intelligent Computing Theories and Application. ICIC 2021; Lecture Notes in Computer Science; Huang, D.S., Jo, K.H., Li, J., Gribova, V., Bevilacqua, V., Eds.; Springer: Cham, Switzerland, 2021; p. 12836. [Google Scholar] [CrossRef]
  12. Read, J.D. The availability heuristic in person identification: The sometimes misleading consequences of enhanced contextual information. Appl. Cognit. Psychol. 1995, 9, 91–121. [Google Scholar] [CrossRef]
  13. Choudhury, T.; Clarkson, B.; Jebara, T.; Pentland, A. Multimodal person recognition using unconstrained audio and video. In Proceedings of the International Conference on Audio- and Video-Based Person Authentication, Washington, DC, USA, 22–24 March 1999. [Google Scholar]
  14. Matta, F.; Jean-Luc, D. Person recognition using facial video information: A state of the art. J. Vis. Lang. Comput. 2009, 20, 180–187. [Google Scholar] [CrossRef]
  15. Huang, Z.; Wang, R.; Shan, S.; Chen, X. Projection Metric Learning on Grassmann Manifold with Application to Video based Face Recognition. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 140–149. [Google Scholar] [CrossRef]
  16. Liu, H.; Feng, J.; Jie, Z.; Jayashree, K.; Zhao, B.; Qi, M.; Jiang, J.; Yan, S. Neural Person Search Machines. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 493–501. [Google Scholar] [CrossRef]
  17. Zhu, X.; Lei, Z.; Yan, J.; Yi, D.; Li, S.Z. High-fidelity Pose and Expression Normalization for face recognition in the wild. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 787–796. [Google Scholar] [CrossRef]
  18. Wang, M.; Deng, W. Deep face recognition: A survey. Neurocomputing 2021, 429, 215–244. [Google Scholar] [CrossRef]
  19. Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708. [Google Scholar] [CrossRef]
  20. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar] [CrossRef]
  21. Meng, Q.; Zhao, S.; Huang, Z.; Zhou, F. MagFace: A Universal Representation for Face Recognition and Quality Assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14225–14234. [Google Scholar]
  22. Wang, L.; Tan, T.; Ning, H.; Hu, W. Silhouette analysis-based gait recognition for human identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1505–1518. [Google Scholar] [CrossRef]
  23. Bashir, K.; Xiang, T.; Gong, S. Gait recognition without subject cooperation. Pattern Recognit. Lett. 2010, 31, 2052–2060. [Google Scholar] [CrossRef]
  24. Liu, Z.; Sarkar, S. Improved gait recognition by gait dynamics normalization. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 863–876. [Google Scholar] [CrossRef] [PubMed]
  25. Singh, J.P.; Jain, S. Person identification based on Gait using dynamic body parameters. In Proceedings of the Trendz in Information Sciences & Computing (TISC2010), Chennai, India, 17–19 December 2010; pp. 248–252. [Google Scholar] [CrossRef]
  26. Mansur, A.; Makihara, Y.; Aqmar, R.; Yagi, Y. Gait Recognition under Speed Transition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2521–2528. [Google Scholar] [CrossRef]
  27. Lombardi, S.; Nishino, K.; Makihara, Y.; Yagi, Y. Two-Point Gait: Decoupling Gait from Body Shape. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1041–1048. [Google Scholar] [CrossRef]
  28. Sepas-Moghaddam, A.; Etemad, A. Deep Gait Recognition: A Survey. arXiv 2021, arXiv:2102.09546. [Google Scholar] [CrossRef] [PubMed]
  29. Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 209–226. [Google Scholar] [CrossRef] [PubMed]
  30. Zhang, S.; Wang, Y.; Li, A. Cross-View Gait Recognition with Deep Universal Linear Embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9095–9104. [Google Scholar]
  31. Fan, C.; Peng, Y.; Cao, C.; Liu, X.; Hou, S.; Chi, J.; Huang, Y.; Li, Q.; He, Z. GaitPart: Temporal Part-Based Model for Gait Recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 14213–14221. [Google Scholar] [CrossRef]
  32. Yamada, H.; Ahn, J.; Mozos, O.M.; Iwashita, Y.; Kurazume, R. Gait-based person identification using 3D LiDAR and long short-term memory deep networks. Adv. Robot. 2020, 34, 1201–1211. [Google Scholar] [CrossRef]
  33. Benedek, C.; Gálai, B.; Nagy, B.; Jankó, Z. Lidar-based gait analysis and activity recognition in a 4D surveillance system. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 101–113. [Google Scholar] [CrossRef]
  34. Benedek, C.; Nagy, B.; Gálai, B.; Jankó, Z. Lidar-based gait analysis in people tracking and 4D visualization. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1138–1142. [Google Scholar] [CrossRef]
  35. Yoon, S.; Jung, H.-W.; Jung, H.; Kim, K.; Hong, S.-K.; Roh, H.; Oh, B.-M. Development and validation of 2D-LiDAR-Based Gait Analysis Instrument and Algorithm. Sensors 2021, 21, 414. [Google Scholar] [CrossRef]
  36. Yan, Z.; Duckett, T.; Bellotto, N. Online learning for 3D LiDAR-based human detection: Experimental analysis of point cloud clustering and classification methods. Auton. Robot. 2020, 44, 147–164. [Google Scholar] [CrossRef]
  37. Koide, K.; Miura, J.; Menegatti, E. A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. Int. J. Adv. Robot. Syst. 2019, 16, 1–16. [Google Scholar] [CrossRef]
  38. Bobick, A.; Davis, J. The recognition of human movement using temporal templates. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 257–267. [Google Scholar] [CrossRef]
  39. Enoki, M.; Watanabe, K.; Noguchi, H. Single Person Identification and Activity Estimation in a Room from Waist-Level Contours Captured by 2D Light Detection and Ranging. Sensors 2024, 24, 1272. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
Figure 1. PerFication: An overview of 2D LiDAR-based estimation.
Figure 2. Person tracking, property estimation, and recognition using LiDAR.
Figure 3. Motion history image.
Figure 4. Person identification experimental setup.
Figure 5. KoLaSU, two persons’ data: MHI on top and posture on bottom.
Figure 6. Person identification based on gait.
Figure 7. Cross validation: gait performance test with cross-data.
Figure 8. Performance analysis of combined data.
Figure 9. Modern studies utilizing cutting-edge equipment [32,33].
Table 1. Gait-based person identification on different parameters.

| Data | Training Accuracy (%) | Training F1 (%) | Testing Accuracy (%) | Testing F1 (%) | Validation Accuracy (%) | Validation F1 (%) |
|---|---|---|---|---|---|---|
| KoLaSU LiDAR 1 | 99.42 | 99.37 | 98.43 | 98.38 | 98.51 | 98.46 |
| KoLaSU LiDAR 2 | 99.32 | 99.27 | 98.46 | 98.41 | 98.50 | 98.45 |
| KoLaSU LiDAR 3 | 97.72 | 97.67 | 93.36 | 93.31 | 93.48 | 93.43 |
| KoLaSU LiDAR 4 | 98.04 | 97.99 | 94.79 | 94.74 | 94.71 | 94.66 |
| KoLaSU LiDAR 13 | 99.82 | 99.77 | 99.62 | 99.57 | 99.60 | 99.55 |
| KoLaSU LiDAR 24 | 99.83 | 99.78 | 99.62 | 99.57 | 99.61 | 99.56 |
| KoLaSU LiDAR 12 | 99.72 | 99.67 | 99.34 | 99.29 | 99.31 | 99.26 |
| KoLaSU LiDAR 34 | 99.82 | 99.77 | 99.34 | 99.29 | 99.42 | 99.37 |
| KoLaSU LiDAR 1234 | 99.87 | 99.82 | 99.66 | 99.61 | 99.71 | 99.66 |
Table 2. Performance test with various DNN models.

| Parameter | Experiment 1 | Experiment 2 |
|---|---|---|
| Data | KoLaSU LiDAR 1234, cross-tested on LiDAR 24 | KoLaSU LiDAR 1234, cross-tested on LiDAR 24 |
| Experiment type | 26 individual persons (60%, 20%, and 20%) | 26 individual persons (60%, 20%, and 20%) |
| Batch size | 38 | 38 |
| Epochs | 25 | 40 |
| GPU | Yes | Yes |
| Model | ResNet18 | ResNet50_2 |
| Training accuracy (%) | 99.864 | 99.999 |
| Training loss | 0.00589 | 0.000354 |
| Test accuracy (%) | 37.9 | 40.07 |
| Test loss | 4.2741 | 3.5852 |
| Validation accuracy (%) | 99.721 | 99.956 |
| Validation loss | 0.00589 | 0.001578 |