# 4. Experiment Results

## *4.1. FPDS Dataset*

Analyzing and comparing different fall detection algorithms is a real problem due to the lack of public datasets containing a large number of people in lying positions [21,47,48]. Large-scale image datasets such as ImageNet [28] and MS-COCO [49] do exist, but they are not focused on fallen people. Some fall detection datasets provide images or videos with the camera placed at different positions, although most were recorded in simulated environments [23,48,50–52]. However, they are neither large enough nor do they cover all the image variations required for our experiments: several environments, more than one person per image, people in resting positions, falls with a variety of body orientations, and people of different sizes and clothing.

For all these reasons, in this paper we present our own dataset (FPDS) for evaluating fall detection algorithms. All images were taken with a single camera mounted on a robot at 76 cm above the floor. The dataset consists of a total of 2062 manually labeled images containing 1072 falls and 1262 non-fall annotations (people standing, sitting in a chair, lying on a sofa, walking and so forth). Images can contain more than one actor and were recorded from different perspectives (Figure 8). An essential feature of this dataset compared with others is that the actors span a height range of 1.2–1.8 m (see Figure 9).

**Figure 8.** Fallen Person Dataset (FPDS) images with different lying-body orientations.

**Figure 9.** FPDS images with different person sizes—1.2, 1.4 and 1.8 m height.

Images were taken in eight different environments with variable illumination, as well as shadows and reflections, defining eight splits. Figure 10 and Table 2 show sample images and the characteristics of the FPDS, respectively.

**Figure 10.** Ground-truth image examples of the FPDS. Each row belongs to a different split. Bounding boxes are red/green in case of fall/nonfall detection.



The FPDS dataset consists of images and .txt files sharing the same name. Each .txt file contains five parameters per bounding box in the image: the bounding box coordinates {*X_left*, *X_right*, *Y_top*, *Y_down*} and the classification label *y* (*y* = 1 for fall, *y* = −1 for non-fall). Additionally, the dataset provides some sample images of a well-defined pattern (a chessboard) taken with the camera from different perspectives for calibration purposes. The FPDS dataset is publicly available at http://agamenon.tsc.uah.es/Investigacion/gram/papers/fall_detection/FPDS_dataset.zip.
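A minimal sketch of how such an annotation file could be parsed is shown below. The exact field order and whitespace separation are assumptions based on the description above (five values per bounding box: the four coordinates followed by the label), not a specification of the released files.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class BoundingBox:
    x_left: float
    x_right: float
    y_top: float
    y_down: float
    label: int  # 1 = fall, -1 = non-fall

def parse_annotation_file(path):
    """Parse one FPDS-style .txt file, assuming one bounding box per line:
    x_left x_right y_top y_down label (whitespace-separated)."""
    boxes = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        x_l, x_r, y_t, y_d, y = (float(v) for v in line.split())
        boxes.append(BoundingBox(x_l, x_r, y_t, y_d, int(y)))
    return boxes
```

Keeping one annotation file per image with the same base name makes it trivial to pair images with their labels by filename alone.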

In all experiments, training and testing images belonged to different splits, so as to properly evaluate the generalization ability of the algorithm. The training set *L* was built from splits 1, 2 and 3, comprising 681 falls and 432 non-falls across a total of 1084 images. The testing set *T* was built from splits 4 to 8, comprising 391 falls and 830 non-falls across a total of 973 images.
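The split-disjoint partition described above can be sketched as follows. The representation of each sample as an (image path, split id) pair is a hypothetical convenience, not the dataset's own format; the point is simply that no split contributes images to both sets.

```python
def partition_dataset(samples, train_splits=frozenset({1, 2, 3})):
    """Partition (image_path, split_id) pairs into disjoint training and
    testing sets so that every split lands entirely in one set."""
    train, test = [], []
    for path, split_id in samples:
        (train if split_id in train_splits else test).append(path)
    return train, test
```

Splitting by environment rather than by random image sampling ensures that the test images come from rooms (illumination, shadows, reflections) never seen during training.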
