Table S1: Characteristics of the included studies: sensors, experimental procedures, detection methods, and detection performance.

Study: Adolf et al., 2018 [27]
Sensor: Panasonic Grid-EYE® AMG88?? (8×8 pixels).
Experimental procedure: Empty room. One sensor mounted on the ceiling, 2.85 m above the floor. Ground capture perimeter: 3 m × 3 m. Ambient temperature: between 23 and 25 °C. Participants: 4. Frame rate: 10 fps. The average thermal spatial distribution per frame was computed over the last 10 frames. 495 unique measurements (4,950 frames) were performed and roughly labeled into the five categories. Per participant: 3 trials, one each at 23, 24, and 25 °C. Per trial: 5 classes of postures were performed several times, each held for 2 s:
- Nobody and no object
- Only an object (chair or table)
- Standing position
- Sitting position
- Lying position
Total recorded data: 399 training samples and 96 testing samples.
Detection method: machine-learning classifier: CNN, Inception v3, developed by Szegedy et al., 2015 [47].

Study: Chen & Ma, 2015 [28]
Sensor: Melexis MLX90620 (16×4 pixels).
Experimental procedure: Empty room. Two sensors wall-mounted at 1.2 m from the floor, spaced 3.3 m apart, each near a corner and oriented 30° inward and 8.2° toward the floor. Ground capture perimeter: 3 m × 2.35 m. Ambient temperature: unspecified. Participants: 5. Frame rate: 16 fps. Noise reduction of the original images: averaging M frames together (M = 3) reduces the variance of a white-noise signal by a factor of 1/M. Five normal actions were performed 16 times each:
- Sitting down
- Bending
- Squatting
- Walking
- Standing up
Eight types of falls, one per direction, were performed 10 times each. Total recorded data: 80 normal and 80 fall actions, evaluated with k-fold cross-validation.
N.B.: A second step concerned tracking the participant walking along a line (not relevant to the topic of this review).
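Chen & Ma's frame-averaging noise reduction (variance scaled by 1/M, with M = 3) can be sketched as follows. This is a minimal NumPy sketch, not the authors' code; the function name and the synthetic data are illustrative, assuming frames arrive as an (n_frames, height, width) array:

```python
import numpy as np

def average_frames(frames, m=3):
    """Average each group of m consecutive frames.

    Averaging m independent white-noise samples scales the noise
    variance by 1/m, at the cost of reducing the effective frame rate.
    `frames` has shape (n_frames, height, width).
    """
    n = (len(frames) // m) * m               # drop the incomplete tail group
    groups = frames[:n].reshape(-1, m, *frames.shape[1:])
    return groups.mean(axis=1)

# Synthetic 16 fps stream of 16x4 thermal images -> ~5.3 fps denoised stream.
raw = np.random.normal(loc=24.0, scale=0.5, size=(160, 4, 16))
denoised = average_frames(raw, m=3)
print(denoised.shape)  # (53, 4, 16)
```

Averaging in groups of 3 trades the 16 fps rate for roughly a threefold reduction in white-noise variance, which is the trade-off the authors describe.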
Detection method (Chen & Ma, 2015 [28]): Foreground and background generation: foreground pixels were obtained by thresholding the difference between the observed image and a background reference image; threshold: 2.5 multiplied by the standard deviation of the temperature (in °C). Sensor choice: the analysis was segmented into windows of 1 second; a single segment was then chosen, based on the foreground region detected by both sensors at the same time. When one sensor had a larger foreground area, features were extracted from that sensor, since the person was closer to it. Seven features:
- (minv) minimum value of the vertical trajectory
- (Vv) vertical velocity
- (Vh) horizontal velocity
- (MADv) mean absolute deviation of the vertical trajectory
- (MADh) mean absolute deviation of the horizontal trajectory
- (σv) standard deviation of the vertical trajectory
- (σh) standard deviation of the horizontal trajectory
To account for correlation between features, the Mahalanobis distance was used instead of the Euclidean distance. Machine-learning classifier: classification into "fall" and "non-fall" using a k-NN algorithm.

Detection performance (Adolf et al., 2018 [27]), main results (%):
Five categories (Se / Sp):
- Nobody: 48 / 89
- Only object: 41 / 90
- Standing: 42 / 83
- Sitting: 50 / 82
- Lying: 47 / 87
Three categories (Se / Sp):
- Nobody: 75 / 100
- Only object: 85 / 79
- Standing: 85 / 93
Accuracy (Ac), sensitivity (Se), and specificity (Sp): equations unspecified.

Detection performance (Chen & Ma, 2015 [28]): The authors scanned both the k value and the feature subsets for the highest performance. The k value was scanned from 1 to 79 with a step size of 2, and all 127 possible feature subsets of the 7 features (2^7 − 1 = 127) were tested. Performance was estimated by k-fold cross-validation. The highest performance was obtained for k = 9 and the feature subset {Vv, MADh, minv, σh}.
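The Mahalanobis-distance k-NN classification described above can be sketched as follows. This is a minimal NumPy sketch with synthetic data, not the authors' implementation; the 4 columns only mirror the feature subset they selected (Vv, MADh, minv, σh), and the cluster centers are invented:

```python
import numpy as np

def mahalanobis_knn_predict(X_train, y_train, x, k=9):
    """Classify x by majority vote among its k nearest training points
    under the Mahalanobis distance d(a, b) = sqrt((a-b)^T S^-1 (a-b)),
    where S is the covariance of the training features. Using S^-1
    accounts for correlation between features; the Euclidean distance
    is the special case S = I."""
    cov = np.cov(X_train, rowvar=False)
    vi = np.linalg.pinv(cov)                 # pseudo-inverse for robustness
    diff = X_train - x
    d2 = np.einsum('ij,jk,ik->i', diff, vi, diff)   # squared distances
    votes = y_train[np.argsort(d2)[:k]]
    return 1 if votes.sum() * 2 > k else 0   # labels: 1 = fall, 0 = non-fall

# Synthetic 4-feature data: two well-separated clusters.
rng = np.random.default_rng(0)
falls = rng.normal([-2.0, 1.5, -1.0, 1.0], 0.3, size=(40, 4))
normals = rng.normal([0.0, 0.3, 0.5, 0.4], 0.3, size=(40, 4))
X = np.vstack([falls, normals])
y = np.array([1] * 40 + [0] * 40)
print(mahalanobis_knn_predict(X, y, np.array([-1.9, 1.4, -0.9, 1.1])))  # 1
```

An odd k (the authors' best was k = 9) avoids ties in the binary majority vote.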
Results: Ac 93%, Se 95.25%, Sp 90.75%.

Study: Chen & Wang, 2018 [29]
Sensor: Panasonic Grid-EYE® AMG8853 (8×8 pixels). N.B.: An ultrasonic sensor was also used in this study, but the authors analyzed performance both with and without it.
Experimental procedure: Empty room. One sensor mounted on the head of a mini-robot (80 cm high), orientable along the x and z axes and following the participant. Capture perimeter: the height of a human and 1.8 m in depth. Ambient temperature: 24.5 °C. Participants: 3. Actions were recorded at 3 distances from the participant (1.2, 1.5, and 1.8 m), in profile view. Per distance and per participant:
Step 1: discrete recordings
- Falling forward and sideways, 15 times each
- Standing up from a sitting position, 10 times
- Sitting down on a chair, 10 times
- Stooping down to pick up an item from the ground and returning
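The studies above report accuracy, sensitivity, and specificity without giving their equations. The standard definitions from binary confusion counts are a reasonable assumption and can be sketched as follows (the counts in the example are hypothetical, not taken from any study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion counts,
    treating 'fall' as the positive class."""
    se = tp / (tp + fn)                # sensitivity: detected falls / actual falls
    sp = tn / (tn + fp)                # specificity: detected non-falls / actual non-falls
    ac = (tp + tn) / (tp + fp + tn + fn)
    return ac, se, sp

# Hypothetical example: 76 of 80 falls and 72 of 80 non-falls classified correctly.
ac, se, sp = binary_metrics(tp=76, fp=8, tn=72, fn=4)
print(f"Ac {ac:.1%}  Se {se:.1%}  Sp {sp:.1%}")  # Ac 92.5%  Se 95.0%  Sp 90.0%
```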
