Annotation of User Data for Sensor-Based Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 May 2018) | Viewed by 41346

Special Issue Editors


Dr.-Ing. Kristina Yordanova
Guest Editor
Institute of Computer Science, University of Rostock, 18051 Rostock, Germany
Interests: activity and intention recognition; human behavior models; knowledge elicitation; natural language processing; automatic extraction of behavior models from textual sources

Dr. Adeline Paiement
Guest Editor
Swansea University, Swansea, UK
Interests: computer vision; machine learning; AI-assisted healthcare

Prof. Jesse Hoey
Guest Editor
David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: artificial intelligence; affective computing; computational social science; Markov decision processes; affect control theory

Special Issue Information

Dear Colleagues,

Labelling user data is a central part of the design and evaluation of sensors and sensor-based systems that aim to support the user through situation-aware reasoning. It is essential both for designing and for training a sensor-based system to recognize and reason about the situation, whether through the design of new sensors, the definition of suitable observation and situation models in knowledge-driven applications, or the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems. Labelling is also vital for validating and quantifying the performance of sensors and sensor-based applications, as well as for selecting the best-performing sensor setup and configuration.

With sensor-based systems relying increasingly on large datasets with multiple sensors, the process of data labelling is becoming a major concern for the community.

To address these problems, this Special Issue contains selected papers from the International Workshop on Annotation of useR Data for UbiquitOUs Systems (ARDUOUS) (2017/2018), with a focus on:

1) intelligent and interactive tools and automated methods for annotating large sensor datasets;
2) the role and impact of annotations in designing sensor-based applications;
3) the process of labelling and the requirements for producing high-quality annotations, especially in the context of large sensor datasets.

In addition, we are looking for outstanding submissions that extend the state of the art in annotation for sensor-based systems. The scope of the issue includes, but is not limited to:

 - methods and intelligent tools for annotating sensor data
 - processes of and best practices in annotating sensor data
 - annotation methods and tools for sensor setup and configuration
 - sensors and sensor-based methods and practices towards automating the annotation process
 - improving and evaluating the annotation quality for better sensor interpretation
 - ethical issues concerning the collection and annotation of sensor data
 - beyond the labels: ontologies for semantic annotation of sensor data
 - high-quality and re-usable annotation for publicly available sensor datasets
 - impact of annotation on a sensor-based system's performance
 - building classifier models that are capable of dealing with multiple (noisy) annotations and/or making use of taxonomies/ontologies
 - the potential value of incorporating modelling of the annotators into predictive models

Dr.-Ing. Kristina Yordanova
Dr. Adeline Paiement
Prof. Jesse Hoey
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

21 pages, 2153 KiB  
Article
Creating and Exploring Semantic Annotation for Behaviour Analysis
by Kristina Yordanova and Frank Krüger
Sensors 2018, 18(9), 2778; https://doi.org/10.3390/s18092778 - 23 Aug 2018
Cited by 15 | Viewed by 3512
Abstract
Providing ground truth is essential for activity recognition and behaviour analysis as it is needed for providing training data in methods of supervised learning, for providing context information for knowledge-based methods, and for quantifying the recognition performance. Semantic annotation extends simple symbolic labelling by assigning semantic meaning to the label, enabling further reasoning. In this paper, we present a novel approach to semantic annotation by means of plan operators. We provide a step-by-step description of the workflow for manually creating the ground truth annotation. To validate our approach, we create semantic annotation of the Carnegie Mellon University (CMU) grand challenge dataset, which is often cited, but, due to missing and incomplete annotation, almost never used. We show that it is possible to derive hidden properties, behavioural routines, and changes in initial and goal conditions in the annotated dataset. We evaluate the quality of the annotation by calculating the interrater reliability between two annotators who labelled the dataset. The results show very good agreement (Cohen’s κ of 0.8) between the annotators. The produced annotation and the semantic models are publicly available, in order to enable further usage of the CMU grand challenge dataset. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
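As a brief illustration of the inter-rater statistic reported in this abstract, the sketch below computes Cohen’s κ for two annotators' label sequences. It is not the authors' code; the activity labels are invented for the example.

```python
# Minimal sketch: Cohen's kappa between two annotators' label sequences.
# Label names are hypothetical; the formula is the standard one.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equally long label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both annotators pick the same label independently.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n)
                   for l in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

annotator_1 = ["crack_egg", "stir", "stir", "pour", "idle"]
annotator_2 = ["crack_egg", "stir", "pour", "pour", "idle"]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")
```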

11 pages, 1784 KiB  
Article
A Combined Approach to Predicting Rest in Dogs Using Accelerometers
by Cassim Ladha and Christy L. Hoffman
Sensors 2018, 18(8), 2649; https://doi.org/10.3390/s18082649 - 13 Aug 2018
Cited by 18 | Viewed by 3699
Abstract
The ability to objectively measure episodes of rest has clear application for assessing health and well-being. Accelerometers afford a sensitive platform for doing so and have demonstrated their use in many human-based trials and interventions. Current state-of-the-art methods for predicting sleep from accelerometer signals are either based on posture or low movement. While both have proven to be sensitive in humans, the methods do not directly transfer well to dogs, possibly because dogs are commonly alert but physically inactive when recumbent. In this paper, we combine a previously validated low-movement algorithm developed for humans and a posture-based algorithm developed for dogs. The hybrid approach was tested on 12 healthy dogs of varying breeds and sizes in their homes. The approach predicted the state of rest with a mean accuracy of 0.86 (SD = 0.08). Furthermore, when a dog was in a resting state, the method was able to distinguish between head up and head down posture with a mean accuracy of 0.90 (SD = 0.08). This approach can be applied in a variety of contexts to assess how factors, such as changes in housing conditions or medication, may influence a dog’s resting patterns. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
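The following sketch shows one plausible way to combine a low-movement criterion with a posture criterion on windowed accelerometer data, in the spirit of the hybrid approach described above. It is not the authors' algorithm: the sampling rate, thresholds, and axis conventions are assumptions.

```python
# Illustrative sketch (not the published method): classify windows of a
# 3-axis accelerometer signal as active, rest-head-up, or rest-head-down.
import numpy as np

FS = 30        # assumed sampling rate in Hz
WINDOW_S = 10  # assumed window length in seconds

def classify_windows(acc):
    """acc: (n_samples, 3) acceleration in g. Returns one label per window."""
    win = FS * WINDOW_S
    labels = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        movement = np.linalg.norm(w, axis=1).std()        # low std -> little movement
        pitch = np.degrees(np.arctan2(w[:, 0].mean(),     # sensor tilt from gravity
                                      np.hypot(w[:, 1].mean(), w[:, 2].mean())))
        if movement >= 0.05:                              # assumed movement threshold (g)
            labels.append("active")
        elif abs(pitch) < 20:                             # assumed head-down cut-off (deg)
            labels.append("rest_head_down")
        else:
            labels.append("rest_head_up")
    return labels

acc = np.random.normal([0.0, 0.0, 1.0], 0.01, size=(FS * 60, 3))  # a near-still minute
print(classify_windows(acc)[:3])
```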

18 pages, 2235 KiB  
Article
Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets
by Alexander Diete, Timo Sztyler and Heiner Stuckenschmidt
Sensors 2018, 18(8), 2639; https://doi.org/10.3390/s18082639 - 11 Aug 2018
Cited by 6 | Viewed by 3878
Abstract
Working with multimodal datasets is a challenging task as it requires annotations which often are time consuming and difficult to acquire. This includes in particular video recordings which often need to be watched as a whole before they can be labeled. Additionally, other modalities like acceleration data are often recorded alongside a video. For that purpose, we created an annotation tool that makes it possible to annotate datasets of video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means that, after labeling a small set of instances, our system is able to provide labeling recommendations. We aim to rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. For that purpose, we apply template matching to identify time intervals of certain activities. We test our approach on three datasets, one containing warehouse picking activities, one consisting of activities of daily living, and one about meal preparations. Our results show that the presented method is able to give hints to annotators about possible label candidates. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
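To make the template-matching idea concrete, here is a small hedged sketch: a labelled acceleration template is slid over the unlabelled signal, and intervals with high normalised cross-correlation are proposed as label candidates. Variable names and the 0.8 threshold are assumptions, not values from the paper.

```python
# Minimal sketch of template matching for label suggestions on 1-D sensor data.
import numpy as np

def suggest_intervals(signal, template, threshold=0.8):
    """signal, template: 1-D arrays (e.g. acceleration magnitude).
    Returns (start, end, score) tuples where the template matches well."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    m = len(t)
    candidates = []
    for start in range(len(signal) - m + 1):
        w = signal[start:start + m]
        w = (w - w.mean()) / (w.std() + 1e-9)
        score = float(np.dot(w, t) / m)   # normalised cross-correlation in [-1, 1]
        if score >= threshold:
            candidates.append((start, start + m, score))
    return candidates

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 50))            # one labelled activity cycle
signal = np.concatenate([rng.normal(0, 0.1, 100), template, rng.normal(0, 0.1, 100)])
print(suggest_intervals(signal, template)[:3])
```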

22 pages, 1728 KiB  
Article
Talk, Text, Tag? Understanding Self-Annotation of Smart Home Data from a User’s Perspective
by Emma L. Tonkin, Alison Burrows, Przemysław R. Woznowski, Pawel Laskowski, Kristina Y. Yordanova, Niall Twomey and Ian J. Craddock
Sensors 2018, 18(7), 2365; https://doi.org/10.3390/s18072365 - 20 Jul 2018
Cited by 14 | Viewed by 4445
Abstract
Delivering effortless interactions and appropriate interventions through pervasive systems requires making sense of multiple streams of sensor data. This is particularly challenging when these concern people’s natural behaviours in the real world. This paper takes a multidisciplinary perspective of annotation and draws on an exploratory study of 12 people, who were encouraged to use a multi-modal annotation app while living in a prototype smart home. Analysis of the app usage data and of semi-structured interviews with the participants revealed strengths and limitations regarding self-annotation in a naturalistic context. Handing control of the annotation process to research participants enabled them to reason about their own data, while generating accounts that were appropriate and acceptable to them. Self-annotation provided participants an opportunity to reflect on themselves and their routines, but it was also a means to express themselves freely and sometimes even a backchannel to communicate playfully with the researchers. However, self-annotation may not be an effective way to capture accurate start and finish times for activities, or location associated with activity information. This paper offers new insights and recommendations for the design of self-annotation tools for deployment in the real world. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)

23 pages, 3129 KiB  
Article
Activities of Daily Living Ontology for Ubiquitous Systems: Development and Evaluation
by Przemysław R. Woznowski, Emma L. Tonkin and Peter A. Flach
Sensors 2018, 18(7), 2361; https://doi.org/10.3390/s18072361 - 20 Jul 2018
Cited by 15 | Viewed by 3978
Abstract
Ubiquitous eHealth systems based on sensor technologies are seen as key enablers in the effort to reduce the financial impact of an ageing society. At the heart of such systems sit activity recognition algorithms, which need sensor data to reason over, and a ground truth of adequate quality used for training and validation purposes. The large set-up costs of such research projects and their complexity limit rapid developments in this area. Therefore, information sharing and reuse, especially in the context of collected datasets, is key in overcoming these barriers. One approach which facilitates this process by reducing ambiguity is the use of ontologies. This article presents a hierarchical ontology for activities of daily living (ADL), together with two use cases of ground truth acquisition in which this ontology has been successfully utilised. Requirements placed on the ontology by ongoing work are discussed. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
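A toy illustration of what a hierarchical ADL ontology buys an annotator: a fine-grained label can always be generalised to its ancestors. The class names below are invented and far smaller than the ontology described in the paper.

```python
# Hypothetical miniature ADL hierarchy; parent links map a class to its superclass.
PARENT = {
    "make_tea": "prepare_drink",
    "make_coffee": "prepare_drink",
    "prepare_drink": "kitchen_activity",
    "cook_meal": "kitchen_activity",
    "kitchen_activity": "activity_of_daily_living",
}

def ancestors(label):
    """Return the label and all of its ancestors, most specific first."""
    chain = [label]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

# A fine-grained annotation remains usable for coarser recognition tasks.
print(ancestors("make_tea"))
```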

20 pages, 14107 KiB  
Article
Consistent Semantic Annotation of Outdoor Datasets via 2D/3D Label Transfer
by Radim Tylecek and Robert B. Fisher
Sensors 2018, 18(7), 2249; https://doi.org/10.3390/s18072249 - 12 Jul 2018
Cited by 8 | Viewed by 4539
Abstract
The advance of scene understanding methods based on machine learning relies on the availability of large ground truth datasets, which are essential for their training and evaluation. Construction of such datasets with imagery from real sensor data, however, typically requires much manual annotation of semantic regions in the data, delivered by substantial human labour. To speed up this process, we propose a framework for semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. It makes use of an available 3D model of the traversed scene to project segmented 3D objects into each camera frame to obtain an initial annotation of the associated 2D image, which is followed by manual refinement by the user. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We have evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. The analysis of annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
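The optical-flow transfer step can be sketched as follows: warp the refined label image of one frame onto the next frame using a dense flow field. This uses OpenCV's Farnebäck flow as a stand-in for the authors' exact pipeline, and all names are placeholders.

```python
# Hedged sketch: propagate a per-pixel label image to the next frame via optical flow.
import cv2
import numpy as np

def propagate_labels(labels_prev, gray_prev, gray_next):
    """labels_prev: uint8 class-id image for the previous frame.
    gray_prev, gray_next: consecutive greyscale frames of equal size."""
    h, w = gray_next.shape
    # Flow from the *next* frame back to the previous one, so every pixel of the
    # next frame knows where to fetch its label from.
    flow = cv2.calcOpticalFlowFarneback(gray_next, gray_prev, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Nearest-neighbour sampling keeps labels as discrete class ids.
    return cv2.remap(labels_prev, map_x, map_y, interpolation=cv2.INTER_NEAREST)
```

In the framework described above, the warped result would serve as the initial annotation of the new frame, to be refined manually before the next transfer.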

20 pages, 1299 KiB  
Article
Automatic Annotation for Human Activity Recognition in Free Living Using a Smartphone
by Federico Cruciani, Ian Cleland, Chris Nugent, Paul McCullagh, Kåre Synnes and Josef Hallberg
Sensors 2018, 18(7), 2203; https://doi.org/10.3390/s18072203 - 09 Jul 2018
Cited by 45 | Viewed by 6444
Abstract
Data annotation is a time-consuming process posing major limitations to the development of Human Activity Recognition (HAR) systems. The availability of a large amount of labeled data is required for supervised Machine Learning (ML) approaches, especially in the case of online and personalized approaches requiring user specific datasets to be labeled. The availability of such datasets has the potential to help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we (i) present an automatic labeling method facilitating the collection of labeled datasets in free-living conditions using the smartphone, and (ii) investigate the robustness of common supervised classification approaches under instances of noisy data. We evaluated the results with a dataset consisting of 38 days of manually labeled data collected in free living. The comparison between the manually and the automatically labeled ground truth demonstrated that it was possible to obtain labels automatically with an 80–85% average precision rate. Results obtained also show how a supervised approach trained using automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, results also demonstrated how the presence of label noise could lower the f-score to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine). Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
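The evaluation idea, training on automatically generated (possibly noisy) labels and scoring against a manual ground truth, can be sketched in a few lines. The data below is synthetic and the 15% noise rate is an assumption purely for illustration.

```python
# Hedged sketch: measure the F-score of a classifier trained on noisy automatic labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # stand-in smartphone features
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)    # manual ground-truth labels
noise = rng.random(len(y_true)) < 0.15          # simulate 15% automatic-labelling errors
y_auto = np.where(noise, 1 - y_true, y_true)

split = 700
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y_auto[:split])              # trained on the noisy automatic labels
pred = clf.predict(X[split:])
print("f1 vs. manual ground truth:", round(f1_score(y_true[split:], pred), 3))
```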

16 pages, 2369 KiB  
Article
Automatic Annotation of Unlabeled Data from Smartphone-Based Motion and Location Sensors
by Nsikak Pius Owoh, Manmeet Mahinderjit Singh and Zarul Fitri Zaaba
Sensors 2018, 18(7), 2134; https://doi.org/10.3390/s18072134 - 03 Jul 2018
Cited by 12 | Viewed by 3790
Abstract
Automatic data annotation eliminates most of the challenges faced with manual methods of annotating sensor data. It significantly improves users’ experience during sensing activities since their active involvement in the labeling process is reduced. An unsupervised learning technique such as clustering can be used to automatically annotate sensor data. However, the lingering issue with clustering is the validation of generated clusters. In this paper, we adopted the k-means clustering algorithm for annotating unlabeled sensor data for the purpose of detecting sensitive location information of mobile crowd sensing users. Furthermore, we proposed a cluster validation index for the k-means algorithm, which is based on Multiple Pair-Frequency. Thereafter, we trained three classifiers (Support Vector Machine, K-Nearest Neighbor, and Naïve Bayes) using cluster labels generated from the k-means clustering algorithm. The accuracy, precision, and recall of these classifiers were evaluated during the classification of “non-sensitive” and “sensitive” data from motion and location sensors. Very high accuracy scores were recorded from Support Vector Machine and K-Nearest Neighbor classifiers, while a fairly high accuracy score was recorded from the Naïve Bayes classifier. With the hybridized machine learning (unsupervised and supervised) technique presented in this paper, unlabeled sensor data was automatically annotated and then classified. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
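A compressed sketch of the hybrid unsupervised/supervised idea: cluster unlabeled features with k-means, treat the cluster ids as automatic annotations, and train a supervised classifier on them. The features below are synthetic, and the choice of two clusters and an RBF SVM is an assumption for the example.

```python
# Illustrative sketch: k-means labels used to train a supervised classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Stand-in features derived from motion and location sensors (two latent groups).
X = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(3, 1, (300, 4))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
auto_labels = kmeans.labels_                    # cluster ids used as automatic annotation

svm = SVC(kernel="rbf").fit(X, auto_labels)     # supervised model trained on auto labels
print(svm.predict(X[:5]))
```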

1572 KiB  
Article
Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models
by Christine F. Martindale, Florian Hoenig, Christina Strohrmann and Bjoern M. Eskofier
Sensors 2017, 17(10), 2328; https://doi.org/10.3390/s17102328 - 13 Oct 2017
Cited by 13 | Viewed by 5819
Abstract
Cyclic signals, such as human motion and heart activity, are an intrinsic part of daily life. Their detailed analysis is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost, since annotations are available only under laboratory conditions or require manual segmentation of the data under less restricted conditions. This paper presents a smart annotation method that reduces this cost of labeling for sensor-based data, which is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning of sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, ‘in the wild’ data, demonstrating the applicability of using hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training of segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics such as those employed to analyze the progress of movement disorders or analysis of running technique. Full article
(This article belongs to the Special Issue Annotation of User Data for Sensor-Based Systems)
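As a rough stand-in for the hierarchical HMM described above, the sketch below fits a plain Gaussian HMM (via the hmmlearn package) to a synthetic cyclic signal and uses the decoded state sequence to mark segment boundaries. The hierarchical structure, the four-state choice, and the signal itself are simplifications for illustration only.

```python
# Hedged sketch: segmenting cyclic data with a (non-hierarchical) Gaussian HMM.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
t = np.arange(2000) / 100.0
signal = np.sin(2 * np.pi * 1.5 * t) + 0.05 * rng.normal(size=t.size)  # ~1.5 Hz cycles
X = signal.reshape(-1, 1)

model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50, random_state=2)
model.fit(X)                        # unsupervised fit on a section of cyclic data
states = model.predict(X)           # per-sample state sequence approximating cycle phases

# Candidate segment boundaries are where the decoded state changes.
boundaries = np.flatnonzero(np.diff(states)) + 1
print("first few segment boundaries (samples):", boundaries[:5])
```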
