- Affiliated to IEEE PerCom 2018
- March 19-23, 2018 (exact day TBC) (full day)
Labelling user data is a central part of designing and evaluating pervasive systems that aim to support the user through situation-aware reasoning. It is essential both in designing and in training a system to recognize and reason about situations, whether through the definition of a suitable situation model in knowledge-driven applications, or through the preparation of training data for learning tasks in data-driven models. Hence, the quality of annotations can have a significant impact on the performance of the derived systems.
Labelling is also vital for validating and quantifying the performance of applications.
With pervasive systems relying increasingly on large datasets for designing and testing models of users’ activities, the process of data labelling is becoming a major concern for the community. This also reflects the growing need for intelligent interactive annotation tools, which can reduce the manual annotation effort and improve both the efficiency and the quality of annotation in large datasets.
To address these problems, this year’s workshop has a particular focus on:
- intelligent and interactive tools and automated methods for annotating large user datasets.
Furthermore, we aim to address the general problems of:
- the role and impact of annotations in designing pervasive applications,
- the process of labelling, and the requirements to produce high quality annotations, especially in the context of large datasets.
The goal of the workshop is to provide a forum for researchers from interdisciplinary backgrounds to reflect on their experiences, challenges, and possible solutions to the related problems.
More details about the workshop can be found below.
- Call for Papers
- Submission Guidelines
- Program Committee
- Previous Workshops
- Special Issue
- Datasets and Annotation Tools