Here you can find various artifacts from the Text2HBM project.
Textual Descriptions Dataset
Summary: Recent research in behaviour understanding through language grounding has shown that it is possible to automatically generate behaviour models from textual instructions. These models usually have a goal-oriented structure and are expressed in formalisms from the planning domain, such as the Planning Domain Definition Language (PDDL). One major remaining problem is that there are no benchmark datasets for comparing the different model generation approaches, as each approach is usually evaluated on a domain-specific application. To allow an objective comparison of different methods for model generation from textual instructions, in this report we introduce a dataset consisting of 83 textual instructions in English, their refinement into a more structured form, as well as manually developed plans for each of the instructions. The dataset is publicly available to the community.
Here you can find the link to the technical report: Towards Evaluating Plan Generation Approaches with Instructional Texts
Here you can find the dataset, which is publicly available.
Kitchen Task Assessment Dataset
Summary: With the demographic change towards an ageing population, the number of people suffering from neurodegenerative diseases such as dementia is increasing. As the ratio between the young and the elderly population shifts towards the seniors, it becomes important to develop intelligent technologies for supporting the elderly in their everyday activities. Such intelligent technologies usually rely on training data in order to learn models for recognising problematic behaviour. One problem these systems face is that there are few datasets containing training data for people with dementia. Moreover, many of the existing datasets are not publicly available due to privacy concerns. To address these problems, we present a sensor dataset for kitchen task assessment containing both normal behaviour and erroneous behaviour due to dementia. The dataset was recorded by actors who followed instructions describing normal behaviour and erroneous behaviour caused by the progression of dementia. Furthermore, we present a semantic annotation scheme which allows reasoning not only about the observed behaviour but also about the causes of the errors.
Here you can find the link to the sensor dataset, including sensor data and semantic annotation.
The video data is available on request. If you are interested in the video data, please contact kristina.yordanova (at) text2hbm.org or peter.eschholz (at) uni-rostock.de.
A paper describing the dataset and first evaluation results has been accepted for publication in the PerCom Workshop Proceedings.
The dataset and the first evaluation results have been certified in the PerCom Artifacts Track.
Semantic Annotation for the CMU-MMAC Dataset
Summary: Providing ground truth is essential for activity recognition for three reasons: to apply methods of supervised learning, to provide context information for knowledge-based methods, and to quantify the recognition performance. Semantic annotation extends simple symbolic labelling by assigning semantic meaning to the labels, enabling further reasoning. We create semantic annotation for three of the five sub-datasets in the CMU grand challenge dataset (CMU-MMAC), which is often cited but, due to missing and incomplete annotation, almost never used. The CMU-MMAC consists of five sub-datasets (Brownie, Sandwich, Eggs, Salad, Pizza), each containing recorded sensor data from one food preparation task. The dataset contains data from 55 subjects, where each of them participated in several sub-experiments. While executing the assigned task, the subjects were recorded with five cameras and multiple sensors.
The produced annotation is publicly available to enable further usage of the CMU grand challenge dataset. The annotation for three of the five sub-datasets (Brownie, Sandwich, and Eggs) can be downloaded here.