This challenge focuses on the problem of annotating everyday activities (cooking) captured with cameras and sensors. For this purpose we are using the CMU Kitchen Dataset. The challenge has two strands:
- Manual annotation: demonstrate your manual annotation software or toolset on the subset of this dataset described in Section 1.1 of the challenge instructions. We will compare performance using several metrics, including inter-rater reliability (percentage agreement, Cohen’s Kappa, intraclass correlation, Krippendorff’s Alpha, Pearson’s correlation coefficient, Spearman’s Rho), the time taken for the annotation process, and the learning curve.
- Semi-supervised or automated annotation: build a solution, or demonstrate your existing solution, for annotating the data from this dataset, using any of the data types listed in Section 1.1 of the challenge instructions. We will compare performance using standard machine learning evaluation metrics (accuracy, precision, F1-score). We will also ask you to report the speed and efficiency of the method used and the resources required to run it.
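To make the two families of evaluation metrics above concrete, here is a minimal pure-Python sketch of percentage agreement and Cohen’s Kappa (for the manual-annotation strand) and per-class precision, recall and F1 (for the automated strand). This is an illustrative sketch only, not the official evaluation script; the function names and the toy label sequences are our own.

```python
from collections import Counter

def percentage_agreement(rater_a, rater_b):
    """Fraction of items on which two annotators assign the same label."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    po = percentage_agreement(rater_a, rater_b)
    # Chance agreement estimated from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall and F1 for automated annotation output."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: two annotators labelling four frames of a cooking video.
a = ["stir", "stir", "pour", "pour"]
b = ["stir", "pour", "pour", "pour"]
print(percentage_agreement(a, b))        # 0.75
print(cohens_kappa(a, b))                # 0.5
print(precision_recall_f1(a, b, "stir"))
```

Treating one sequence as ground truth and the other as predictions, the same label lists can feed both agreement and classification metrics, which is convenient when comparing manual and automated annotations of the same video segments.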
For both tasks we provide our own data dictionary for you to use. It is more detailed than the existing CMU annotation and has been validated against a series of metrics and against a causal model.
The detailed description of the challenge and the instructions can be downloaded here.
The CMU Kitchen Dataset is a subset of the CMU Multi-Modal Activity Database (CMU-MMAC), which ‘contains multimodal measures of the human activity of subjects performing the tasks involved in cooking and food preparation. The CMU-MMAC database was collected in Carnegie Mellon’s Motion Capture Lab. A kitchen was built and to date twenty-five subjects have been recorded cooking five different recipes: brownies, pizza, sandwich, salad, and scrambled eggs.’
For this challenge, we ask you not to use the audio feed, the motion capture data, or the BodyMedia device data. You may use the video, the eWatch wearable data, and the IMU and RFID data.
A detailed description of the available data sources and their resolution and format can be found here.
A small prize will be offered to the winner in each category. Selected participants will be invited to present their results at the next ARDUOUS workshop. The most highly scored entry overall across both strands will also be offered a fee waiver to publish their results in the MDPI journal Sensors.
The submission deadline is 15th November 2021.
Submit the results and artefacts of the challenge [here], using the format described in the challenge instructions. After submitting your results and artefacts, please send us an email with your contact details. See “Contact” for our email addresses.
For any questions, contact Kristina Yordanova (firstname.lastname@example.org), Emma Tonkin (email@example.com), Teodor Stoev (firstname.lastname@example.org) or Greg Tourte (email@example.com).