Learning with Few or Without Annotated Face, Body and Gesture Data
For more than a decade, Deep Learning has been successfully employed for vision-based face, body and gesture analysis, at both static and dynamic granularities. This success is largely due to the development of effective deep architectures and the release of substantial datasets. However, one of the main limitations of Deep Learning is that it requires large-scale annotated datasets to train efficient models. Gathering such face, body or gesture data and annotating them can be very time-consuming and laborious. This is particularly the case in areas where domain experts are required, such as the medical domain, where crowdsourcing may not be suitable. In addition, currently available face, body and gesture datasets cover a limited set of categories, which makes adapting trained models to novel categories far from straightforward. Finally, while most of the available datasets focus on classification problems with discretized labels, many scenarios require continuous annotations, which significantly complicates the annotation process. The goal of this workshop is to explore approaches to overcome these limitations by investigating ways to learn from few annotated data, to transfer knowledge from similar domains or problems, or to leverage the community to gather novel large-scale annotated datasets.
Organizers: Maxime Devanne, Mohamed Daoudi, Stefano Berretti, Germain Forestier, Jonathan Weber