TY - GEN
T1 - How Effective are Self-supervised Models for Contact Identification in Videos
AU - Gunawardhana, Malitha
AU - Sadith, Limalka
AU - David, Liel
AU - Harari, Daniel
AU - Khan, Muhammad Haris
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - The exploration of video content via Self-Supervised Learning (SSL) models has unveiled a dynamic field of study, emphasizing both the complex challenges and unique opportunities inherent in this area. Despite the growing body of research, the ability of SSL models to detect physical contacts in videos remains largely unexplored, particularly the effectiveness of methods such as downstream supervision with linear probing or full fine-tuning. This work aims to bridge this gap by employing eight different convolutional neural network (CNN)-based video SSL models specifically to identify instances of physical contact within video sequences. The Something-Something v2 (SSv2) and Epic-Kitchens (EK-100) datasets were chosen for evaluating these approaches because of the models' promising results on UCF101 and HMDB51, coupled with their limited prior assessment on SSv2 and EK-100. Additionally, these datasets feature diverse environments and scenarios, which are essential for testing the robustness and accuracy of video-based models. This approach not only examines the effectiveness of each model in recognizing physical contacts but also explores their performance on the downstream action-recognition task. In doing so, this work contributes valuable insights into the adaptability of SSL models for interpreting complex, dynamic visual information.
AB - The exploration of video content via Self-Supervised Learning (SSL) models has unveiled a dynamic field of study, emphasizing both the complex challenges and unique opportunities inherent in this area. Despite the growing body of research, the ability of SSL models to detect physical contacts in videos remains largely unexplored, particularly the effectiveness of methods such as downstream supervision with linear probing or full fine-tuning. This work aims to bridge this gap by employing eight different convolutional neural network (CNN)-based video SSL models specifically to identify instances of physical contact within video sequences. The Something-Something v2 (SSv2) and Epic-Kitchens (EK-100) datasets were chosen for evaluating these approaches because of the models' promising results on UCF101 and HMDB51, coupled with their limited prior assessment on SSv2 and EK-100. Additionally, these datasets feature diverse environments and scenarios, which are essential for testing the robustness and accuracy of video-based models. This approach not only examines the effectiveness of each model in recognizing physical contacts but also explores their performance on the downstream action-recognition task. In doing so, this work contributes valuable insights into the adaptability of SSL models for interpreting complex, dynamic visual information.
UR - http://www.scopus.com/inward/record.url?scp=85210261761&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-9003-6_8
DO - 10.1007/978-981-97-9003-6_8
M3 - Conference contribution
AN - SCOPUS:85210261761
SN - 9789819790029
T3 - Communications in Computer and Information Science
SP - 117
EP - 131
BT - Human Activity Recognition and Anomaly Detection - 4th International Workshop, DL-HAR 2024, and 1st International Workshop, ADFM 2024, Held in Conjunction with IJCAI 2024, Revised Selected Papers
A2 - Peng, Kuan-Chuan
A2 - Wang, Yizhou
A2 - Li, Ziyue
A2 - Chen, Zhenghua
A2 - Wu, Min
A2 - Yang, Jianfei
A2 - Suh, Sungho
PB - Springer Science and Business Media B.V.
T2 - 4th International Workshop on Deep Learning for Human Activity Recognition, DL-HAR 2024, and 1st International Workshop on Anomaly Detection with Foundation Models, ADFM 2024, Held in Conjunction with the International Joint Conference on AI, IJCAI 2024
Y2 - 3 August 2024 through 9 August 2024
ER -