From Imitation to Intuition: Intrinsic Reasoning for Open-Instance Video Classification

Ke Zhang1,2,*, Xiangchen Zhao2,*, Yunjie Tian2, Jiayu Zheng2, Vishal M. Patel1, Di Fu2
1 Johns Hopkins University
2 ByteDance Inc.

Method pipeline diagram

Abstract

Conventional video classification models, acting as effective imitators, excel in scenarios with homogeneous data distributions. Real-world applications, however, often pose an open-instance challenge in which intra-class variation is vast and complex, far beyond what existing benchmarks capture. While traditional video encoders struggle to fit these diverse distributions, vision-language models (VLMs) offer superior generalization but have not fully leveraged their reasoning capabilities (intuition) for such tasks. In this paper, we bridge this gap with an intrinsic reasoning framework that evolves open-instance video classification from imitation to intuition. Our approach, DeepIntuit, begins with a cold-start supervised alignment stage that initializes reasoning capability, followed by refinement with Group Relative Policy Optimization (GRPO) to enhance reasoning coherence through reinforcement learning. Crucially, to translate this reasoning into accurate classification, DeepIntuit then introduces an intuitive calibration stage, in which a classifier is trained on the intrinsic reasoning traces generated by the refined VLM, ensuring stable knowledge transfer without distribution mismatch. Extensive experiments demonstrate that, for open-instance video classification, DeepIntuit benefits significantly from transcending simple feature imitation and evolving toward intrinsic reasoning.
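The GRPO refinement step mentioned above can be illustrated with a minimal sketch of its core idea: scoring each sampled reasoning trace relative to the other traces in its group, rather than against a learned value baseline. The function name and the binary correctness reward below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantage estimation (sketch): normalize each sampled
    response's reward by the mean and standard deviation of its group,
    so traces better than the group average get positive advantage."""
    rewards = np.asarray(rewards, dtype=np.float64)
    mean, std = rewards.mean(), rewards.std()
    return (rewards - mean) / (std + 1e-8)

# Hypothetical rewards for a group of G=4 reasoning traces sampled for
# one video: e.g. 1.0 if a trace's final label is correct, 0.0 otherwise.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(advs)  # correct traces receive positive advantage, incorrect negative
```

Traces with positive advantage are reinforced and those with negative advantage are suppressed, which is how group-relative optimization can improve reasoning coherence without training a separate critic.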

Open-instance video classification

Close-instance vs. open-instance comparison diagram

Close-instance vs. open-instance video classification. (a) Close-instance benchmarks have relatively homogeneous intra-class distributions. (b) Open-instance settings exhibit broader, open-ended intra-class variation that better reflects real-world data. (c) Consequently, conventional video encoders fit close-instance data well but struggle to generalize, whereas VLMs with stronger semantic priors are more robust in the open-instance regime.