CIRCLE has been accepted to CVPR Findings
Published by Marco Garosi in Research · Monday 23 Feb 2026 · 1:30
Tags: large multimodal models, open-world classification
Exciting news: our latest research is headed to CVPR Findings 2026!
I am thrilled to announce that our latest paper, investigating the true classification potential of large multimodal models (LMMs), has been accepted to CVPR Findings 2026!
For a long time, the computer vision community has relied on a generally accepted rule of thumb: use CLIP-like contrastive vision-language models (VLMs) for zero-shot classification, and save LMMs for more complex, generative tasks.
In our new paper, we challenge that status quo.
Rethinking multimodal classification
We wanted to know what happens when you leverage a frequently overlooked superpower of LMMs: in-context learning. Here is a brief look at what we discovered:
- Matching the specialists: When we benchmarked state-of-the-art LMMs on diverse closed-world classification datasets, we found that providing just a few in-context examples allows LMMs to match—and sometimes surpass—contrastive VLMs using cache-based adapters.
- The open-world hurdle: We then pushed LMMs into the much more challenging "open-world" setting. Because LMMs are generative, they naturally fit this task. However, they tend to stumble when provided with imperfect context information.
- Introducing CIRCLE: To solve this, we developed CIRCLE, a simple, completely training-free method. CIRCLE assigns pseudo-labels to in-context examples and iteratively refines them using the available context itself.
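To make the pseudo-labeling idea above concrete, here is a minimal, hypothetical sketch of iterative pseudo-label refinement over in-context example embeddings. This is an illustration of the general technique, not the paper's actual algorithm: the function name, the use of class-name embeddings for initialization, and the prototype-averaging update are all assumptions for the sake of the example.

```python
import numpy as np

def circle_pseudo_labels(example_embs, class_embs, n_iters=5):
    """Illustrative sketch (NOT the paper's exact method): assign
    pseudo-labels to in-context examples, then iteratively refine
    them using the examples themselves as context."""
    # Normalize so dot products become cosine similarities.
    ex = example_embs / np.linalg.norm(example_embs, axis=1, keepdims=True)
    cls = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)

    # Step 1: initial pseudo-labels from similarity to class-name embeddings.
    labels = (ex @ cls.T).argmax(axis=1)

    for _ in range(n_iters):
        # Step 2: build class prototypes from current pseudo-labels,
        # falling back to the class-name embedding for empty classes.
        protos = np.stack([
            ex[labels == c].mean(axis=0) if np.any(labels == c) else cls[c]
            for c in range(cls.shape[0])
        ])
        protos /= np.linalg.norm(protos, axis=1, keepdims=True)

        # Step 3: re-assign pseudo-labels against the refined prototypes.
        new_labels = (ex @ protos.T).argmax(axis=1)
        if np.array_equal(new_labels, labels):  # converged
            break
        labels = new_labels
    return labels
```

The appeal of a scheme like this is that it is entirely training-free: the refinement loop only re-uses the context embeddings already available at inference time, with no gradient updates to the model.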
A unified future
Through extensive experiments, we demonstrated that CIRCLE establishes a highly robust baseline for open-world classification. It consistently outperforms its contrastive VLM counterparts, showing that large multimodal models can act as powerful, unified classifiers and as flexible alternatives to highly specialized models.
I can't wait to share more details with the community at CVPR 2026. Stay tuned for the full paper release and the accompanying code!

