ChatSearch: a Dataset and a Generative Retrieval Model for General Conversational Image Retrieval
In this paper, we investigate the task of general conversational image retrieval on open-domain images. The objective is to search for images based on interactive conversations between humans and computers. To advance this task, we curate a dataset called ChatSearch. This dataset includes a multi-round multimodal conversational context query for each target image, thereby requiring the retrieval system to retrieve the correct image from the database. Simultaneously, we propose a generative retrieval model named ChatSearcher, which is trained end-to-end to accept and produce interleaved image-text inputs and outputs. ChatSearcher exhibits strong capability in reasoning with multimodal context and can leverage world knowledge to yield visual retrieval results. It demonstrates superior performance on the ChatSearch dataset and also achieves competitive results on other image retrieval tasks and visual conversation tasks. We anticipate that this work will inspire further research on interactive multimodal retrieval systems. Our dataset will be available at https://github.com/joez17/ChatSearch.
💡 Research Summary
The paper tackles the under‑explored problem of general conversational image retrieval—searching for images through natural, multi‑turn dialogues that may involve both text and visual references. To enable systematic study, the authors introduce two major contributions: a large‑scale multimodal dialogue dataset called ChatSearch and a generative retrieval model named ChatSearcher.
ChatSearch dataset
Starting from the MS‑COCO image‑text pairs, the authors build an automated pipeline that leverages state‑of‑the‑art foundation models (GPT‑4 for dialogue generation, CLIP‑H for image similarity search, and BLIP‑2‑OPT2.7B for captioning). The pipeline creates three types of conversational contexts:
- tChatSearch – pure textual multi‑turn dialogues.
- iChatSearch – single‑round image‑text interactions, generated via two complementary strategies:
  - MDC‑I (reference‑image) – a source image is used, similar images are retrieved with CLIP, and GPT‑4 crafts a modification instruction that highlights differences.
  - MDC‑T (reference‑text) – a source caption is used, GPT‑4 produces a two‑turn dialogue, the first answer is turned into an image via CLIP search, and the second answer returns the original caption.
- mChatSearch – merged multi‑turn multimodal dialogues that combine the above streams, yielding richer, more ambiguous contexts.
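The CLIP-based similar-image retrieval step used in the pipeline above can be sketched as a cosine-similarity search over precomputed image embeddings. This is a minimal illustration with toy NumPy vectors standing in for real CLIP-H embeddings; the function name and shapes are illustrative, not the authors' actual code.

```python
import numpy as np

def retrieve_similar(query_emb: np.ndarray, gallery_embs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query.

    Embeddings are L2-normalized so the dot product equals cosine similarity,
    which is how CLIP image embeddings are typically compared.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity per gallery image
    return np.argsort(-sims)[:k]       # indices of the top-k most similar
```

In the real pipeline the gallery embeddings would come from encoding the MS-COCO images with a CLIP vision encoder; here any fixed-dimension vectors work.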
After automatic generation, a team of five experts manually reviews the test split, filtering out low‑quality images and incoherent dialogues. The final dataset comprises roughly 30 K dialogues, split into training and evaluation sets, and is organized into the three sub‑tasks mentioned above. Evaluation uses Recall@K (K = 1, 5, 10) and an average recall metric.
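The Recall@K evaluation described above reduces to checking whether the target image appears among the top-K retrieved results, averaged over queries and over the chosen K values. A minimal sketch (the function names are illustrative, not from the paper's codebase):

```python
def recall_at_k(ranked_ids, target_id, k):
    """1.0 if the target image id appears in the top-k retrieved ids, else 0.0."""
    return float(target_id in ranked_ids[:k])

def average_recall(per_query_rankings, target_ids, ks=(1, 5, 10)):
    """Mean Recall@K over all queries, averaged across the given K values."""
    per_k = []
    for k in ks:
        hits = sum(
            recall_at_k(ranking, target, k)
            for ranking, target in zip(per_query_rankings, target_ids)
        )
        per_k.append(hits / len(target_ids))
    return sum(per_k) / len(per_k)
```

Each query contributes a full ranking of the image database; per-K recalls can also be reported separately, as in the dataset's R@1/R@5/R@10 breakdown.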
ChatSearcher model
ChatSearcher is a generative retrieval system that treats both word prediction and image retrieval as a single generation problem. Its architecture consists of:
- LLM backbone – Vicuna‑7B v1.5 (causal decoder‑only).
- Vision encoder – OpenAI CLIP ViT‑L, providing a global CLS token and a set of dense visual embeddings extracted by a Q‑former (borrowed from BLIP‑2).
- Multimodal tokenization – Visual embeddings are wrapped between special boundary tokens, so that images and text can be interleaved in a single token sequence that the LLM backbone processes autoregressively.
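The interleaving scheme can be sketched as flattening mixed text/image segments into one token stream, with each image's visual embeddings bracketed by boundary tokens. The boundary token names (`<img>`, `</img>`) and the placeholder scheme here are illustrative assumptions, not the model's actual vocabulary:

```python
def build_interleaved_sequence(segments, img_start="<img>", img_end="</img>"):
    """Flatten mixed text/image segments into a single token stream.

    Text segments contribute their word tokens (a whitespace split stands in
    for a real tokenizer); each image segment contributes boundary tokens
    around placeholders for its dense visual embeddings.
    """
    tokens = []
    for kind, payload in segments:
        if kind == "text":
            tokens.extend(payload.split())
        elif kind == "image":
            # payload = number of visual embeddings produced by the Q-former
            tokens.append(img_start)
            tokens.extend(f"<emb_{i}>" for i in range(payload))
            tokens.append(img_end)
    return tokens
```

At training time the embedding placeholders would be replaced by the Q-former's continuous outputs rather than discrete vocabulary entries; the sketch only shows the sequence layout.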