Towards User-intent aware Multimodal Retrieval

This eBay-funded project aims to develop a comprehensive framework to support vision–language (V–L) research within eBay.

Start date

April 2025

End date

March 2027

Overview

With this project, we aim to create a framework to support vision–language (V–L) research within eBay. Leveraging existing pre-trained V–L models, the framework will improve the performance of matching listings with existing products and support tasks such as product listing moderation, attribute extraction, and explained relevance prediction. It can later also serve as a testbed for customer-support chatbots with image understanding and for image-based product search.

Funding amount

£120,000

Funder

Contact

For enquiries or potential collaboration on this topic please contact Dr Diptesh Kanojia, the Principal Investigator of the project.


Related sustainable development goals

UN Sustainable Development Goal 9: Industry, Innovation, and Infrastructure
