Quanliang Jing, Di Yao, Chang Gong
Dec 15, 2021
2021 IEEE International Conference on Big Data (Big Data)
In this paper, we propose a new task, trajectory cross-modal retrieval, which enables cross-modal search between coordinate trajectories and images containing trajectories. The task is challenging: a model must learn representations for each modality while simultaneously reducing the cross-domain discrepancy caused by their inconsistent data distributions. We propose TrajCross, a cross-modal retrieval model based on multi-level representation for trajectory cross-modal retrieval. Specifically, TrajCross extracts location features and shape information to represent the multi-modal data, and adopts a contrastive learning method to preserve semantics among similar multi-modal data. Extensive experiments show that TrajCross significantly outperforms state-of-the-art cross-modal retrieval methods.
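The abstract does not give the exact objective TrajCross uses, but contrastive alignment between two modalities is commonly implemented with a symmetric InfoNCE-style loss over matched pairs. The sketch below is a generic NumPy illustration of that idea, assuming trajectory and image embeddings arrive as row-aligned matrices; the function name, temperature value, and loss form are illustrative assumptions, not the paper's method.

```python
import numpy as np

def info_nce_loss(traj_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss between paired
    trajectory and image embeddings (row i of each matrix is a
    matched pair). A generic sketch, not the exact TrajCross
    objective, whose details the abstract omits."""
    # L2-normalize so dot products become cosine similarities.
    t = traj_emb / np.linalg.norm(traj_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(logits))        # diagonal entries are positives

    def xent(lg):
        # Cross-entropy with the diagonal as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the two retrieval directions: trajectory->image and
    # image->trajectory.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each trajectory embedding toward its paired image embedding and pushes it away from the other images in the batch, which is the semantic-preservation effect the abstract describes.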