NaVILA: Legged Robot Vision-Language-Action Model for Navigation
Published Dec 5, 2024 · An-Chieh Cheng, Yandong Ji, Zhaojing Yang
arXiv · 19 Citations · 3 Influential Citations
Abstract
This paper proposes to solve the problem of Vision-and-Language Navigation with legged robots, which not only provides a flexible way for humans to issue commands but also allows the robot to navigate through more challenging and cluttered scenes. However, it is non-trivial to translate human language instructions all the way down to low-level leg joint actions. We propose NaVILA, a two-level framework that unifies a Vision-Language-Action model (VLA) with locomotion skills. Instead of directly predicting low-level actions from the VLA, NaVILA first generates mid-level actions with spatial information in the form of language (e.g., "moving forward 75 cm"), which serve as input to a visual locomotion RL policy for execution. NaVILA substantially improves over previous approaches on existing benchmarks. The same advantages are demonstrated in our newly developed benchmarks with IsaacLab, featuring more realistic scenes, low-level controls, and real-world robot experiments. We show more results at https://navila-bot.github.io/
NaVILA, a two-level framework, effectively unifies a Vision-Language-Action model with locomotion skills, improving legged robot navigation in challenging scenes.
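
The two-level decomposition described in the abstract can be illustrated with a minimal sketch: a parser turns the VLA's language-form mid-level actions (e.g., "moving forward 75 cm") into velocity commands that a low-level locomotion policy could track. The command schema, function names, and default speeds below are illustrative assumptions, not NaVILA's actual interface.

import re
from dataclasses import dataclass

# Hypothetical mid-level command representation; NaVILA's actual
# interface between the VLA and the locomotion policy may differ.
@dataclass
class MidLevelAction:
    kind: str          # "forward", "turn_left", "turn_right", "stop"
    magnitude: float   # meters for translation, degrees for rotation

def parse_mid_level_action(text: str) -> MidLevelAction:
    """Parse a language action such as 'moving forward 75cm' or
    'turn left 30 degrees' into a structured command."""
    text = text.lower()
    if "stop" in text:
        return MidLevelAction("stop", 0.0)
    number = float(re.search(r"(\d+(?:\.\d+)?)", text).group(1))
    if "forward" in text:
        # Convert centimeters to meters when the unit is cm.
        magnitude = number / 100.0 if "cm" in text else number
        return MidLevelAction("forward", magnitude)
    if "left" in text:
        return MidLevelAction("turn_left", number)
    if "right" in text:
        return MidLevelAction("turn_right", number)
    raise ValueError(f"Unrecognized action: {text}")

def to_velocity_command(action: MidLevelAction,
                        lin_speed: float = 0.5,    # m/s (assumed default)
                        yaw_speed: float = 30.0):  # deg/s (assumed default)
    """Map a mid-level action to (duration, velocity command) for a
    low-level RL locomotion policy to track."""
    if action.kind == "stop":
        return 0.0, (0.0, 0.0, 0.0)
    if action.kind == "forward":
        return action.magnitude / lin_speed, (lin_speed, 0.0, 0.0)
    sign = 1.0 if action.kind == "turn_left" else -1.0
    return action.magnitude / yaw_speed, (0.0, 0.0, sign * yaw_speed)

if __name__ == "__main__":
    act = parse_mid_level_action("moving forward 75cm")
    print(to_velocity_command(act))  # (1.5, (0.5, 0.0, 0.0))

The point of the sketch is the interface choice: the VLA never emits joint torques, only short, spatially grounded language commands, so the same locomotion policy can execute them regardless of which high-level model produced them.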