Multimodal and Embodied Learning, Especially for Scientific Discovery
In this research direction, we aim to develop systems that leverage multimodal data and knowledge to solve expert tasks across domains and accelerate scientific discovery. We seek to efficiently learn expert models that require fewer resources, generalize better, and explain their behavior more clearly than existing foundation models. Specifically, we develop theories and techniques that combine geometric multimodal representation learning with analogy-based reasoning to train embodied agents in multimodal, multitask settings, improving learning efficiency, model generalizability, and model explainability.
Relevant Materials
Publications
- Haochen Shi*, Zhiyuan Sun*, Xingdi Yuan, Marc-Alexandre Côté✉, Bang Liu✉. OPEx: A Component-Wise Analysis of LLM-Centric Agents in Embodied Instruction Following, accepted to ACL 2024. (Download)
- Sirui Hong*, Yizhang Lin*, Bang Liu, Bangbang Liu, Binhao Wu, Danyang Li, Jiaqi Chen, Jiayi Zhang, Jinlin Wang, Li Zhang, Lingyao Zhang, Min Yang, Mingchen Zhuge, Taicheng Guo, Tuo Zhou, Wei Tao, Wenyi Wang, Xiangru Tang, Xiangtao Lu, Xiawu Zheng, Xinbing Liang, Yaying Fei, Yuheng Cheng, Zongze Xu, Chenglin Wu✉. Data Interpreter: An LLM Agent For Data Science, arXiv:2402.18679. (Download)