Vision-Language-Action Models
This module covers vision-language-action (VLA) models, which map camera images and natural-language instructions directly to robot actions, enabling language-driven control of real robots.
Learning Outcomes
- Understand VLA architecture and training
- Deploy pre-trained VLA models
- Fine-tune models for specific tasks
- Integrate VLA with real robot systems
Prerequisites
- Completion of NVIDIA Isaac Sim module
- Understanding of deep learning concepts
- Experience with PyTorch
Module Content
Chapter 1: Introduction to VLA
An overview of VLA models: how they pair a vision-language backbone with an action decoder, and what manipulation and navigation capabilities this unlocks.
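One architectural idea worth previewing here: models such as RT-2 and OpenVLA represent continuous robot actions as discrete tokens by uniformly binning each action dimension (commonly into 256 bins), so the language model can emit actions from its vocabulary. A minimal sketch of that round trip, with illustrative bin count and ranges:

```python
N_BINS = 256  # RT-2/OpenVLA-style uniform discretization (assumed default)

def tokenize_action(value: float, low: float, high: float, n_bins: int = N_BINS) -> int:
    """Map a continuous action value in [low, high] to a discrete bin index."""
    value = min(max(value, low), high)          # clamp to the valid range
    frac = (value - low) / (high - low)         # normalize to [0, 1]
    return min(int(frac * n_bins), n_bins - 1)  # bin index in [0, n_bins)

def detokenize_action(token: int, low: float, high: float, n_bins: int = N_BINS) -> float:
    """Map a bin index back to the continuous value at the bin center."""
    return low + (token + 0.5) * (high - low) / n_bins
```

For example, an end-effector velocity of 0.1 in a [-1, 1] range maps to bin 140, and decoding that bin recovers roughly 0.0977; the quantization error shrinks as the bin count grows.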
Chapter 2: Pre-trained Models
Loading and running pre-trained models such as RT-2 and OpenVLA for zero-shot instruction following.
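The shape of this workflow can be sketched with OpenVLA, which ships on Hugging Face and loads through `transformers`. The prompt template below follows the OpenVLA quickstart; the image path, device, and `unnorm_key` are placeholder assumptions you would replace for your own setup, and the heavy GPU part is left in an uncalled function:

```python
def build_openvla_prompt(instruction: str) -> str:
    """Format a language instruction in the OpenVLA quickstart style."""
    return f"In: What action should the robot take to {instruction.lower()}?\nOut:"

def run_openvla_inference():
    # Requires `torch`, `transformers`, `pillow`, a CUDA GPU, and enough
    # VRAM for the 7B checkpoint in bfloat16 -- sketch, not tested here.
    import torch
    from PIL import Image
    from transformers import AutoModelForVision2Seq, AutoProcessor

    processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
    vla = AutoModelForVision2Seq.from_pretrained(
        "openvla/openvla-7b", torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to("cuda:0")

    image = Image.open("camera_frame.jpg")  # hypothetical wrist-camera frame
    prompt = build_openvla_prompt("pick up the red block")
    inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)

    # Returns a 7-DoF end-effector action (xyz, rpy, gripper), un-normalized
    # with the statistics of the chosen training dataset.
    return vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
```

The returned action is then handed to your robot's controller; Chapter 4 covers that integration.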
Chapter 3: Fine-tuning
Adapting pre-trained VLA models to new tasks and robot embodiments, typically with parameter-efficient methods such as LoRA.
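To make the parameter-efficient idea concrete, here is a toy, pure-Python sketch of the LoRA forward pass: the frozen weight `W` is augmented with a trainable low-rank update scaled by `alpha / r`, so only the small `A` and `B` matrices are trained. Dimensions and values are illustrative, not taken from any particular VLA checkpoint:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=4.0):
    """y = W @ x + (alpha / r) * B @ (A @ x), with W frozen and A, B trainable."""
    r = len(A)                         # rank = number of rows of A
    scale = alpha / r                  # standard LoRA scaling convention
    base = matvec(W, x)                # frozen pretrained path
    update = matvec(B, matvec(A, x))   # low-rank trainable path
    return [b + scale * u for b, u in zip(base, update)]
```

Because only `A` and `B` receive gradients, a rank-r adapter trains r * (d_in + d_out) parameters per layer instead of d_in * d_out, which is what makes fine-tuning a 7B-parameter VLA feasible on a single GPU.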
Chapter 4: Deployment
Running VLA models on robot hardware, including meeting control-rate and latency budgets during real-time execution.
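A recurring deployment concern is keeping the policy on a fixed control rate: each cycle queries the model, sends the action, then sleeps away the rest of the period. The sketch below assumes hypothetical `policy` and `robot` interfaces standing in for your own stack:

```python
import time

def control_loop(policy, robot, hz=5.0, steps=10):
    """Run `policy` at `hz` control cycles per second for `steps` cycles."""
    period = 1.0 / hz
    for _ in range(steps):
        start = time.monotonic()
        obs = robot.get_observation()           # e.g. camera image + proprioception
        action = policy(obs)                    # one VLA forward pass
        robot.apply_action(action)
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, period - elapsed))  # hold the remaining time budget
```

If a forward pass overruns the period, the loop simply runs late; production systems typically add action chunking or a safety fallback for that case, which this sketch omits.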