Figure's humanoid robot takes voice orders to help around the house | TechCrunch
Briefly

Figure's CEO Brett Adcock introduced Helix, a new Vision-Language-Action (VLA) model for humanoid robots that uses natural language prompts and visual data. Following the end of its collaboration with OpenAI, Figure aims to enhance robot autonomy by enabling its robots to understand and perform tasks in real time. The system can operate two robots simultaneously and demonstrates strong object generalization, interacting with unfamiliar household items without prior training. This innovation seeks to address the unique challenges of home environments, which lack the structure of industrial settings.
In an ideal world, you could simply tell a robot to do something and it would just do it. That is where Helix comes in, according to Figure.
Helix displays strong object generalization: it can pick up thousands of novel household items with varying shapes, sizes, colors, and material properties that it never encountered in training, simply by being asked to in natural language.
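Figure has not published Helix's interface, so the following is only a minimal sketch of the general Vision-Language-Action pattern the announcement describes: camera frames and a natural-language instruction go in, low-level robot actions come out. The names `VLAPolicy`, `act`, and `control_loop` are hypothetical stand-ins, and the stub model returns a dummy action so the loop runs end to end.

```python
import numpy as np


class VLAPolicy:
    """Toy stand-in for a vision-language-action model (hypothetical, not Figure's API)."""

    def __init__(self, action_dim: int = 7):
        self.action_dim = action_dim  # e.g. arm joint targets plus a gripper command

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real VLA model would fuse visual features with the tokenized
        # instruction and decode a continuous action. Here we return zeros
        # so the example is runnable without any model weights.
        assert image.ndim == 3, "expected an HxWxC camera frame"
        return np.zeros(self.action_dim)


def control_loop(policy: VLAPolicy, instruction: str, steps: int = 5) -> None:
    """Closed-loop control: re-query the policy with each new camera frame."""
    for t in range(steps):
        frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera image
        action = policy.act(frame, instruction)
        print(f"step {t}: action = {action}")  # would be sent to the robot's joints


if __name__ == "__main__":
    control_loop(VLAPolicy(), "pick up the mug and put it in the sink")
```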
Read at TechCrunch