“Our android operators represent the future of ‘Software 2.0’ engineering, where they shape the capabilities of robots through data rather than writing code,” says Eric Jang, VP of AI at 1X.
1X has made significant progress toward safe, intelligent, autonomous androids. “Our goal is to create androids that mimic the human form for maximum versatility. We achieve this by training motor behaviors from scratch, using neural networks and visual input,” explains Jang.
1X has released a demonstration video of a vision-based neural network that produces actions at 10 Hz, comparable to human reaction speed. The network takes in camera images and outputs a new action ten times per second, letting the androids respond to changes in their environment in real time. It controls every aspect of the android’s movement: driving, arms, grippers, torso, and head. “The video contains no teleoperation, no computer graphics, no cuts, no speed-ups, and no scripted motions. Everything is controlled by neural networks,” says Jang.
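To make the timing concrete, here is a minimal sketch of what such a 10 Hz vision-to-action loop could look like. Every name here (get_camera_frame, policy, send_action) and the action dimensionality are hypothetical stand-ins for illustration, not 1X’s actual interfaces.

```python
import time
import numpy as np

CONTROL_HZ = 10            # actions emitted ten times per second
PERIOD = 1.0 / CONTROL_HZ  # 100 ms budget per tick

def get_camera_frame() -> np.ndarray:
    """Stand-in for the robot's camera driver (hypothetical)."""
    return np.zeros((224, 224, 3), dtype=np.uint8)

def policy(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the vision-based network; returns one action
    vector covering base driving, arms, grippers, torso, and head."""
    return np.zeros(20, dtype=np.float32)

def send_action(action: np.ndarray) -> None:
    """Stand-in for the low-level motor interface (hypothetical)."""
    pass

def control_loop(steps: int = 100) -> None:
    next_tick = time.monotonic()
    for _ in range(steps):
        frame = get_camera_frame()
        action = policy(frame)   # one forward pass per tick
        send_action(action)
        next_tick += PERIOD
        # sleep off whatever remains of the 100 ms budget
        time.sleep(max(0.0, next_tick - time.monotonic()))

if __name__ == "__main__":
    control_loop()
```

The key constraint is the 100 ms tick: camera read, network forward pass, and actuation must all complete before the next action is due, which is what makes the behavior reactive rather than scripted.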
To train the machine learning models behind these behaviors, 1X compiled a diverse dataset from a fleet of 30 EVE robots. The dataset was used to train a base model covering a broad repertoire of physical behaviors, which was then fine-tuned for specific tasks. “This approach allows us to quickly integrate new skills with just a few minutes of data collection and training,” says Jang.
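A rough sketch of that pretrain-then-fine-tune recipe, written in PyTorch. The architecture, data shapes, loss, and hyperparameters below are invented purely for illustration; 1X has not published these details.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the base model pretrained on data
# from the EVE fleet; the real architecture is not public.
base_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(224 * 224 * 3, 256), nn.ReLU(),
    nn.Linear(256, 20),  # one action vector per frame
)

# "A few minutes" of task-specific demonstrations; shapes are
# fabricated for the example (600 frames is ~1 minute at 10 Hz).
frames = torch.rand(600, 3, 224, 224)   # camera observations
actions = torch.rand(600, 20)           # demonstrated actions
loader = DataLoader(TensorDataset(frames, actions),
                    batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(base_model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(5):  # brief fine-tuning pass on the new task
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(base_model(x), y)  # behavior-cloning loss
        loss.backward()
        optimizer.step()
```

Because the base model already encodes general physical behaviors, only this short fine-tuning pass is needed per new skill, which is what makes minutes of demonstration data sufficient.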
Being able to train androids on new tasks this quickly removes the traditional bottleneck of scarce AI engineers, giving customers far more flexibility in what the androids can do. “Our android operators represent the future of ‘Software 2.0’ engineering, where they shape the capabilities of robots through data rather than writing code,” concludes Jang.