A Good UI Design for Humanoid Robots: A Face You Can Read (Literally)

As humanoid general-purpose robots are now all the rage, Reuters reports that a new startup, Figure, has also gotten into the game with $70 million in funding.

Earlier we asked who’s got the better-looking humanoid robot, Tesla or Sanctuary:

Tesla Optimus, left. Sanctuary AI Phoenix, right.

We wanted to add Figure’s offering to the debate, but images of the Figure-bot are pretty scarce, as you’ll see below. All we can deduce from what’s available is that they’ve gone with a monochrome color scheme and a pretty high level of refinement—no hard edges here, mostly organic—compared to Tesla and Sanctuary’s offerings.

At first I was dismayed that they'd gone with the same bland mannequin-style head as Tesla. However, the designers have actually made the face a UI element for the human end user who will operate the robot, and I have to say it's pretty brilliant:

While these robots will all undoubtedly be able to speak, there will surely be times when the human user needs data from the robot that is too dense, or takes too long, to parse verbally. If you'll forgive the example, one of the UI/UX irritations I have with my wireless earbuds is that they have to announce "Battery charge [X]-percent, connected to [device]" every time I pop them in, rather than immediately playing the desired content; I'd much rather this were handled with some simple visual indicators.

I’m not saying you’d want to read the Figure-bot’s face every time it started up. But if, for example, you gave it a complicated set of instructions and wanted it to confirm what you’d asked of it, or if it was reporting results back to you, I think it would be tremendously helpful to have a robot whose face you could literally read.

Source: core77
