Earlier this week, I sat down with the company’s co-founders Gisèle Belliot and José Alonso Ybanez Zepeda, along with Uber co-founder-turned-investor Oscar Salazar, to discuss the product. The company’s ramping up for a formal announcement at CES, in tandem with the launch of an Indiegogo campaign, and it’s still working out some of the kinks around contextualizing its product.
We met up at a shared workspace in Manhattan, in a meeting room made up to resemble a living room — except for the big construction paper cutouts of buttons like Play and Pause adhered to different surfaces (another shorthand visualization of the product’s functionality).
To keep the elevator pitch short, I'd describe the startup like this: it's an Amazon Echo with a Kinect camera built in. In place of voice commands, you've got gestures.
In some ways, Hayo is designed to serve a similar function to Amazon's hardware: a connected-home hub that ties together various smart devices (lights, music, thermostat and so on). The difference is the interface. Once you start thinking in terms of gesture controls in three-dimensional space, the range of possible interactions gets very broad.
The company is, understandably, starting off simply with regard to functionality. At launch, the system will allow the user to designate 10 “buttons” per device. A button here is a point in space: a surface on, say, a wall or table. Each button can be assigned two different functions, which can switch based on variables like time of day and user.
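As a rough sketch of how that mapping might be modeled, here is a minimal Python example: a button is a fixed point in space with up to two actions, and context (time of day, user) decides which one fires. All names here (Button, Rule, resolve logic) are hypothetical illustrations, not Hayo's actual software.

```python
# Hypothetical sketch of a "virtual button" model: a point in space with
# up to two actions, chosen by context (time of day, user).
from dataclasses import dataclass, field
from datetime import time
from typing import Callable, Optional


@dataclass
class Rule:
    """One of a button's actions, plus the context in which it applies."""
    action: Callable[[], None]
    users: Optional[set] = None      # None means any user
    start: time = time(0, 0)         # start of active time window
    end: time = time(23, 59)         # end of active time window

    def matches(self, user: str, now: time) -> bool:
        in_window = self.start <= now <= self.end
        user_ok = self.users is None or user in self.users
        return in_window and user_ok


@dataclass
class Button:
    """A point in space (a spot on a wall or table) with up to two actions."""
    name: str
    position: tuple                  # (x, y, z) in the camera's frame
    rules: list = field(default_factory=list)

    def trigger(self, user: str, now: time) -> None:
        # First rule whose context matches wins; at most two rules per button.
        for rule in self.rules[:2]:
            if rule.matches(user, now):
                rule.action()
                return
        print(f"{self.name}: no action for this context")


# Usage: a button on the nightstand that dims the lamp late at night
# and turns it fully on during the day.
lamp = Button(
    name="nightstand lamp",
    position=(0.4, 0.9, 1.2),
    rules=[
        Rule(action=lambda: print("lamp -> 20% brightness"),
             start=time(21, 0), end=time(23, 59)),
        Rule(action=lambda: print("lamp -> 100% brightness")),
    ],
)

lamp.trigger(user="gisele", now=time(22, 15))   # matches the night rule
lamp.trigger(user="jose", now=time(10, 30))     # falls through to the default
```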