
Inside Facebook Reality Labs: Wrist-based interaction for the next computing platform

Watch the videos here: https://about.fb.com/news/2021/03/inside-facebook-reality-labs-wrist-based-interaction-for-the-next-computing-platform/

Context: Last week, we kicked off a three-part series on the future of human-computer interaction (HCI). In the first post, we shared our 10-year vision of a contextually-aware, AI-powered interface for augmented reality (AR) glasses that can use the information you choose to share to infer what you want to do, when you want to do it. Today, we’re sharing some nearer-term research: wrist-based input combined with usable but limited contextualized AI, which dynamically adapts to you and your environment. Later this year, we’ll discuss some groundbreaking work in soft robotics to build comfortable, all-day wearable devices and give an update on our haptic glove research.

At Facebook Reality Labs (FRL) Research, we’re building an interface for AR that won’t force us to choose between interacting with our devices and the world around us. We’re developing natural, intuitive ways to interact with always-available AR glasses because we believe this will transform the way we connect with people near and far.

The future of human-computer interaction demands an exceptionally easy-to-use, reliable and private interface that lets us remain completely present in the real world at all times. That interface will require many innovations in order to become the primary way we interact with the digital world. Two of the most critical elements are contextually-aware AI that understands your commands and actions as well as the context and environment around you, and technology to let you communicate with the system effortlessly — an approach we call ultra-low-friction input. The AI will make deep inferences about what information you might need or things you might want to do in various contexts, based on an understanding of you and your surroundings, and will present you with a tailored set of choices. The input will make selecting a choice effortless — using it will be as easy as clicking a virtual, always-available button through a slight movement of your finger.

But this system is many years off. So today, we’re taking a closer look at a version that may be possible much sooner: wrist-based input combined with usable but limited contextualized AI, which dynamically adapts to you and your environment.

Why the Wrist

Why the wrist? There are many other input sources available, all of them useful. Voice is intuitive, but not private enough for the public sphere or reliable enough due to background noise. A separate device you could store in your pocket like a phone or a game controller adds a layer of friction between you and your environment. As we explored the possibilities, placing an input device at the wrist became the clear answer. The wrist is a traditional place to wear a watch, meaning it could reasonably fit into everyday life and social contexts. It’s a comfortable location for all-day wear. It’s located right next to the primary instruments you use to interact with the world — your hands. This proximity would allow us to bring the rich control capabilities of your hands into AR, enabling intuitive, powerful and satisfying interaction.

A wrist-based wearable has the additional benefit of easily serving as a platform for compute, battery and antennas while supporting a broad array of sensors. The missing piece was finding a clear path to rich input, and a potentially ideal solution was EMG.

EMG (electromyography) uses sensors to translate electrical motor nerve signals that travel through the wrist to the hand into digital commands that you can use to control the functions of a device. These signals let you communicate crisp, one-bit commands to your device, a degree of control that’s highly personalizable and adaptable to many situations.

The signals through the wrist are so clear that EMG can understand finger motion of just a millimeter. That means input can be effortless. Ultimately, it may even be possible to sense just the intention to move a finger.
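To make the idea concrete, here is a minimal sketch, assuming a toy single-channel EMG trace, of how muscle activity could be rectified, smoothed into an envelope, and thresholded into a one-bit signal. The sampling rate, window size, and threshold are illustrative assumptions, not FRL's actual decoder.

```python
# Illustrative sketch only: a toy pipeline that turns a raw EMG trace into a
# one-bit "muscle active / inactive" stream. All constants are assumptions.
import numpy as np

FS = 1000          # assumed sampling rate in Hz
WINDOW = 50        # 50 ms smoothing window for the envelope
THRESHOLD = 0.3    # assumed activation threshold (normalized units)

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    """Rectify the signal and smooth it with a moving average."""
    rectified = np.abs(raw - raw.mean())
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(rectified, kernel, mode="same")

def to_one_bit(raw: np.ndarray) -> np.ndarray:
    """Map the envelope to a binary activation stream."""
    env = emg_envelope(raw)
    env = env / (env.max() + 1e-9)          # normalize to [0, 1]
    return (env > THRESHOLD).astype(int)

# Synthetic example: 1 s of noise with a brief burst of activity in the middle,
# standing in for a millimeter-scale finger movement.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, FS)
signal[400:500] += rng.normal(0, 1.0, 100)
bits = to_one_bit(signal)
print("activation detected:", bool(bits.any()))
```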

[Photo: wrist-based wearable research prototype]

“What we’re trying to do with neural interfaces is to let you control the machine directly, using the output of the peripheral nervous system — specifically the nerves outside the brain that animate your hand and finger muscles,” says FRL Director of Neuromotor Interfaces Thomas Reardon, who joined the FRL team when Facebook acquired CTRL-labs in 2019.

This is not akin to mind reading. Think of it like this: you take many photos and choose to share only some of them. Similarly, you have many thoughts and you choose to act on only some of them. When that happens, your brain sends signals to your hands and fingers telling them to move in specific ways in order to perform actions like typing and swiping. This is about decoding those signals at the wrist — the actions you’ve already decided to perform — and translating them into digital commands for your device. It’s a much faster way to act on the instructions that you already send to your device when you tap to select a song on your phone, click a mouse or type on a keyboard today.

Dynamic Control at the Wrist

Initially, EMG will provide just one or two bits of control we’ll call a “click,” the equivalent of tapping on a button. These are movement-based gestures like pinch and release of the thumb and forefinger that are easy to execute, regardless of where you are or what you’re doing, while walking, talking or sitting with your hands at your sides, in front of you or in your pockets. Clicking your fingers together will always just work, without the need for a wake word, making it the first ubiquitous, ultra-low-friction interaction for AR.
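As a rough illustration of the “click” idea, the sketch below shows how a stream of binary pinch detections might be debounced into discrete click events, much like a physical button. The timing constants and callback wiring are assumptions for the example, not a description of FRL's system.

```python
# Illustrative sketch only: turning a stream of binary "pinch active" samples
# into discrete click events (pinch then release), with a short debounce so
# noise does not trigger spurious clicks. Timing values are assumptions.
from typing import Callable, Iterable

DEBOUNCE_SAMPLES = 20   # assumed: ~20 ms of stable signal before a state change

class ClickDetector:
    def __init__(self, on_click: Callable[[], None]):
        self.on_click = on_click
        self.pressed = False
        self.stable = 0
        self.last_sample = 0

    def feed(self, sample: int) -> None:
        """Consume one binary sample (1 = pinch detected, 0 = relaxed)."""
        if sample == self.last_sample:
            self.stable += 1
        else:
            self.stable = 0
            self.last_sample = sample
        if self.stable < DEBOUNCE_SAMPLES:
            return
        if sample == 1 and not self.pressed:
            self.pressed = True                 # pinch started
        elif sample == 0 and self.pressed:
            self.pressed = False
            self.on_click()                     # pinch released -> one click

def run(samples: Iterable[int]) -> None:
    detector = ClickDetector(on_click=lambda: print("click"))
    for s in samples:
        detector.feed(s)

# One debounced pinch-and-release produces exactly one click.
run([0] * 30 + [1] * 30 + [0] * 30)
```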

But that’s just the first step. EMG will eventually progress to richer controls. In AR, you’ll be able to actually touch and move virtual UIs and objects, as you can see in this demo video. You’ll also be able to control virtual objects at a distance. It’s sort of like having a superpower like the Force.

And that’s just the beginning. It’s highly likely that ultimately you’ll be able to type at high speed with EMG on a table or your lap — maybe even at higher speed than is possible with a keyboard today. Initial research is promising. In fact, since joining FRL in 2019, the CTRL-labs team has made important progress on personalized models, reducing the time it takes to train custom keyboard models that adapt to an individual’s typing speed and technique.
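As a hedged sketch of what “personalized models” could look like in practice, the example below starts from a generic keystroke classifier and adapts it with a small batch of one user's data through incremental updates (scikit-learn's partial_fit). The features, labels, and model choice are invented for illustration and are not the CTRL-labs / FRL pipeline.

```python
# Illustrative sketch only: adapting a generic keystroke classifier to one
# user's EMG features with a few incremental updates. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
N_FEATURES = 16                    # assumed size of a per-window EMG feature vector
KEYS = np.array([0, 1, 2, 3])      # toy label set: four keys

# "Generic" model trained on pooled data from many people (synthetic here).
X_generic = rng.normal(0.0, 1.0, (2000, N_FEATURES))
y_generic = rng.choice(KEYS, 2000)
model = SGDClassifier(random_state=0)
model.partial_fit(X_generic, y_generic, classes=KEYS)

# A small batch of one user's own typing data personalizes the model
# without retraining from scratch.
X_user = rng.normal(0.5, 1.0, (200, N_FEATURES))
y_user = rng.choice(KEYS, 200)
model.partial_fit(X_user, y_user)

print("predicted key for a new sample:", model.predict(X_user[:1])[0])
```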


