A recent patent filing from Google shows their intention to use touch gestures to control their devices. What does the patent describe, what challenges would such a system face and where could it lead in the future?
When it comes to physical devices, Google often falls behind many established competitors such as Microsoft, Apple, and Samsung, but that never stops them from trying to create the next “big thing” in technology. For example, they tried to innovate wearable technology with Google Glass, but it turned out to be a total failure after it became clear that those who wore them would be ridiculed (that and the glasses were completely impractical for everyday use).
However, they have had some success with their Pixel Buds, which work much like Apple AirPods, charge in their case, and integrate most tightly with Google devices. Building on that moderate success, Google recently filed a new patent to take its earbuds to the next level of connectivity with skin gestures.
The new patent describes technologies and methods for gestures drawn on the skin with a finger, which the worn device interprets as commands. Unlike camera-based systems that watch for a gesture, or touchscreens that sense one directly, the patent describes the earpiece measuring the acceleration and deformation of the skin to determine which gesture was made. This would effectively turn the user’s body into an input surface, with no screen to touch and no circuitry worn on the skin itself.
For example, swiping upward relative to the device could increase the volume, detected by the accelerometer as an upward acceleration. Tapping the skin could skip a track, detected as a sudden, sharp burst of acceleration. For more complex gestures, machine learning could help infer the intended action from sensor data that follows no obvious hand-written pattern.
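The idea above can be sketched as a simple threshold-based classifier. This is purely illustrative: the axis conventions, threshold values, and gesture names are assumptions for the sketch, not details from the patent, and a real system would almost certainly use a trained model instead.

```python
# Illustrative sketch: classifying simple skin gestures from raw
# accelerometer samples with hand-tuned thresholds. All thresholds,
# axes, and gesture names here are assumptions, not patent details.

def classify_gesture(samples):
    """samples: list of (ax, ay, az) readings in m/s^2, device-relative."""
    peak_y = max(abs(ay) for _, ay, _ in samples)
    mean_y = sum(ay for _, ay, _ in samples) / len(samples)
    # A tap shows up as one short, sharp spike with little sustained motion.
    if peak_y > 15 and abs(mean_y) < 2:
        return "skip_track"
    # A swipe is a sustained acceleration along the vertical axis.
    if mean_y > 3:
        return "volume_up"
    if mean_y < -3:
        return "volume_down"
    return None  # no recognizable gesture
```

Hand-tuned rules like these break down quickly in practice, which is exactly why the patent leans on machine learning for anything beyond trivial gestures.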
Creating a skin interface that puts no electronics on the skin itself presents challenges that must be carefully addressed. For a watch, orientation is easy to determine because watches are worn in a known position, so working out the direction of a gesture relative to the watch is trivial.
Earpieces, however, have no guarantee of sitting at a fixed angle, so the system must work out the device’s orientation for itself. An earpiece rotated by as little as 20 degrees will read gestures very differently, which could lead to errors when determining which gesture was made. Machine learning can help counter this, but it would likely require a system that adapts to the individual wearer over time.
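One simple way to picture the orientation problem: if a calibration step could estimate how far the earpiece is rotated (for instance, from the gravity vector while the wearer is still), readings could be rotated back into a consistent frame before classification. The function and frame names below are hypothetical; this is a sketch of the geometry, not of Google’s method.

```python
import math

# Illustrative sketch: correcting for an earpiece that sits rotated in
# the ear by rotating device-frame readings back into a "head" frame.
# The tilt estimate would have to come from some calibration step; the
# names and convention here are assumptions for illustration.

def to_head_frame(dx, dy, tilt):
    """Rotate a device-frame reading (dx, dy) by +tilt radians."""
    cos_t, sin_t = math.cos(tilt), math.sin(tilt)
    return dx * cos_t - dy * sin_t, dx * sin_t + dy * cos_t
```

A device tilted by 20 degrees reports a pure upward swipe as a mix of two axes; applying the inverse rotation recovers the intended direction, which is why even a rough tilt estimate helps.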
Another challenge is distinguishing a left or right swipe from putting on a hat, glasses, or scarf. Humans are fidgety and generate all manner of accelerometer noise, which could easily confuse an accelerometer-based gesture system. Neural networks can help filter out irrelevant motion (especially while walking), but scratching your temple and swiping to turn up the volume may look nearly identical to the sensor.
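One crude heuristic for rejecting incidental motion is to accept a candidate gesture only when it is bracketed by short quiet windows, since adjusting a hat or scarf tends to produce sustained motion on both sides of any spike. The window sizes and threshold below are illustrative assumptions; a production system would likely rely on a trained classifier instead.

```python
# Illustrative sketch: rejecting incidental motion by requiring short
# "quiet" windows before and after a candidate gesture. Window length
# and energy threshold are assumptions for illustration only.

def is_deliberate(trace, start, end, quiet=5, threshold=1.0):
    """trace: list of acceleration magnitudes; [start, end) is the candidate."""
    before = trace[max(0, start - quiet):start]
    after = trace[end:end + quiet]
    if len(before) < quiet or len(after) < quiet:
        return False  # not enough surrounding context to judge
    # A deliberate tap or swipe is bracketed by stillness; putting on a
    # hat or scarf produces sustained motion on both sides.
    return max(before) < threshold and max(after) < threshold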
While bioelectronic systems with displays embedded directly into our skin are unlikely to appear anytime soon, dermal gestures and other advanced control systems are beginning to gain traction. One method already in commercial use is bone conduction, which transmits sound through the bones of the skull directly to the inner ear without conventional speakers.
But for such technology to become practical, it must first work reliably and offer users a genuine use case. Laser-projected keyboards sound like a good idea, for example, but in practice they are uncomfortable because users press down on a hard surface with no tactile feedback. Skin gestures could similarly cause irritation, or even blisters, if relied on too heavily.