Facebook aims to let people type from their brain, hear with skin
20 April 2017
Facebook revealed Wednesday that it is working on technology to let people type straight from their brains at 100 words per minute.
A team of more than 60 scientists, engineers and others at its secretive Building 8 research lab is working in an area Facebook describes as ‘silent speech communications’. Another project aims to let people hear with their skin, for which the company is building the necessary hardware and software.
“So what if you could type directly from your brain?” Regina Dugan, vice president of engineering and Building 8, asked Wednesday at F8, Facebook’s annual two-day developer conference.
For Facebook, the question seems to be far from speculative. “Over the next two years, we will be building systems that demonstrate the capability to type at 100wpm [words per minute] by decoding neural activity devoted to speech,” Dugan wrote in a Facebook post. The executive has previously headed an advanced technology and projects group at Google and was earlier director of the US Defense Department’s Defense Advanced Research Projects Agency (DARPA). “It sounds impossible but it is closer than you may realise,” she said.
The concept isn’t exactly new. Researchers at Stanford University, for example, have shown that a brain-to-computer interface can enable people with paralysis to type via direct brain control, using electrode arrays placed in the brain to record signals from the motor cortex that controls muscle movement.
Facebook’s approach will be focused on developing a non-invasive system that could one day become a speech prosthetic for people with communication disorders or a new means for input to augmented reality, Dugan wrote. She said in her keynote that the planting of electrodes in the brain was not scalable and Facebook was looking instead at non-invasive sensors. Optical imaging techniques hold the most potential for providing the spatial and temporal resolution required for mapping brain signals, she added.
In a bid to allay privacy concerns, Dugan said that Facebook is not interested in decoding a person’s random thoughts. The aim is to decode only the words a person has already decided to share by sending them to the speech centre of the brain.
“Our brains produce enough data to stream four HD movies every second,” wrote Facebook’s CEO Mark Zuckerberg in a post. “The problem is that the best way we have to get information out into the world – speech – can only transmit about the same amount of data as a 1980s modem.” The human brain streams roughly 1Tb/s while speech transmits only about 40-60 bits per second, said Dugan, who described speech as essentially a lossy compression algorithm.
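Taking the quoted figures at face value, the gap is easy to quantify. The numbers below are the claims from Dugan's talk, not measured values, and the modem rate is a typical early-1980s figure assumed for comparison:

```python
# Back-of-envelope comparison using the figures quoted in the talk.
brain_bits_per_sec = 1e12    # claimed ~1 terabit per second of brain output
speech_bits_per_sec = 50     # midpoint of the quoted 40-60 bit/s range

ratio = brain_bits_per_sec / speech_bits_per_sec
print(f"Speech carries about 1/{ratio:.0e} of the brain's claimed output")

# A typical early-1980s dial-up modem ran at around 300 bit/s, so speech
# at ~50 bit/s is indeed in the same ballpark as Zuckerberg's comparison.
modem_bits_per_sec = 300
print(f"Speech is {speech_bits_per_sec / modem_bits_per_sec:.2f}x a 300-bit/s modem")
```

On these numbers, speech discards all but roughly one part in twenty billion of the claimed raw signal, which is the sense in which Dugan called it a lossy compression algorithm.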
The company’s aim is to develop a system that will let people type straight from their brain about five times faster than they can type on their phone today, and to eventually turn it into wearable technology that can be manufactured at scale. “Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural,” Zuckerberg wrote.
On hearing through the skin, Dugan noted that the roughly two square metres of skin on our body is packed with sensors and wired to our brain. Braille took advantage of that in the 19th century, helping people interpret small bumps on a surface as language; since then, techniques have emerged that show the brain’s ability to reconstruct language from components. “Today we demonstrated an artificial cochlea of sorts and the beginnings of a new ‘haptic vocabulary,’” she wrote on her Facebook page.
In another announcement Facebook said it is giving virtual reality developers the ability to embed 360-degree photo and video capture into their experiences with a new software development kit.
The 360 Capture SDK will let users capture the complete scene around them, for sharing to other platforms like Facebook. It’s a tool that’s designed to give people who don’t have VR headsets a window into the action and also lets people with the right hardware replay moments in full VR.
A key detail about the capture tool is that it’s designed to work even on the minimum hardware necessary to run VR, without degrading performance. The SDK can capture 30 frames-per-second 1080p video on less powerful hardware while maintaining the 90fps frame rate for users who are in VR. On more powerful machines, it can capture higher-resolution 4K content.
That performance is important for VR because a high framerate is critical to maintaining immersion and reducing disorientation for people wearing headsets. It’s made possible by a technique called cube mapping, which captures the scene onto the six faces of a cube. The technique provides a significant performance improvement over capturing a large number of photos and then stitching them together, as traditional 360 cameras do.
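The core of cube mapping is deciding which of the six cube faces a given view direction lands on. The sketch below is a generic illustration of that face-selection step using the usual dominant-axis rule; it is not code from Facebook's SDK, and real cube-map conventions also fix a per-face orientation that this simplified version ignores.

```python
def direction_to_cube_face(x, y, z):
    """Map a 3D view direction to one of six cube faces plus (u, v)
    coordinates on that face, each in [-1, 1].

    Generic cube-mapping sketch: the face is chosen by whichever axis
    has the largest absolute component. Per-face orientation conventions
    (which way u and v point on each face) are deliberately simplified.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:           # x-axis dominates the direction
        return ('+x' if x > 0 else '-x', y / ax, z / ax)
    if ay >= az:                        # y-axis dominates
        return ('+y' if y > 0 else '-y', x / ay, z / ay)
    return ('+z' if z > 0 else '-z', x / az, y / az)

# Looking straight down the +x axis hits the centre of the +x face.
print(direction_to_cube_face(1, 0, 0))
```

Rendering the scene once per face with a 90-degree field of view fills all six faces. Because only six renders are needed rather than dozens of overlapping shots, the stitching stage disappears entirely, which is where the performance win over traditional multi-camera capture comes from.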
Facebook’s SDK works with the Unity and Unreal game engines out of the box and is also built to work with native engines. That means the tool will be useful for developers working with Facebook’s Oculus Rift headset, as well as competing hardware like the HTC Vive.
Right now, the photos and videos generated by the Capture SDK are saved to the user’s hard drive. In the future, Facebook will look into what sharing mechanism makes the most sense. In the meantime, developers can choose to set up their own sharing capabilities.
Wednesday’s news comes the same day Facebook announced two new camera arrays for capturing 360-degree content in the physical world. The Surround 360 x6 and x24 cameras will capture footage with six degrees of freedom (translation up and down, left and right, and forward and backward, plus rotation in pitch, yaw, and roll).
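Those six degrees of freedom split into three translational and three rotational components. As a purely illustrative sketch (the field names here are generic assumptions, not part of the cameras' actual output format), a 6DoF pose can be represented as:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Illustrative 6-degree-of-freedom pose: three translations
    plus three rotations. Generic names, not any real camera API."""
    x: float      # left/right translation
    y: float      # up/down translation
    z: float      # forward/backward translation
    pitch: float  # rotation about the left/right axis
    yaw: float    # rotation about the up/down axis
    roll: float   # rotation about the forward axis

# A viewer one metre forward of the capture origin, looking level.
viewer = Pose6DoF(0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
```

Capturing all six components is what lets a viewer in a headset lean and move through the footage, rather than only turning their head in place as with rotation-only 360 video.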
The announcements also dovetail well with Tuesday’s launch of Facebook Spaces. It’s a virtual reality application that allows groups of up to four people to hang out with one another around a virtual table surrounded by a 360-degree photo or video.
IDG News Service