A sneak peek at radical future user interfaces for phones, computers, and VR

Grabity: a wearable haptic interface for simulating weight and grasping in VR (credit: UIST 2017)

Drawing in air, touchless control of virtual objects, and a modular mobile phone with snap-in sections (for lending to friends, family members, or even strangers) are among the innovative user-interface concepts to be introduced at the 30th ACM User Interface Software and Technology Symposium (UIST 2017) on October 22–25 in Quebec City, Canada.

Here are three concepts to be presented, developed by researchers at Dartmouth College’s human-computer interface lab.

RetroShape: tactile watch feedback

Dartmouth’s RetroShape concept would add a shape-deforming tactile feedback system to the back of a future watch, allowing you to both see and feel virtual objects, such as a bouncing ball or exploding asteroid. Each pixel on RetroShape’s screen has a corresponding “taxel” (tactile pixel) on the back of the watch, implemented with 16 independently moving pins.
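
The pixel-to-taxel mapping is essentially a downsampling problem: the height profile of the on-screen object has to be reduced to displacements for the small grid of pins. Below is a minimal sketch of that idea in Python; the 4x4 grid, 2-mm pin travel, and block-averaging scheme are illustrative assumptions, not RetroShape’s actual implementation.

```python
import numpy as np

def screen_to_taxels(height_map, grid=(4, 4), max_travel_mm=2.0):
    """Downsample a per-pixel height map (values in [0, 1]) to pin
    displacements for a small grid of taxels (tactile pixels).

    Assumed: 4x4 pins and a 2 mm travel range, chosen only for illustration.
    """
    h, w = height_map.shape
    gh, gw = grid
    pins = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            # Average the block of screen pixels that falls under this pin.
            block = height_map[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
            pins[i, j] = block.mean() * max_travel_mm
    return pins  # millimetres of extension per pin

# Example: a "ball" bulging in the centre of a 64x64 screen.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
ball = np.clip(1.0 - (x**2 + y**2), 0, 1)
print(np.round(screen_to_taxels(ball), 2))
```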


UIST 2017 | RetroShape: Leveraging Rear-Surface Shape Displays for 2.5D Interaction on Smartwatches

Frictio smart ring

Current ring-gadget designs focus on letting users control other devices. Frictio instead uses controlled rotation of the ring to provide silent haptic alerts and other feedback.


UIST 2017 — Frictio: Passive Kinesthetic Force Feedback for Smart Ring Output

Pyro: fingertip control

Pyro is a covert gesture-recognition concept, based on moving the thumb tip against the index finger — a natural, fast, and unobtrusive way to interact with a computer or other devices. It uses an energy-efficient thermal infrared sensor to detect micro control gestures, based on patterns of heat radiating from the fingers.
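
At a high level, such a recognizer reduces the time-varying pyroelectric signal to a handful of features and feeds them to a lightweight classifier. The sketch below illustrates that general pattern with scikit-learn on synthetic signals; the features, gesture labels, and classifier choice are assumptions for illustration, not the Pyro team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def thermal_features(signal):
    """Summarize a 1-D pyroelectric signal window with a few simple
    statistics (hypothetical features, for illustration only)."""
    diffs = np.diff(signal)
    return [signal.mean(), signal.std(), diffs.mean(), diffs.std(),
            np.abs(np.fft.rfft(signal))[1:4].mean()]

# Fake training data standing in for recorded thumb-tip gestures.
rng = np.random.default_rng(0)
def fake_gesture(freq):
    t = np.linspace(0, 1, 128)
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(128)

X = [thermal_features(fake_gesture(f)) for f in (2, 2, 5, 5, 9, 9)]
y = ["swipe", "swipe", "circle", "circle", "tap", "tap"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([thermal_features(fake_gesture(5))]))  # expected: 'circle'
```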


UIST 2017 — Pyro: Thumb-Tip Gesture Recognition Using Pyroelectric Infrared Sensing

Highlights from other presentations at UIST 2017:


UIST 2017 Technical Papers Preview

Teleoperating robots with virtual reality: getting inside a robot’s head

A new VR system from MIT’s Computer Science and Artificial Intelligence Laboratory could make it easy for factory workers to telecommute. (credit: Jason Dorfman, MIT CSAIL)

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that lets you teleoperate a robot using an Oculus Rift or HTC Vive VR headset.

CSAIL’s “Homunculus Model” system (the classic notion of a small human sitting inside the brain and controlling the actions of the body) embeds you in a VR control room with multiple sensor displays, making it feel like you’re inside the robot’s head. By using gestures, you can control the robot’s matching movements to perform various tasks.

The system can be connected either via a wired local network or via a wireless network connection over the Internet. (The team demonstrated that the system could pilot a robot from hundreds of miles away, testing it on a hotel’s wireless network in Washington, DC to control Baxter at MIT.)

According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv paper about the system (presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver), “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”

Jobs for video-gamers too

The researchers imagine that such a system could even help employ jobless video-gamers by “game-ifying” manufacturing positions. (Users with gaming experience had the most ease with the system, the researchers found in tests.)

Homunculus Model system. A Baxter robot (left) is outfitted with a stereo camera rig and various end-effector devices. A virtual control room (user’s view, center), generated on an Oculus Rift CV1 headset (right), allows the user to feel like they are inside Baxter’s head while operating it. Using VR device controllers, including Razer Hydra hand trackers used for inputs (right), users can interact with controls that appear in the virtual space — opening and closing the hand grippers to pick up, move, and retrieve items. A user can plan movements based on the distance between the arm’s location marker and their hand while looking at the live display of the arm. (credit: Jeffrey I. Lipton et al./arXiv).

To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
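
In practice, mapping one space into another comes down to composing coordinate-frame transforms: a tracked hand pose in the human’s frame is expressed in the virtual control room, which is in turn anchored inside the robot’s head frame. The sketch below illustrates the idea with homogeneous transforms; the frames and offsets are invented for illustration and are not CSAIL’s actual calibration.

```python
import numpy as np

def transform(rotation_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about z plus translation."""
    a = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation
    return T

# Hypothetical calibration: human tracking frame -> virtual control room,
# then virtual control room -> robot head frame.
human_to_virtual = transform(0, [0.0, 0.0, -1.2])   # seat the user in the room
virtual_to_robot = transform(90, [0.3, 0.0, 1.5])   # anchor the room in the head

def map_hand_to_robot(hand_xyz):
    """Express a tracked hand position (metres) in the robot's frame."""
    p = np.append(hand_xyz, 1.0)                     # homogeneous coordinates
    return (virtual_to_robot @ human_to_virtual @ p)[:3]

print(map_hand_to_robot([0.2, 0.0, 1.0]))
```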

The team demonstrated the Homunculus Model system using the Baxter humanoid robot from Rethink Robotics, but the approach could work on other robot platforms, the researchers said.

In tests involving pick-and-place, assembly, and manufacturing tasks (such as “pick an item and stack it for assembly”), CSAIL’s Homunculus Model system achieved a 100% success rate, compared with a 66% success rate for state-of-the-art automated remote-control systems. The CSAIL system also successfully grasped objects 95 percent of the time and was 57 percent faster at doing tasks.*

“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.

The team plans to eventually focus on making the system more scalable, with many users and different types of robots that are compatible with current automation technologies.

* The Homunculus Model system solves a delay problem with existing systems, which use a GPU or CPU for 3D reconstruction and so introduce delay. Instead, 3D reconstruction from the stereo HD cameras is done by the human’s visual cortex, so the user constantly receives visual feedback from the virtual world with minimal latency. This also avoids the user fatigue and nausea of motion sickness (known as simulator sickness) generated by “unexpected incongruities, such as delays or relative motions, between proprioception and vision [that] can lead to the nausea,” the researchers explain in the paper.


MITCSAIL | Operating Robots with Virtual Reality


Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing

Expensive specialized systems have hampered development of telerobotic systems for manufacturing systems. In this paper we demonstrate a telerobotic system which can reduce the cost of such system by leveraging commercial virtual reality(VR) technology and integrating it with existing robotics control software. The system runs on a commercial gaming engine using off the shelf VR hardware. This system can be deployed on multiple network architectures from a wired local network to a wireless network connection over the Internet. The system is based on the homunculus model of mind wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor display, dynamic mapping between the user and robot, does not require the production of duals for the robot, or its environment. The control room is mapped to a space inside the robot to provide a sense of co-location within the robot. We compared our system with state of the art automation algorithms for assembly tasks, showing a 100% success rate for our system compared with a 66% success rate for automated systems. We demonstrate that our system can be used for pick and place, assembly, and manufacturing tasks.

Disney Research’s ‘Magic Bench’ makes augmented reality a headset-free group experience

Magic Bench (credit: Disney Research)

Disney Research has created the first shared, combined augmented/mixed-reality experience, replacing first-person head-mounted displays or handheld devices with a mirrored image on a large screen — allowing people to share the magical experience as a group.

Sit on Disney Research’s Magic Bench and you may see an elephant hand you a glowing orb, hear its voice, and feel it sit down next to you, for example. Or you might get rained on and find yourself underwater.

How it works

Flowchart of the Magic Bench installation (credit: Disney Research)

People seated on the Magic Bench can see themselves on a large video display in front of them. The scene is reconstructed using a combined depth sensor/video camera (Microsoft Kinect) to image participants, bench, and surroundings. An image of the participants is projected on a large screen, allowing them to occupy the same 3D space as a computer-generated character or object. The system can also infer participants’ gaze.*

Speakers and haptic actuators built into the bench add to the experience (by vibrating the bench when the elephant sits down, in this example).

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, which began Sunday, July 30 in Los Angeles.

* To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop, according to the researchers. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.
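
The compositing step can be thought of as per-pixel depth ordering between the live scene and the computer-generated layer, with the 2D backdrop filling in where the depth sensor has no data. The sketch below is a simplified stand-in for that idea, not Disney’s implementation; the array shapes and the zero-depth convention for missing data are assumptions.

```python
import numpy as np

def composite(color, depth, cg_color, cg_depth, backdrop):
    """Per-pixel depth compositing of a live camera image with a CG layer.

    Where the depth sensor has no data (depth == 0, a "depth shadow"),
    fall back to a pre-captured 2-D backdrop. A simplified stand-in for the
    Magic Bench pipeline, not Disney's implementation.
    """
    out = np.where(depth[..., None] == 0, backdrop, color)
    valid_depth = np.where(depth == 0, np.inf, depth)
    cg_wins = (cg_depth < valid_depth)[..., None]     # CG in front of the scene
    return np.where(cg_wins, cg_color, out)

# Toy 2x2 frames: RGB images plus depth maps in metres.
color    = np.full((2, 2, 3), 0.5)
backdrop = np.full((2, 2, 3), 0.2)
depth    = np.array([[2.0, 0.0], [2.0, 2.0]])         # one depth-shadow pixel
cg_color = np.full((2, 2, 3), 0.9)
cg_depth = np.array([[1.0, 1.0], [3.0, 3.0]])         # CG character in front, top row
print(composite(color, depth, cg_color, cg_depth, backdrop))
```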


DisneyResearchHub | Magic Bench


Abstract of Magic Bench

Mixed Reality (MR) and Augmented Reality (AR) create exciting opportunities to engage users in immersive experiences, resulting in natural human-computer interaction. Many MR interactions are generated around a first-person Point of View (POV). In these cases, the user directs to the environment, which is digitally displayed either through a head-mounted display or a handheld computing device. One drawback of such conventional AR/MR platforms is that the experience is user-specific. Moreover, these platforms require the user to wear and/or hold an expensive device, which can be cumbersome and alter interaction techniques. We create a solution for multi-user interactions in AR/MR, where a group can share the same augmented environment with any computer generated (CG) asset and interact in a shared story sequence through a third-person POV. Our approach is to instrument the environment leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals. Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch. In one vignette an elephant hands a participant a glowing orb. This demonstrates HCI in its simplest form: a person walks up to a computer, and the computer hands the person an object.

Alphabet’s X announces Glass Enterprise Edition, a hands-free device for hands-on workers

Glass Enterprise Edition (credit: X)

Alphabet’s X today announced Glass Enterprise Edition (EE), an augmented-reality device targeted mainly at hands-on workers.

Glass EE is an improved version of the experimental 2013 “Explorer Edition” of the original Glass product.

On the left is an assembly engine manual that GE’s mechanics used to consult. Now they use Glass Enterprise Edition on the right. (credit: X)

In January 2015, the Enterprise team at X quietly began shipping the Enterprise Edition to corporate solution partners like GE and DHL.

Now more than 50 businesses, including AGCO, Dignity Health, NSF International, Sutter Health, The Boeing Company, and Volkswagen, have been using Glass to complete their work faster and more easily than before, the X blog reports.

Workers can access training videos, images annotated with instructions, or quality-assurance checklists, for example, or invite others to “see what you see” through a live video stream so they can collaborate and troubleshoot in real time.

AGCO workers use Glass to see assembly instructions, make reports and get remote video support. (credit: X)

Glass EE enables workers to scan a machine’s serial number to instantly bring up a manual, photo, or video they may need to build a tractor. (credit: AGCO)

Significant improvements

The new “Glass 2.0” design makes significant improvements over the original Glass, according to Jay Kothari, project lead on the Glass enterprise team, as reported by Wired. It’s accessible for those who wear prescription lenses. A release switch allows the “Glass Pod” electronics to be removed from the frame and used with safety glasses on the factory floor. EE also has faster WiFi, faster processing, extended battery life, an 8-megapixel camera (up from 5), and a (much-requested) red light to indicate that recording is in progress.

Using Glass with Augmedix, doctors and nurses at Dignity Health can focus on patient care rather than record keeping. (credit: X)

But uses are not limited to factories. EE exclusive distributor Glass Partners also offers Glass devices, specialized software solutions, and ongoing support for applications such as Augmedix, a documentation-automation platform powered by human experts and software that frees physicians from computer work (Glass has “brought the joys of medicine back to my doctors,” says Albert Chan, M.D., of Sutter Health), and swyMed, which gives medical care teams the ability to reliably connect to doctors for real-time telemedicine.

And there are even (carefully targeted) uses for non-workers: Aira provides blind and low-vision people with instant access to information.

A recent Forrester Research report predicts that by 2025, nearly 14.4 million U.S. workers will wear smart glasses.


sutterhealth | Smart Glass Transforms Doctor’s Office Visits, Improves Satisfaction

 

VR glove powered by soft robotics provides missing sense of touch

Prototype of haptic VR glove, using soft robotic “muscles” to provide realistic tactile feedback for VR experiences (credit: Jacobs School of Engineering/UC San Diego)

Engineers at UC San Diego have designed a light, flexible glove with soft robotic muscles that provide realistic tactile feedback for virtual reality (VR) experiences.

Current VR tactile-feedback user interfaces are bulky, uncomfortable to wear, and clumsy: they simply vibrate when a user touches a virtual surface or object.

“This is a first prototype, but it is surprisingly effective,” said Michael Tolley, a mechanical engineering professor at the Jacobs School of Engineering at UC San Diego and a senior author of a paper presented at the Electronic Imaging, Engineering Reality for Virtual Reality conference in Burlingame, California and published May 31, 2017 in Advanced Engineering Materials.

The key soft-robotic component of the new glove is a version of the “McKibben muscle” (a pneumatic, or air-based, actuator invented in the 1950s by the physician Joseph L. McKibben for use in prosthetic limbs), using soft latex chambers covered with braided fibers. The muscles respond like springs to apply tactile feedback when the user moves their fingers; a fluidic control board inflates and deflates them.*

Prototype haptic VR glove system. A computer generates an image of a virtual world (in this case, a piano keyboard with a river and trees in the background) and sends it to the VR device (such as an Oculus Rift). A Leap Motion depth camera (on the table) detects the position and movement of the user’s hands and sends the data to the computer, which superimposes an image of the user’s hands over the keyboard (in the demo case) on the VR display and sends control signals to a custom fluidic control board. The board then feeds back a signal to soft robotic components in the glove to individually inflate or deflate fingers, mimicking the user’s applied forces.
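
The feedback loop itself is conceptually simple: read the tracked fingertip positions, decide which fingers are in contact with a virtual surface, and command the fluidic board to inflate the corresponding muscles. The sketch below runs a few iterations of such a loop; the device interfaces, key height, pressure value, and update rate are hypothetical stand-ins, not the UC San Diego team’s actual drivers.

```python
import time

# Hypothetical key height (metres) for a virtual piano; contact occurs when a
# fingertip dips below the key surface. The device interfaces below are
# stand-ins, not the actual Leap Motion SDK or fluidic-board firmware.
KEY_SURFACE_Z = 0.10
INFLATE_KPA = 35.0   # assumed actuation pressure

def read_fingertips():
    """Stand-in for a Leap Motion read: five fingertip z-heights (metres)."""
    return [0.12, 0.09, 0.13, 0.14, 0.12]   # index finger pressing a key

def set_muscle_pressure(finger, kpa):
    """Stand-in for a command to the fluidic control board."""
    print(f"finger {finger}: {kpa:.0f} kPa")

def control_step():
    for finger, z in enumerate(read_fingertips()):
        if z < KEY_SURFACE_Z:                        # fingertip is "inside" a key
            set_muscle_pressure(finger, INFLATE_KPA) # push back against the finger
        else:
            set_muscle_pressure(finger, 0.0)         # deflate, no contact

if __name__ == "__main__":
    for _ in range(3):          # a few iterations of the feedback loop
        control_step()
        time.sleep(0.02)        # ~50 Hz update, an assumed rate
```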

The engineers conducted an informal pilot study of 15 users, including two VR interface experts. The demo allowed them to play the piano in VR. They all agreed that the gloves increased the immersive experience, which they described as “mesmerizing” and “amazing.”

VR headset image of a piano, showing user’s finger actions (credit: Jacobs School of Engineering/UC San Diego)

The engineers say they’re working on making the glove cheaper, less bulky, and more portable. They would also like to bypass the Leap Motion device altogether to make the system more self-contained and compact. “Our final goal is to create a device that provides a richer experience in VR,” Tolley said. “But you could imagine it being used for surgery and video games, among other applications.”

* The researchers 3D-printed a mold to make the gloves’ soft exoskeleton. This will make the devices easier to manufacture and suitable for mass production, they said. Researchers used silicone rubber for the exoskeleton, with Velcro straps embedded at the joints.


JacobsSchoolNews | A glove powered by soft robotics to interact with virtual reality environments


Abstract of Soft Robotics: Review of Fluid-Driven Intrinsically Soft Devices; Manufacturing, Sensing, Control, and Applications in Human-Robot Interaction

The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, and high Tg elastomers. Dependent on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens of megapascals down to kilopascals; robots composed of such materials are intrinsically soft − they are always compliant independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot − those powered via fluidic pressurization.

A deep-learning tool that lets you clone an artistic style onto a photo

The Deep Photo Style Transfer tool lets you add artistic style and other elements from a reference photo onto your photo. (credit: Cornell University)

“Deep Photo Style Transfer” is a cool new artificial-intelligence image-editing software tool that lets you transfer a style from another (“reference”) photo onto your own photo, as shown in the above examples.

An open-access arXiv paper by Cornell University computer scientists and Adobe collaborators explains that the tool can transpose the look of one photo (such as the time of day, weather, season, and artistic effects) onto your photo, making it reminiscent of a painting while remaining photorealistic.

The algorithm also handles extreme mismatch of forms, such as transferring a fireball to a perfume bottle. (credit: Fujun Luan et al.)

“What motivated us is the idea that style could be imprinted on a photograph, but it is still intrinsically the same photo,” said Cornell computer science professor Kavita Bala. “This turned out to be incredibly hard. The key insight finally was about preserving boundaries and edges while still transferring the style.”

To do that, the researchers created deep-learning software that adds a neural-network layer that pays close attention to edges within the image, like the border between a tree and a lake.
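
The paper’s actual constraint is a locally affine color transformation enforced through a Matting-Laplacian-based regularization term, which is involved to implement. The flavor of “transfer the style but preserve the photo’s edges” can be illustrated with a much simpler, hypothetical edge-aware penalty that charges the output for introducing gradients where the input photo is smooth:

```python
import numpy as np

def edge_aware_penalty(input_img, output_img, sigma=0.1):
    """Illustrative edge-preservation penalty (NOT the paper's Matting
    Laplacian): output gradients are cheap where the input photo already has
    edges and expensive in smooth regions, discouraging painterly distortions.
    """
    def grads(img):
        gx = np.diff(img, axis=1)[:-1, :]   # crop so the two shapes match
        gy = np.diff(img, axis=0)[:, :-1]
        return gx, gy

    ix, iy = grads(input_img)
    ox, oy = grads(output_img)
    wx = np.exp(-(ix / sigma) ** 2)         # weight ~1 in smooth areas, ~0 at edges
    wy = np.exp(-(iy / sigma) ** 2)
    return float((wx * ox**2 + wy * oy**2).mean())

rng = np.random.default_rng(1)
photo = np.zeros((32, 32)); photo[:, 16:] = 1.0          # a single hard edge
faithful = photo * 0.8 + 0.1                             # restyled, edge kept
painterly = photo + 0.2 * rng.standard_normal((32, 32))  # texture everywhere
print(edge_aware_penalty(photo, faithful), edge_aware_penalty(photo, painterly))
```

In an actual optimization, a term like this would be added to the usual content and style losses; here it only shows why a faithful restyling scores lower than a painterly one.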

The software is still in the research stage.

Bala, Cornell doctoral student Fujun Luan, and Adobe collaborators Sylvain Paris and Eli Shechtman will present their paper at the Conference on Computer Vision and Pattern Recognition on July 21–26 in Honolulu.

This research is supported by a Google Faculty Research Award and NSF awards.


Abstract of Deep Photo Style Transfer

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.


Virtual-reality therapy found effective for treating phobias and PTSD

A soldier using “Bravemind” VR therapy (credit: USC Institute for Creative Technologies)

Virtual reality (VR) technology can be an effective part of treatment for phobias, post-traumatic stress disorder (PTSD) in combat veterans, and other mental health conditions, according to an open-access research review in the May/June issue of the Harvard Review of Psychiatry.

“VR-based exposure therapy” (VRE) has been found effective for treating panic disorder, schizophrenia, acute and chronic pain, addictions (including smoking), social anxiety disorder, claustrophobia, agoraphobia (fear of open spaces), eating disorders, “generalized anxiety disorder” (where daily functioning becomes difficult), and obsessive-compulsive disorder.

iPhone VR Therapy System, including apps (lower right) (credit: Virtually Better, Inc.)

VR allows providers to “create computer-generated environments in a controlled setting, which can be used to create a sense of presence and immersion in the feared environment for individuals suffering from anxiety disorders,” says lead author Jessica L. Maples-Keller, PhD, of the University of Georgia.

One dramatic example is progressive exposure to frightening situations in patients with specific phobias, such as fear of flying. This typically includes eight steps, from walking through an airport terminal to flying during a thunderstorm with turbulence, including specific stimuli linked to these symptoms (such as the sound of the cabin door closing). The patient can virtually experience repeated takeoffs and landings without going on an actual flight.

VR can also simulate exposures that would be costly or impractical to recreate in real life, such as combat conditions, and it lets the provider control the “dose” and specific aspects of the exposure environment.
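
One way to picture this control is as an ordered exposure hierarchy whose steps and stimulus parameters the provider can dial up or down. The sketch below encodes such a hierarchy for fear of flying; the step labels, parameters, and advancement rule are hypothetical illustrations, not a published clinical protocol.

```python
# Hypothetical encoding of a graded VR exposure hierarchy for fear of flying.
# Step order, labels, and stimulus parameters are illustrative only.
FLIGHT_HIERARCHY = [
    {"step": 1, "scene": "walk through terminal", "turbulence": 0.0, "cabin_sounds": False},
    {"step": 2, "scene": "board aircraft",        "turbulence": 0.0, "cabin_sounds": True},
    {"step": 3, "scene": "cabin door closes",     "turbulence": 0.0, "cabin_sounds": True},
    {"step": 4, "scene": "taxi and takeoff",      "turbulence": 0.1, "cabin_sounds": True},
    {"step": 5, "scene": "cruise, clear weather", "turbulence": 0.2, "cabin_sounds": True},
    {"step": 6, "scene": "descent and landing",   "turbulence": 0.3, "cabin_sounds": True},
    {"step": 7, "scene": "flight in a storm",     "turbulence": 0.7, "cabin_sounds": True},
    {"step": 8, "scene": "storm with turbulence", "turbulence": 1.0, "cabin_sounds": True},
]

def next_exposure(current_step, distress_rating, threshold=4):
    """Advance only when self-reported distress (0-10) is low; otherwise the
    provider adjusts the 'dose' by repeating or stepping back (assumed rule)."""
    if distress_rating <= threshold and current_step < len(FLIGHT_HIERARCHY):
        return FLIGHT_HIERARCHY[current_step]          # move to the next scene
    return FLIGHT_HIERARCHY[max(current_step - 1, 0)]  # repeat or step back

print(next_exposure(3, distress_rating=2)["scene"])
```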

“A VR system will typically include a head-mounted display and a platform (for the patients) and a computer with two monitors — one for the provider’s interface in which he or she constructs the exposure in real time, and another for the provider’s view of the patient’s position in the VR environment,” the researchers note.

However, research so far on VR applications has had limitations, including small numbers of patients and lack of comparison groups; and mental health care providers will need specific training, the authors warn.

The senior author of the paper, Barbara O. Rothbaum, PhD, disclosed one advisory board payment from Genentech and equity in Virtually Better, Inc., which creates virtual reality products.


Abstract of The Use of Virtual Reality Technology in the Treatment of Anxiety and Other Psychiatric Disorders

Virtual reality (VR) allows users to experience a sense of presence in a computer-generated, three-dimensional environment. Sensory information is delivered through a head-mounted display and specialized interface devices. These devices track head movements so that the movements and images change in a natural way with head motion, allowing for a sense of immersion. VR, which allows for controlled delivery of sensory stimulation via the therapist, is a convenient and cost-effective treatment. This review focuses on the available literature regarding the effectiveness of incorporating VR within the treatment of various psychiatric disorders, with particular attention to exposure-based intervention for anxiety disorders. A systematic literature search was conducted in order to identify studies implementing VR-based treatment for anxiety or other psychiatric disorders. This article reviews the history of the development of VR-based technology and its use within psychiatric treatment, the empirical evidence for VR-based treatment, and the benefits for using VR for psychiatric research and treatment. It also presents recommendations for how to incorporate VR into psychiatric care and discusses future directions for VR-based treatment and clinical research.

Precision typing on a smartwatch with finger gestures

The “Watchsense” prototype uses a small depth camera attached to the arm, mimicking a depth camera on a smartwatch. It could make it easy to type, or in a music program, volume could be increased by simply raising a finger. (credit: Srinath Sridhar et al.)

If you wear a smartwatch, you know how limiting it is to type on it or otherwise operate it. Now European researchers have developed an input method that uses a depth camera (similar to the Kinect game controller) to track fingertip touch and location on the back of the hand or in mid-air, allowing for precision control.

The researchers have created a prototype called “WatchSense,” worn on the user’s arm. It captures the movements of the thumb and index finger on the back of the hand or in the space above it. The approach would also work with smartphones, smart TVs, and virtual-reality or augmented-reality devices, explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.

KurzweilAI has covered a variety of attempts to use depth cameras for controlling devices, but developers have been plagued by the lack of precise control offered by current camera devices and software.

The new software, based on machine learning, recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, says Sridhar, identifying specific fingers and dealing with the unevenness of the back of the hand and the fact that fingers can occlude each other when they are moved.
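
As a rough intuition for what the sensor sees, fingertips show up as small near-depth blobs against the back of the hand and the background. The toy detector below segments an assumed working range in a synthetic depth frame and reports the nearest point of each blob as a fingertip candidate; it is a heuristic stand-in for WatchSense’s learned model, and the depth values and thresholds are made up.

```python
import numpy as np
from scipy import ndimage

def detect_fingertips(depth, hand_near=0.02, hand_far=0.15):
    """Toy fingertip detector for a wrist-mounted depth camera.

    Segments pixels in an assumed working range above the back of the hand
    and reports the nearest point of each blob as a fingertip candidate.
    A heuristic stand-in for WatchSense's machine-learning approach.
    """
    mask = (depth > hand_near) & (depth < hand_far)
    labels, n = ndimage.label(mask)
    tips = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        nearest = np.argmin(depth[ys, xs])          # closest point to the sensor
        tips.append((int(ys[nearest]), int(xs[nearest]),
                     float(depth[ys, xs][nearest])))
    return tips

# Synthetic 16x16 depth frame (metres): two finger blobs over a far background.
depth = np.full((16, 16), 0.5)
depth[3:6, 2:5] = 0.06       # thumb tip
depth[9:12, 10:13] = 0.09    # index tip
print(detect_fingertips(depth))
```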

A smartwatch (or other device) could have an embedded depth sensor on its side, aimed at the back of the hand and the space above it, allowing for easy typing and control. (credit: Srinath Sridhar et al.)

“The currently available depth sensors do not fit inside a smartwatch, but from the trend it’s clear that in the near future, smaller depth sensors will be integrated into smartwatches,” Sridhar says.

The researchers, who include Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen, and Antti Oulasvirta at Aalto University in Finland, will present WatchSense at the ACM CHI Conference on Human Factors in Computing Systems in Denver (May 6–11, 2017). Their open-access paper is also available.


Srinath Sridhar et al. | WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor


Abstract of WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor

This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user’s forearm (simulating an integrated depth sensor). Our prototype—which runs in real-time on consumer mobile devices—enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.

‘Strange Beasts’: Is this the future of augmented reality?

(credit: Magali Barbe)

“Strange Beasts,” a five-minute science-fiction short film by Magali Barbe, takes the form of a promo for an augmented-reality game. In the film, Victor Weber, founder of Strange Beasts, says the game “allows players to create, customize, and grow your very own creature.”

Supervision (credit: Magali Barbe)

Weber explains that this is made possible by “nanoretinal technology” that “superimposes computer-graphics-composed imagery over real world objects by projecting a digital light field directly into your eye.” The imagery is reminiscent of Magic Leap promos — but using surgically implanted “supervision” displays.

The movie’s surprise ending raises disturbing questions about where augmented reality may one day take us.

Reboot of The Matrix in the works

(credit: Warner Bros.)

Warner Bros. is in the early stages of developing a relaunch of The Matrix, The Hollywood Reporter revealed today (March 14, Pi day, appropriately).

The Matrix, the iconic 1999 sci-fi movie, “is considered one of the most original films in cinematic history,” says THR.

The film “depicts a dystopian future in which reality as perceived by most humans is actually a simulated reality called ‘the Matrix,’ created by sentient machines to subdue the human population, while their bodies’ heat and electrical activity are used as an energy source,” Wikipedia notes. “Computer programmer ‘Neo’ learns this truth and is drawn into a rebellion against the machines, which involves other people who have been freed from the ‘dream world.’”

Keanu Reeves said he would be open to returning for another installment of the franchise if the Wachowskis were involved, according to THR (they are not currently involved).

Interestingly, Carrie-Anne Moss, who played Trinity in the film series, now stars in HUMANS as a scientist developing the technology to upload a person’s consciousness into a synth body.