Scientists at the Vienna University of Technology get to work on some pretty cool stuff. Just last week, their boffins announced a huge breakthrough in optical transistor technology – a significant milestone in the race towards quantum computing. But what’s interesting to me is that one of their departments performs basic and application-oriented research related to virtual and augmented reality.
Hannes Kaufmann is an associate professor at the University’s Interactive Media Systems Group and has been working in VR since 1999 (he recalls that SGI workstations were the norm back then). Nowadays, his work at TU Vienna ranges from industrial high-precision tracking solutions and innovative user interfaces to high school education programs and VR prostheses. After speaking to him and schooling myself for a couple of hours on the university’s projects page, I soon discovered that they are conducting all kinds of awesome research with a ton of implications for next-gen gaming.
Prosthesis patients face a tough task: getting to grips with the correct muscle signals required for a prosthetic limb can be a repetitive, frustrating, and sometimes hazardous undertaking.
Traditionally, prosthetic limbs are built first and then calibrated along the way, which is time-consuming for a number of reasons. Each human being has a unique pattern of nerves, which complicates matters, because controlling a prosthetic limb is all about harnessing muscle signals. Each patient’s prosthesis also needs to be custom-built for their unique body size, which means that each one must be painstakingly trialed and errored.
In the virtual world, calibrations can be performed much faster, and in ways that are more engaging for the patient.
In TU Vienna’s program, when a patient sends the signal to clench their fist – which is detected by myoelectric sensors – it is shown on the HMD as a clenched virtual fist.
If the virtual limb does not behave in the desired way, then it is calibrated and fine-tuned virtually. Once the patient is happy with the way their virtual limb is behaving, the doctors and medical engineers can apply the calibration settings and build the real prosthetic limb.
“There is no existing training software for this purpose yet,” Hannes told me. “Our software uses the arm position and analyzes whether the patient intends to grip an object in the training environment (depending on muscle signals), and computes the current grip force of the virtual prosthesis based on those signals.
“In the HMD, a prospective virtual prosthesis is shown, correctly registered with the patient’s own arm, and the person can learn to grasp objects from different perspectives with different grip forces. In the future, different models of prosthesis (with different configurations/functionality) could be shown and simulated. The movements of patients (head, hand, body, etc.) can be monitored precisely to log performance over multiple weeks.”
This makes life easier for both patient and therapist, as less time and effort is required to calibrate the limb precisely.
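To make the pipeline concrete, here is a minimal Python sketch of the kind of signal-to-force mapping described above. The function names, smoothing window, gain, and threshold are all illustrative assumptions standing in for the per-patient calibration settings, not TU Vienna’s actual software.

```python
# Hypothetical sketch of mapping a muscle (EMG) signal to the grip force
# of a virtual prosthesis. All names, gains, and thresholds here are
# illustrative assumptions.

def envelope(samples, window=4):
    """Moving-average envelope of the rectified EMG signal."""
    rectified = [abs(s) for s in samples]
    out = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        out.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return out

def grip_force(emg_samples, gain=2.0, threshold=0.1, max_force=1.0):
    """Map the latest envelope value to a clamped force in [0, max_force].

    `gain` and `threshold` stand in for the per-patient calibration
    settings that would be tuned virtually before the real limb is built.
    """
    level = envelope(emg_samples)[-1]     # most recent smoothed amplitude
    if level < threshold:                 # below threshold: hand relaxed
        return 0.0
    return min(max_force, gain * (level - threshold))
```

In a real system the envelope would be computed from a live EMG stream, and the gain and threshold would be exactly the values the patient and engineers settle on virtually before the physical limb is built.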
As well as virtual rehabilitation, TU Vienna is also collaborating with USC’s MxR Lab on a project titled Flexible Spaces.
The program simulates huge virtual environments through ‘impossible spaces’, allowing users to walk through seemingly infinite virtual environments while being tracked in a small physical space. “The press also called it Holodeck 1.0, as a first approach to letting users walk in infinite virtual environments,” Hannes told me. “Our algorithm dynamically generates corridors to connect smaller rooms, and their position is not predetermined.”
“Corridors have at least 2-3 turns, and initial testing showed that users do not notice that they are actually walking in circles within a small space. They perceive the space to be much larger.”
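As a rough illustration of the ‘impossible spaces’ idea (this is not the published Flexible Spaces algorithm), the sketch below connects two rooms with an axis-aligned corridor of three right-angle turns, so the distance the user actually walks is much longer than the straight line between the rooms. The `detour` parameter is an invented knob for how far the corridor bulges sideways.

```python
import math

# Illustrative sketch of the 'impossible spaces' idea: connect two rooms
# with a corridor of three right-angle turns so the walked path is much
# longer than the straight line between them, while every waypoint stays
# inside a small physical footprint.

def corridor(start, end, detour=2.0):
    """Axis-aligned waypoints from `start` to `end` with three turns."""
    (x0, y0), (x1, y1) = start, end
    mid_x = (x0 + x1) / 2
    return [start,
            (mid_x, y0),
            (mid_x, y0 + detour),
            (x1, y0 + detour),
            (x1, y1)]

def path_length(points):
    """Total walked distance along the corridor."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```

With `start=(0, 0)` and `end=(4, 0)`, the user walks eight metres of corridor to cover four metres of straight-line distance, which is the effect Hannes describes: the perceived space is much larger than the tracked one.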
“I have a good relationship with ICT/USC, especially with Skip Rizzo, who has been doing pioneering work in VR rehabilitation since 2005,” Hannes told me. “In early 2012, I got in contact with Palmer Luckey, who worked in the MxR lab at ICT at that time. He told me about a wide-FOV HMD prototype he had, which later turned out to be the Rift, but Oculus had not been born at that time. A few months later I contacted Evan Suma and Mark Bolas, since a PhD student of mine was interested in doing an internship in their labs. Khrystyna had very interesting ideas (which were later published in our Flexible Spaces paper) and worked in their lab to finish and test the Flexible Spaces algorithm, which she had started here.”
The potential this has for gaming is huge, as we’ve already seen with the reception that Project Holodeck has received (another MxR Lab project, unrelated to this one). We’re also starting to see products such as Atlas come out that create a similar effect through a mobile phone app and QR markers. Infinitely generated spaces and levels mapped to real physical spaces (such as your lounge or your garage) mean that we might not even need omni-directional treadmills to play VR games; we could just play them in our homes by physically walking around.
TU Vienna has also spent time developing physical interfaces, or actuated tangible user interfaces (TUIs), which allow users to control virtual objects by interacting with physical equivalents. Tangible interaction is very motivating, especially for children and adolescents, which makes it ideal for educational applications; but it appeals to all ages, since physical objects are what we, as humans, are used to dealing with.
“We build self-positioning, motorized tangible UI objects (TUIOs). Our actuated TUIOs can be equipped with different input elements (sliders, displays, buttons, …) so they can be flexibly used in a variety of applications. Since they are actuated, a certain state of an application with TUIOs can be saved. If it is loaded later, all the TUIOs move to their saved locations. For children, TUIs are appealing because they are like toys and present natural interfaces. We are used to dealing with physical objects in everyday life, and therefore this kind of interaction seems more natural in many ways.”
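The save-and-reload behaviour Hannes describes could be modelled along these lines; the `TUIO` class and its `move_to` method below are hypothetical stand-ins for the real motorized hardware interface, not TU Vienna’s actual API.

```python
# Minimal sketch of saving and reloading the layout of actuated TUIOs.
# The TUIO class and move_to are hypothetical stand-ins for the real
# motorized hardware interface.

class TUIO:
    def __init__(self, ident, position):
        self.ident = ident
        self.position = position      # (x, y) on the tabletop

    def move_to(self, target):
        # Real hardware would drive its motors here; the sketch just
        # updates the stored pose.
        self.position = target

def save_state(tuios):
    """Snapshot every object's position so the layout can be reloaded."""
    return {t.ident: t.position for t in tuios}

def load_state(tuios, snapshot):
    """Drive each actuated object back to its saved location."""
    for t in tuios:
        if t.ident in snapshot:
            t.move_to(snapshot[t.ident])
```

Because the objects are motorized, restoring a snapshot is not just a software operation: the physical pieces on the table actually drive themselves back to where they were when the state was saved.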
I think we can all agree that truly immersive experiences require more natural interfaces, and we’re seeing that with the emergence of next-gen gaming hardware. One of the points raised by the dev team at CCP was that EVR’s only real issue was that players were required to hold a controller to fly their ship. There’s only so much immersion that can be created without actually being able to reach out and ‘touch’ something physical. Hardware like the Omni and the Hydra are a step in the right direction because they substitute arbitrary controllers with real-life movement.
“The Razer Hydra came on the market two years ago, before the VR hype started, but the hype has revived it,” Hannes said. “It’s a great controller. I think all these devices show that a VR market is forming, and this time it’s going to stay, in my opinion. Costs are low and expectations are realistic compared to the first VR hype in the late 1990s.”
It’s also definitely worth a mention that one of Hannes’ master’s students developed Cyberith – a prototype omni-directional treadmill with built-in suspension, giving players even more freedom to move. The student is currently finishing off his latest prototype in time for Gamescom.
“He started development before the Omni but they were faster and crowdfunding wasn’t possible (or even legal) in Austria until about a few weeks ago.”
Interestingly, the library/SDK for tracking the PS Move on PCs (moveonPC) was also developed at TU Vienna by another one of Hannes’ master’s students, showing just how in tune the research at TU Vienna is with the gaming world.
But what about the next frontier – the frontier of brain signals and truly intuitive user interfaces: neurogaming?
“I found out that recently a group of researchers demonstrated how simple P300 BCI (brain-computer interface) detection can be done while walking outdoors. I’m convinced that body signals will be used in games soon – maybe after, or simultaneously with, the wave of medical products. I suppose you’ve heard of the Scanadu Tricorder? This will definitely come.”
The more multi-sensory (or, to be more precise, physically sensory) our virtual reality experiences become, the better they will get.
Virtual reality is a valuable learning tool with advantages that are self-evident. It promotes active learning through participation, which means students can actually do things rather than just watch them materialize on a flat screen, and it allows them to interact with environments that would simply be impossible in the real world.
The research being performed over at TU Vienna is fascinating and very important to the next generation of gaming.
Keep an eye on the progress of the university’s virtual and augmented reality projects here.