How Virtual Reality Works

VR headsets like the Oculus Rift and PlayStation VR are often referred to as HMDs, which simply means they are head-mounted displays. Even with no audio or hand tracking, holding up a Google Cardboard to place your smartphone's display in front of your face can be enough to get you half-immersed in a virtual world.
The goal here is to create what appears to be a life-size, 3D virtual environment without the boundaries we usually associate with TV or computer screens. Whichever way you look, the screen mounted to your face follows you. This is unlike augmented reality, which overlays graphics onto your view of the real world.
VR headsets use either two feeds sent to one display or two LCD displays, one per eye. They also have lenses placed between your eyes and the pixels, which is why the devices are often called goggles. In some headsets, these lenses can be adjusted to match the distance between your eyes, which varies from person to person.
These lenses focus and reshape the picture for each eye, creating a stereoscopic 3D image by angling two 2D images to mimic how each of our eyes views the world ever so slightly differently. Try closing one eye then the other and watching individual objects dance about from side to side, and you get the idea behind this.
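As a rough illustration of the idea, a renderer can derive two slightly offset viewpoints from a single tracked head position. This is a minimal sketch; the function names and the ~64 mm average interpupillary distance are illustrative assumptions, not taken from any particular SDK:

```python
def eye_positions(head_position, ipd=0.064):
    """Derive left/right eye positions from a tracked head position.

    ipd (interpupillary distance) of ~64 mm is a common adult average;
    names and values here are illustrative, not from any real VR SDK.
    """
    x, y, z = head_position
    half = ipd / 2.0
    left_eye = (x - half, y, z)    # shifted half the IPD to the left
    right_eye = (x + half, y, z)   # and half to the right
    return left_eye, right_eye

# The renderer draws the scene twice, once from each eye position; the
# ~64 mm horizontal disparity is what produces the stereoscopic depth cue.
left, right = eye_positions((0.0, 1.7, 0.0))
```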
Head-tracking technology follows the user's head and face movements, either by capturing raw data from cameras or through special equipment worn on the head. Facial features are recognised separately. With a laptop webcam, head movements can be tracked from a short distance, and applications can map specific head movements to corresponding actions, including directly controlling the movement and behaviour of in-game characters. Head tracking can also be used in conjunction with augmented reality.
The concept is most common in games, where the player's head movements are tracked and translated into in-game controls. Head tracking is now most popularly seen integrated into smartphones, supporting various games and user authentication: it serves as another layer of security on top of a traditional username and password, since a user can register a custom head movement to verify their identity.
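The core of head tracking, once yaw and pitch have been measured, is steering the in-game camera. A minimal sketch, assuming the common convention of yaw about the vertical axis and pitch about the horizontal one (the function name and axis conventions here are illustrative, not from any tracking API):

```python
import math

def look_direction(yaw_deg, pitch_deg):
    """Convert tracked head yaw/pitch (in degrees) into a unit
    forward vector for the in-game camera.

    Convention (assumed for illustration): yaw rotates about the
    vertical +Y axis, pitch about the horizontal axis, and the
    camera looks down -Z when both angles are zero.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Looking straight ahead:
# look_direction(0, 0) -> (0.0, 0.0, -1.0)
```

Each frame, the tracker's latest yaw/pitch readings are fed through a function like this, and the scene is re-rendered along the resulting direction, which is what makes the virtual world appear to stay put while your head moves.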
Eye-tracking technology has the potential to help VR bridge two gaps: it offers users additional control and a more intuitive experience, while future developments in foveated rendering will reduce the processing power needed to render complex 3D environments, making VR more accessible overall.
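Foveated rendering exploits the fact that the eye only resolves fine detail in a small central region. A toy sketch of the idea, with made-up thresholds, might pick a shading rate from the angular distance between a pixel and the tracked gaze point:

```python
def shading_rate(angle_deg, fovea_deg=5.0):
    """Pick a rendering detail level from angular distance to gaze.

    A toy model of foveated rendering: full resolution where the eye
    is actually looking, progressively coarser in the periphery.
    The thresholds and rates are illustrative assumptions only.
    """
    if angle_deg <= fovea_deg:
        return 1.0      # full resolution at the gaze centre
    if angle_deg <= 20.0:
        return 0.5      # half resolution in the near periphery
    return 0.25         # quarter resolution in the far periphery
```

Because most of the frame falls in the periphery, shading it at a fraction of full resolution is where the processing-power savings come from.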
To explore the best VR experience possible and meet the growing interest in combining eye tracking and VR as a research methodology, Tobii recently released a VR developer kit for creating integrated eye-tracking content, along with a development kit for research applications that enables the collection and recording of eye-tracking data. While we've only just begun to grasp the full potential of eye tracking in VR gaming and research, the near-term benefits include improvements to several components of the VR experience.
Head tracking is one big advantage the premium headsets have over the likes of Cardboard and other mobile VR headsets. But the big VR players are still working out motion tracking: when you look down with a VR headset on, the first thing you want to do is see your hands in a virtual space.
For a while, we’ve seen the Leap Motion accessory – which uses an infrared sensor to track hand movements – strapped to the front of Oculus dev kits. We’ve also tried a few experiments with Kinect 2 cameras tracking our flailing bodies. But now we have exciting input options from Oculus, Valve and Sony.
Oculus Touch is a set of wireless controllers designed to make you feel like you’re using your own hands in VR. You grab each controller and use buttons, thumbsticks and triggers during VR games. So, for instance, to shoot a gun you squeeze on the hand trigger. There is also a matrix of sensors on each controller to detect gestures such as pointing and waving.
It's a pretty similar set-up to Valve's Lighthouse positional tracking system and HTC's controllers for its Vive headset. Two base stations around the room sweep the area with lasers, and the system works out the precise position of your head and both hands from the timing of when each sweep hits the photocell sensors on the headset and around each handheld controller. Like Oculus Touch, the Vive controllers also feature physical buttons, and remarkably you can run two Lighthouse systems in the same space to track multiple users.
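The timing-to-angle step of a Lighthouse-style sweep can be sketched very simply: because the laser rotor spins at a known rate, the delay between the station's sync pulse and the moment the beam hits a photodiode maps linearly to an angle. This is a simplified model with an assumed rotation rate; real systems combine horizontal and vertical sweeps from multiple stations to solve for a full 3D pose:

```python
def sweep_angle(t_hit, t_sync, rotation_hz=60.0):
    """Convert sync-pulse-to-laser-hit delay into a sweep angle.

    Simplified Lighthouse-style model: the rotor spins at a known
    rate, so elapsed time maps linearly to angle in degrees. The
    60 Hz rotation rate is an assumption for illustration.
    """
    period = 1.0 / rotation_hz            # time for one 360-degree sweep
    fraction = (t_hit - t_sync) / period  # how far through the sweep
    return fraction * 360.0               # angle from the sync position

# A hit 1/240 s after the sync pulse is a quarter of a 60 Hz rotation,
# i.e. a 90-degree sweep angle.
angle = sweep_angle(1.0 / 240.0, 0.0)
```

Each photodiode on the headset or controller yields one such angle per sweep; intersecting the angles from the horizontal and vertical sweeps of both base stations pins down each sensor's position in the room.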
Other input methods include anything from an Xbox controller or joystick hooked up to your PC, to voice controls, smart gloves and treadmills such as the Virtuix Omni, which lets you simulate walking around a VR environment with clever in-game redirections.
And when it comes to tracking your physical position within a room, Oculus now offers an experience to match the HTC Vive, which it didn't at launch. Rift owners can purchase a third sensor for $79 to add more coverage to their VR play area.
The problem, though, is that this still isn't on par with HTC. While two SteamVR base stations for the HTC Vive can deliver a tracked play space of up to 225 square feet (roughly 15 x 15 ft), two Constellation sensor cameras from Oculus only provide coverage of 25 square feet (5 x 5 ft); adding a third camera raises the recommended space to 64 square feet (8 x 8 ft). That may change with Oculus Santa Cruz, the company's high-spec standalone headset.
Sony is also exploring this area, if a recent patent is anything to go by. The filing details a VR tracking system based on light and mirrors that uses a beam projector to determine the player's position, though whether such a feature would appear on the current device, a second iteration of PSVR, or not at all is purely speculative at this stage.