Category Archives: Technical

HTC Vive Commercial Release – First Impressions

Shortly after receiving our HTC Vive, I rushed to set everything up so I could sample the delights of the virtual reality applications available through Steam. For those of you unfamiliar with Steam, it’s an online content distribution service which was initially set up for gaming but has since diversified its offerings to reach wider audiences. We’re hoping to improve the student experience by creating engaging visual content for use in our concept classrooms, and the promise of virtual reality in this area is quite something.

Unboxing the headset and its accompanying assortment of wires made me wonder how portable a solution the Vive could be. Much of what we do involves showing others what can be done in the classroom, and it’s clear that, at the moment, working with a head-mounted display is something best kept to dedicated spaces. That is, unless you have a team of technical support staff on hand. As a University with a “Learning and Teaching Innovation Centre”, we’re quite fortunate in that regard.

Initially, the headset wouldn’t connect to my laptop, which only had VGA and DisplayPort outputs. The HTC Vive comes with just an HDMI cable (despite also having a mini DisplayPort), so I had to purchase a mini DisplayPort to DisplayPort cable separately. Once it arrived, everything worked beautifully and I invited everyone in to have a go with some of “The Lab” demos on Steam, along with “theBlu”, a marine life experience wherein the user is surrounded by schools of fish and underwater flora, all of which are interactive and react to being touched by the controllers.

People were ducking down to crawl through some of the underwater arches and flinching as a whale got a little too close for comfort, but not before its giant, reflective eye gave a knowing wink. All of this took place both on the headset and on the laptop display, allowing others to see what the user was experiencing. The emotional bandwidth of these experiences is nothing short of amazing, and I say that having used the Oculus Rift DK2 extensively. The affordance of the Vive is that it allows you to physically walk around and interact with an environment using your body, whereas with the Oculus you are currently required to use a gamepad. This will no doubt change in the future but, as of writing, the HTC Vive is where we are likely to be focusing our virtual reality development.

It’s worth mentioning that the laptop we used ran the 3D experiences poorly – around 25 frames per second – despite being an i7-4290MQ with 32 GB of RAM, due to an underperforming graphics chip (a Quadro FX). It just goes to show that you can have a machine which is incredibly fast for video and high-resolution image editing yet, without a proper gaming GPU, it will not perform well. There are a number of 3D benchmarks you can consult to see whether your hardware is up to scratch. I opted to use a laptop only because it made for a much simpler setup; I will be bringing out the big guns for future demonstrations.
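
For a rough sense of what “up to scratch” means: the Vive refreshes at 90 Hz, so anything averaging more than about 11 ms per frame will judder. Below is a hypothetical Python sketch of the kind of frame-time logging involved; `render_frame`, the frame counts and the 40 ms figure are all illustrative rather than taken from our actual setup.

```python
import time

def average_fps(frame_times):
    """Average frames per second from a list of per-frame durations (seconds)."""
    return len(frame_times) / sum(frame_times) if frame_times else 0.0

def profile(render_frame, frames=100):
    """Time a render callable over a fixed number of frames."""
    durations = []
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()               # stand-in for the real draw call
        durations.append(time.perf_counter() - start)
    return average_fps(durations)

# A machine averaging 40 ms per frame manages only 25 fps,
# well short of the Vive's 90 Hz target.
print(round(average_fps([0.040] * 100)))  # 25
```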

I’ll be posting more as we continue to experiment with things. At the moment, we’re brainstorming some usage scenarios involving role-play exercises.

Mixed Reality with the Oculus DK2

The irony of virtual reality is that, despite being a visual medium, it remains incredibly difficult to convey in a faithful manner. It’s not just about the visual impact of an experience but also the immersion factor.

I made a post a few months ago in which I filmed myself using the Oculus Rift at a desk. In that video, I cross-faded the perspectives of a bystander and user in an attempt to communicate how people can interact with a 3D environment using a headset.

Virtual Reality represents something of a growth industry right now but it will take time to convince people of its promise as a means for channelling emotional bandwidth. In the right hands, it could become a powerful educational tool. As always, the issues around how to establish best practice will take time to address and, because of this, it’s a great time for both experimentation and innovation.

In the video below, I’m using a green screen to chroma key myself into the output of the Oculus, thereby creating the effect of allowing people to see as I do during the session. This is far less complicated (and looks very 1980s) than the method used by Valve, which you can see here.
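
The compositing itself is conceptually simple: anywhere the camera sees green screen, show the rendered frame instead. Here is a toy sketch of that per-pixel swap in Python, with made-up pixel values and a deliberately crude `is_green` test; real tools do this on the GPU with soft, tunable thresholds rather than a hard cut-off.

```python
def is_green(pixel, dominance=40):
    """Crude green-screen test: green clearly dominates red and blue."""
    r, g, b = pixel
    return g > r + dominance and g > b + dominance

def chroma_key(foreground, background):
    """Replace green pixels in the camera image with the rendered VR frame.

    Both images are flat lists of (r, g, b) tuples of equal length.
    """
    return [bg if is_green(fg) else fg
            for fg, bg in zip(foreground, background)]

# Two-pixel example: the first camera pixel is green screen, the second is me.
camera = [(10, 200, 20), (180, 140, 120)]
vr_frame = [(0, 0, 255), (255, 255, 255)]
print(chroma_key(camera, vr_frame))  # [(0, 0, 255), (180, 140, 120)]
```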

It does mean having to restrict movement in some ways: no facing the camera, no looking straight down, and so on, as these tend to create confusing visual effects.

I’ll be posting more in the future as I continue to experiment with things. We’re at the beginning of something which promises to revolutionise the human-computer interface and contribute to the human condition in ways we’ve yet to envisage.

The Logistics of Virtual Reality and Thunderbolt 3

The endgame for many technologies involves integrating seamlessly with our being, turning us into space-dwelling cyborgs. The problem is that, while the process of miniaturisation is always in motion, there will always be a suite of technologies on the fringe which have yet to undergo such optimisation and so start out in something of a clunky state. Virtual reality headsets are such things: they are new, large and cumbersome.

While the phone-based experiences made popular by the Gear VR hold promise as lightweight, wireless solutions, they remain expensive. Upgrading to a new phone is also problematic: when all one needs is a faster GPU, the only option is to purchase an entirely new phone (a general-purpose device) which happens to have a faster graphics chip. This is an incredibly inefficient economy.

Perhaps it’s not as much of a concern for a single user but for a large organisation looking to invest in such technologies, it presents something of a challenge. Do universities invest in VR laboratories or do they come up with something more flexible?

I don’t doubt that the future of VR involves the use of specialist equipment and spaces. To that end, a dedicated lab might present itself as a viable investment.

Valve’s Lighthouse Tracking System

A team demonstrating Valve’s Lighthouse Tracking System – I have no affiliation with the people involved

In the meantime, however, during this period of innovation and testing, there are ways to make life easier. When giving demonstrations of VR within our institution, we either get people to come to our offices or we attempt to set up a small stand for the duration of a conference. The issue is that we always have to lug around a giant desktop computer weighing the equivalent of three sacks of potatoes.

You might think “why not use a laptop?” – the answer is that the GPUs in these devices are not upgradeable. We would need to spend thousands on a machine fast enough to run a VR experience only to have it become redundant overnight. The answer lies in the Thunderbolt 3 port, best described as USB 3 on steroids.

With such a port, you can directly connect an external GPU to any compatible device, no matter how small. This means you could invest in a NUC with Thunderbolt 3 connectivity and have a graphical powerhouse which occupies a tiny amount of desk space.

Intel’s NUC6i7KYK (rear view)

Whilst some newer laptops are sporting these connectors, it’s worth waiting until the method through which external GPUs interact is confirmed. The beauty of the solution is also that, due to its massive bandwidth, rather than having three to five wires connecting the headset, in the future there can be just one. Wireless connectivity is also catching up, with wireless video now proving itself usable for gaming.
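
As a rough sanity check of that bandwidth claim (ignoring blanking intervals and protocol overhead): the Vive’s two panels total 2160 × 1200 pixels at 90 Hz, and the uncompressed video for that fits comfortably inside Thunderbolt 3’s 40 Gbit/s.

```python
# Back-of-the-envelope figures; real links carry extra protocol overhead.
width, height = 2160, 1200   # combined resolution of the Vive's two panels
refresh_hz = 90
bits_per_pixel = 24          # uncompressed 8-bit RGB

video_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(round(video_gbps, 1))  # 5.6 (Gbit/s), versus Thunderbolt 3's 40 Gbit/s
```

That leaves plenty of headroom for tracking data, audio and USB traffic over the same cable.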

To sum up, it’s worth waiting before investing in a long-term VR solution unless you have an application which has already proven itself robust and workable on current-generation technologies. If you want to be an early adopter (out of personal interest or for reasons of experimentation), there is already plenty of choice. Just be aware that, until VR really comes into its own, we are only witnessing the tip of the iceberg.

Unity 3D Chat Room Version 1

Creating multi-user environments is a headache, as so many variables must be tracked to keep states synchronised. My hope is to use this project as groundwork for a variety of multi-user experiences, both educational and game-based.

Screenshot from the multi-user test

At the moment, the environment is just a grid spanning a hundred metres or so, but I’m working on something more interesting to give it a bit of flavour. Looking at PlayStation Home, it’s easy to be impressed by the artwork and creativity of the different areas. I’m not sure I want to head down the path of surrealism though, as I know how off-putting that can be to people for whom 3D isn’t their lifeblood. I’d rather have something recognisable which announces its purpose to the user just by having them explore it, something like an art gallery.

I still need to improve the prediction code, as movement tends to be a bit jerky. The server is authoritative, so movement is tightly regulated, perhaps more than it needs to be at the moment. Testing will inform the direction I need to take things.
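
For what it’s worth, the usual remedy for that jerkiness is to render each remote player slightly in the past, interpolating between buffered server snapshots rather than snapping to the latest one. Here is a minimal sketch of the idea, assuming snapshots are plain (timestamp, position) pairs rather than anything Unity-specific.

```python
def lerp(a, b, t):
    """Linearly interpolate between two positions (tuples), 0 <= t <= 1."""
    return tuple(p0 + (p1 - p0) * t for p0, p1 in zip(a, b))

def interpolated_position(snapshots, render_time):
    """Position at render_time, blending between buffered server snapshots.

    snapshots: list of (timestamp, position) pairs in ascending time order.
    """
    earlier = [s for s in snapshots if s[0] <= render_time]
    later = [s for s in snapshots if s[0] > render_time]
    if not earlier:           # rendering before the first snapshot
        return snapshots[0][1]
    if not later:             # no newer snapshot yet: hold the last one
        return earlier[-1][1]
    (t0, p0), (t1, p1) = earlier[-1], later[0]
    return lerp(p0, p1, (render_time - t0) / (t1 - t0))

# Server snapshots at 10 Hz; render halfway between the first two.
snaps = [(0.0, (0.0, 0.0)), (0.1, (1.0, 0.0)), (0.2, (1.0, 1.0))]
print(interpolated_position(snaps, 0.05))  # (0.5, 0.0)
```

Rendering a frame or two behind the newest snapshot trades a little latency for smoothness, which sits well with a tightly regulated, authoritative server.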

I’ll also be porting this to a Google Cardboard application for Android devices, which I’m looking forward to, as multi-user virtual reality is where I want to be at the moment.