Category Archives: Virtual Reality

The HTC Vive in a smaller space

Granted, there are plenty of videos showing how the Vive works in large, open environments, but few deal with the real-world scenarios people face when introducing virtual reality to their living rooms. Much like the Nintendo Wii, which needed room for users to swing their arms around as they took part in sword fights, golf and bowling, the Vive needs even more space to do it justice.

Having said that, it seems to work incredibly well, even in something of a cramped space. I don’t doubt that, as the prevalence of VR continues to grow, we may undergo a radical culture shift in how we set up our home environments. With that in mind, please enjoy the 60-second video I’ve put together below.

As seen in the video, you can still have an incredibly immersive experience while remaining in the same spot. Just be sure to move your tea mugs out of arm’s reach, as it’s not a question of ‘if’ but ‘when’ you knock them over.

The demos shown are just a few of those available through the Steam store on the PC. They’re a hint at what’s to come, and the promise offered by applications like ‘Big Screen’ is likely to impact everything from individual to collaborative working environments. Although I only show it from a single-user perspective, you can host multi-user sessions over the internet and have people join remotely to share a virtual space, all whilst sharing their individual screens (security risks need to be considered in that respect).

I’m hoping to create a few more videos in the future, as they convey things far more eloquently than words alone.

HTC Vive Commercial Release – First Impressions

Shortly after receiving our HTC Vive, I rushed to set everything up in a bid to sample the delights of the virtual reality applications available through Steam. For those of you unfamiliar with Steam, it’s an online content distribution service, initially set up for gaming, which has since diversified its offerings to reach wider audiences. We’re hoping to improve the student experience by creating engaging visual content for use in our concept classrooms, and the promise of virtual reality in this area is quite something.

Unboxing the headset and its accompanying assortment of wires made me wonder how portable a solution the Vive could be. Much of what we do involves showing others what can be done in the classroom, and it’s clear that, at the moment, working with a head-mounted display is something best kept to dedicated spaces. That is, unless you have a team of technical support staff on hand. As a University with a “Learning and Teaching Innovation Centre”, we’re quite fortunate in that regard.

Initially, the headset wouldn’t connect to my laptop, which only had VGA and DisplayPort outputs. The HTC Vive comes with just an HDMI cable (despite also having a Mini DisplayPort), so I had to purchase a Mini DisplayPort to DisplayPort cable separately. Once it arrived, everything worked beautifully and I invited everyone in to have a go with some of the “The Lab” demos on Steam, along with “theBlu”, a marine life experience in which the user is surrounded by schools of fish and underwater flora, all of which react to being touched by the controllers.

People were ducking down to crawl through some of the underwater arches and flinching as a whale got a little too close for comfort, its giant, reflective eye giving a knowing wink beforehand. All of this took place both on the headset and on the laptop display, allowing others to see what the user was experiencing. The emotional bandwidth of these experiences is nothing short of amazing, and I say that having used the Oculus Rift Development Kit 2 extensively. The affordance of the Vive is that, as described, it allows you to physically walk around and interact with an environment using your body, whereas with the Oculus you are currently required to use a gamepad. This will no doubt change in the future but, as of writing, the HTC Vive is where we are likely to be focusing our virtual reality development.

It’s worth mentioning that the laptop we used ran the 3D experiences poorly – around 25 frames per second – despite being an i7-4290MQ with 32GB of RAM, due to an underperforming graphics chip (a Quadro FX). It just goes to show that you can have a machine which is incredibly fast for video and high-resolution image editing yet, without a proper gaming GPU, it will not perform well. There are a number of 3D benchmarks you can consult to see whether your hardware is up to scratch. I opted to use a laptop only because it made for a much simpler setup; I will be bringing out the big guns for future demonstrations.
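If you want a quick sanity check of your own, the arithmetic behind those numbers is simple: a headset refreshing at 90Hz leaves roughly 11ms to render each frame, so a machine managing only 25 frames per second (40ms a frame) falls well short. A minimal sketch in Python, with function names of my own invention:

```python
# Quick frame-budget arithmetic for VR demos.
# A 90 Hz headset (such as the Vive) allows 1000/90, roughly 11.1 ms per frame.

def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render a single frame, in milliseconds."""
    return 1000.0 / refresh_hz

def meets_budget(measured_fps: float, refresh_hz: float = 90.0) -> bool:
    """True if the average frame time fits within the headset's budget."""
    return 1000.0 / measured_fps <= frame_budget_ms(refresh_hz)

if __name__ == "__main__":
    print(f"Budget at 90 Hz: {frame_budget_ms(90.0):.1f} ms per frame")
    print("25 fps enough?", meets_budget(25.0))
    print("90 fps enough?", meets_budget(90.0))
```

Most benchmarking tools report exactly this comparison, just dressed up with representative rendering workloads.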

I’ll be posting more as we continue to experiment with things. At the moment, we’re brainstorming some usage scenarios involving role-play exercises.

Mixed Reality with the Oculus DK2

The irony of virtual reality is that, despite being a visual medium, it remains incredibly difficult to convey in a faithful manner. It’s not just about the visual impact of an experience but also the immersion factor.

I made a post a few months ago in which I filmed myself using the Oculus Rift at a desk. In that video, I cross-faded the perspectives of a bystander and user in an attempt to communicate how people can interact with a 3D environment using a headset.

Virtual Reality represents something of a growth industry right now but it will take time to convince people of its promise as a means for channelling emotional bandwidth. In the right hands, it could become a powerful educational tool. As always, the issues around how to establish best practice will take time to address and, because of this, it’s a great time for both experimentation and innovation.

In the video below, I’m using a green screen to chroma key the output of the Oculus, creating the effect of allowing people to see as I do during the session. This is far less complicated (and looks very 1980s) than the method used by Valve, which you can see here.

It does mean having to restrict movement in some ways (not facing the camera, not looking straight down, and so on), as these positions tend to create confusing visual effects.
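For those curious about what the chroma key step actually does, the core idea is easy to sketch. Below is a minimal, illustrative version in Python with NumPy (my own simplification, not the software used in the video): a pixel counts as ‘screen’ when its green channel clearly dominates red and blue, and those pixels are replaced with the background.

```python
import numpy as np

def chroma_key(foreground: np.ndarray, background: np.ndarray,
               dominance: float = 1.2) -> np.ndarray:
    """Composite foreground over background wherever a pixel is not
    'mostly green'. Both arrays are HxWx3 uint8 RGB images of equal shape."""
    fg = foreground.astype(np.float32)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel is treated as green screen when green clearly dominates
    # both red and blue by the given factor.
    screen = (g > dominance * r) & (g > dominance * b)
    out = foreground.copy()
    out[screen] = background[screen]
    return out
```

A real keyer also handles soft edges and the green ‘spill’ that lands on the subject, but the principle is the same.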

I’ll be posting more in the future as I continue to experiment with things. We’re at the beginning of something which promises to revolutionise the human computer interface and contribute to the human condition in ways we’ve yet to envisage.

What do artificial intelligence, virtual reality and gene editing have in common?

I’m something of a transhumanist, in possession of a woefully optimistic view of where technology can take the human race in the next hundred years. A lot has happened in recent times: we have a surge in public interest in VR, new genetic therapies which are starting to offer treatments for everything from HIV to cancer, and the early stages of artificial neural networks which hold the potential to both dwarf and augment our collective intelligence.

Advances in these fields have been made possible thanks to hardware improvements in GPUs (graphics cards). Thanks to their efficiency and number-crunching abilities, they’re able to do more than simply create breathtaking visuals. Nvidia is starting to invest more funding in chips dedicated to more efficient machine learning, and pretty much all biochemistry in the digital medium ends up as physics-based simulation. The driving force for much technology tends to come from competition between manufacturers, but now we are starting to see the convergence of different areas of application, the results of which are incredibly exciting.


Don Quixote

The visual fidelity of real-time 3D simulations is starting to surpass that of pre-rendered 3D movies, giving content creators far more control over the creative process. Escapism is going to be a big thing over the next few years as people seek ever more media-rich content. The psychological impact of this is something we haven’t even started to consider: we could potentially treat phobias, rehabilitate people with neurodegenerative diseases, or turn ourselves into drooling junkies forever held in the confines of a psychosis like that portrayed in Don Quixote.

The onus falls to all of us to steer things in a responsible direction and focus on how we can improve the human condition. It’s something to be excited about: never in the history of our race have so many technological breakthroughs occurred in such a short space of time, and the rate at which they’re occurring doesn’t seem to be letting up.

The Logistics of Virtual Reality and Thunderbolt 3

The end game for many technologies involves integrating seamlessly with our being, turning us into space-dwelling cyborgs. The problem is that, while the process of miniaturisation is always in motion, there will always be a suite of technologies on the fringe which have yet to undergo such optimisation and start out in something of a clunky state. Virtual reality headsets are such things: they are new, large and cumbersome.

While the phone-based experiences made popular by the Gear VR hold promise (being a lightweight solution without wires), they remain expensive, and upgrading is problematic. When all one needs is a faster GPU, the only option is to purchase an entirely new phone (a general-purpose device) which happens to have a faster graphics chip. This is an incredibly inefficient economy.

Perhaps it’s not as much of a concern for a single user but for a large organisation looking to invest in such technologies, it presents something of a challenge. Do universities invest in VR laboratories or do they come up with something more flexible?

I don’t doubt that the future of VR involves the use of specialist equipment and spaces. To that end, a dedicated lab might present itself as a viable investment.

Valve's Lighthouse Tracking System

A team demonstrating Valve’s Lighthouse Tracking System – I have no affiliation with the people involved

In the meantime, however, during this period of innovation and testing, there are ways to make life easier. When giving demonstrations of VR within our institution, we either get people to come to our offices or we attempt to set up a small stand for the duration of a conference. The issue is that we always have to lug around a giant desktop computer containing hardware that weighs as much as three sacks of potatoes.

You might think, “why not use a laptop?” The answer is that the integrated GPUs on these devices are not upgradeable. We would need to spend thousands on a machine fast enough to run a VR experience, only to have it become redundant overnight. The answer lies in the Thunderbolt 3 port, best described as USB 3 on steroids.

With such a port, you can directly connect an external GPU to any compatible device, no matter how small. This means you could invest in a NUC device with thunderbolt 3 connectivity and have a graphical powerhouse which occupies a tiny amount of desk space.

Whilst some newer laptops are sporting these connectors, it’s worth waiting until the method through which external GPUs interact is confirmed. The beauty of the solution is also that, thanks to its massive bandwidth, rather than having three to five wires connecting the headset, in the future there can be just one. Wireless connectivity is also catching up, with wireless video now proving itself usable for gaming.

To sum up, it’s worth waiting before investing in a long-term VR solution unless you have an application which has already proven itself robust and workable on current-generation technologies. If you want to be an early adopter (out of personal interest or for reasons of experimentation), there is already plenty of choice. Just be aware that, until the prevalence of VR really comes into its own, we are witnessing only the tip of the iceberg.

The Weston Auditorium at the University of Hertfordshire in Virtual Reality

Much has changed these last few years in the gaming industry. Until recently, real-time/interactive 3D had something of a stigma associated with it when it came to education. Happily, in the spirit of inquiry, research has continued to verify its potency for creating memorable and engaging experiences.

Click here to view the search trend data on Google

We’ve been in possession of an Oculus Rift (Development Kit 2) for a while now and it’s provided us with some exciting opportunities to do with Learning & Teaching Innovation. We’re currently working with Psychology staff to create virtual environments for use in research and have been awarded funding to pursue some other ideas with measurable outcomes. We’re trying to establish best practice in this new medium and need to figure out what works and what doesn’t. Trying to force the use of VR onto a non-complementary subject would be a massive waste of time and could end up as redundant work.

It’s not so easy to identify the areas of use, however, as even some of the most text-heavy subjects (Land Law, for example) stand to benefit greatly from what VR offers. Imagine being able to walk around a virtual village and identify disputes over property boundaries, or consider the potential VR has to create dynamic data visualizations. If you thought 2D infographics were informative, imagine where they could go with something more immersive and with a real sense of scale.

The obvious choices for VR are subjects like paramedic science which involve role-play scenarios. Potentially, we could have partner institutions overseas interacting with one another in ways which were previously impossible. Multi-user VR environments hold a lot of promise in this regard and I’m hoping to explore those ideas further.

More recently, I’ve completed some work with the Oculus Rift: a virtual representation of our largest presentation space (sometimes used as a lecture theatre), the Weston Auditorium at the University of Hertfordshire. See the video below for a demonstration!



This is based on work I did some time ago; I’ve just made it work with the Oculus Rift. It took a long time, but I’ve optimised my workflow substantially since those days.

Click here to read about the creation of this environment

Getting serious with virtual and augmented reality at the VRTGO conference

Last week I attended the VRTGO conference in Newcastle, a day of talks from various industry representatives describing how they were using virtual and augmented reality in their businesses. Happily, not all of it was gaming-related, something which has been difficult to exclude from these events as, understandably, the gaming sector is what is driving the VR agenda at this time. Having said that, a representative from Crossrail (a London-based project to link various underground railway networks) was able to answer my question of whether there is still any stigma surrounding the term ‘gaming technology’. He responded that most people with any interest in the medium already had prior exposure to gaming and that, as a generational phenomenon, it is starting to become acceptable.

Outside of Gaming

Much of the non-educational work was centred on heritage, mainly architectural visualisations with added features such as the ability to see a site in its current, ruinous state compared with how it might have looked in its prime, complete with depictions of how people of the era dressed and interacted. Much of the development work involved photogrammetry and, where available, the expensive LIDAR scanning of sites of historical significance. All in all, it was inspiring to see people getting creative and innovating at the forefront of the medium. Of note were two UK-based companies, ‘DigitalVR’ and ‘ChroniclesVR’, who have been doing some work with photogrammetry. There’s a lot to establish by way of conveying narratives, but these two groups seem to be in the right place at the right time when it comes to the learning process.

Referring back to Crossrail, the presentation started with a drone flying through tunnels designed to accommodate London’s bulging underground railway system. There was talk of using augmented reality to allow workers to focus on items at fault, report them via photo, and call up servicing instructions through video feedback. Things were somewhat vague and theoretical in this regard, but the idea is sound and is indeed what we’re expecting to see with the advent of Microsoft’s HoloLens. There was no statement of intent with respect to committing to one type of interface device, and things are very much in the innovation stage. Given that the companies associated with the building work are so vast and the project so logistically intensive, it’s unlikely that anyone is going to change the way they work overnight. It does, however, present a prime opportunity for data gathering and workflow optimisation.

Tackling Navigation in VR

One of the highlights for me involved a company called ‘nDreams’. They exist solely to create VR experiences which, while it sounds like a risky venture at this moment in time, is likely to become a growth industry. It was a wonderful surprise to see this team examining how users engage with various interface devices and head-mounted displays, looking at everything from rotational head movement speed to navigating 3D environments using a stare-and-click method. The data they had gathered from hours of research and experimentation was phenomenal and will surely be of use when it comes to establishing standard control interfaces for console and PC-based titles. This is the sort of work that excites me the most: rather than stabbing in the dark and doing something which seems ‘cool’, this group has brought its brains to bear on the problem with a view to creating a better user experience. It was as much an academic pursuit as a games authoring workflow.

Interestingly, Samsung also sent a representative: the head of the UK division for the development and promotion of the Gear VR headsets we’ve read so much about in the news. I quizzed him on when we’d see mobile GPUs catching up with high-end desktop configurations, but the focus seemed to be on providing novel experiences which, rather than being graphically compelling, play to the strengths of mobile VR. That’s understandable, especially given that one of the benefits of the Google Cardboard-based approach is the exclusion of wires. It just made me wonder how long it will be before we see phones with Thunderbolt 3 ports through which an external GPU can be added (and kept in the user’s pocket) to boost the graphical fidelity of the experience; alas, no comment on that. Using Gear headsets would certainly lower the cost of our games nights at the University of Hertfordshire.

Browser-Based Virtual Reality

Of note was the work by a company called ‘PlayCanvas’. They’re focusing on a browser-based approach to 3D and have developed a beautiful suite of tools designed to help get real-time 3D content working through WebGL and HTML5. It’s worth noting that one of the main reasons I adopted Unity 3D so many years ago was its ability to render 3D content online, in the browser, simply by going to a web address and downloading a plugin. Sadly, Unity’s browser-based solution is lacking at present (most browsers have disabled the use of such plugins), though it promises great things as they’re also looking at similar solutions. Interestingly, the PlayCanvas engine comes in at around 125KB, which is a fantastic achievement in optimisation and efficiency. PlayCanvas are also looking at interfacing directly with VR headsets from within the web browser, which would increase accessibility by an order of magnitude.

Final Thoughts

There’s so much to talk about and take in when attending these conferences, but yet another curiosity concerned interface devices or, more specifically, seats with feedback mechanisms. Representatives of a Kickstarter project called VRGo were on the scene showcasing their product, a seat through which you navigate a virtual environment by leaning in different directions. It’s certainly the right way to go about things when considering the implications for health and safety as, once someone stands up, virtual reality has proven to be quite unsafe (many people end up losing their balance due to conflicting brain signals regarding spatial awareness).

I regret arriving a little late and missing the Sony VR presentation but, all in all, it was a great day out and well worth the six-hour round trip (on the same day) to see what people were doing. I remain very excited by the work nDreams are doing and hope to factor it into my PhD studies. It’s a great time to be in this industry as an entirely new frontier unfolds in front of us. There are questions not just of usability and engagement but also of ethics and of how to develop content responsibly, so as not to contribute to the growing problems of escapism through gaming and internet addiction. I truly believe that the VR experience has the potential to improve the human condition and contribute to our collective mental evolution. Let’s see how much we can get right at the start and set a precedent for a tradition of excellence.

3D Land Law

It’s often difficult to visualise something in a medium as text-heavy as law, though an opportunity for creativity presented itself recently. A colleague approached me with a view to enhancing a sketch he had made with a 3D representation, which is now being used to help convey the principles of land law. Due to time constraints, there were limits to what we could achieve, so I decided the best use of our time would be to work with Google SketchUp. The end result is what you see in the video below.

The idea is that students can navigate the scene by downloading the SketchUp viewer and pressing the scene buttons to focus the camera on various points of interest. It has more use as an in-class tool, however, as it’s a nice way to initiate discussions around boundaries and trespassing. The goal, over the course of the semester, is to make the implementation browser-based, so students can navigate the 3D model from inside the browser. Given the subject, it’s not safe to rely on a user’s prior exposure to 3D, be it through gaming or otherwise, so the interface needs to be intuitive and simple to pick up. A simple set of scene buttons serves us well in that respect.

Oculus Rift – Development Kit 2

I’ve recently been loaned an Oculus Rift with a view to creating a few technical demonstrations of what the head-mounted display is capable of. This will go some way towards allowing me to obtain closure on my workflow, as I’ve been at odds for some time over which devices I ought to spend my time developing for. The primary annoyance is that, though my authoring environment of choice, Unity, exports to multiple platforms, compatibility problems still exist. The libraries I use for Windows executables might not work as well on Android and, with the death of the Unity browser plugin, I am now limited to HTML5/WebGL in that particular medium.


Windows 10 offers some hope in that it is attempting to converge devices ranging from mobile to desktop, something which initially made me wary, given that something of a standard (iOS and Android for mobile) was already stably in place. I didn’t see a compelling reason to have that ecosystem disturbed until the devices started getting more powerful and, rather than being used purely as portable devices, began to branch into desktop usage scenarios.

That suits me fine; it is always easier to stick to one platform provided it remains flexible, and that’s something Windows 10 promises. The university is investing in technology and, with it, some Surface Pro tablets. The affordance this provides is a standardised set of hardware for which to develop ‘normal Windows-compatible software’ in a mobile/tablet form factor. That is quite powerful and frees us up to develop new ideas for classroom-based learning in higher education.

Inclusivity and engagement remain key factors. It remains to be seen how this pans out, but I’ll be posting my experiments with the Oculus Rift in due course.

Unity 3D realistic terrain data

In my quest to model a believable glacial environment, I’ve resorted to using real-world terrain data. It used to be that I would sculpt terrain by hand and rely on Perlin noise generators to produce something I could use as a base. The video below shows some of the things I’ve been working on and offers a bit more of an explanation.

The difficulty comes with having to make the scene explorable from a first-person perspective. By and large, terrain data doesn’t offer high enough detail to make this achievable, so there is still a lot of work to be done by way of manually adding erosion and creating textures for close-up viewing of the scene. With a high-detail scene like this, you can rely on tessellation to keep it optimised, but another approach is to generate the distant scenery (anything which won’t be explored on foot) from a low-detail height map. It’s still quite early in the development process, and this is a continuation of a previous project which has recently attracted renewed interest.
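As an aside, the kind of base terrain I used to produce with noise generators can be sketched in a few lines: sum several octaves of smoothly interpolated random grids (value noise, a simple cousin of Perlin noise) and normalise the result into a height map. Below is an illustrative Python/NumPy version of the idea (my own sketch, not the workflow shown in the video):

```python
import numpy as np

def upsample_bilinear(grid: np.ndarray, size: int) -> np.ndarray:
    """Bilinearly resample a small square grid up to (size, size)."""
    src = np.linspace(0, grid.shape[0] - 1, size)
    x0 = np.floor(src).astype(int)
    x1 = np.minimum(x0 + 1, grid.shape[0] - 1)
    t = src - x0
    # Interpolate along rows first, then along columns.
    rows = grid[x0] * (1 - t)[:, None] + grid[x1] * t[:, None]
    return rows[:, x0] * (1 - t)[None, :] + rows[:, x1] * t[None, :]

def fractal_heightmap(size: int = 257, octaves: int = 5,
                      persistence: float = 0.5, seed: int = 0) -> np.ndarray:
    """Sum octaves of interpolated random noise into a 0..1 height map."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude, cells = 1.0, 2
    for _ in range(octaves):
        coarse = rng.random((cells + 1, cells + 1))
        height += amplitude * upsample_bilinear(coarse, size)
        amplitude *= persistence  # each octave adds finer, fainter detail
        cells *= 2
    return height / height.max()
```

Real-world terrain data simply replaces the random grids with measured elevations; either way, the resulting grid can be imported into Unity’s terrain system as a height map.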