
Demonstrating Virtual Reality at the University of Hertfordshire

It’s not often that we get the chance to demonstrate what we’re working on as, in education, it’s more about the finished product and conclusions. Nevertheless, during the HEaTED East of England network event, we were given the opportunity to let people wander around a bespoke 3D environment using our HTC Vive headset and motion controllers. We used the controllers themselves to emulate the functional behaviours of a smartphone. The reactions were all positive and I managed to chat with some senior managers about where we’re hoping to take our vision for VR at the University.

You can find a brief video depicting the space below:

The idea was to illustrate how intuitive behaviours can be replicated inside a 3D space to allow for simple interactions. A lot of people tend to be confused by their initial transition into a virtual world, but including recognisable elements makes the experience much less daunting. It’s for that reason that the 3D environment in question is a lecture theatre, based on a real-world equivalent only a few metres away from the stand. This made the experience all the more compelling as, after having spent a few minutes in the 3D version, attendees would then enter the same room shortly afterwards.

We’re going to be demonstrating again in the near future in a bid to capture the imaginations of a few academics. VR promises a lot of interesting things, everything from multi-user role-playing exercises between people in two different locations (such as partner institutions overseas) to single-user familiarisation exercises. We’re hoping to establish some more use cases.

HTC Vive Commercial Release – First Impressions

Shortly after receiving our HTC Vive, I rushed to set everything up in a bid to sample the delights of the virtual reality applications available through Steam. For those of you unfamiliar with Steam, it’s an online content distribution service, initially set up for gaming, which has since diversified its offerings to reach wider audiences. We’re hoping to improve the student experience by creating engaging visual content for use in our concept classrooms, and the promise of virtual reality in this area is quite something.

Unboxing the headset and its accompanying assortment of wires made me wonder how portable a solution the Vive could be. Much of what we do involves showing others what can be done in the classroom and it’s clear that, at the moment, working with a head-mounted display is something best kept to dedicated spaces – unless, that is, you have a dedicated team of technical support staff on hand. As a University with a “Learning and Teaching Innovation Centre”, we’re quite fortunate in that regard.

Initially, the headset wouldn’t connect to my laptop, which only had VGA and DisplayPort outputs. The HTC Vive comes with just an HDMI cable (despite also having a Mini DisplayPort), so I had to purchase a Mini DisplayPort to DisplayPort cable separately. Once it arrived, everything worked beautifully and I invited everyone in to have a go with some of “The Lab” demos on Steam, along with “theBlu”, a marine life experience wherein the user is surrounded by schools of fish and underwater flora, all of which are interactive and react to being touched by the controllers.

People were ducking down to crawl through some of the underwater arches and flinching as a whale got a little too close for comfort, its giant, reflective eye giving a knowing wink. All of this took place both on the headset and on the laptop display, allowing others to see what the user was experiencing. The emotional bandwidth of these experiences is nothing short of amazing, and I say that after having used the Oculus Rift Development Kit 2 extensively. The affordance of the Vive is that, as described, it allows you to physically walk around and interact with an environment using your body, whereas with the Oculus you are currently required to use a gamepad. This will no doubt change in the future but, as of writing, the HTC Vive is where we are likely to be focusing our virtual reality development.

It’s worth mentioning that the laptop we used ran the 3D experiences poorly – around 25 frames per second – despite having an i7-4290MQ and 32 GB of RAM, due to an underperforming graphics chip (a Quadro FX). It just goes to show that you can have a machine which is incredibly fast for video and high-resolution image editing yet, without a proper gaming GPU, it will not perform well. There are a number of 3D benchmarks you can consult to see if your hardware is up to scratch. I opted to use a laptop only because it made for a much simpler setup; I will be bringing out the big guns for future demonstrations.
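To put those frame rates in context, the arithmetic is worth spelling out: a 90 Hz headset like the Vive leaves roughly 11 ms to render each frame, while 25 frames per second means each frame took 40 ms. A minimal sketch of the sums (the function name is my own, for illustration only):

```python
# Frame-budget arithmetic for VR demos. The 90 Hz figure is the Vive's
# refresh rate; 25 fps is what our Quadro-equipped laptop managed.
def frame_time_ms(fps):
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

vive_budget = frame_time_ms(90)   # ~11.1 ms available per frame
laptop_cost = frame_time_ms(25)   # 40 ms actually spent per frame

print(f"budget {vive_budget:.1f} ms, spent {laptop_cost:.1f} ms "
      f"({laptop_cost / vive_budget:.1f}x over)")
```

At over three and a half times the budget per frame, no amount of tweaking would have rescued that laptop.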

I’ll be posting more as we continue to experiment with things. At the moment, we’re brainstorming some usage scenarios involving role-play exercises.

The Logistics of Virtual Reality and Thunderbolt 3

The end game for many technologies involves integrating seamlessly with our being, turning us into space-dwelling cyborgs. The problem is that, while the process of miniaturisation is always in motion, there will always be a suite of technologies on the fringe which have yet to undergo such optimisation and which start out in something of a clunky state. Virtual reality headsets are such a technology: new, large and cumbersome.

While phone-based experiences, made popular by the Gear VR, hold promise as a lightweight solution without wires, they remain expensive. Upgrading to a new phone is also problematic: when all one needs is a faster GPU, the only option is to purchase an entirely new phone (a general-purpose device) which happens to have a faster graphics chip. This is an incredibly inefficient economy.

Perhaps it’s not as much of a concern for a single user but for a large organisation looking to invest in such technologies, it presents something of a challenge. Do universities invest in VR laboratories or do they come up with something more flexible?

I don’t doubt that the future of VR involves the use of specialist equipment and spaces. To that end, a dedicated lab might present itself as a viable investment.

Valve's Lighthouse Tracking System

A team demonstrating Valve’s Lighthouse Tracking System – I have no affiliation with the people involved

In the meantime, however, during this period of innovation and testing, there are ways to make life easier. When giving demonstrations of VR within our institution, we either get people to come to our offices or we attempt to set up a small stand for the duration of a conference. The issue is that we always have to lug around a giant desktop computer containing the equivalent of three potato sacks’ worth of hardware.

You might think “why not use a laptop?” – the answer is that the integrated GPUs on these devices are not upgradeable. We would need to spend thousands on a machine fast enough to run a VR experience only to have it become redundant overnight. The answer lies in the Thunderbolt 3 port, best described as USB 3 on steroids.

With such a port, you can directly connect an external GPU to any compatible device, no matter how small. This means you could invest in a NUC with Thunderbolt 3 connectivity and have a graphical powerhouse which occupies a tiny amount of desk space.

While some newer laptops are sporting these connectors, it’s worth waiting until the standard through which external GPUs interact is confirmed. The beauty of the solution is also that, due to its massive bandwidth, rather than having three to five wires connecting the headset, in the future there can be just one. Wireless connectivity is also catching up, with wireless video now proving itself usable for gaming.

To sum up, it’s worth waiting before investing in a long-term VR solution unless you have an application which has already proven itself robust and workable on current-generation technology. If you want to be an early adopter (out of personal interest or for reasons of experimentation), there is already plenty of choice – just be aware that until VR really comes into its own, we are only witnessing the tip of the iceberg.

Experiments with Photogrammetry

Photogrammetry is the means by which a 3D model can be constructed using photos as the sole input. It’s used extensively in archaeology to document artefacts in such a way that specimens and sites can be explored in great detail without fear of having the original crumble into dust upon touch. I recently made a post about the visual fidelity of a game called The Vanishing of Ethan Carter; the game is beautiful and the studio responsible for it made extensive use of photogrammetry in its workflow.

The problem with using it in gaming is that a lot of editing is needed to make the art assets fit for purpose. The geometry needs to be optimised to ensure it remains playable on lower-end hardware, and holes in the model, which are symptomatic of missing data, need filling in. It’s not quite as easy as just taking a bunch of photos and dropping them into a game engine.
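One small part of that clean-up can be sketched in code. Scanned meshes often contain duplicated vertices along seams, and welding them is a typical first optimisation step. This is a toy illustration of the idea (a naive search, with names of my own invention), not how any particular package implements it:

```python
def weld_vertices(vertices, triangles, tol=1e-4):
    """Merge vertices closer than `tol` and reindex the triangles.

    `vertices` is a list of (x, y, z) tuples, `triangles` a list of
    index triples. Returns the deduplicated vertex list and triangles,
    dropping any triangle whose corners collapsed together.
    """
    merged = []   # unique vertex positions
    remap = []    # old vertex index -> new vertex index
    for v in vertices:
        for i, u in enumerate(merged):
            if all(abs(a - b) <= tol for a, b in zip(v, u)):
                remap.append(i)
                break
        else:
            remap.append(len(merged))
            merged.append(v)
    new_tris = [tuple(remap[i] for i in tri) for tri in triangles]
    # Drop degenerate triangles whose corners merged into one another
    new_tris = [t for t in new_tris if len(set(t)) == 3]
    return merged, new_tris
```

Two triangles sharing an edge, with the shared corner stored twice, weld down to four unique vertices. Real mesh tools use spatial hashing rather than a pairwise scan, but the principle is the same.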

I’ve decided to try my hand at it, using a souvenir I bought on holiday. See below for the subject matter and end results.

“Photogrammetry Testing” by Andrew Marunchak on Sketchfab

Everything depends on the quality of the source materials: if the photos are blurry or the lighting is inconsistent, problems will begin to emerge. I used the free version of PhotoScan for this and I haven’t done much touching up of the above model, but you can see that it’s somewhat faithful to the original subject. Areas the camera couldn’t see due to occlusion remain missing from the 3D mesh data. See the images below for photos and screenshots of both the physical subject and the authoring environment.
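Blurry inputs are the most common culprit, and they can be screened out before reconstruction even begins. A widely used sharpness heuristic is the variance of the image’s Laplacian: blurry photos score low. A minimal sketch on plain greyscale pixel arrays (no particular imaging library assumed):

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian of a greyscale image,
    given as a list of rows of pixel values. Low scores suggest a
    blurry photo worth reshooting before reconstruction."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A flat image scores zero while a sharp, high-contrast one scores high; a threshold tuned per camera decides which shots get discarded from the set.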


This is a rather small object, so the camera really came through for me. The lighting conditions were ideal as well: there were no spotlights or inconsistent sunlight to worry about. The image below is of the authoring environment in PhotoScan.


It’s an order of magnitude cheaper than 3D laser scanning, though the circumstances in which it’s useful are fewer. Having said that, I think it’s going to be a welcome addition to our list of services. I’m also thinking of recreating the majority of the environmental assets in my mobile games photogrammetrically, though that’s far easier said than done!

3D Land Law

It’s often difficult to visualise something in a medium as text-heavy as law, though an opportunity for creativity made itself available recently. A colleague approached me with a view to enhancing a sketch he had made with a 3D representation, which is now being used to assist in conveying the principles of land law. Due to time constraints, there were limits to what we could achieve, so I decided the best use of our time would be to work with Google SketchUp. The end result is what you see in the video below.

The idea is that students are able to navigate the scene by downloading the SketchUp viewer and pressing the scene buttons, which move the camera to focus on various points of interest. It has more use as an in-class tool, however, as it’s a nice way to initiate discussions around boundaries and trespassing. The goal over the course of the semester is to make the implementation browser-based, so students can navigate the 3D model from inside the browser. Given the subject, it’s not safe to rely on a user’s prior exposure to 3D, be it through gaming or otherwise, so the interface needs to be intuitive and simple to pick up. A simple set of scene buttons seems to serve us well in that respect.
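The scene-button behaviour itself amounts to little more than gliding the camera between preset viewpoints. A sketch of the interpolation involved (function names and step count are mine, purely for illustration):

```python
def lerp(a, b, t):
    """Linear interpolation between two 3D points for t in [0, 1]."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def focus_path(camera_pos, poi_pos, steps=5):
    """Waypoints for a camera gliding from its current position to a
    point of interest, as a scene button might animate over a few frames."""
    return [lerp(camera_pos, poi_pos, i / steps) for i in range(steps + 1)]
```

An easing curve applied to t would make the motion feel less mechanical, but linear steps are enough to show the principle.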

Unity 3D realistic terrain data

In my quest to model a believable glacial environment, I’ve resorted to using real-world terrain data. It used to be that I would sculpt terrain by hand and rely on Perlin noise generators to produce something I could use as a base. The video below shows some of the things I’ve been working on and offers a bit more of an explanation.

The difficulty comes with having to make the scene explorable from a first-person perspective. By and large, terrain data doesn’t offer high enough detail to make this achievable, so there is still a lot of work to be done by way of manually adding erosion and creating textures for close-up viewing. With a high-detail scene like this, you can rely on tessellation to keep it optimised, but another approach is to generate the distant scenery (anything which won’t be explored on foot) from a low-detail height map. It’s still quite early in the development process, and this is a continuation of a previous project which has recently regained interest.
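Producing that low-detail height map for the distant scenery is essentially a downsampling step: average blocks of the source heights into one coarse cell each. A minimal sketch (the function is my own shorthand, not a Unity API):

```python
def downsample(heightmap, factor=2):
    """Average `factor` x `factor` blocks of a heightmap (a list of rows
    of height values) into a coarser grid, suitable for distant terrain
    that will never be explored on foot."""
    h, w = len(heightmap), len(heightmap[0])
    return [[sum(heightmap[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]
```

Each halving of resolution quarters the vertex count, which is why distant terrain can afford to be so cheap.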

Implicit Learning and Information Bandwidth

This is the link to the original blog post I made on our team site at the Learning and Teaching Innovation Centre, University of Hertfordshire.

Something as nebulous as the notion of an ‘information age’ is perhaps best described by metaphor. Imagine, if you will, a raging river within the centre of which stands a protruding rock. Through erosion, this rock is shaped by the forces of an unrelenting torrent which it is unable to control and eventually its identity, insofar as it has one, will succumb to the ‘chaos’ of its surrounding environment.

It is, without a doubt, overly dramatic, but it serves to communicate the principle that, every day of our lives, we are exposed to an ever-increasing volume of information. Where we differ from the rock, which is inanimate, is in our ability to take action based on our own discernment; we decide whether something is worth paying attention to.

The role of the traditional educational establishment is changing rapidly; information is not quite as exclusive as it once was and, in the future, its prevalence will force us to question how our time developing relevant subject knowledge is best spent. We are no longer gathering around an oasis in the desert; we have a choice.

Restricting information is difficult given the number of mediums it is channeled through, whether that be in paper-based form, blogs, video sharing, text messaging or audio podcasts. We are on the verge of these aforementioned examples integrating almost completely seamlessly with our daily lives through innovations such as tablet computing, smartphones and perhaps even the imminent prospect of Google Glass.

In the history of our race, information has never been so accessible. The affordances it provides us with are numerous though, insofar as downsides are concerned, much of it is often ‘noise’ with a distracting influence. Therein lies the role of the modern university, to use as many vectors of dissemination as are available to us to their greatest effect – thereby informing the student of that which is useful.

There are exciting developments on the horizon. As the visual medium becomes more accessible to people through computing, we will begin to see a convergence of disciplines which have, traditionally, been deemed mutually exclusive by consensus. Everything from elaborate technical visualisations to explorable 3D environments is now within our reach. Such are the fruits of the union between computer science and the creative arts. Though as wonderful a vision as it is, we need to take accessibility into account. What good is something so beautiful if it can’t be seen by the majority of users? What are the alternatives?

Those are some of the questions we need to consider when working towards future trends which, in practice, is something of a balancing act. New technologies at the forefront exist in something of a niche, while lagging too far behind is an exercise in redundancy. We have to be malleable, otherwise we risk sharing the fate of that rock in the river. Rather than standing against overwhelming forces, we move with them and guide their flow. That in itself is a metaphor for cultivation and our progressive evolution.

3D Glacial Hydrology

I’ve posted this for posterity; it’s one of the projects I completed during my master’s, a tool I put together to assist with teaching glacial hydrology. That’s the science of how water passes through those massive ice giants which carve their paths through mountains, shifting with forces so great they change the shape of our planet. It’s likely that we will not be seeing these spectacles in a few decades, as they are rapidly vanishing due to climate change. There are places where they re-form but, for the most part, they’re being lost. The project is a tribute to their magnificence.

Apologies for the half-hearted narration in the video, not my best!

Health and Safety Training Environment (Unity 3D Prototype)

Yet another project I’m working on, this time a health and safety training environment for the University of Hertfordshire. The purpose of this particular app is to allow staff to assess the ability of potential employees to spot dangerous situations in an office environment: things like low chairs, wires trailing across the floor, screen glare and so on.


I’m going to try to keep the polycount low on this one as it’s going to be delivered via a web browser and you can never tell how fast another user’s machine is. To that end, I’ve allowed the user to switch between quality settings and might make some super-low-poly models for the ultra-low setting.
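The quality-switching idea boils down to serving a different mesh variant per tier, falling back to a cheaper one when a tier has no dedicated model. A sketch of that lookup logic (every asset name and tier label here is hypothetical, invented for illustration):

```python
# Hypothetical asset table: each prop maps quality tiers to mesh files.
# Not every tier needs its own model; absent tiers fall through to
# the next-cheapest variant.
LOD_VARIANTS = {
    "office_chair": {"high": "chair_5k.mesh", "ultra_low": "chair_150.mesh"},
}

def pick_model(asset, quality):
    """Return the mesh for a quality tier, falling back to cheaper
    tiers when no variant exists at the requested one."""
    tiers = ["high", "low", "ultra_low"]
    variants = LOD_VARIANTS[asset]
    for tier in tiers[tiers.index(quality):]:
        if tier in variants:
            return variants[tier]
    raise KeyError(f"no variant at or below {quality!r} for {asset!r}")
```

The fall-through means artists only author the variants that actually matter, which keeps the asset list manageable on a small project.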


Again, I’m not using real-time lighting, as beautiful as the stuff is. This is mainly in consideration of performance issues. Having said that, there’s a lot you can do with baking your own lightmaps, as you can see in the shots.

Just a quick reminder to readers: although Unity 3D is marketed as a game engine, it is, at heart, a general-purpose 3D engine. You can do more with it than make games.





Work in progress – The Weston Auditorium at the University of Hertfordshire

One of my current projects involves translating the Weston Auditorium, found on the de Havilland campus of the University of Hertfordshire, into a 3D space. Again, I’m using Unity at the moment and my workflow involves 3ds Max, SketchUp and V-Ray (for lightmaps). Admittedly, my proficiency with V-Ray isn’t all that great yet, but I’m pleased with the current results.



The most annoying part of doing a visualisation is definitely obtaining the required measurements of an area. The floorplans I had received for this didn’t indicate height and the seating had changed somewhat since the initial design. I’m looking forward to a chance to be a bit more creative and just go crazy building some surrealistic environments. I’ve got some ideas for a futuristic, dystopian interpretation of London which I’m eager to get started on but it’ll have to take a back seat until my current workload diminishes.

The multi-user version in action:

The architectural video is below: