Author Archives: Andrew Marunchak

The Narrative of Weather in Gaming

The chaos of a raging storm and the tenderness of a falling snowflake both have the ability to evoke an emotional response. They set a scene and prepare the audience for what's to come. This audiovisual conveyance is what captures the attention of a user and provides a buffer for slower-paced storytelling, whether that's through gameplay or otherwise.

The first time I became enamoured with an in-game environment was after seeing the introduction sequence to Zelda: A Link to the Past running on the SNES. The scene has become iconic in gaming – the player wakes up in the midst of a thunderstorm, during the night, to follow after their uncle, who has left to rescue Zelda. It succeeded in exploiting human curiosity and maintaining dramatic tension.

Link braving the storm

Up until then I had been preoccupied with 2D platformers. These, while incredibly popular and having laid the foundations of their genre, lacked the depth that came with RPGs and had little to no focus on storytelling. Zelda was my first exposure to what a game could be: a prescribed emotional experience.

Since that time, plenty of titles, even those light on plot, have delved into using weather as a means of enhancing the narrative design. This is the idea of using a story element as a gameplay mechanic. Think of having to find shelter during intermittent gusts of wind or putting out a fire by summoning rain.

In Jotun, snow storms can cause harm to an unprotected player

Narrative design in the context of the user experience means making sure the interactions required to play are relevant to the story. There are plenty of great examples showing how weather and the environment can be used to enhance this process and I wanted to provide an awareness of some of the ones I find most interesting.

Stalker: Call of Pripyat

A ghost town in the aftermath of Chernobyl makes for a curious setting but there's more to it than that. In Stalker, the environment is sentient and at various intervals releases pent-up psychic energy in the form of 'emissions'. A siren sounds and the player has seconds to find shelter before the sky turns against everything beneath it, showering death in the form of red light. The sense of desperation this creates when the player is caught in the middle of nowhere is formidable, and the panic compromises any ability to think clearly.

An emission taking place in Stalker

If you’re lucky, there might be an abandoned building nearby or, if you look closely enough, an entrance to a sewer. There’s also the possibility that, in your mad visual sweeping of the surrounding area, you miss everything. Happily though, the minimap affords some mercy in that, during this time, it highlights the nearest safe zones.
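Stripped down, the mechanic is a timed threat plus a nearest-shelter lookup. Here's a minimal sketch of how such an emission event might be modelled – the shelter names, distances and timings are my own placeholders, not anything from Stalker's actual code:

```python
import math
import random

# Hypothetical shelters and emission event -- illustrative only.
SHELTERS = [
    {"name": "abandoned barn", "pos": (120.0, 45.0)},
    {"name": "sewer entrance", "pos": (30.0, 210.0)},
    {"name": "rusted bus", "pos": (300.0, 90.0)},
]

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def start_emission(player_pos, warning_seconds=30):
    """Trigger the siren, then point the player at the nearest safe zone."""
    nearest = min(SHELTERS, key=lambda s: distance(player_pos, s["pos"]))
    print(f"SIREN: emission in {warning_seconds}s!")
    print(f"Minimap highlight: {nearest['name']} "
          f"({distance(player_pos, nearest['pos']):.0f}m away)")
    return nearest

if __name__ == "__main__":
    start_emission(player_pos=(random.uniform(0, 300), random.uniform(0, 300)))
```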

Shenmue

As a revenge story and murder mystery, entire books could be written on Shenmue and its implementation of narrative design, but I promise to stay on topic. As part of its suite of innovative features, the environment changed as the story progressed. There were situations in which the player was required to be in a very specific place at a very specific time to initiate dialogue with a potential lead. Without any means of 'skipping time', the player is presented with the choice of whether to use it wisely or 'squander it'. Whether it's invested in training or playing darts is entirely at your whim and the outcome is always fun thanks to the minigames involved.

To everyone’s delight, on snowy days the environment would undergo an audiovisual metamorphosis. The quieter roads would accumulate far more snow and ice than the busy streets of the shopping district and the sound of your footsteps would reflect this change. Similarly, when it rained, pedestrians could be seen with umbrellas. Today, these adaptations no longer represent the technical feat they were at the time of release but credit goes to Shenmue for being the first to tackle dynamic open 3D environments in this manner.

Running around the town of Yokosuka in Shenmue

Dark and Light

The promise of Dark and Light was that of a vast open world spanning 40,000 square kilometres of terrain. While it succeeded in delivering this, the initial release back in 2006 was received poorly due to a lack of content. The other unique feature, in addition to its size, was that time played a very real role in the world. As the seasons passed, large bodies of water would freeze over, allowing players to cross on foot. Snow would also gather on mountains, letting players slide down on the backs of their shields, leaving trails in their wake.

Despite there being very little to do, the seasons added a lot of value to what would have otherwise been a blank canvas. By simply picking a direction and walking, one could become immersed in the visual representation of the journey. This wasn’t enough to save it however and the concept is being revisited as of 2017. I mention it here because it was one of the first games to attempt full seasonal transitions on such a large scale.

Left 4 Dead 2: Hard Rain

Cooperative zombie survival has never been more fun thanks to a unique twist in this campaign. The second half of the level requires the players to backtrack to a previous location, but in the middle of a torrential downpour. Visibility is severely diminished and the deafening roar of the rainfall drowns out the audio cues one would normally rely on to prepare for danger. The game also interferes with voice communication, making it more difficult to hear what your teammates are saying and forcing them to speak more loudly.

As if getting wet wasn’t bad enough

Zelda: Oracle of Seasons

A prevalent theme throughout this talk of weather and seasons is time. In Oracle of Seasons, its flow can be manipulated through the use of an item carried by the player. Moving from one season to another allows the environment to be experienced in one of four different states.

In summer, water sources evaporate and allow the player to traverse the bottom of a lake or riverbed.
In winter, bodies of water freeze thereby allowing the player to cross, unimpeded.
In spring, the winter snow thaws causing floods and a rise in water levels.
In autumn, leaves fall and create traversable bridges over holes in the ground.

Summer and winter in Oracle of Seasons

The ability to control the environment provides an opportunity for creative problem solving. Players also tend to have favourite seasons and default to these as the opportunity arises.
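For the curious, the whole mechanic boils down to the same map being evaluated against a different season value. A toy sketch of the idea (my own names and rules, not Nintendo's implementation) might look like this:

```python
from enum import Enum

# A toy model of the Oracle of Seasons idea: the same tile behaves
# differently depending on the current season.

class Season(Enum):
    SPRING = "spring"
    SUMMER = "summer"
    AUTUMN = "autumn"
    WINTER = "winter"

def is_walkable(tile, season):
    """Decide whether the player can cross a tile in the given season."""
    if tile == "water":
        # Dry riverbed in summer, frozen surface in winter.
        return season in (Season.SUMMER, Season.WINTER)
    if tile == "hole":
        # Fallen leaves bridge holes in autumn.
        return season == Season.AUTUMN
    if tile == "flooded_bank":
        # Spring meltwater raises the level and blocks the bank.
        return season != Season.SPRING
    return True

# Swinging the season-changing rod is then just swapping one value and
# re-evaluating the same map.
print(is_walkable("water", Season.WINTER))  # True - cross the frozen lake
print(is_walkable("hole", Season.SUMMER))   # False - no leaf bridge yet
```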

There are many more games which use these narrative devices; honourable mentions go to 'The Long Dark' for its focus on surviving the cold and 'Metal Gear Solid' for its creative use of camouflage to blend in with different environments and weather.

That's all for this week. I'm going to continue exploring narrative devices over the next few posts, as they fascinate me more the more I think on them. See you soon!

Beyond Inventory Systems

Going on an adventure means, at some point, having to look through your pockets for those all-important bits of kit. Whether it's a map, compass or gauntlet of power, you need to make sure your preparations weren't in vain and that you'll last long enough to reach the next safe spot. Inventory systems are a core component of many genres and aren't exclusive to role-playing games. Choosing your loadout in a first-person shooter, making sure your long- and short-range options are carefully considered, also qualifies.

Rather than focusing on the different ways of handling a grid of images (which is where most discussion on this topic seems to get stuck), I wanted to reflect on some great examples of how item storage can be tied more closely to the narrative elements used to enhance the emotional conveyance of an experience.

Survival Horror – Safe Rooms

In gaming, the survival horror genre came of age when titles like Resident Evil and Silent Hill were released. What makes them special, in my eyes, is how they paced the action gameplay. Much of the experience involved exploring environments and solving puzzles. Inevitably, through the sheer volume of discoverable items, the player was required to decide what to take on the next trip outside the confines of the safe room.

The safe room is, as implied, an area of sanctuary wherein an opportunity to save game progress and shuffle through an item bank exists. As a standalone mechanic, it doesn’t sound remarkable but, when considered in the context of an incredibly tense gameplay environment, it provides an indescribable sense of relief. There is a genuine feeling of progression when entering into one of these rooms, even with all the backtracking the gameplay demands.

An item box in a save room – Resident Evil

Roguelikes – Permanent death

Roguelike games are defined by a number of characteristics, the most prevalent of which are permanent death (one life only) and procedurally generated content. If you visit any online community wherein this genre is being actively discussed, you’ll often find heated debate surrounding what constitutes a true roguelike experience versus a more diluted one, usually referred to as a ‘rogue-lite’. That aside, these sorts of games have a lot to offer by way of variety and much of the experimentation seen in the indie gaming scene, these days at least, occurs in this genre.

Item storage in roguelikes is tricky business to say the least. Upon death, any evidence that the player once walked the world is usually erased from existence. There are a few exceptions however and they remain, to this day, unique in their implementation.

Nethack is an older game which, much like those before it, is rendered via ASCII characters. Though never designed to be visually compelling, it delivers an incredibly challenging dungeon crawl in which all learning is implicit. That is to say, you learn through trial and error and, true to form in this pit of despair, most experiences end in utter failure. Upon death, however, there is a chance that the game will retain the state of the level so it can be loaded again in a future play session. After encountering the ghost of the previous player, there is an opportunity to recover their inventory, with the caveat that much of it may be 'cursed'.

As implied, anything which is cursed shouldn't be used unless the player is able to mitigate the risks, whether those are unpredictable behaviours, reduced effectiveness or something directly harmful to the user.
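The underlying 'bones' idea is surprisingly simple to sketch: on death, sometimes persist the level and the dead adventurer's loot, flagging much of it as cursed. The snippet below is my own illustrative approximation, not NetHack's actual file format or probabilities:

```python
import json
import random

# Illustrative 'bones' persistence: on death there is a chance the level
# (and the dead adventurer's inventory) is written out, to be rediscovered
# in a later run. File name and odds are placeholders.
BONES_FILE = "bones_level_3.json"

def on_player_death(level_state, inventory, keep_chance=0.3):
    """Occasionally save the level so a future run can stumble upon it."""
    if random.random() < keep_chance:
        ghost_loot = []
        for item in inventory:
            item = dict(item)
            # Much of what the ghost leaves behind may be cursed.
            item["cursed"] = random.random() < 0.5
            ghost_loot.append(item)
        with open(BONES_FILE, "w") as f:
            json.dump({"level": level_state, "ghost_loot": ghost_loot}, f)

def try_load_bones():
    """Return a previously saved level, if one exists, else None."""
    try:
        with open(BONES_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return None
```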

Open Worlds – Morrowind

Despite their current prevalence, open world games weren't always so popular. The big change in recent times has been the focus placed on the user/player experience, with titles like World of Warcraft raising what was, admittedly, a rather low baseline. Much has been learned about how players interact with large environments and how we, as humans, attribute meaning to our actions. To keep this commentary on track though, I'm going to use The Elder Scrolls: Morrowind as my primary example.

Dr Who and his TARDIS in the lands of Morrowind

In Morrowind, you are homeless for the majority of the game. Disembarking the Imperial prison ship, without a penny to your name, you find yourself scorned by all those who inhabit the realm. Once you find your way, however, it's not long before you realise that you can't carry everything you 'find' / steal.

Unlike most modern games which seek to optimise system resources like memory, in Morrowind, if you drop something on the floor it stays there. There are no magical cleaning fairies employed by some centrally-funded government program and that means you can find a ditch someplace and make it your home. As the player ekes out their miserable existence, they may find themselves living anonymously in someone’s attic or, as I preferred, the balcony of the thieves guild in Balmora. Granted, the game does provide you with a small house which you inherit after the passing of a friend though, for many, the joy of the game lies in claiming a nest of their choosing and filling it with all manner of shiny objects.

That's all for now. There are many more examples of excellence, though the takeaway for me is that even something as simple as an item bank can be a powerful tool in a gaming narrative when used creatively. There are things I want to experiment with, such as allowing a player to bury items and mark the location on a map; I see it as a more acceptable way of keeping an environment clutter-free whilst providing an opportunity to personalise an isolated area. A rough sketch of the idea follows below.
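As a first pass, that buried-cache idea could be as simple as storing item data against a map marker rather than leaving objects lying around the world. The sketch below is purely hypothetical – my own names and structure, nothing from an existing game:

```python
from dataclasses import dataclass, field

# Hypothetical buried-cache system: the world stays clutter-free because a
# buried stash is just a map marker plus stored data, not dropped objects.

@dataclass
class Cache:
    x: float
    y: float
    items: list = field(default_factory=list)
    note: str = ""

class WorldMap:
    def __init__(self):
        self.caches = []

    def bury(self, x, y, items, note=""):
        """Bury items at (x, y) and return the marker shown on the map."""
        cache = Cache(x, y, list(items), note)
        self.caches.append(cache)
        return cache

    def dig(self, x, y, radius=2.0):
        """Recover a cache if the player digs close enough to its marker."""
        for cache in self.caches:
            if abs(cache.x - x) <= radius and abs(cache.y - y) <= radius:
                self.caches.remove(cache)
                return cache.items
        return []

world = WorldMap()
world.bury(104.5, 88.0, ["rusty sword", "50 coins"], note="under the old oak")
print(world.dig(105.0, 88.5))  # ['rusty sword', '50 coins']
```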

I’ll be posting more on my thoughts in the near future.

Cartography in Gaming

To this day, I continue to get excited whenever I see a game map. I feel the urge to explore and my imagination goes into overdrive, creating stories for the areas depicted and quickly convincing myself that the world is more expansive and fantastic than it usually ends up being. This 'runaway imagination train' effect is something I always look to experience when I pick up a new game and hope to be able to replicate in future projects.

In games, maps tend to provide context. They show us where we sit in the scheme of things and sometimes allow us to gauge our progress en route to a terminal destination, be it a snowy mountain or the lair of a fire-breathing dragon. It's also the role of a map to further enhance the narrative elements, often through some sort of visual abstraction. To sustain the emotional bandwidth which keeps a player immersed is no simple feat, and the map is another tool the developer has at his/her disposal.


A stylized map from Final Fantasy: The Four Heroes of Light

Cartography is a complex idea and goes beyond being a substitute word for ‘maps’. There are many different ways of portraying an environment ranging from photo-realism to more thematic approaches.

While a map is designed to convey information, it's normally of one type. Whether it's a depiction of political boundaries or the location of valuable resources, it's rarely a good use of space to pack in as much visual 'noise' as possible. The idea that complexity is synonymous with depth isn't always right and often leads to confusion.


An example of ancient man getting things wrong again #easytarget

Maps in games are often romantic and, in the spirit of nostalgia, I created an homage to one of my favourite MMO games of all time, 'Ultima Online', hereafter referred to as 'UO'. Much of the content in UO was player generated and the real tragedy was that it was incredibly hard to find from inside the game. People would often spend hours on forums attempting to locate points of interest and the player experience usually suffered from this disconnect. The video below is based on some of the locations found on an unofficial player-run server from some years ago. I attempted to mimic the functionality of Google Maps with a view to expanding on some of the features further though, as time passed, this distraction lost its appeal (shortly after I realised what I had gotten myself into)!

Accompanied by the OST
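For anyone wondering what 'mimicking Google Maps' involves under the hood, the core trick is slicing one enormous map image into a pyramid of small tiles and only loading the ones currently on screen. The maths below is the generic 'slippy map' scheme rather than my exact code; the tile size, zoom levels and file layout are illustrative:

```python
# Generic 'slippy map' tile maths -- illustrative numbers, not my UO code.
TILE_SIZE = 256   # pixels per tile edge
MAX_ZOOM = 5      # full resolution; each lower zoom level halves the map

def tile_for_point(x, y, zoom):
    """Return the (column, row) of the tile containing full-resolution
    pixel (x, y) when the map is viewed at the given zoom level."""
    scale = 2 ** (zoom - MAX_ZOOM)        # 1.0 at max zoom, then 0.5, 0.25, ...
    col = int((x * scale) // TILE_SIZE)
    row = int((y * scale) // TILE_SIZE)
    return col, row

def tile_url(x, y, zoom):
    """Build a path into a hypothetical pre-sliced tile folder."""
    col, row = tile_for_point(x, y, zoom)
    return f"tiles/{zoom}/{col}_{row}.png"

# A point of interest at full-resolution pixel (5120, 3072), zoomed out twice:
print(tile_url(5120, 3072, zoom=3))  # tiles/3/5_3.png
```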

Games like UO and Everquest initially shipped with cloth maps depicting the game world. This, in addition to being a lovely collector's item, served to bewitch people by manipulating their escapist tendencies, drawing them away from their loved ones and back into the imaginary world whilst draining their life energies as they suckled on the poisonous, proverbial teat of the fearsomely addictive early generation MMOs. There are even games which allow the player to create his/her own maps and this remains a hallmark feature of titles like Etrian Odyssey on the Nintendo 3DS.

That's all I have to say on this subject at the moment. I've avoided talking about minimaps in games as that's not what I'm exploring currently, though I acknowledge their importance. My curiosity lies in seeing how emotional conveyance can be enhanced by the use of visual aids, and I will no doubt be revisiting these ideas in the future.

Demonstrating Virtual Reality at the University of Hertfordshire

It's not often that we get the chance to demonstrate what we're working on as, in education, it's more about the finished product and conclusions. Nevertheless, during the HEaTED East of England network event, we were given the opportunity to allow people to wander around a bespoke 3D environment while using our HTC Vive headset and touch controllers. We used the controllers themselves to emulate the functional behaviours of a smartphone. The reactions were all positive and I managed to have chats with some senior managers about where it is we're hoping to take our vision for VR at the University.

You can find a brief video depicting the space below:



The idea was to illustrate how intuitive behaviours can be replicated inside a 3D space to allow for simple interactions. A lot of people tend to be confused by their initial transition into a virtual world, but including recognisable elements makes the experience much less daunting. It's for that reason the 3D environment in question is a lecture theatre, based on a real-world equivalent only a few metres away from the stand. This made the experience all the more compelling as, after having spent a few minutes in the 3D version, the attendees would then enter the same room shortly afterwards.

We're going to be demonstrating again in the near future in a bid to capture the imaginations of a few academics. VR promises a lot of interesting things, everything from multi-user role-playing exercises between people in two different locations (partner institutions overseas) to single-user familiarisation exercises. We're hoping to establish some more usage scenarios.

The HTC Vive in a smaller space

Granted, there are plenty of videos showing how the Vive works in large, open environments but few of them deal with the real-world scenarios faced by many when it comes to introducing virtual reality to their living rooms. Much like the Nintendo Wii, which needed space to allow the user to swing their arms around as they participated in sword fights, golf and bowling, the Vive needs even more room to do it justice.

Having said that, it seems to work incredibly well, even in something of a cramped space. I don’t doubt that, as the prevalence of VR continues to grow, we may undergo a radical culture shift involving how we set up our home environments. With that in mind, please enjoy the quick 60 second video I’ve put together below.


As seen in the video, you can still have an incredibly immersive experience even while remaining in the same spot, just be sure to move your tea mugs out of arm's reach as it's not a question of 'if' but 'when' you knock them over.

The demos shown are just a few of those available through the Steam store on the PC. They're a hint at what's to come in the future and the promise offered by applications like 'Big Screen' is likely to impact everything from individual to collaborative working environments. Although I only show it from a single-user perspective, you can have multi-user sessions over the internet and have people join remotely to share a virtual space, all whilst sharing their individual screens (security risks need to be considered in that respect).

I’m hoping to create a few more videos in the future as they’re able to convey things far more eloquently than with words alone.

HTC Vive Commercial Release – First Impressions

Shortly after receiving our HTC Vive, I rushed to set everything up in a bid to sample the delights of the virtual reality applications available through Steam. For those of you unfamiliar with Steam, it's an online content distribution service which was initially set up for gaming but has since diversified its offerings in a bid to reach wider audiences. We're hoping to improve the student experience by creating engaging visual content for use in our concept classrooms and the promise of virtual reality in this area is quite something.




Unboxing the headset and its accompanying assortment of wires made me wonder how portable a solution the Vive could be. Much of what we do involves showing others what can be done in the classroom and it’s clear that, at the moment, working with a head-mounted display is something which is best kept to dedicated spaces. That is unless you have a dedicated team of technical support staff on hand. As a University with a “Learning and Teaching Innovation Centre“, we’re quite fortunate in that regard.

Initially, the headset wouldn't connect to my laptop, which only had VGA and DisplayPort outputs. The HTC Vive comes with just an HDMI cable (despite also having a mini DisplayPort) and so I had to purchase a mini DisplayPort to DisplayPort cable separately. Once it arrived, everything worked beautifully and I invited everyone in to have a go with some of the demos from "The Lab" on Steam along with "theBlu", a marine life experience wherein the user is surrounded by schools of fish and underwater flora, all of which are interactive and react to being touched by the controllers.

People were ducking down in order to crawl through some of the underwater arches and flinching as a whale got a little too close for comfort, though not before its giant, reflective eye gave a knowing wink. All of this took place both on the headset and on the laptop display, allowing others to see what the user was experiencing. The emotional bandwidth of these experiences is nothing short of amazing and I say that after having used the Oculus Rift Devkit 2 extensively. The affordance of the Vive is that, as described, it allows you to physically walk around and interact with an environment using your body, whereas with the Oculus you are currently required to use a joypad. This will no doubt change in the future but, as of writing this, the HTC Vive is where we are likely to be focusing our virtual reality development.

It's worth mentioning that the laptop we used ran the 3D experiences poorly – around 25 frames per second – despite having an i7-4290MQ with 32GB of RAM, due to an underperforming graphics chip (a Quadro FX). It just goes to show that you can have a machine which is incredibly fast for video and high-resolution image editing yet, without a proper gaming GPU, it will not perform well. There are a number of 3D benchmarks you can consult to see if your hardware is up to scratch. I opted to use a laptop only because it provided a much simpler setup; I will be bringing out the big guns for future demonstrations.

I’ll be posting more as we continue to experiment with things. At the moment, we’re brainstorming some usage scenarios involving role-play exercises.

Mixed Reality with the Oculus DK2

The irony of virtual reality is that, despite being a visual medium, it remains incredibly difficult to convey in a faithful manner. It’s not just about the visual impact of an experience but also the immersion factor.

I made a post a few months ago in which I filmed myself using the Oculus Rift at a desk. In that video, I cross-faded the perspectives of a bystander and user in an attempt to communicate how people can interact with a 3D environment using a headset.

Virtual Reality represents something of a growth industry right now but it will take time to convince people of its promise as a means for channelling emotional bandwidth. In the right hands, it could become a powerful educational tool. As always, the issues around how to establish best practice will take time to address and, because of this, it’s a great time for both experimentation and innovation.

In the video below, I'm using a green screen to chroma key the output of the Oculus, thereby creating the effect of allowing people to see as I do during the session. This is far less complicated (and looks very 1980s) than the method used by Valve, which you can see here.

It does mean having to restrict movement in some ways: no facing the camera, no looking straight down and so on, as these tend to create confusing visual effects.
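For anyone curious about what the keying step itself boils down to, here's a minimal sketch: any camera pixel that reads as sufficiently green is swapped for the corresponding pixel of the game render. Real tools are far more forgiving about lighting and edge spill; the threshold and function below are my own illustrative choices:

```python
import numpy as np

def chroma_key(camera_rgb, game_rgb, threshold=60):
    """Composite the camera feed over the game render, dropping green pixels.

    Both inputs are HxWx3 uint8 arrays of the same shape; the threshold is an
    arbitrary placeholder value."""
    cam = camera_rgb.astype(np.int16)
    r, g, b = cam[..., 0], cam[..., 1], cam[..., 2]
    # 'Green screen' pixels: the green channel dominates both red and blue.
    is_green = (g - np.maximum(r, b)) > threshold
    out = camera_rgb.copy()
    out[is_green] = game_rgb[is_green]
    return out
```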

I’ll be posting more in the future as I continue to experiment with things. We’re at the beginning of something which promises to revolutionise the human computer interface and contribute to the human condition in ways we’ve yet to envisage.

What do artificial intelligence, virtual reality and gene editing have in common?

I'm something of a transhumanist and in possession of a woefully optimistic view of where technology can take the human race in the next hundred years. A lot has happened in recent times: a surge in public interest in VR, new genetic therapies which are starting to offer treatments for everything from HIV to cancer, and the early stages of artificial neural networks which hold the potential to both dwarf and augment our collective intelligence.

Advances in these mediums have been made possible thanks to hardware improvements in GPUs (graphics cards). Thanks to their efficiency and number-crunching abilities, they're able to do more than simply create breathtaking visuals. Nvidia is starting to invest more in chips dedicated to more efficient machine learning, and pretty much all biochemistry in the digital medium ends up as physics-based simulation. The driving force for much technology tends to come from competition between various manufacturers, but now we are starting to see the convergence of different areas of application, the results of which are incredibly exciting.


Don Quixote

The visual fidelity of real-time 3D simulations is starting to surpass that of pre-rendered 3D movies, giving content creators far more control over the creative process. Escapism is going to be a big thing over the next few years as people seek more and more media-rich content. The psychological impact of this is something we haven't even started to consider: we could potentially treat phobias, rehabilitate people with neurodegenerative diseases or turn ourselves into drooling junkies forever held in the confines of a psychosis like that portrayed in Don Quixote.

The onus falls on all of us to steer things in a responsible direction and focus on how we can improve the human condition. It's something to be excited about: never in the history of our race have so many technological breakthroughs occurred in such a short space of time – and the rate at which they're occurring doesn't seem to be letting up.

The Logistics of Virtual Reality and Thunderbolt 3

The end game for many technologies involves integrating seamlessly with our being, turning us into space-dwelling cyborgs. The problem is that while the process of miniaturization is always in motion, there will always be a suite of technologies on the fringe which have yet to undergo such optimization and start out in something of a clunky state. Virtual reality headsets are such things: they are new, large and cumbersome.

While the phone-based experiences made popular by the Gear VR hold promise (being a lightweight solution without wires), they remain expensive. Upgrading is also problematic: when all one needs is a faster GPU, the only option is to purchase an entirely new phone (a general purpose device) which happens to have a faster graphics chip. This is an incredibly inefficient economy.

Perhaps it’s not as much of a concern for a single user but for a large organisation looking to invest in such technologies, it presents something of a challenge. Do universities invest in VR laboratories or do they come up with something more flexible?

I don’t doubt that the future of VR involves the use of specialist equipment and spaces. To that end, a dedicated lab might present itself as a viable investment.


A team demonstrating Valve’s Lighthouse Tracking System – I have no affiliation with the people involved

In the meantime, however, during this period of innovation and testing, there are ways to make life easier. When giving demonstrations of VR within our institution, we either get people to come to our offices or we attempt to set up a small stand for the duration of a conference. The issue is that we always have to lug around a giant desktop computer containing what feels like three potato sacks' worth of hardware.

You might think "why not use a laptop?" – the answer is that the integrated GPUs on these devices are not upgradeable. We would need to spend thousands on a machine fast enough to run a VR experience only to have it become redundant overnight. The answer lies in the Thunderbolt 3 port, best described as USB 3 on steroids.

With such a port, you can directly connect an external GPU to any compatible device, no matter how small. This means you could invest in a NUC device with Thunderbolt 3 connectivity and have a graphical powerhouse which occupies a tiny amount of desk space.

The rear of an Intel Skull Canyon NUC, which sports a Thunderbolt 3 connector

Whilst some newer laptops are sporting these connectors, it's worth waiting until the method through which external GPUs interact is confirmed. The beauty of the solution is also that, due to its massive bandwidth, rather than having three to five wires connecting the headset, in the future there can be just one. Wireless connectivity is also catching up, with wireless video now proving itself usable for gaming.
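As a rough back-of-the-envelope (assuming the first-generation Vive panel and uncompressed video), the headroom on a Thunderbolt 3 link is considerable:

```python
# Rough back-of-the-envelope, assuming the first-generation Vive panel:
# 2160 x 1200 pixels, 90 Hz refresh, 24 bits per pixel, uncompressed.
width, height, refresh_hz, bits_per_pixel = 2160, 1200, 90, 24

video_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Uncompressed headset video: ~{video_gbps:.1f} Gbit/s")  # ~5.6 Gbit/s
print("Thunderbolt 3 link:          40 Gbit/s")
# Plenty of room left over for USB data (tracking, audio) and power delivery
# on the same cable.
```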

To sum up, it's worth waiting before investing in a long-term VR solution unless you have an application which has already proven itself to be robust and workable on current generation technologies. If you want to be an early adopter (out of personal interest or for reasons of experimentation), there is already plenty of choice – just be aware that until VR really comes into its own, we are only witnessing the tip of the iceberg.

The Weston Auditorium at the University of Hertfordshire in Virtual Reality

Much has changed in the gaming industry these last few years. Until recently, real-time/interactive 3D had something of a stigma associated with it when it came to education. Happily, in the spirit of inquiry, research has continued to verify its potency for creating memorable and engaging experiences.

Click here to view the search trend data on Google

We've been in possession of an Oculus Rift (Development Kit 2) for a while now and it's provided us with some exciting opportunities to do with Learning & Teaching Innovation. We're currently working with Psychology staff to create virtual environments for use in research and have been awarded funding to pursue some other ideas with measurable outcomes. We're trying to establish best practice in this new medium and need to figure out what works and what doesn't. Trying to force the use of VR where it doesn't complement the subject would be a massive waste of time and could end up as redundant work.

It's not so easy to identify the areas of use, however, as even some of the most text-heavy subjects (Land Law, for example) stand to benefit greatly from the offerings of VR. Imagine being able to walk around a virtual village and identify disputes over property boundaries, or the potential VR has to create dynamic data visualizations. If you thought 2D infographics were informative, imagine where they could go with something more immersive and with a real sense of scale.

The obvious choices for VR would be things like paramedic science and other subjects involving role-play scenarios. Potentially we could have partner institutions overseas interacting with one another in ways which were previously impossible. Multi-user VR environments hold a lot of promise in this regard and I'm hoping to be able to explore those ideas further.

More recently, I've completed some work with the Oculus Rift: a virtual representation of our largest presentation space (sometimes used as a lecture theatre), the Weston Auditorium at the University of Hertfordshire. See the video below for a demonstration!

 

 

This is based on work I did some time ago; I've just made it work with the Oculus Rift. It's taken a long time, but I've optimised my workflow substantially since the old days.

Click here to read about the creation of this environment