Imagine flying a modern passenger aircraft but without the complex array of instruments you’d expect to see informing you of air speed, altitude and heading. All you have at your disposal is the flight yoke, which you recognise from having seen countless pilots pull out of nose dives, moments before impact, in movies. There’s also a throttle control (for speed) and let’s throw in another dozen or so anonymous levers and buttons of varying importance, perhaps based on how colorful they are or how many can be pressed at once.
Just to make things interesting, we’ll start you off at 36,000ft in the air. You can tell you’re pretty high because you can’t see any cars and assume all is well on account of the lack of flashing lights and cockpit fires.
High-stakes implicit learning
The above is an example of how quickly things can become complicated in the absence of intuitive visual cues to help make sense of a situation. In contrast to the complexity of avionics, when riding a horse you at least have some idea of how much stamina it has based on its gait and breathing. It’s also easier to gauge how fast you’re going given your close proximity to the ground, and you can presume your steed isn’t about to burst into flames (unless evolution deems it advantageous).
Whether or not this is something you can discern through gameplay depends on the degree of visual fidelity available. Making all these extra details visible presents something of a design challenge, however, and many developers get around it by adding 2D interface elements as a means of conveying the same information. In the past, those design challenges involved working with low-resolution displays and graphics processors which just couldn’t handle the load. These days, with more capable hardware, more developers than ever are experimenting with innovative ways of removing or severely limiting the 2D interface clutter we’ve become accustomed to seeing.
The approach is not suitable for everything. A complex system needs a high-bandwidth interface in order to convey information to the user, and that’s best handled through things like the HUDs used in aircraft. This same technology is what has given birth to augmented reality interfaces, but I’m going slightly off track here; all that stuff is on the other end of the spectrum. What I want to talk about today is how some developers have succeeded in minimizing the role of 2D interfaces, and how it impacts everything from narrative design to user experience.
Using intra-diegetic principles to convey information
Diegesis, in the context of gaming, refers to showing information through the gameplay environment. In principle it means that, rather than appearing as a crude 2D overlay on top of a scene, information appears inside the scene itself, either as another, similar interface or as some other set of visual cues.
The ‘Pip-Boy’ device in Fallout 4
Some games go further than others in this regard and, despite the efforts made by Fallout 4, shown in the video above, few go beyond the tried and tested panel-based approach (it remains a 2D exercise).
An example of taking things a step further would be using the graphical representation of the player to depict information. Games like Dead Space and Resident Evil do this well. Seeing your character limp through dark corridors in dangerous and spooky environments after taking damage certainly increases suspense and adds to immersion. It’s also a great way of communicating the fact that things aren’t going so well and you probably won’t come out on top of your next melee with a slavering zombie.
Going back to Dead Space, the health of your character is portrayed visually by the lights on his suit. The absence of a more contrived graphical overlay means that, when a traditional interface element appears, it’s in keeping with the narrative. Even when looking at your inventory, the character is present in the foreground, serving as a reminder that both he and you are sharing in the experience.
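Under the hood, a diegetic health display like this still reduces to mapping a health value onto something visible in the scene. Here’s a minimal sketch of one way that mapping might work; the function and parameter names are invented for illustration, not taken from any real engine.

```python
import math

def lit_segments(health: float, max_health: float, segments: int = 5) -> int:
    """Return how many of the suit's light segments should be lit.

    Rounds up so that any remaining health keeps at least one light on,
    making 'barely alive' visually distinct from 'dead'.
    """
    if max_health <= 0:
        return 0
    fraction = max(0.0, min(1.0, health / max_health))
    return math.ceil(fraction * segments)
```

The rounding choice matters for readability: a player at 1% health still sees one lit segment, so the display never lies about whether they’re alive.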
Dead Space uses a blend of both diegetic and more traditional interface elements
One of the joys of the survival horror genre, however, is using the items you find sparingly and, to that end, the traditional 2D interface has gone a long way towards enhancing gameplay. In Resident Evil 4, for instance, many players enjoyed rearranging their item grid so as to remove any wasted space. In this case, the ‘tetris therapy’ makes for a nice break between moments of tension.
Some games try to limit dialog prompts by using the environment itself as an interface. As an example, think of the process involved in chopping down a tree or picking apples. The tree has other affordances too: it can also be climbed or, after being chopped down, used to create a bridge over a ravine. Rather than these interactions taking place through a complicated hierarchy of menus, they can also occur in-game as shown in the video below.
Hitting the tree with an axe inflicts damage and, once destroyed, the tree falls and has physics modifiers applied (now that it is no longer rooted to the ground). Depending on which side you chopped the tree, its direction of fall will also change. Lastly, if the player wants to climb the tree, they just walk into it to initiate the ascent.
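The interaction described above can be sketched in a few lines of game logic. This is a hypothetical illustration rather than anything from an actual engine; a real implementation would hand the felled trunk to the physics system instead of setting a flag.

```python
class Tree:
    """Toy model of the chop-and-fall interaction described above."""

    def __init__(self, hit_points: int = 3):
        self.hit_points = hit_points
        self.standing = True
        self.fall_direction = None  # set when the tree comes down

    def chop(self, axe_damage: int, chop_side: str) -> None:
        """Apply axe damage; on destruction, fall away from the chopped side."""
        if not self.standing:
            return  # a felled tree ignores further chops
        self.hit_points -= axe_damage
        if self.hit_points <= 0:
            self.standing = False
            # Fall away from the side the final blow landed on.
            self.fall_direction = "east" if chop_side == "west" else "west"
```

The point of the sketch is that the menu disappears: the same state machine a dialog box would drive is instead driven by where and how the player swings.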
In Breath of the Wild, players are encouraged to experiment
The last point I want to make is that an interface doesn’t have to be entirely visual; it can be multi-modal. Having the player character literally narrate their circumstances, shouting out how much ammunition they have remaining whenever a clip is reloaded (as seen in ‘Peter Jackson’s King Kong’), is another way of bringing the player into the world. Interfaces which are primarily audio-based remain something of an unexplored area outside of music-based games, so there’s still plenty of room for experimentation.
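Mechanically, an audio callout like King Kong’s is just a side effect hung off the reload action. The sketch below assumes a hypothetical `play_voice_line` hook standing in for whatever audio API a real game would use.

```python
def play_voice_line(text: str) -> None:
    # Placeholder: a real game would trigger a recorded or synthesized line.
    print(text)

def reload(magazine: int, reserve: int, capacity: int = 8) -> tuple:
    """Refill the magazine from reserve ammo and announce what's left."""
    needed = capacity - magazine
    loaded = min(needed, reserve)
    magazine += loaded
    reserve -= loaded
    play_voice_line(f"{magazine} in the mag, {reserve} left!")
    return magazine, reserve
```

Because the count is spoken at the moment the player already expects feedback (the reload), no persistent ammo counter needs to sit on screen.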
It’s easy to see how, in failing to question the norms, we end up adhering to conventions without understanding the reason for their prevalence. I’ve enjoyed exploring the examples above, if you have too, leave a comment.
That’s all for now, see you next time!