VR guidelines – Mike Alger

We’ve built much of our understanding of designing for virtual environments on Mike Alger’s master’s thesis on designing a VR work environment. He builds his work on research done by Google and Samsung, which we have also been using.

Much of his work isn’t directly applicable to augmented reality, but its basic foundations have been invaluable in building our understanding of how to begin designing 3D user interfaces. Below are two of his videos detailing his work – you can find the text of his master’s thesis in the notes for the first video on YouTube. Big thanks to Mike for sharing his work so openly!

angular dimensioning

One of the challenges facing VR and AR designers is the problem of scale. It is relatively straightforward to dimension 2D UI elements for mobile or other screens. When it comes to three-dimensional interfaces, like those in AR environments, we need to take into consideration the intended viewing distance of each element.

There have been quite a few good videos and guidelines posted on this – Mike Alger addresses ergonomic viewing distances (post on his work to come), Richard Yao does as well (his video can be seen in another post), and Google has proposed a new unit of measurement, which they call the distance-independent millimetre (dmm), detailed in the first few minutes of this video:

The basic premise is that as viewing distance increases, the scale of our designs should also increase to maintain readability. Using font sizes, pixels or picas would require specifying measurements for each and every intended viewing distance – a cumbersome task in VR or AR, where viewing distances change frequently.

Switching to an angular unit, such as the dmm, allows designers to design UI elements once, and then simply scale them to the appropriate size for the viewing distance.
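Concretely, 1 dmm subtends the same visual angle as 1 mm viewed from 1 m, so converting a dmm dimension to a physical size is a single multiplication. A minimal sketch of the arithmetic (the function names here are my own, not part of any spec):

```python
import math

def dmm_to_meters(size_dmm: float, distance_m: float) -> float:
    """Physical size of a dmm-specified element at a given viewing distance.
    1 dmm subtends the same angle as 1 mm at 1 m, so size scales
    linearly with distance."""
    return size_dmm / 1000.0 * distance_m

def angular_size_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an element of size_m
    viewed from distance_m."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 24 dmm tall label is 24 mm tall at 1 m, 240 mm tall at 10 m:
near = dmm_to_meters(24, 1.0)   # 0.024 m
far = dmm_to_meters(24, 10.0)   # 0.24 m
# ...and both subtend the same visual angle, which is the whole point.
```

This is why the unit is "distance independent": one number describes the element's apparent size at any distance.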

We’ve been able to implement this when sketching in Unity by laying out all of our UI elements on one canvas positioned 1 m from the camera, and testing the scale. Once we have a scale we like, we simply scale the whole canvas (and all the nested UI elements) by its distance from the camera.
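The scale factor itself is just a ratio. Sketched in Python rather than Unity C#, purely to show the arithmetic (the function name is my own):

```python
def canvas_scale(layout_distance_m: float, viewing_distance_m: float) -> float:
    """Uniform scale factor for a UI canvas laid out at layout_distance_m
    from the camera, so that it keeps the same apparent (angular) size
    when placed at viewing_distance_m."""
    return viewing_distance_m / layout_distance_m

# Canvas designed at 1 m, to be viewed 8 m away
# (e.g. projected outside the bridge window): scale everything by 8.
factor = canvas_scale(1.0, 8.0)  # → 8.0
```

In Unity this factor would be applied to the canvas transform's uniform scale, together with moving the canvas out to the new distance.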

We are somewhat lucky in the maritime context here, as we are often dealing with fixed distances – the bridge is a fixed size, and UI elements placed outside the ship will always be ‘projected’ at a fixed distance.

full scenario sketches

Below are the full sketches, based on the layered scenario process we shared in an earlier post.

These sketches helped us to think through how the system should behave in different situations and for different users. These kinds of sketches can be refined, as in the example below.

Continue reading “full scenario sketches”

complex spatial relationships

This post is melting my brain a little bit… check out the next post for the video sketches that started me down this rabbit hole, and this post for more background on types of spatially fixed content.


Digital designers need to be able to communicate their visions to other people, including clients and developers. Today, we use programs like Sketch and Principle to make mockups, and we use a common language (pixels, hover-over, global nav, workflow diagram, etc.). The elements and relationships we are describing today are primarily 2D and are displayed on one screen (the move to responsive design has changed this a little).

Our context adds a few complexities – it is a distributed system, meaning it has multiple inputs and outputs, so we aren’t designing for one screen anymore; and it includes augmented reality – a technology that demands new approaches to designing interactions.

Describing the relationships that UI elements in augmented reality have to the physical world requires us to use new systems and new terms. Even seemingly simple elements, like the wireframes in the video below, have complex and dynamic relationships to multiple points of reference in the physical world. How would we describe them to a developer if we wanted our sketch to be prototyped?

As designers tackling 3D interfaces, we need a system for communicating these spatial relationships – in three dimensions – based on the position and orientation of objects (scale is another factor not fully addressed here). In the sketch above we see UI elements that are spatially locked: to the ship’s window, to the horizon, to a ship moving on the horizon, and several combinations thereof.
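One way to start pinning this down is to treat each element as carrying a set of anchors, each binding one property (position, pitch, yaw…) to a named reference frame. The structure and names below are purely my own sketch of such a vocabulary, not an established notation:

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Binds one degree of freedom of a UI element to a reference frame."""
    property: str  # e.g. "position", "pitch", "yaw"
    frame: str     # e.g. "window", "horizon", "target_ship"

@dataclass
class ARElement:
    name: str
    anchors: list[Anchor] = field(default_factory=list)

# A wireframe highlight that follows a ship moving on the horizon,
# while keeping its pitch locked to the horizon line:
highlight = ARElement(
    name="ship_highlight",
    anchors=[
        Anchor(property="position", frame="target_ship"),
        Anchor(property="pitch", frame="horizon"),
    ],
)
```

Describing a sketch as a list of such bindings would at least give a developer something concrete to prototype from, even before the terms are standardized.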

What follows is my attempt to make sense of this… through writing, diagrams and a sketching template.

Continue reading “complex spatial relationships”

Sketching AR Elements in Layered Scenarios

Based on the Layered Scenario Mapping tool developed by Sigrun Lurås, we are developing this sketching template as a means of illustrating AR elements and transitions on the ship bridge.

In my mind, this template takes on the role of traditional wire-framing in interaction design, but it allows for sketching layered information elements that may have different qualities or transitions.

Continue reading “Sketching AR Elements in Layered Scenarios”

applying a grid to information spaces

Augmented Reality displays free the bridge from the confines of conventional screens and interfaces. This freedom is – to be honest – totally overwhelming at times. A system of information spaces and interaction types is not enough – we need to think about how and where we display certain kinds of information within those spaces in order to support situational awareness and prevent information overload.

Continue reading “applying a grid to information spaces”