PDF of my final presentation:
Above you can see a video of the final presentation we gave last week. The video shows a click-through of the interface. To see the presentation with notes, check out this link:
Above you see the final AR user interface concept for the trainer (icebreaker). This is all compressed down to one screen for demonstration purposes.
To the top left is the selection of the different ice-tags the trainer can choose between. These are based on the ice categorizations from “The Nautical Institute, Ice Navigator Accreditation Standard”.
The tag in the center includes the ice-type and the ice-symbol. The area of interest is in perspective, so the students (convoy) know what area they are looking at.
The circle in the bottom left corner is the mini-map, which shows the landscape. The mini-map has a radius of 1 nautical mile, which is standard for a normal radar, but the trainer can choose to zoom in and out. The data in this mini-map is based on the latest generation of ice-radar, which can show the ice more clearly than a normal radar.
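To make the radius and zoom behaviour concrete, here is a minimal sketch of how a ship's position could be projected onto the circular mini-map. The function name, parameters and pixel radius are my own assumptions for illustration, not part of the concept itself:

```python
import math

NM_TO_M = 1852.0  # metres per nautical mile

def to_minimap(dx_m, dy_m, radius_nm=1.0, zoom=1.0, map_radius_px=100):
    """Project a ship's offset from our own ship (metres, east/north)
    onto the circular mini-map. Returns pixel coordinates relative to
    the map centre, or None if the ship falls outside the visible range."""
    visible_m = (radius_nm / zoom) * NM_TO_M  # zooming in shrinks the visible range
    dist = math.hypot(dx_m, dy_m)
    if dist > visible_m:
        return None  # off the edge of the mini-map
    scale = map_radius_px / visible_m
    return (dx_m * scale, dy_m * scale)
```

With the default settings a ship exactly 1 nautical mile east lands on the map's edge; zooming to 2x halves the visible range, so the same ship would fall off the map.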
Within the ice-radar, the user sees the ships in the convoy, and the user's own ship is marked in yellow. The little square in the mini-map is a representation of the ice-tag in the environment, and it can be moved around. The component pointing out from the square is the ice-symbol, which the user can choose to delete, or scale up and down. It is placed on the outside so the user knows which tag is which, for instance if the user were to place many ice-tags.
This was quite a tricky part, because the little square in the mini-map doesn't provide the user with much information beyond where the tag is placed. Therefore there was a need to place the ice-symbol outside the mini-map. But what if there are multiple ice-tags? How will these components outside the mini-map be ordered?
I tried several alternatives, and here are a few of them:
As you can see, when all the tags are placed in front of each other there is no problem (far left). But as they are moved around, the lines overlap, and it becomes difficult to see which one is which (center). Therefore, if the user places multiple tags, they will be arranged so that the tag closest in the environment is also the one closest to you in the mini-map (far right). This is hard to explain, but in the end I think I found a way. It could of course be solved in different ways, and inspiration can be taken from tools such as Photoshop or Illustrator, which sort components in layers:
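The ordering rule above boils down to a simple sort. Here is a sketch, with made-up tag names and distances just for illustration, of how the label slots beside the mini-map could be assigned:

```python
def order_labels(tags):
    """Sort ice-tags so the tag nearest our own ship in the environment
    gets the label slot closest to the user beside the mini-map; farther
    tags stack further away. This keeps the connector lines from crossing."""
    return sorted(tags, key=lambda t: t["distance_m"])

# hypothetical example tags
tags = [
    {"name": "pancake ice", "distance_m": 1200},
    {"name": "grey ice", "distance_m": 400},
    {"name": "brash ice", "distance_m": 800},
]
ordered = order_labels(tags)  # grey ice first, then brash, then pancake
```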
The students in the convoy have many of the same components in their UI.
But each ice-tag will present them with more information. They can also choose to take a “screengrab” that they can look at later, when there is downtime on the ship. This is a separate tool I have chosen not to design this round.
Additionally, as the students get better at the different ice-types, they can choose to deselect the tagged ice (top left) if they feel they don't need to see it.
Through my research and through interviews, I found that in the Arctic it is important that all ships communicate well, but also that they use the same language, especially since different languages can lead to communication failure. After reading The Nautical Institute's “Ice Navigator Accreditation Standard”, I gained insight into what the standard “language” is and how important it is that all ships traveling in the Arctic adopt this taxonomy to understand each other in communication. As icebreakers and ships in the convoy communicate in my concept, I saw the importance of building on these standards. Therefore, the “tags” for the students (in convoy) contain the standard ice-symbols, also called ice nomenclature. (Read more here: http://www.aari.ru/gdsidb/docs/wmo/nomenclature/WMO_Nomenclature_draft_version1-0.pdf)
I believe it is important to use these symbols so that ships can get a symbolic representation of the ice-type, when language can be a challenge. I also think that it is important considering future communication between ships in the Arctic.
Ice class is also an important element of navigation in the Arctic. “Ice class denotes the additional level of strengthening as well as other arrangements that enable a ship to navigate through sea ice.” (https://en.wikipedia.org/wiki/Ice_class). It is therefore highly important that a navigator is aware of what type of ice the ship can go through, so the ship avoids drastic accidents such as the one that happened to MS Explorer in 2007. MS Explorer hit thick ice floes in Antarctica, which tore a hole in the hull, and the ship sank. (https://www.aftenposten.no/verden/i/dmX8O/Her-synker-MS-Explorer)
A tag should therefore include the ice-symbol, the ice class the ice demands, the hallmarks so the crew can recognise the ice, and the recommended speed for breaking through the ice. As we learned when visiting an icebreaker, having the correct speed is crucial to avoid getting stuck.
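To summarize what a tag carries, here is a minimal sketch of the tag's contents as a data structure. The field names and the example values (ice class, speed, hallmarks) are my own illustrative assumptions, not an agreed format:

```python
from dataclasses import dataclass

@dataclass
class IceTag:
    """One AR ice-tag, as described above."""
    ice_type: str                # name from the standard ice nomenclature
    symbol: str                  # identifier of the standard ice-symbol
    required_ice_class: str      # ice class the ice demands of the ship
    hallmarks: str               # visual cues so the crew can recognise the ice
    recommended_speed_kn: float  # recommended speed when breaking through

# hypothetical example tag
tag = IceTag(
    ice_type="grey ice",
    symbol="young-ice",
    required_ice_class="PC 5",
    hallmarks="10-15 cm thick, less elastic than nilas",
    recommended_speed_kn=5.0,
)
```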
As my co-students Gustav and Chris found out, it is also important that the information sits above the horizon, where there is plenty of space, and does not interfere with what's happening on the ocean, i.e. the area where navigators need a clear view. Hence the vertical line between the area of interest and the rest of the information.
Here is a link to our final presentation – the first few slides, in blue, make up a common introduction to the class.
C + G final presentation
In this figure I am basically trying to picture what a scenario using these AR-tags could look like. And even though it's just a super-quick sketch, I was able to see a challenge: the tag absolutely needs to be placed at the right point in the environment. If it's not, the convoy may end up looking at a wrongly placed tag. I want to emphasize how important the “placing of tag” action is as an interaction component in this concept. Again I refer to the mini-maps I posted earlier, which I think could be a good way of solving this challenge. It is indeed a “tough nut to crack”.
Since we didn’t arrange a third field trip on a ship soon enough, I had to find other ways of testing the AR user interface. I came across an online tool which allowed me to test where the different components in the UI should be placed, according to a person's field of view. The tool didn’t allow me to test any gestures, but at least I got a sense of the combination of the widgets.
I also saw that, for instance, yellow and orange on a white/blueish background don't really work that well. Since I am designing for an environment full of ice and water, I know the colours I choose for the UI should probably not be blue or white. Yellow didn't work well in my test, and red may signal alarm or danger. Additionally, the colours should be lighter than the environment, so the AR glasses can project them. This excludes quite a big spectrum of colours, but colours like purple, pink and green are still available. This is of course not rigorously tested, but rather based on my assumptions.
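The “lighter than the environment” rule comes from how additive AR optics work: the glasses can only add light, never darken the scene. A rough sketch of how a candidate UI colour could be screened against a background sample, using the WCAG relative-luminance formula (the function names, margin and example colours are my own assumptions):

```python
def relative_luminance(r, g, b):
    """WCAG relative luminance for an sRGB colour with 0-255 channels."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def visible_on_additive_display(ui_rgb, background_rgb, margin=0.1):
    """Additive AR optics can only add light, so a UI colour needs to be
    noticeably brighter than the background it is projected against."""
    return relative_luminance(*ui_rgb) > relative_luminance(*background_rgb) + margin
```

For example, pink reads clearly against dark sea water but washes out against bright white ice, which matches what I observed in the test.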
Since the tests with the hologram landscape didn't quite work in Augmented Reality, I researched other ways of showing a landscape. So-called “minimaps” are often used in games, and allow the user to know where in the landscape they are, and can prepare the user for their next action, whether it is taking down an enemy or just moving to a different place.