As designers, we create user flows and give them to developers, product managers, clients, and sometimes even users for testing. At its best, a user flow is a concise, clean, and compelling way to showcase the scope and vision of your application before it is developed. Oftentimes, user flows are the key piece of documentation we hand to developers to bring our apps to life.
Most user flows for web and mobile products look something like this:
This user flow can be deciphered pretty easily by anyone. It lays out a clear roadmap for how a user can accomplish tasks within the application. There is only one input option available: clicking.
But if we were to adapt this format for a mixed reality experience, it would get cluttered quickly. Imagine adding voice input options to this flow. And 3D objects that can be resized and rotated. And 360° sound. And Cortana. And multiple application windows… and more. Kept in this same format, the flow would quickly become too messy and tangled to read.
Mixed Reality User Flows: Scene, not Screen
In mixed reality, there are so many different components to design for that the format of a standard web/mobile user flow doesn’t work very well. There’s a lot going on in any given mixed reality experience. Designers working in mixed reality aren’t creating screen-based experiences; we’re creating scene-based experiences. Devices like the HoloLens bring computing to your personal landscape. Mixed reality has everything to do with the physical world, timing, and a variety of input actions. A user can accomplish the same task in a number of different ways, all while other things are happening (e.g. a user can select an object at the same time audio and video are playing). On top of that, the actions and objects in a mixed reality scene are probably influenced by the physical landscape they’re placed in.

So as designers, how can we account for all of this in a user flow? How can we describe mixed reality scenes in a clear and concise way to developers, product managers, and other stakeholders? How can we help others understand the intention of our mixed reality app, as well as the precise interactions available to users in any given MR scene?
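To make that concurrency concrete, here’s a minimal sketch in TypeScript. Every name in it is hypothetical (it isn’t any real MR SDK); it models just the input side of a single scene, and even this small slice resists a one-arrow-per-click diagram.

```typescript
// A single scene accepts several input modalities, any of which can
// fire at any moment, often while narration is still playing.
type InputEvent =
  | { kind: "gaze"; targetId: string }
  | { kind: "airTap"; targetId: string }
  | { kind: "voice"; phrase: string };

interface SceneState {
  audioPlaying: boolean;      // narration keeps running during input
  selectedObjectId?: string;  // the object the user has selected, if any
}

// Hypothetical helper: resolve a spoken phrase such as
// "select the engine" to an object id in the scene.
function resolvePhrase(phrase: string): string {
  return phrase.toLowerCase().replace(/^select (the )?/, "");
}

// A gaze, an air tap, or a voice command can all reach the same user
// goal (selecting an object), and none of them pause the scene.
function handleInput(state: SceneState, event: InputEvent): SceneState {
  switch (event.kind) {
    case "gaze":
    case "airTap":
      return { ...state, selectedObjectId: event.targetId };
    case "voice":
      return { ...state, selectedObjectId: resolvePhrase(event.phrase) };
  }
}
```

Three different arrows into the same state, while audio plays underneath: that’s exactly the part a linear click-through chart can’t show.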
We need a different format for mixed reality user flows: something clean, clear, and concise, but also something that shows the full scope of our application and all the various components that make up a mixed reality experience. Ideally, this document should also provide specific interaction details, so that we can hand our mixed reality user flows to a developer, who can begin coding with a clear understanding of the design, flow, and interactions in each scene.
Last year, I worked with a small team to design and develop an app for the HoloLens. This was our solution for creating user flows for mixed reality:
Our design process was messy (you can read more about that here). We logged dozens of hours deciding which interactions would be available in each scene. We didn’t usually have a HoloLens on hand during our design meetings, so we came up with other ways to prototype our ideas. We kept meticulous track of our interaction design decisions in a spreadsheet. And then the amazing Katie Chen boiled it all down into the clean and concise document above (she is a genius!).
The mixed reality user flow my team and I created describes a clear user journey and takes into account the variety of components inherent in a mixed reality experience. Our mixed reality user flow is something in between a storyboard and a traditional web/mobile user flow. It’s like a storyboard because it describes a scene and tells a story (the user’s journey), and it’s like a web/mobile user flow because it describes specific interactions and user goals.
Here’s a breakdown of the user flow:
- Scene: This is what the user is seeing. Ours is in first person, but you could easily change it to third person.
- Input Options: These are the different input options available in any given scene. Note that some input options are available in certain scenes but not in others.
- Audio: This is what the user is hearing. Our app had a significant amount of audio narration, so we used this space to write out actual scripts.
- User Action: What does the user do to move within a scene or move from one scene to another? Does the user air tap an object? Double air tap? Bloom? Gaze at something to select it?
- Icon Key: Last but not least, we have a key telling us what all these icons mean.
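If it helps to see the panel structure as data, here’s one way to sketch it in TypeScript. This is a hypothetical schema I’m inventing for illustration, not the actual format my team used:

```typescript
// One panel of the flow described above. The icon key is a rendering
// concern, so it doesn't appear in the data itself.
type InputKind = "gaze" | "airTap" | "doubleAirTap" | "bloom" | "voice";

interface FlowPanel {
  scene: string;             // what the user is seeing (first person here)
  inputOptions: InputKind[]; // which inputs are live in this scene
  audio?: string;            // narration script, if this scene has one
  userAction: string;        // what the user does to advance the flow
}

// The full flow is simply an ordered list of panels.
const flow: FlowPanel[] = [
  {
    scene: "A 3D model hovers at eye level in the user's room",
    inputOptions: ["gaze", "airTap", "voice"],
    audio: "Welcome! Gaze at the model to begin.",
    userAction: "Gaze at the model to select it",
  },
];
```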
Put a bunch of these panels together in a sequence, and you get a mixed reality user flow. Or, what I’ve been calling a “Reality Sequence.” As a medium, mixed reality is so fundamentally different from web and mobile that I don’t think “user flow” is the most accurate description of this document. Sure, it shows a flow, but it also tells a story and takes into account time and the physical world. I think “Reality Sequence” is a more accurate descriptor, because it implies that many things are happening in real time, and that the outcome depends on the way a whole host of factors interact with one another.
I’ve created a mixed reality user flow template, which you can find here. Feel free to use this to get started on your own mixed reality design projects.
This article was originally published on Lillian's Medium page.