Three Factors for Wearable Computing User Experience — Reactions to 2 Weeks with Google Glass
Posted: December 10, 2013 | Filed under: Uncategorized | Tags: google glass, user experience, wearable

I started participating in the Google Glass Explorers program a couple of weeks ago and have started to notice some recurring themes and patterns in using a wearable computing device.
1) It’s annoying to people around you if they can’t tell if you’re paying attention to them. Developers of future wearable computing devices should seek to set visible conventions and expectations about when a device is “on” or “off”.
It’s really obvious when the user of a smartphone or PC/laptop is paying attention to the people around them vs. the device. Not so with wearables. Google Glass is actually off most of the time, not displaying anything to the user. But the fact that the device is so visibly present in the user’s line of sight makes people around the user wonder what they’re seeing, and whether the user is interacting with the people physically around them or with the virtual interface in the device. I’ve wound up adopting the convention of only putting on Google Glass when I intend to actually pay attention to the display or when I’m ready to react to Glass notifications (e.g., when I’m working, walking on my own, etc.). When I’m meeting or talking to other people, I’ll rotate Glass up and out of my view, putting them on top of my head like a pair of sunglasses.
2) Most of the apps for Glass (e.g., Evernote, Path, Wordlens, etc.) are single-feature apps and I think this is a bad idea. App developers should think about long-running, persistent computing sessions instead of “micro” sessions.
I remember listening to a speech by Phil Libin, CEO of Evernote. He said that his team had been experimenting with Google Glass and other wearable devices and had come to a conclusion: “The key difference with wearable vs. prior computing devices is session length. You have 1.5 seconds to deliver an experience.” I think that’s a profound statement, but I also think it would be a huge mistake for developers to take a big app like Evernote and break it up into a collection of individual applets that each have their own unique voice command in the Google Glass navigation menu. Doing so would turn wearable computing into a sort of super-twitchy, ADD-riddled cousin of the smartphone experience, where users cycle between a million little applets, one at a time, for 1.5 seconds each. That would be a looming usability disaster. That mode of app interaction may work for smartphones because the phone is off and stowed in a pocket most of the time, and only comes out for brief bursts of activity.
Wearable computing is fundamentally different from smartphone computing. Once I decide to put Google Glass on, it’s going to stay on my face for a while. I’m not going to keep putting them on and taking them off as rapidly as I might take my phone out of a pocket and put it back. Given that, I’d like to see wearable OS and wearable app developers move toward a paradigm of really long-running computing sessions. I’d like to see Glass record everything, all the time, and offer all that data to many apps running in the background. Then, after the apps have processed that data in smart ways, they can present back to me the best of what they found. So I just wear Glass, and every once in a while I get a notification: “Hey, looks like you were just in a cool place; Glass saved 5 photos for you. Want to keep them? [Nod to keep, swipe to review them now, or do nothing to discard in 5 seconds]”. Or: “Sounds like you were just having a conversation with two coworkers; here’s a transcription of the meeting notes. Want to save them for editing later?” Each of those notifications could drive a very brief interaction, but the smarts behind the interaction live in a persistent, long-running computing session.
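To make that concrete, here’s a rough Python sketch of what I mean by a persistent session with background “apps” that occasionally surface a brief, actionable notification. It’s purely illustrative: the class names, the `interest_score` field, and the nod/swipe actions are all assumptions of mine, not any real Glass API.

```python
# Hypothetical sketch of a long-running wearable session: sensor events stream
# into background "apps" that occasionally surface a brief, actionable
# notification. None of this is the real Glass SDK; all names are made up.
from dataclasses import dataclass, field


@dataclass
class Notification:
    message: str
    actions: dict            # e.g. {"nod": keep_fn, "swipe": review_fn}
    timeout_s: float = 5.0   # do nothing -> discard after this long


@dataclass
class PhotoCurator:
    """Background 'app' that watches the photo stream and picks highlights."""
    saved: list = field(default_factory=list)

    def process(self, event):
        if event.get("type") == "photo" and event.get("interest_score", 0) > 0.8:
            self.saved.append(event)
        # Every five interesting photos, surface one brief notification.
        if len(self.saved) == 5:
            batch, self.saved = self.saved, []
            return Notification(
                message="Looks like you were just in a cool place; saved 5 photos.",
                actions={"nod": lambda: keep(batch), "swipe": lambda: review(batch)},
            )
        return None


def keep(batch):
    print(f"Kept {len(batch)} photos.")


def review(batch):
    print(f"Opening {len(batch)} photos for review.")


def deliver(note):
    # On a real device this would render on the display and wait for a head
    # gesture; here we just print the prompt.
    print(note.message, "| Nod to keep, swipe to review, or ignore to discard.")


def run_session(sensor_stream, apps):
    """The persistent session: runs as long as the device is being worn."""
    for event in sensor_stream:
        for app in apps:
            note = app.process(event)
            if note:
                deliver(note)
```

The point of the structure is that `run_session` runs the whole time the device is worn, while each notification it surfaces only asks for a second or two of attention.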
3) When I’m wearing Glass, it becomes the perfect place for realtime notifications that I can take action on. I want Glass to tell me that I should take 280 in order to avoid an accident on 101, or that the milk is on aisle 7, or that rain is expected today when I’m standing at the front door in the morning.
A wearable device should know everything about my micro-environment: my location, the direction I’m facing, how fast I’m moving, the ambient sounds around me, recognizable visual landmarks in front of me, and so on. And if wearable computing has been happening in a long-running persistent session (#2, above), then the wearable OS and apps should also understand the bigger picture of where the current moment fits into the day/week/month/year patterns of my life.
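As a purely hypothetical sketch (again in Python, with made-up field names rather than any real Glass data model), the kind of context I’m describing might look like this:

```python
# Hypothetical "context snapshot" a wearable OS might hand to background apps:
# the immediate micro-environment plus where this moment fits into longer-term
# patterns. Field names are illustrative, not any real Glass data model.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MicroEnvironment:
    lat: float
    lon: float
    heading_deg: float           # direction the wearer is facing
    speed_mps: float             # stationary, walking, driving...
    ambient_audio_label: str     # e.g. "conversation", "traffic", "quiet"
    visual_landmarks: List[str]  # e.g. ["grocery aisle 7", "front door"]


@dataclass
class LifePatternContext:
    typical_activity_now: Optional[str]  # e.g. "morning commute on 101"
    usual_places_today: List[str]        # places normally visited on this weekday


@dataclass
class ContextSnapshot:
    micro: MicroEnvironment
    patterns: LifePatternContext

    def should_suggest_alternate_route(self) -> bool:
        # Example rule: the commute pattern plus actual movement is enough to
        # justify a "take 280 to avoid the accident on 101" notification.
        return (self.patterns.typical_activity_now == "morning commute on 101"
                and self.micro.speed_mps > 1.0)
```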
I’m saying all this with just a couple of weeks of experience walking around with Google Glass, so please read it as initial, early feedback. So far, if I had to reconstruct a “better” Google Glass experience, it would look something like this (with a rough sketch of the device-state logic after the list):
- Device is “on”, aware and recording everything around me while I’m wearing it (and a little flashing red indicator light on the front tells the world that’s what’s going on)
- Lifting up the device out of my line of sight but still keeping it on my head like a pair of sunglasses would turn off video/photo recording but still keep listening to audio signals around me (and a little flashing blue indicator light on the side, near my ear, tells the world that’s what’s going on)
- Taking the device off my head altogether should turn it off (sleep, silent, muted, not recording/transmitting)
- While device is on, all the data streams are passed up to the cloud for processing by apps/services
- Apps are long-running, simultaneous, parallel background services that sift through all the data streamed up from the device and generate smart notifications that I can act on quickly or save for later viewing/editing on a smartphone/tablet/laptop.
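Here’s the rough sketch of that device-state logic I mentioned above. It’s a sketch under my own assumptions: the state names, the indicator-light calls, and the `device` interface are invented for illustration, not how Glass actually works.

```python
# A rough sketch of the device-state logic in the list above. The state names,
# the indicator-light calls, and the `device` interface are assumptions for
# illustration, not actual Glass behavior.
from enum import Enum, auto


class WearState(Enum):
    ON_FACE = auto()     # in the line of sight
    FLIPPED_UP = auto()  # still on the head, display rotated out of view
    OFF_HEAD = auto()    # taken off entirely


def apply_state(state, device):
    if state is WearState.ON_FACE:
        device.record_video(True)
        device.record_audio(True)
        device.stream_to_cloud(True)
        device.indicator("front", "red", blinking=True)  # tells the world it's recording
    elif state is WearState.FLIPPED_UP:
        device.record_video(False)
        device.record_audio(True)                        # still listening
        device.stream_to_cloud(True)
        device.indicator("side", "blue", blinking=True)
    else:  # WearState.OFF_HEAD
        device.record_video(False)
        device.record_audio(False)
        device.stream_to_cloud(False)                    # sleep, silent, muted
        device.indicator_off()
```

The design choice I care about is that the wearer’s physical handling of the device (on the face, flipped up, taken off) is the only switch anyone needs to understand, and the indicator lights make that state legible to everyone nearby.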