Three Factors for Wearable Computing User Experience — Reactions to 2 Weeks with Google Glass


I started participating in the Google Glass Explorers program a couple weeks ago and have begun to notice some recurring themes/patterns in using a wearable computing device.

1) It’s annoying to people around you if they can’t tell whether you’re paying attention to them.  Developers of future wearable computing devices should seek to set visible conventions and expectations about when a device is “on” or “off”.

It’s really obvious when the user of a smartphone or PC/laptop is paying attention to the people around them vs. the device.  Not so with wearables.  Google Glass is actually off most of the time, not displaying anything to the user.  But the fact that the device is so visibly present in the user’s line of sight makes people around the user wonder what they’re seeing and whether or not the user is actually interacting with the people physically around them or the virtual interface in the device.  I’ve wound up adopting the convention of only putting on Google Glass when I’m intending to actually pay attention to the display or when I’m ready to react to Glass notifications (e.g., when I’m working, walking on my own, etc.).  When I’m meeting or talking to other people, I’ll rotate Glass up and out of my view, putting them on top of my head like a pair of sunglasses.

2) Most of the apps for Glass (e.g., Evernote, Path, Wordlens, etc.) are single-feature apps and I think this is a bad idea.  App developers should think about long-running, persistent computing sessions instead of “micro” sessions.

I remember listening to a speech by Phil Libin, CEO of Evernote.  He said that his team had been experimenting with Google Glass and other wearable devices and that they had come to the conclusion: “The key difference with wearable vs prior computing devices is session length. You have 1.5 seconds to deliver an experience.”  I think that’s a profound statement, but I also think that it’d be a huge mistake for developers to take a big app like Evernote and try to break it up into a collection of individual applets that each have their own unique voice command in the Google Glass navigation menu.  Doing so would turn wearable computing into a sort of super-twitchy ADD-riddled cousin of the smartphone experience — where users cycle between a million little applets, one at a time, for 1.5 seconds each.  That’d be a looming usability disaster…  That mode of app interaction may work for smartphones — because the device is off, stowed in a pocket most of the time, and only comes out for brief bursts of activity.

Wearable computing is fundamentally different from smartphone computing.  Once I decide to put Google Glass on, it’s going to stay on my face for a while.  I’m not going to keep putting it on and taking it off as rapidly as I may take out my phone and return it to a pocket.   Given that, I’d like to see wearable OS and wearable app developers move towards a paradigm of really long-running computing sessions.  I’d like to see Glass constantly record everything all the time and offer all that data to many apps in the background.  And then after the apps have processed all that data in smart ways, present back to me the best of what they found.  So I just wear Glass and every once in a while I get a notification — “Hey, looks like you were just in a cool place, Glass saved 5 photos for you.  Want to keep them?  [Nod to keep, swipe to review them now, or do nothing to discard in 5 seconds]”.  Or “Sounds like you were just having a conversation with two coworkers, here’s a transcription of the meeting notes.  Want to save them for editing later?”  Each of those notifications could drive a very brief interaction, but the smarts of the interaction are occurring in a persistent, long-running computing session.
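To make that concrete, here’s a minimal sketch of the pattern I have in mind, written in Python with entirely made-up names (photo_curator, sensor_stream, and interest_score are illustrative assumptions, not any real Glass API): a long-running background service quietly consumes the session’s data stream and only interrupts with a brief, actionable notification once it has found something worth 1.5 seconds of attention.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    text: str
    actions: tuple  # brief responses, e.g. ("nod to keep", "swipe to review", "do nothing")

def photo_curator(sensor_stream):
    """Hypothetical background 'app': consumes the wearable's continuous data
    stream for the whole session and only surfaces a short, actionable
    notification once it has collected something worth the user's attention."""
    saved_frames = []
    for event in sensor_stream:  # long-running: iterates for hours, not 1.5-second bursts
        if event.get("type") == "photo" and event.get("interest_score", 0) > 0.8:
            saved_frames.append(event["frame"])
        if len(saved_frames) >= 5:
            yield Notification(
                text="Looks like you were just in a cool place. Glass saved 5 photos for you. Keep them?",
                actions=("nod to keep", "swipe to review now", "do nothing to discard"),
            )
            saved_frames = []
```

The point of the sketch is where the work happens: the brief nod/swipe interaction is just the last inch of a computing session that has been running in the background the whole time.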

3) When I’m wearing Glass, it becomes the perfect place for realtime notifications that I can take action on.  I want Glass to tell me that I should take 280 in order to avoid an accident on 101, or that the milk is on aisle 7, or that rain is expected today when I’m standing at the front door in the morning.

A wearable device should know everything about my micro-environment — my location, direction I’m facing, velocity of movement, ambient sounds around me, recognizable visual landmarks in front of me, etc.  And if wearable computing has been happening in a long-running persistent session (#2, above), then the wearable OS/apps should also understand the bigger picture of where the current moment fits into my overall day/week/month/year patterns of my life.

I’m saying all this with just a couple weeks of experience walking around with Google Glass, so please read it as initial, early feedback.  So far, if I had to construct a “better” Google Glass experience, it would look something like this:

  • Device is “on”, aware and recording everything around me while I’m wearing it (and a little flashing red indicator light on the front tells the world that’s what’s going on)
  • Lifting the device up out of my line of sight while keeping it on my head like a pair of sunglasses would turn off video/photo recording but keep listening to audio signals around me (and a little flashing blue indicator light on the side, near my ear, tells the world that’s what’s going on)
  • Taking the device off my head altogether should turn it off (sleep, silent, muted, not recording/transmitting)
  • While device is on, all the data streams are passed up to the cloud for processing by apps/services
  • Apps are long-running, simultaneous, parallel background services that sift through all the data streamed up from the device and generate smart notifications that I can act on quickly or save for later viewing/editing on a smartphone/tablet/laptop.

How does Facebook determine the six friends that you see when looking at someone’s profile?

Answer by Yee Lee:

Facebook calculates a "social closeness" score for every pair of nodes in its social graph.  They use this social proximity score to rank each person's friends and this ranking is reflected throughout the site — it's used for determining which people's posts show up in Newsfeed, which mutual friends are shown on Profile pages, which people's Likes and social ads are shown in the right rail, etc.

Since the FB social graph changes every day (e.g., as people add new friends), the social proximity score needs to be updated regularly.

The actual scoring algorithm is proprietary to Facebook, but is likely based on traditional social graph distance metrics developed in academia.  E.g., Facebook might take into consideration factors like:

  • # of friends in common that each pair of people shares (e.g., the number of unique 2-edge paths between any two nodes in the graph)
  • the total # of friends that each person has (e.g., if Person A only has 20 Facebook friends and shares all 20 with Person B then A might be considered "closer" to B than, say, to Person C who has 2000 friends and shares 20 mutual friends with A)
  • # of interactions between a pair of people (e.g., if Person A and Person B regularly share mentions/comments/likes with each other, then A might be considered closer to B's friends than, say, to Person C's friends where A and C never comment/like each other's posts)
  • the type of interactions between a pair of people (e.g., if Person A tags Person B in a photo, that might be a stronger signal of closeness than just having some friends in common)
  • pace and recency of interactions between a pair of people (e.g., if Person A has recently checked-in to a location with Person B, that might be a stronger signal of closeness than if they checked-in a year ago and haven't been seen together since)

There are lots and lots of graph structure and interaction signals that can go into a social proximity score!  Note that similar techniques/algorithms are utilized by just about every company that works on social apps and communication products.  E.g., Google utilizes this type of scoring/ranking when they suggest email addresses to include in the "To:" line of a GMail message.  E.g., Zynga utilizes this type of scoring when they suggest friends to invite to a social game.
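To illustrate how a few of those signals might be combined into a single number (this is an assumption-laden sketch, not Facebook's actual scoring; every name, weight, and constant below is made up), here's one way to fold together mutual-friend overlap, friend-count normalization, interaction type, and recency decay:

```python
import math
import time

def social_proximity(a, b, friends, interactions):
    """Illustrative closeness score between two people -- NOT Facebook's actual
    (proprietary) algorithm, just one way to combine the signals listed above.

    friends:      dict mapping each person to a set of their friends
    interactions: list of (person_1, person_2, weight, timestamp) tuples, where
                  weight reflects interaction type (photo tag > comment > like)
    """
    mutual = friends[a] & friends[b]

    # Mutual friends, normalized by how many friends each person has overall:
    # sharing 20 of 20 friends means more than sharing 20 of 2,000.
    denom = math.sqrt(len(friends[a]) * len(friends[b])) or 1.0
    overlap = len(mutual) / denom

    # Interactions between the pair, weighted by type and decayed by recency
    # (an assumed 90-day half-life).
    now = time.time()
    half_life = 90 * 24 * 3600
    recent_activity = sum(
        w * 0.5 ** ((now - ts) / half_life)
        for p1, p2, w, ts in interactions
        if {p1, p2} == {a, b}
    )

    return overlap + recent_activity
```

In practice, a score like this would presumably be computed and refreshed in bulk across the whole graph rather than per-pair on demand, which is why it needs regular updates as people add friends and keep interacting.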

View Answer on Quora


Four Characteristics of Startup Engineers, Product Managers and Designers

Startups are hard.  And hiring great technical staff members may be the most difficult and important challenge in a startup.

So how do we hire the right people? Start with the end in mind — i.e., have a clear definition of who you’re looking for before you begin. Most hiring managers start by trying to break down the technical “do-er” roles in a company (engineering, product, design) into lists of specific knowledge or behavioral requirements.  I don’t think it’s enough to just find people who can check the boxes of “knows Cocoa” or “expert with Photoshop” or “wrote Pivotal Tracker stories”.  Founders and early-stage startup team members need to have an extra dose of character because they’re not just building a product, they’re forming a culture. Here are four characteristics to look for in each technical role that has a direct hand in building a startup.

(Note: This post is a follow up and extension to http://framethink.wordpress.com/2013/02/25/five-behaviors-of-awesome-engineers/ with many thanks to Laura Klein and George Lee for inspiration and feedback.)

Engineers

  1. Craftsman — pride and deep understanding of work
  2. Goalie — defender of the codebase, uses test-suite as a protective shield
  3. MacGyver — builds the minimum viable version quickly and then iterates forward
  4. Coach — mentors others and is coachable

 

Visual Designers

  1. Artist — weaves beautiful, delightful, enticing designs into and throughout a product
  2. Literalist — must get ideas/concepts down on paper or screen-pixels in order to understand and discuss them
  3. Eye of Sauron — notices 1px differences in baselines and kerning
  4. Toolmaker — creates guides, libraries, templates, etc. for other team members to follow

 

UX Designers

  1. Shortstop — covers all the gaps, thinks through corner cases and potential dead-ends
  2. Occam — cuts flows like a razor, down to the most simple path
  3. Empath — guided by deep caring and viscerally feeling the pain of the user problem
  4. WeebleWobble — absorbs input/feedback; bounces back from any knock, converting it into action

 

Product Managers

  1. Guru — customer use-case and domain expert
  2. Mad Scientist — sees the big world-altering vision, plots experiments to probe the path forward
  3. MC — confident and persuasive public presenter
  4. Logician — skillfully works the development process to deliver on-time, on-budget

 


If you just missed becoming a millionaire or even a billionaire, do you regret it?

Answer by Yee Lee:

Of course I regret it; how could you not?!

I was at PayPal pre-IPO and had a chance to work with Steve Chen and Chad Hurley.  Back then, PayPal's product development process was very Product-driven, and as a young PM, I loved that about the place; it was an exhilarating job/experience.

When I was getting ready to leave PayPal in late 2005, I remember having a conversation with Steve about product management positions at YouTube.  He said, "Y'know, we didn't really like the way product managers and engineers interacted at PayPal.  So here at YouTube, we want engineers to lead product decisions.  So we kinda just want PM's to take notes on what the engineers decide and make sure that stuff gets done."

I mulled that over, thought that sounded terrible, shook my head, and told Steve that maybe we should chat again when he was ready to bring on-board a *real* product manager.  I WAS (maybe still am) SUCH AN IDIOT!  I had an opportunity to talk with the founders of YouTube about being one of the first (if not the first) PM and I didn't even engage in the conversation because I had held precisely one (1) product management role before in my life and therefore assumed I had it all figured out…  Sigh.

In hindsight, I think careers in Silicon Valley are so dependent on luck, timing, and who-you-happen-to-know that a lot of us probably have gone through these kinds of near-misses.  In fact, I think that's one of the things that makes the Valley special — the business ecosystem is connected enough here that just about everyone knows someone or has a friend-of-a-friend that experienced major personal career success.  That proximity to success creates a lot of readily available role models and exemplars that drive the aspirations of the whole Valley.  It's not just the top universities, the long tradition of technical innovation, and presence of venture capital or other legal/support functions…   Other countries have tried bringing all those factors together into "innovation hubs" and not been able to replicate Silicon Valley.  It may be because they're missing the social-connectedness and success role models that we have!

I think the regret of a few missed-opportunities is a small price to pay for working in the greatest innovation center on the planet!

View Answer on Quora


What are some examples of really good backlogs?

Answer by Yee Lee:

Remember: the purpose of a backlog story is just to cause the right conversations between team members.  So, it's difficult for an outsider to gauge the "goodness" of a backlog just by looking at the story headlines. 

In some cases, a 1-liner story could be all you need to remind team members of a whiteboard conversation they had.  Usually, though, you'll see a lot of back-and-forth interaction between team members while they groom a story as it moves up the backlog.  E.g., if you follow a backlog over several sprints, you should see:

  1. A story starts off in the Icebox or low down in the Backlog as a 1-liner or a very simple description of the desired user experience
  2. As the story moves up towards the top of the Backlog, you may see mocks/attachments, implementation notes, or sub-tasks get added to the story — indicating that the development team has been grooming the story and thinking about how to implement it
  3. Ideally, the conversation about "how" will feed back into the "what" of the story, and you may notice that stories with broad scopes get broken down into multiple smaller stories that will each get prioritized.
  4. By the time a story gets close to the top of the Backlog, it should meet a few criteria:
  • Granular enough that a single individual can implement it, e.g., no intra-team dependencies for a single story.
  • Developers have discussed/groomed it sufficiently that they understand exactly what needs to be done, e.g., if you ask multiple team members what the story is about, you'll get identical answers.
  • Each development team member has a good enough idea of how they would implement the story that they can give a confident estimate.  Side note on story estimation: development team members typically are pretty good at estimating story points up to the equivalent of a day's worth of work.  Any points estimate that indicates multiple days of work should be taken as a red flag of story uncertainty.  High-performing teams will take the time to do multiple rounds of estimation, breaking down stories until they all fit under a low points-ceiling.
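As a tiny illustration of that points-ceiling idea (the numbers and field names below are assumptions made up for the example, not a convention from Tracker or JIRA), a grooming check might look like this:

```python
POINTS_PER_DAY = 2               # assumed team convention; every team calibrates its own scale
POINTS_CEILING = POINTS_PER_DAY  # anything over roughly a day's work is a red flag

def stories_needing_breakdown(backlog):
    """Return stories whose estimate suggests they should be split further
    before they reach the top of the backlog."""
    return [story for story in backlog if story["points"] > POINTS_CEILING]

# Example backlog (field names are illustrative, not a real tracker schema):
backlog = [
    {"title": "Add 'forgot password' link to login page", "points": 1},
    {"title": "Rebuild the checkout flow", "points": 8},  # red flag: groom and split
]
print(stories_needing_breakdown(backlog))  # -> [{'title': 'Rebuild the checkout flow', 'points': 8}]
```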

Here are a couple public examples of backlogs for on-going, active projects…

Jasmine JS:
https://www.pivotaltracker.com/s…

The agile backlog used to build the Jira Agile app (how meta):
JIRA Agile (formerly GreenHopper)

And if you google around for "public backlog" (in quotes), you'll find other examples.

View Answer on Quora


AI-SMART Objectives

I loved reading about Kapta’s insightful software service that helps organizations align day-to-day actions with strategic objectives and wanted to revisit the topic of SMART objectives that I wrote about back in 2007.

Kapta’s CEO, Alex Raymond, has some great insights about what makes for a good organizational objective.  Specifically, I like his notions of:

  • Ambitious — “The goal should be bold and exciting, something for people to rally around. Not impossible to reach, but still aggressive.”
  • Inclusive — “Everyone in the company needs to understand how they contribute to each goal. Otherwise, employees can lose motivation and clarity.”

Adding those notions to the SMART framework yields “AI-SMART” objectives:

  • Ambitious — aim high
  • Inclusive — everyone can contribute
  • Specific — concrete, actionable
  • Measurable — we’ll know if we hit the mark
  • Attainable — realistic to achieve
  • Results-oriented — produces a meaningful impact
  • Time-bound — by a certain due date

I’m going to start using this framework for assessing goals.  It’s the start of a quarter…   I hope this is helpful for all of you out there setting quarterly objectives and OKR’s!


Android is for Work, iOS is for Play

I’ve been using a Nexus S (now updated to Android Jellybean 4.2.2) and an iPhone 5 (now at iOS 6.1.2) on a daily basis for a couple months now.  Much has been written comparing Android and iOS already, so I’m just adding my anecdotal experience to the wood pile.  I’ve found myself increasingly thinking of the Nexus S as a “work” device and the iPhone as a “play” device.  Here’s why:

Android for Work

  • Google integration — I rely on Google apps like Gmail, Google Calendar, and Google Drive extensively, both personally and for work.  It’s super easy to sign into an Android phone with multiple Google accounts.  Once signed in, all email, calendar events, contacts, and docs associated with your Google accounts immediately become available on the phone.
  • Voice dictation — Google-powered voice dictation is incredible.  I find myself often dictating entire emails now — it’s so much faster than thumb-typing on the phone!
  • Swiping keyboard — the swiping keyboard that’s built into the latest versions of Android is much more accurate (and fun to use!) than the regular touch keypad.  I discovered it accidentally when one of my fingers slipped across the regular keyboard once and haven’t gone back ever since.  Between swiping and voice dictation, the Nexus S has become my preferred mobile device for text-entry while on the go.
  • Google Now — a question I ask myself nearly every morning is: “should I take 101 or 280 to commute to work?”  Google Now proactively figured out my commute routes, and now one of the notifications at the top of the Nexus S each morning is an alert that tells me which is the better route to take that day.  I love that.  It’s magical and useful to have my mobile phone smartly offer assistance.
  • Portable wifi hotspot — the Nexus S’s built-in portable wifi hotspot turns on really quickly and has been a more reliable connection for me than the iPhone’s wifi tethering.  I’ve been using both devices on AT&T’s 4G network, tethered to a Macbook Air — the Nexus S would often provide hotspot connectivity in places where the iPhone would refuse to connect (for some reason, the iPhone refuses to allow tethering while in “4G” mode, as opposed to “LTE”).
  • Aggregated notifications — the way Android aggregates and displays email notifications in the “windowshade” is super useful.  I like being able to take action (like archiving an email) right from the windowshade.

iOS for Play

  • Camera and camera apps — the iPhone 5’s tap-to-shutter lag is barely noticeable, and I think the native iOS camera app does a good job of handling light-metering by tapping on dark/light areas of the viewfinder.  The iPhone camera seems to take better photos and videos, in general, than the Nexus S.  Also, my favorite video and camera apps are all on iOS: Videokits, Facebook Camera, Manga Camera, Looker, Instagram, Snapseed, and Photosynth, etc.  I also really like the “swipe up” gesture to go into camera mode.
  • Shared Photostreams — my parents and in-laws have iOS devices and want to see the latest photos of their grandkids; what better way to share than with iOS’s built-in Shared Photostreams?   I find that I share way more photos this way than I do through social-network sites…  I tend to pick and choose which photos to share on Facebook or Twitter because I don’t want to bombard online friends/followers with dozens of pics taken at, say, a kiddie birthday party.  But the grandparents are more than happy to receive all 50 photos in a Shared Photostream — they “like” every single one!  :-)
  • AirPlay — this is really the killer app of iOS for me: being able to AirPlay any content (whether it’s photos/videos from the camera roll, a YouTube video, a song from iTunes, or a Netflix movie) from the phone to a bigscreen TV.  The seamlessness of the experience is awesome.  I love how the iPhone has become an integral part of the living room experience.
  • Games — there are still just a ton more games and apps available in the iTunes App Store than on Google Play.  Most of the big titles from Rovio, Electronic Arts, or Zynga are available on both.  But many educational and kids games are still only on iTunes App Store.  That makes a critical difference on which device gets shared with kids on the couch.

So, net-net I’m using the Android device more for day-to-day productivity and the iOS device more as a media and gaming device.  In a way, this actually lines up with Steve Jobs’ reported focus on taking over the living room.  Both the iPhone 5 and Nexus S are beautiful devices with well-executed operating systems.  Looks like both Apple and Google are hitting their strides, though I doubt they intended to segment the market for mobile OSes by work vs. play…   Still, it seems to have come out that way for me, personally!

Anyone else “dual-holstering” both Android and iOS devices these days?  What are your thoughts?

