AIM, Messenger, and Skype were supposed to have completely ensnared the global population by now. But instead we have a cacophony of comm apps that seems to grow every year: Viber, Tango, Kik, TextMe, TextPlus, Hangouts, Line, QQ, WeChat, WhatsApp, Snapchat, Secret, Whisper, Hipchat, Yammer, Slack, … I find it remarkable that the list of popular apps just keeps growing.
Remember: comm apps were supposed to create network effects that would keep users coming back to the most popular apps. New entrants into the space were supposed to face an enormous uphill battle to get any users to “defect” from their existing communication networks. But instead we’ve seen a continuous parade of new (mostly mobile but some desktop/webtop) communication apps launch and grow.
Twelve-Dimensional Communications
If you examine each of the communication apps listed above, you’ll notice that the designers of each app made different choices along the dimensions below. Each new permutation leads to a novel-looking/feeling communication experience and creates a new reason to try out a comm app.
- Identity
  - real ID
  - pseudonym / user-chosen handle
  - anonymish (identified by “friend”, “friend of friend” or location)
- Visibility of identity
  - friends-requested/followings by user
  - anyone with user’s contact info
  - anyone within geo-radius
  - ACL set by user
- Message format
  - short text
  - long-form text/prose
  - photo/image
  - voice/audio
  - short video/clip/animation
  - streaming video
- Network formation
  - unilateral add contacts
  - reciprocal add friend & add-back
  - unilateral follow
  - ask to follow
- Matching
  - random 1×1
  - random group
- Message organization
  - threaded conversations
  - (reverse) chronological newsfeed
  - topic / hashtag feeds
  - upvote-ordered feed
  - feed with stickies/pins
- Visibility of messages
  - anyone within geo-radius of sender
  - all friends/contacts of sender
  - all friends of sender and commenters/respondents
  - all friends + friends-of-friends of sender
  - all friends + friends-of-friends of sender and commenters/respondents
  - private group
  - private 1×1
- Editability of messages
  - author only
  - ACL/group of editors
  - any viewer
- Permanence of messages
  - permanent unless deleted by author/editors
  - session-permanence (e.g., until hangup/disconnect)
  - ephemeral w/ self-destruct by time/popularity/response
  - ephemeral unless screenshot/fave/saved/clipped by viewers
  - ephemeral w/ rescue by author
- Response types
  - thumbs up / hearts / likes
  - re-posts / re-shares
- Response permission
  - any viewer can respond
  - subset of viewers (e.g., friends of sender, location radius around sender, time limit, first X respondents, etc.)
- Notifications
  - unilateral push by sender to recipient for any message
  - push by sender after 1-time accept/add/follow recipients
  - push by sender only for specific events (e.g., mentions/direct/tags/nearby)
  - curated push by system/moderator
For example, mapping a few well-known apps onto these dimensions:
- Skype is a pseudonymous text/voice/video comm app based on group and 1×1 conversations with reciprocated connections.
- Facebook Messenger is a real identity text/sticker comm app based on group and 1×1 conversations with reciprocated connections.
- Secret is an anonymish text/photo comm app based on semi-public unilateral broadcast to friends & friends-of-friends with limited response permissions.
There are probably even more dimensions to examine (please leave your thoughts in comments!) — so suffice it to say that communication apps are playing in an (at least) 12-dimensional space that is IMMENSE and has room for tons of innovation. When we think about how large a design space is available for communication apps, the number of popular apps suddenly doesn’t seem so large at all, and it’s actually kind of odd that app developers have limited themselves to a relatively small corner of the available design space (mostly involving text/photo sharing with private groups of reciprocated connections).
There’s room for many hundreds more comm apps, each picking out a different and interesting combination of design decisions along these 12 dimensions.
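To get a feel for just how big this space is, here is a back-of-the-envelope sketch. The per-dimension option counts are rough tallies of the lists above (several dimensions have more options than counted here), so the total is illustrative rather than definitive:

```python
from math import prod

# Rough option counts for the 12 design dimensions listed above
# (approximate tallies; treat the result as an order-of-magnitude estimate)
options_per_dimension = [2, 4, 4, 4, 2, 5, 7, 2, 5, 2, 2, 4]

total = prod(options_per_dimension)
print(f"~{total:,} distinct combinations of design choices")
```

Even with conservative counts, the product runs well past a million combinations, which is why calling messaging “done” seems premature.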
I wouldn’t be surprised if there are many more WhatsApp-sized opportunities waiting to be discovered in communication app space. And as the cost of developing new apps continues to drop, my advice to entrepreneurs who are interested in this space is: try to systematically test and probe as many different combinations of the 12 dimensions as you can!
If you are working on a novel communication app and you’re running into venture capitalists who won’t give you the time of day because they think messaging is “done”, please ping me on LinkedIn (https://linkedin.com/in/yeeguy) or AngelList (https://angel.co/yeeguy) — let’s talk!
I started participating in the Google Glass Explorers program a couple weeks ago and have started to notice some recurring themes/patterns in using a wearable computing device.
1) It’s annoying to people around you if they can’t tell if you’re paying attention to them. Developers of future wearable computing devices should seek to set visible conventions and expectations about when a device is “on” or “off”.
It’s really obvious when the user of a smartphone or PC/laptop is paying attention to the people around them vs. the device. Not so with wearables. Google Glass is actually off most of the time, not displaying anything to the user. But the fact that the device is so visibly present in the user’s line of sight makes people around the user wonder what they’re seeing, and whether the user is actually interacting with the people physically around them or with the virtual interface in the device. I’ve wound up adopting the convention of only putting on Google Glass when I’m intending to actually pay attention to the display or when I’m ready to react to Glass notifications (e.g., when I’m working, walking on my own, etc.). When I’m meeting or talking to other people, I’ll rotate Glass up and out of my view, resting it on top of my head like a pair of sunglasses.
2) Most of the apps for Glass (e.g., Evernote, Path, Wordlens, etc.) are single-feature apps and I think this is a bad idea. App developers should think about long-running, persistent computing sessions instead of “micro” sessions.
I remember listening to a speech by Phil Libin, CEO of Evernote. He said that his team had been experimenting with Google Glass and other wearable devices and that they had come to the conclusion: “The key difference with wearable vs prior computing devices is session length. You have 1.5 seconds to deliver an experience.” I think that’s a profound statement, but I also think that it’d be a huge mistake for developers to take a big app like Evernote and try to break it up into a collection of individual applets that each have their own unique voice command in the Google Glass navigation menu. Doing so would turn wearable computing into a sort of super-twitchy ADD-riddled cousin of the smartphone experience — where users cycle between a million little applets, one at a time, for 1.5 seconds each. That’d be a looming usability disaster… That mode of app interaction may work for smartphones — because the device is off, stowed in a pocket most of the time, and only comes out for brief bursts of activity.
Wearable computing is fundamentally different from smartphone computing. Once I decide to put Google Glass on, it’s going to stay on my face for a while. I’m not going to keep putting it on and taking it off as rapidly as I might take out my phone and return it to a pocket. Given that, I’d like to see wearable OS and wearable app developers move towards a paradigm of really long-running computing sessions. I’d like to see Glass constantly record everything and offer all that data to many apps in the background. Then, after the apps have processed all that data in smart ways, they’d present back to me the best of what they found. So I just wear Glass and every once in a while I get a notification — “Hey, looks like you were just in a cool place, Glass saved 5 photos for you. Want to keep them? [Nod to keep, swipe to review them now, or do nothing to discard in 5 seconds]”. Or “Sounds like you were just having a conversation with two coworkers, here’s a transcription of the meeting notes. Want to save them for editing later?” Each of those notifications could drive a very brief interaction, but the smarts of the interaction are occurring in a persistent, long-running computing session.
3) When I’m wearing Glass, it becomes the perfect place for realtime notifications that I can take action on. I want Glass to tell me that I should take 280 in order to avoid an accident on 101, or that the milk is on aisle 7, or that rain is expected today when I’m standing at the front door in the morning.
A wearable device should know everything about my micro-environment — my location, direction I’m facing, velocity of movement, ambient sounds around me, recognizable visual landmarks in front of me, etc. And if wearable computing has been happening in a long-running persistent session (#2, above), then the wearable OS/apps should also understand the bigger picture of where the current moment fits into my overall day/week/month/year patterns of my life.
I’m saying all this with just a couple weeks of experience walking around with Google Glass, so please read this all as just initial, early feedback. So far, if I had to reconstruct a “better” Google Glass experience it would look something like this:
- Device is “on”, aware and recording everything around me while I’m wearing it (and a little flashing red indicator light on the front tells the world that’s what’s going on)
- Lifting up the device out of my line of sight but still keeping it on my head like a pair of sunglasses would turn off video/photo recording but still keep listening to audio signals around me (and a little flashing blue indicator light on the side, near my ear, tells the world that’s what’s going on)
- Taking the device off my head altogether should turn it off (sleep, silent, muted, not recording/transmitting)
- While device is on, all the data streams are passed up to the cloud for processing by apps/services
- Apps are long-running, simultaneous, parallel background services that sift through all the data streamed up from the device and generate smart notifications that I can act on quickly or save for later viewing/editing on a smartphone/tablet/laptop.
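The bullets above sketch an architecture that could be expressed roughly like this. Everything here is hypothetical and invented for illustration — the event types, service names, and notification shape are not a real Glass API:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class SensorEvent:
    kind: str      # e.g. "photo", "audio", "location"
    payload: str

@dataclass
class Notification:
    text: str
    actions: tuple  # quick actions the wearer can take

def photo_curator(events: Iterable[SensorEvent]) -> List[Notification]:
    """Background service: batch captured photos into one actionable notification."""
    photos = [e for e in events if e.kind == "photo"]
    if not photos:
        return []
    return [Notification(
        text=f"Glass saved {len(photos)} photos for you. Want to keep them?",
        actions=("keep", "review", "discard"),
    )]

def run_session(events: List[SensorEvent],
                services: List[Callable]) -> List[Notification]:
    """One long-running session: fan the raw event stream out to every
    background service and collect the few notifications they surface."""
    notifications: List[Notification] = []
    for service in services:
        notifications.extend(service(events))
    return notifications

# A short slice of the (hypothetical) always-on data stream
stream = [SensorEvent("photo", "img_001"),
          SensorEvent("audio", "clip_017"),
          SensorEvent("photo", "img_002")]

for n in run_session(stream, [photo_curator]):
    print(n.text)  # → Glass saved 2 photos for you. Want to keep them?
```

The key design point is that the services run continuously over the whole stream, while the wearer only ever sees the brief, curated notifications at the end.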
Answer by Yee Lee:
Facebook calculates a "social closeness" score for every pair of nodes in its social graph. They use this social proximity score to rank each person's friends and this ranking is reflected throughout the site — it's used for determining which people's posts show up in Newsfeed, which mutual friends are shown on Profile pages, which people's Likes and social ads are shown in the right rail, etc.
Since the FB social graph changes every day (e.g., as people add new friends), the social proximity score needs to be updated regularly.
The actual scoring algorithm itself is proprietary to Facebook, but is likely based on traditional social graph distance metrics developed in academia. E.g., Facebook might take into consideration factors like:
- # of friends in common that each pair of people shares (e.g., the number of unique 2-edge paths between any two nodes in the graph)
- the total # of friends that each person has (e.g., if Person A only has 20 Facebook friends and shares all 20 with Person B then A might be considered "closer" to B than, say, to Person C who has 2000 friends and shares 20 mutual friends with A)
- # of interactions between a pair of people (e.g., if Person A and Person B regularly share mentions/comments/likes with each other, then A might be considered closer to B's friends than, say, to Person C's friends where A and C never comment/like each other's posts)
- the type of interactions between a pair of people (e.g., if Person A tags Person B in a photo, that might be a stronger signal of closeness than just having some friends in common)
- pace and recency of interactions between a pair of people (e.g., if Person A has recently checked-in to a location with Person B, that might be a stronger signal of closeness than if they checked-in a year ago and haven't been seen together since)
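As a toy illustration, the factors above might combine into a score like the sketch below. The graph data, interaction counts, and weights are all invented; Facebook's actual algorithm is proprietary:

```python
# Toy social-proximity score built from the factors above.
# The friend graph, interaction counts, and weights are made-up illustrations.

friends = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

# recent interaction counts (likes/comments/tags) between pairs
interactions = {frozenset({"A", "B"}): 12, frozenset({"A", "C"}): 2}

def proximity(a: str, b: str) -> float:
    # mutual friends, normalized by both people's total friend counts
    # (so 20 shared friends out of 20 counts for more than 20 out of 2000)
    overlap = len(friends[a] & friends[b]) / len(friends[a] | friends[b])
    # weight in direct interactions between the pair
    inter = interactions.get(frozenset({a, b}), 0)
    return overlap + 0.1 * inter  # arbitrary weighting

print(f"A-B proximity: {proximity('A', 'B'):.2f}")  # 0.25 + 1.2 = 1.45
```

A real system would also weight interaction *types* and decay scores by recency, per the last two factors above, and would need to recompute incrementally as the graph changes.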
There are lots and lots of graph-structure and interaction signals that can go into a social proximity score! Note that similar techniques/algorithms are utilized by just about every company that works on social apps and communication products. For example, Google utilizes this type of scoring/ranking when suggesting email addresses to include in the "To:" line of a Gmail message, and Zynga utilizes it when suggesting friends to invite to a social game.
Startups are hard. And hiring great technical staff members may be the most difficult and important challenge in a startup.
So how do we hire the right people? Start with the end in mind — i.e., have a clear definition of who you’re looking for before you begin. Most hiring managers start by trying to break down the technical “do-er” roles in a company (engineering, product, design) into lists of specific knowledge or behavioral requirements. I don’t think it’s enough to just find people who can check the boxes of “knows Cocoa” or “expert with Photoshop” or “wrote Pivotal Tracker stories”. Founders and early-stage startup team members need an extra dose of character because they’re not just building a product, they’re forming a culture. Here are four characteristics to look for in each technical role that has a direct hand in building a startup.
(Note: This post is a follow up and extension to https://framethink.wordpress.com/2013/02/25/five-behaviors-of-awesome-engineers/ with many thanks to Laura Klein and George Lee for inspiration and feedback.)
- Craftsman — pride and deep understanding of work
- Goalie — defender of the codebase, uses test-suite as a protective shield
- Macgyver — builds the minimum viable version quickly and then iterates forward
- Coach — mentors others and is coachable
- Artist — weaves beautiful, delightful, enticing designs into and throughout a product
- Literalist — must get ideas/concepts down on paper or screen-pixels in order to understand and discuss them
- Eye of Sauron — notices 1px differences in baselines and kerning
- Toolmaker — creates guides, libraries, templates, etc. for other team members to follow
- Shortstop — covers all the gaps, thinks through corner cases and potential dead-ends
- Occam — cuts flows like a razor, down to the simplest path
- Empath — guided by deep caring and viscerally feeling the pain of the user problem
- WeebleWobble — absorbs input/feedback; bounces back from any knock, converting it into action
- Guru — customer use-case and domain expert
- Mad Scientist — sees the big world-altering vision, plots experiments to probe the path forward
- MC — confident and persuasive public presenter
- Logician — skillfully works the development process to deliver on-time, on-budget
Answer by Yee Lee:
Of course I regret it, how could you not?!
I was at PayPal pre-IPO and had a chance to work with several of the folks who went on to found YouTube. Back then, PayPal's product development process was very Product-driven, and as a young PM, I loved that about the place; it was an exhilarating job/experience.
When I was getting ready to leave PayPal in late 2005, I remember having a conversation with Steve about product management positions at YouTube. He said, "Y'know, we didn't really like the way product managers and engineers interacted at PayPal. So here at YouTube, we want engineers to lead product decisions. So we kinda just want PM's to take notes on what the engineers decide and make sure that stuff gets done."
I mulled that over, thought that sounded terrible, shook my head, and told Steve that maybe we should chat again when he was ready to bring on-board a *real* product manager. I WAS (maybe still am) SUCH AN IDIOT! I had an opportunity to talk with the founders of YouTube about being one of the first (if not the first) PM and I didn't even engage in the conversation because I had held precisely one (1) product management role before in my life and therefore assumed I had it all figured out… Sigh.
In hindsight, I think careers in Silicon Valley are so dependent on luck, timing, and who-you-happen-to-know that a lot of us probably have gone through these kinds of near-misses. In fact, I think that's one of the things that makes the Valley special — the business ecosystem is connected enough here that just about everyone knows someone or has a friend-of-a-friend that experienced major personal career success. That proximity to success creates a lot of readily available role models and exemplars that drive the aspirations of the whole Valley. It's not just the top universities, the long tradition of technical innovation, and presence of venture capital or other legal/support functions… Other countries have tried bringing all those factors together into "innovation hubs" and not been able to replicate Silicon Valley. It may be because they're missing the social-connectedness and success role models that we have!
I think the regret of a few missed-opportunities is a small price to pay for working in the greatest innovation center on the planet!
Answer by Yee Lee:
Remember: the purpose of a backlog story is just to cause the right conversations between team members. So, it's difficult for an outsider to gauge the "goodness" of a backlog just by looking at the story headlines.
In some cases, a 1-liner story could be all you need to remind team members of a whiteboard conversation they had. Usually, though, you'll see a lot of back and forth interaction between team members while they groom a story as it moves up the backlog. E.g., if you follow a backlog over several sprints, you should see:
- A story starts off in the Icebox or low down in the Backlog as a 1-liner or a very simple description of the desired user experience
- As the story moves up towards the top of the Backlog, you may see mocks/attachments, implementation notes, or sub-tasks get added to the story — indicating that the development team has been grooming the story and thinking about how to implement it
- Ideally, the conversation about "how" will feed back into the "what" of the story, and you may notice that stories with broad scopes get broken down into multiple smaller stories that will each get prioritized.
- By the time a story gets close to the top of the Backlog, it should meet a few criteria:
- Granular enough that a single individual can implement it, e.g., there are no intra-team dependencies for a single story.
- Developers have discussed/groomed sufficiently so that they understand exactly what needs to be done, e.g., if you ask multiple team members what the story is about, you'll get identical answers
- Each development team member has a good enough idea of how they would implement the story that they can give a confident estimate. Side note on story estimation: development team members typically are pretty good at estimating story points up to the equivalent of a day's worth of work. Any points estimate that indicates multiple days of work should be taken as a red flag of story uncertainty. High-performing teams will take the time to do multiple rounds of estimation, breaking down stories until they all fit under a low points-ceiling.
Here are a couple of public examples of backlogs for ongoing, active projects…
The agile backlog used to build the Jira Agile app (how meta):
And if you google around for "public backlog" (in quotes), you'll find other examples.
I loved reading about Kapta’s insightful software service that helps organizations align day-to-day actions with strategic objectives and wanted to revisit the topic of SMART objectives that I wrote about back in 2007.
Kapta’s CEO, Alex Raymond, has some great insights about what makes for a good organizational objective. Specifically, I like his notions of:
- Ambitious — “The goal should be bold and exciting, something for people to rally around. Not impossible to reach, but still aggressive.”
- Inclusive — “Everyone in the company needs to understand how they contribute to each goal. Otherwise, employees can lose motivation and clarity.”
Adding those notions to the SMART framework, one would obtain “AI SMART” objectives:
- Ambitious — aim high
- Inclusive — everyone can contribute
- Specific — concrete, actionable
- Measurable — we’ll know if we hit the mark
- Attainable — realistic to achieve
- Results-oriented — produces a meaningful impact
- Time-bound — by a certain due date
I’m going to start using this framework for assessing goals. It’s the start of a quarter… I hope this is helpful for all of you out there setting quarterly objectives and OKRs!
I recently joined TaskRabbit as VP Engineering and the experience has reinforced lessons learned from prior startups about the importance of having a systematic way to recruit, retain and promote great engineers. One way to help recruit, retain, and promote more effectively is to identify and define a consistent set of behaviors that the company expects of engineers. I’m constantly amazed at how many companies fail to do this for their technical staff.
When it comes to defining explicit objectives or desired actions, it seems like many companies take the time to specify those kinds of expectations for marketing, sales, or product management team members. But when I ask engineers at other companies how they know if they’re doing a good job, I often hear answers like: “we’re generally supposed to kick ass and ship code” or “we just do what PM’s tell us to do.” IMHO, having a group of prodigious coders or friendly team-players on the engineering staff is a good start, but not sufficient. And leaving engineers with ambiguous or undefined company expectations borders on neglect!
I’d like to push more companies into defining clear, specific expectations for how they’ll assess engineers at all points in their career life-cycle — from being pitched as candidates, to being on-boarded as new hires, to getting rewarded as high performers (or being counseled out). Ideally, the way a hiring manager describes an engineering role to a recruit should match the role expectations that the company places on that engineer once they’re on board. And the way promotions happen should further reinforce those behavioral expectations.
To give an example, here are the five behaviors that we expect all TaskRabbit Engineers to exhibit:
- Culture Fit — TaskRabbit is driven by a very lofty vision and company values. We take those really seriously, and folks don’t tend to stick around at the company if they don’t visibly buy into the vision and values, even if they’re brilliant at their particular functional role.
- Craftsman — We want engineers to really understand the underlying mechanisms that their code relies on. The majority of TaskRabbit’s codebase is in the form of Ruby-on-Rails apps. Mature Rails developers will appreciate just how easy it is to get yourself into hot water by including gems without really understanding what dependencies those gems are creating under the hood… Whenever we can’t find a gem that does exactly what we want, in the way we want, we take pride in developing our own gems and contributing them back to the Rails community, e.g., makara, storehouse, sudo.js, and more.
- Goalie — We don’t have a QA team, so our Engineers are the first and only line of defense against bugs. We expect engineers to write their own feature tests and bugfixes, to deploy code only after integration and acceptance testing, and generally to think a lot about how code might break. That kind of goalie defense against bugs and regressions is an important part of an engineer’s job at TaskRabbit.
- MacGyver — Like the TV show character, given a tight set of timelines and resource constraints, we expect engineers to be able to successfully identify the minimum-viable development investment that will make a meaningful impact for users. This is really another way to say: we solve the classic Time-Quality-Features tradeoff by de-scoping/decomposing features. We explicitly are unwilling to compromise on time or quality (i.e., we deliver on a pre-set weekly sprint cycle and we hold the quality bar really high for ourselves). So, within a given sprint, we always focus on paring down to the smallest possible unit of customer value.
- Coach — TaskRabbit’s initial web and iOS apps came out of consulting engagements with Pivotal Labs, where we did a lot of pairing. Coaching and mentoring each other has become an important part of our engineering practices and we really value people who are willing to actively teach their colleagues and learn from each other through 1-on-1 pairing as well as All-Hands meetings. We specifically like to see engineers standing up in front of a crowd, talking about and demo’ing their work.
Every new recruit hears about these five expectations so they can make an educated decision about what they’re getting into at TaskRabbit. Our engineering team members talk to each other about their jobs using these terms. And these five behaviors are baked into our self-assessment and peer-reviews.
By enumerating these five desired behaviors and weaving them all throughout TaskRabbit’s engineering practices, I’m trying to create a more consistent and reliable way to recruit, retain, and promote an A+ team.
Feel free to borrow these if they’re helpful… YMMV, of course, with transplanting these expectations into your company’s particular culture. But I think the most fundamental point is to put some thought into defining a consistent set of behavioral expectations for your technical staff.
Thoughts? Comments? Let me know!
I like the “linkable assets model” in this SEJ post: http://www.searchenginejournal.com/effective-link-building-using-event-blogging/34104/
It’s a nice, structured way to think about how to get inbound links for your event blog (if you’re an event blogger and care about that kind of thing) 🙂
Three UGC UX principles that I’m working towards in my products:
1) Fast — sub-200 millisecond response time to any user input
2) Assistive — give users something to react to; rather than forcing them to generate their own novel content
3) Learning — system improves with every user click or action
What are your top three?