Last night’s BayCHI event was a good experience. A panel of champions of user research (Director, Manager, Lead, etc.) at key Silicon Valley companies (Google, Yahoo, Adobe, Intuit, and eBay) attracted a large and energetic crowd. The pre-meeting dinner was extremely well-attended. It felt like the discipline was having a good moment in the zeitgeist.
Each panelist gave a 10-minute summary of what’s going on at their firm: what types of methods they’re using, how they feed design and strategy, how they interface with market research and other areas of the business, and where they’ve been historically and what may have changed in how they’re embraced (or not).
Rashmi Sinha moderated, and spent only a bit of time asking her own follow-up questions after the panelists finished; then it was basically an hour (?) of audience questions. Although I stood up and asked a question myself, I would rather have had more questions from her and fewer from the audience (that’s my bias, I guess, as someone who’s played that role in the past); audience questions aren’t usually about creating conversation, and the moderator is obviously in a role to do exactly that. By the time we finished, people were asking how to recruit participants for studies: a tactical question that had no place in this meeting. If we were there to discuss the practice and how it integrates into corporate America, then let’s not deal with newbie process questions. I’m not minimizing the importance of that question to the person who asked it, but it wasn’t on topic and kinda brought things down for me.
People mumbled afterwards about wanting to see some conflict between the panelists, who represented competing firms (though maybe didn’t see themselves individually as competitors) and who sometimes expressed different points of view on how to use the tools of user research. It’s hard for me to be specific from memory, but there were several examples from Google about how user research wasn’t always necessary; they were strawman examples, though, along the lines of “should we have told people not to build a search interface until they had done years of research?”, when I don’t think anyone was advocating years of research, more so the opposite. It wasn’t in the panelists’ charge to debate what they heard from the others; they were there to tell their own stories, and they all did that very well.
Perhaps the comments about conflict are proxies for my desire for more conversation, something that (as user researchers know) takes good questions, and frankly, audience members just aren’t going to ask good questions. This sounds terribly snobby, so let me clarify: there are questions that are informational (what type of deliverables do you use? how do you recruit?) and there are questions that provoke conversation and interaction.
There’s another panel phenomenon at work here, “question drift”: whoever answers the question first is the most on target; as other answers come from the panelists, we end up hearing about an entirely different question, and we’ve lost the thread. I don’t have a solution to this. Sometimes the drift is interesting, but often it’s just a bit frustrating.
So it was hard to take much specific away from the evening: there was a lot of info, a lot of bits of perspective and insight and jargon thrown out quickly, with something new always on its heels, so it felt like an immersion more than an education.
But here’s what I took away:
- this is a mature field; you could see the newer practices (Google’s) presented as adolescent next to their more wizened counterparts
- I’ve lived as a consultant for a very long time; there’s a whole set of challenges and benefits that these corporate folks have that are almost alien to me. I felt very aware of how I can deal with some of their situations so much more easily, and of how many more formalized and permanent processes are being created that I’m not at all engaged with
- it’s just tradeoffs and contrasts; one isn’t better than the other, and we need both in-house and out-of-house practitioners
- it’s not clear to me how research is different from design (and I don’t mean Research and Design): this was my question, and I don’t know that I got an answer beyond a fallback to corporate structures and formalized processes (one person pointed out that designers and researchers have very different skill sets, but that wasn’t my question: if this is collaborative work, can’t teams of people with complementary skills deliver ONE thing, “design”, rather than breaking it down so much?)
- this is a bit of a hot topic among software/tech/design types right now
- MORE I REMEMBERED: Christian Rohrer from eBay defined the success criterion for user research as impact (which I really liked), and he defined impact as:
1. credibility
2. consumability
3. relevance