37. Nizar Saqqar of Snowflake

This episode of Dollars to Donuts features my interview with Nizar Saqqar, the Head of User Research at Snowflake.

For a domain that takes a lot of pride in empathy and how we can represent the end user, there’s a component that sometimes gets overshadowed, which is empathy with cross-functional partners. With every domain, product, design, research, there’s people that are better at their job than others. I do believe that everybody comes from a good place. Everybody’s trying to do their best work. And if we have some empathy for what their constraints are, what they’re going through, what their success criteria is, how they’re being measured and what pressures they’re under, it makes it much, much easier for them to want to seek the help of a researcher to say, “Help me get out of this. Let’s work together and let me use research for those goals that are shared.” – Nizar Saqqar

Show Links

Help other people find Dollars to Donuts by leaving a review on Apple Podcasts.

Transcript

Steve Portigal: Welcome to Dollars to Donuts, the podcast where I talk with the people who lead user research in their organization. Today’s guest, Nizar Saqqar, actually brings this up in our conversation, but I’m going to remind you myself that there is a new edition of my classic book, Interviewing Users. It’s now available 10 years after the first edition came out. Of course I’m biased, but I highly recommend it. And hey, it makes a great gift for the researcher or research-adjacent person in your life, or persons. If you haven’t got the second edition yet, you can use the offer code “donuts” that’s D-O-N-U-T-S for a limited time to get a 10% discount from Rosenfeld Media.

But now, let’s get to my conversation with Nizar Saqqar. He’s the head of user research at Snowflake. Well, Nizar, thank you for coming on the podcast. It’s great to get to chat with you.

Nizar Saqqar: Thank you for having me. I’m really excited for it.

Steve: Let’s start with an introduction from you. You want to say a little bit about your role, your context, anything to kind of get us rolling? And we’ll go from there.

Nizar: Absolutely. I’m Nizar. I lead user research at Snowflake. I’ve been here for about three years. It’s been a pretty exciting adventure. When I started, I was the first researcher at a company that had been around for 10 years, really doubling down on showcasing the impact of research and why we need to scale. And we’ve been scaling nonstop in today’s environment, which has been a pretty exciting challenge. It comes with the fun, but it comes with the challenges as well, and I think more to come. To take a step back and try to simplify it as much as I can–

Steve: What kind of company is Snowflake?

Nizar: Snowflake enables organizations to store huge amounts of data from many sources in one place. So it empowers organizations to make the most out of that data. And as we’ve scaled the company, we’re continuing to push the envelope on platform-level offerings that try to enable native app developers to do the development of data applications.

Steve: What are some examples or vertical scenarios that we might know about?

Nizar: So the simplification of it is: get your data in one place, make the most out of it. Snowflake will help companies do that as efficiently as possible with as many use cases as we can. It’s definitely not in the day-to-day consumer conversation.

Steve: What are some examples of what Snowflake is?

Nizar: It’s not a B2C product, but at the core of it, it really starts with data warehousing: the data engineer brings in all of the data from many different places, many different sources, into one place for storage, and then makes it usable for other users, maybe the data analyst or the data scientist who makes something happen out of it as an outcome. And as I mentioned earlier, with the native app development framework we’re building, it’s been exciting to see all of the, think of them as, more classical software developers who are now coming into our ecosystem to get closer to the data. So it’s a pretty complex ecosystem.

We also have a marketplace that then kind of introduces the dynamic of a provider and a consumer and the business decision makers who are coming in for that transaction. So it’s a pretty intense ecosystem that magically all connects into just making the most out of your data.

Steve: So when you came in as a researcher, what did you observe about how this company was thinking about its users or thinking about what it knew or didn’t know? Do you remember that early process?

Nizar: Yeah, and to be honest, that process started before I even started. Even in the interview process, I really wanted to be sure that the company is thinking a lot about their users. They’re thinking a lot about how research can integrate. They’re thinking about challenging some of maybe the perceptions of what research can bring to the table and just having some of these tough conversations even before. And I will say that where we are definitely lucky is that Snowflake holds doing what we can to make the product as great as possible for our users as a core value of the company.

The interesting thing with that is that it brings in a lot of data points from a lot of places. Now you have a lot of user perspectives from the sales team and directly from the product team. Then you have all of the metrics and dashboards that you’re following. So you actually get a lot of data, a lot of points, which might actually make it harder for the product teams to act on or prioritize.

So as I started, I kind of wanted to first take a moment to better understand the domain, really find my footing, know what’s going on, build the right relationships, and start with something that’s very low-hanging fruit, saying, “Hey, let me just build credibility. Let me just come in and say I can add value very quickly and then scale that up.” It’s been interesting to see how the role has continued to evolve since that day one. It really started off with, “We’ve been here for 10 years, and now this guy is here.” And it’s really evolved into: user research is just a critical component of how we think about product development.

But it’s taken many phases that we’ve had to adapt as we continue to go, starting off with the very tactical, then zooming out into something that perhaps is more strategic, then shifting focus into our hiring strategy and our hiring rubrics and how we interview, going all the way into what we define as success criteria, performance evaluation and how we integrate research into the overall product process. And it just doesn’t stop. So the role itself has been changing over the past three years and I perceive it to continue to do that.

Steve: Can you give an example of a tactical, sort of quick win that you would approach kind of coming in, in those early days?

Nizar: Yeah, absolutely. And I think for me it really starts with: what is something that is tactical enough, close enough to wanting to launch, where there’s enough resourcing, but there’s some level of disagreement in the organization about how to proceed? And seeing if there’s an appetite to make research be a tiebreaker of sorts, or really find the right balance between the two. I found that it’s very rare that option A or option B simply wins; there are always components of each that resonate, and when you bring them together, you find something that really resonates in the actual flows.

And if I remember correctly, one of my very, very, very early contributions was kind of around something as simple as a concept evaluation. And I think those are the methods that are just going back to the basics and some people take for granted, but you’re just coming in and you’re saying, “Let’s just test it and see what’s resonating and what’s not.” And coming up with some actionable steps that align multiple teams that might have dependencies on each other to find the solution that may not make everyone happy, but at least everyone is aligned that, “Okay, this seems to align on a path forward from a user lens.” So then it just continues to evolve.

I was talking recently about how coming off the bat, just seeing that there’s an overwhelmed product manager who says, “Hey, I have 20 features that I’m asked to ship.” And my role there was to come in and say, “Let me help you do a MaxDiff survey to just make a case for some things that you should actually deprioritize so you can make progress towards some of the top features that you want to run through.” And I think that was part of the evolution of, “Okay, we could use research for many different use cases and in different areas where we could integrate with the product roadmap.”

Steve: So I think it’s super interesting that you’re using the interview process, I guess, to understand the context that you’d be coming into.

Yeah, I’m wondering, and I don’t know, I’m going to ask it like a binary, obviously it’s not, but, you know, how much of a mandate were you given versus how much you were trying to figure out what the needs were and, you know, make recommendations appropriately? That’s a terribly leading question.

Nizar: It’s not, though. It’s kind of interesting, because there’s– when someone opens a position, when somebody asks for a headcount, for the most part, they have an idea of what they’re looking for. They have an idea of what they think the success criteria is. In my case, I was hired by a head of design who had an idea that I could help elevate the design team. That was the primary premise. And in the interview process, that comes out, and it’s a really exciting thing. A head of design is really excited to be like, “Now I finally get to support design with research.” I started digging into the appetite of, “But how do we expand beyond the design?” If we’re to look at the pie and say, “Instead of making that piece more efficient, how do we just make the pie bigger? How do we get the design team holistically more involved and have that impact from earlier stages as well?”

So the pitch was: look beyond the design research component, and don’t worry about it. I’ll make sure you have a good story for how your organization is growing in terms of impact beyond the pixels. And I think that was a really good back and forth that early on showcased that there’s a lot of appetite for, “Hey, if you can define something outside of what I have in mind that you perceive could be even more value-adding for the organization, that’s what we’re optimizing for.” And I think that was a good start of saying, “Okay, I won’t be in a situation where somebody comes in and says, ‘This is what you need to do.'”

I do hear a lot of stories of, “The researcher comes in and all you can do is one-week sprint and stuff, just do a study every week, and it’s non-negotiable.” And it was pretty important to gauge that, “Hey, can we just align on value for users and value for the company and value for the team as the criteria, and let me do what I need to do without a specific framework of how I should be operating within those objectives?”

Steve: I love the phrase impact beyond the pixels. That’s a pull quote, or that should be the title of your next talk. So that sets you up then to find that overwhelmed PM. And if I understand it correctly, you’re kind of saying to them, hey, this is the situation you’re in, and here’s an approach that would help you. That’s kind of where my mandate versus discovery question comes from. It sounds like you are finding opportunities, finding places to have impact, where that PM is not going to ask you, hey, can you do a MaxDiff survey? You’re coming in, seeing the situation, and saying, yeah, here’s a way that research can unblock you.

Nizar: This was a fascinating story altogether, and there’s some more context behind it, which is kind of funny when you look at it. That PM was super excited before I started, messaged me before I started, started telling everybody that me joining was going to be a game changer, was the friendliest PM I’d ever met when I started. And then back then, my manager was saying, “Hey, we think this is the most ambiguous thing. We need to redefine a roadmap. You need to put a really found– I think this is a really foundational research problem.” So when I went to the PM and told him, “Hey, we can work together on this, and I’m actually excited to team up,” he actually said, “No, I’m not interested.” And I told him, “Let’s take a step back and let’s speak about why you’re not interested, what’s– just what’s on your mind. Let’s not talk about the research. Let’s talk about the problem you’re solving.” And his take was, “From my experience with research in the past, a lot of the time it takes a lot of attention to keep up with all that’s happening, be part of the interviews, and then you come back with a lot of insights that I frankly don’t have resources to do anything with.”

So if you come to me and you say, “Here’s 10 things you need to build,” I’m just going to put them at the end of the JIRA board as items 21 through 30, and I’m not going to get to them. So the key learning for me back then was: okay, everybody perceives my role and how I can solve this problem very differently, and I really need to set some shared language and shared expectations of why I’m here. So that’s when I said, “Hey, how about we do this? I’m just going to go into your JIRA board. I’m just going to steal the things that you have there. I don’t need you to be involved, and let’s make a case for why you don’t need to pursue all of these features at once. Let me do the heavy lifting. We’ll team up on it.” And then going back to my manager and saying, “I don’t think I need a multi-month, huge effort to start. Let’s just help him get out of the weeds for a bit and align the expectations over what we can do with the research.”

And in this case, it was a core example of research really used to de-scope, to de-prioritize, to say that not everything is equally important. At a high level, when you take one-off stories, they all come up as high needs, but are they all at the same level of importance when you look at our user needs and the business value that they bring? And that’s essentially what came out of that MaxDiff: a lot of these are way below when you compare them to what’s really bubbling up to the top. And how do I make the case in a way that doesn’t deny that everything is there for a reason? Let’s make a case that with the limited engineering resources that we have, we can drive the most value if we really focus most of them on those very specific areas and get those to a place where our end users are really happy with the experience that we’re offering. And that was a different mindset, a different principle for that PM: I didn’t know that we could do that. I didn’t know we could do research that helps me tell a story to executives of why I shouldn’t be doing work, or why I should say no to some of the work that’s coming up.
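For readers unfamiliar with the method: in a MaxDiff (best-worst scaling) survey, respondents repeatedly see small subsets of items and pick the best and worst of each subset. In the simplest count-based analysis, an item’s score is the number of times it was picked best, minus the times it was picked worst, divided by the times it was shown. A minimal sketch in Python, with hypothetical feature names (real studies use balanced experimental designs and many respondents; more rigorous analyses fit a choice model instead of raw counts):

```python
from collections import defaultdict

def maxdiff_scores(responses):
    """Count-based MaxDiff scoring.

    Each response is one task: the items 'shown', plus the one picked
    as 'best' and the one picked as 'worst'.
    Score = (times best - times worst) / times shown, in [-1, 1].
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for r in responses:
        for item in r["shown"]:
            shown[item] += 1
        best[r["best"]] += 1
        worst[r["worst"]] += 1
    return {
        item: (best[item] - worst[item]) / shown[item]
        for item in shown
    }

# Hypothetical feature names, purely for illustration
responses = [
    {"shown": ["export", "sso", "dark mode", "audit log"],
     "best": "sso", "worst": "dark mode"},
    {"shown": ["export", "sso", "dark mode", "audit log"],
     "best": "audit log", "worst": "dark mode"},
]
ranked = sorted(maxdiff_scores(responses).items(),
                key=lambda kv: kv[1], reverse=True)
```

With this toy data, sso and audit log score 0.5, export 0.0, and dark mode −1.0, which is exactly the kind of spread that lets a PM argue for deprioritizing the bottom of the list.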

And then that led to me really wanting to define some of the language used around why research exists and why research is at that company. The wording that I tend to use, which may not apply for everyone, is around driving the allocation of limited resources into the most impactful efforts for our users and organization. And if we can have that be a shared-language mandate for what research is optimizing for, it takes away some of the misconceptions here and there. From there come some of the tactical things, like changing the title from UX researcher to user researcher, or changing some of the way that we present decks or reports or internal documentation. So there are some tactical things that come with it, but at the core of it, it’s really linking research to the intersection of user value and organizational value.

Steve: So, to get somebody unstuck. When we get so overwhelmed, we can’t even see our way out of something: I don’t have time to do your solution. I’m just, you know, I’m treading water here. So I love that aspect of the story, that you found an approach that also is about limited resources and that took that person where they were at. And I don’t hear you complaining about a stakeholder who wouldn’t commit to the project; you found an approach. You had a 10,000-foot view and could kind of see how you could add value. And still, I think you were fairly new to the organization at that point. Is that right?

Nizar: I almost had no idea what was going on. I needed to rely on them to make sure that the items in my MaxDiff actually made sense. As I was mentioning earlier, I come from a B2C company. I come from a place where that ecosystem was extremely new to me. So of course there was some collaboration there, but I tried to keep it as lightweight as possible, making sure that I had the right pieces but without overwhelming them.

And I love what you’re saying. I love the way you’re describing it. For a domain that takes a lot of pride in empathy and how we can represent the end user, there’s a component that sometimes gets overshadowed, which is empathy with cross-functional partners. With every domain, product, design, research, there’s people that are better at their job than others. Sure, for the most part, I do believe that everybody comes from a good place. Everybody’s trying to do their best work. And if we have some empathy for what their constraints are, what they’re going through, what their success criteria is, to be honest, how they’re being measured and what pressures they’re under, it makes it much, much easier for them to want to seek the help of a researcher to say, “Help me get out of this. Let’s work together and let me use research for those goals that are shared.” And at the end of the day, it is still user-driven. It’s still based on the data that we’re getting, and we’re able to drive direction. Finding ways to go with the flow while still having a strong perspective on what’s best for the users, rather than feeling that the role of research is always to be on the opposing end of cross-functional partners, could be a really powerful tool. And in these cases, all it leads to is that intersection of product impact and user impact, which I think is the end goal.

Steve: It is a great story that this person was enthusiastic for you, reaching out, excited about research. And when you think about what opposition is sometimes, I think it’s easy to demonize someone and say, well, they don’t get it. They don’t believe in research. They don’t like me. Whatever kind of escalation. And here you started with the best possible out-the-gate dynamic with this other person. They were a fan. They were welcoming you and they couldn’t wait. And still they had a concern. And so by identifying that and coming up with the right approach that suited all those constraints, you got to the kind of impact that you’re looking to have.

Nizar: And it kind of makes sense. I mean, if you really think about it, the definition of what a designer does, the definition of what an engineer does, for the most part, is pretty material. You finish your effort, you pass it on. Eventually, the thing that that designer or engineer touched is the product that you end up using. What that does is that for areas like research, there’s more fluidity in the perception of what you’re here for. So that fluidity could be a great thing and could be an awful thing, because at the end of the day, it opens up a lot of opportunity to set the expectation of why the researcher is here.

But it takes a lot of work to get people to align, because they’re also basing it on their past experiences, basing it on their biases, basing it on whatever good experiences they had, but also whatever bad experiences they’ve had with research. A lot of that bias about how the work may or may not be precisely connected to what the end user sees just opens a lot of these gaps and unknowns. Plugging these holes and making sure that the narrative is clear around why the researcher is coming in, to me, I see it as an opportunity. It’s not always the most fun process to cover some of these holes and make sure that there’s no gap in the perception of why the researcher is here.

Steve: We started with the foundational, tactical stuff that you were doing, but maybe we can look at the whole arc of creating that more evolved understanding in the organization, across all these folks, of what research is here to do.

Nizar: Building on that first story, I made my success criteria be less about the research output and more about how the research is being used, how the research is actually integrated directly into the roadmaps, and some of that lives on today with the team. You’ll see a lot more emphasis on how often your research is referenced in a cross-functional document than on, for example, the quality of your report, as some of the success criteria that we have.

But taking it back to that initial journey, there’s a disadvantage and an advantage to being the only researcher back then. The disadvantage is that it’s overwhelming. There’s a lot to cover. The advantage is that it gives you the ability to say, “I’m not going to do it all. I can’t do it all.” And you get to pick and choose a bit in terms of where you foresee the most opportunity.

And at the same time, you look at where some of the paths of least resistance could exist, too. So if there was a huge problem where the resistance is pretty significant, the question that I need to ask myself is, is this where I want to continue proceeding? Should I continue butting heads to be included there, or should I go find that place that is a bit more welcoming to changing their processes and their approach, and just use that as a case study? And once that case study lands, how are we showcasing it?

And I’ve never been a fan of visibility for the sake of visibility, but especially in the earlier days of research, there’s a lot of advantage to visibility as a case study of how research could work to empower those around it. And that became key. And basically what happened right off the bat is we started hearing the sentence, “Well, I want research. Where’s my research? Why don’t I have a researcher?” The demand for research support started coming more organically from the cross-functional teams. So it wasn’t on me to necessarily say, “I need people. I need to grow the team. I need to do this.” It wasn’t an ask from the research department to grow. It was an ask from cross-functional partners who had seen how much more effective and how much more efficient they could be with the appropriate level of research support. And that just creates more of that shared language, the shared narrative of what the organization is looking to do with research and how to work closely with it, but also what the success criteria for the research team is.

And as we started to scale, it became more and more important to set pretty stable goalposts to gauge what success looks like and what our objectives are, and to be very intentional about not falling into the trap of making research the end goal, where you’re out of the loop of what decisions are actually being made and you’re trying to do a one-size-fits-all approach to research that says, as you get more senior, your research gets more complex. I don’t believe that’s the best definition of researcher seniority; instead, we really anchor it in how we’re able to continue driving the product roadmap forward. Even then, there’s a lot of back and forth that goes into it.

Steve: When you started to get these requests from people, we want research, where’s my research, the kinds of things that folks were hoping for or asking for, did that line up with what you would want to or hope to support them with?

Nizar: When you’re starting the team from scratch, the default is actually, hey, since you only have one researcher, or there’s only two of you, do you need to do some intake form to take everybody’s input and then try to cover as much as possible? And I put my foot down that I don’t think this works. I don’t think that’s the most effective way to do it, and I don’t think hiring a researcher and starting off with a service model that says, hey, you’re not part of the team, you’re an outsider who will do research and come back, will be the most effective way to drive meaningful change.

So I approached it with a lot of support, and I approached it from a point of view of: let’s continue that as a proof of concept. Let me hire a researcher and embed them directly in one of our most critical teams, one that has significant strategic importance for the company as a whole, but also has a lot of open questions and a lot of things that could benefit from a researcher, and that’s all they’re working on.

And of course, you get the pushback of, well, what about these other teams? And my take is: you’ve been operating without that research for 10 years. We can wait a bit more, and let’s continue gauging how things go there. And that kind of starts it off with setting up the researcher for success and empowering them as a core member of the team, challenging the notion that they’re there to take requests or answer questions, and having them be able to actively predict where there will be blockers and how they can get their research ahead, maybe three months ahead, six months ahead, to actually be ready for the decisions when the time to make the call comes.

So it’s essentially making sure that research is proactive rather than reactive. And that model worked. That model worked great with that team. We hired a phenomenal researcher, and to this day, you’re always excited that the first hire was a phenomenal person on the team. And you start replicating that model with different teams, for areas that also are strategic to the company and have a lot of ambiguity, and that kind of becomes the framing in terms of unblocking, creating alignment, efficiency, and how we can continue to scale from there.

Steve: At the risk of oversimplifying, I guess I’m hearing in your answer where I was going wrong on my question, it’s the difference between this team needs a researcher and this team needs research. I was starting with this team needs research and you’re putting a researcher in there and they are figuring out the questions, being proactive, that’s very different than that service model intake form thing.

Nizar: Yeah, correct, and generally I think teams that start off being user-centered at times think they’re doing research. There’s a lot of types of research, right? So sometimes, for us, you know, we’re a B2B company that has great relationships with our customers. So you think, hey, I’m talking to their sales engineer, or somebody called me for a meeting. Like, I’m doing some level of research. I kind of have a take of, you know, I’m not here to gatekeep, go do your thing, but at the same time, I’m not here to democratize. I’m not here to, like, empower as many people to do research as possible.

You know, my role is to be a cross-functional stakeholder, and I will jump in with what the problems are that we need to solve together, and find ways to deal with them. So I think in this specific case, there was always an acknowledgement of, hey, there’s stuff that we don’t know, and we need some form of research. The definition of what research is and how it’s going to be incorporated is the thing that needed to be tightened up a bit more, and then integrating the researcher in the framing of: this is a cross-functional partner, not a source of research, if that makes sense.

I started changing the language around the expectations of when they’re invited, when people go to them, even what topics they’re covering in their one-on-ones. So it’s less about, here are some questions that I want, and more about, hey, I’m struggling with this thing, and we talk through it. They all tie in together. To my point earlier about the path of least resistance when you’re starting, you can pick one team, and, you know, you can call it a lucky privilege of saying, okay, there’s a team that could be ready. I’m seeing conditions that are priming a researcher to be successful here. Let’s go with that model, with that team, and continue scaling from there.

Steve: I mean, that reminds me of your interview process: you were looking for those conditions to understand the context before you started, and now as you grow your team, you keep looking for those conditions within different parts of the organization to see where research could go next and have the most impact. Again, you’re really focused on the impact on the product, the experience, and the company.

Nizar: That’s a good summary, and it feeds into our interview process. You know, we try our best to make our interview process as applicant-friendly as possible, where it’s not convoluted, but at the same time, it covers a lot. So a key part of it is who joins the team and what their approach is as well. We do tend to see that there’s a specific type of researcher that tends to do best, and usually we look at researchers who have the depth and soundness of research methodology as a core expectation. But then they layer on top of it the user-centric process and thinking: you know, when do you integrate at different stages of product development?

You start seeing the business sense of really wanting to be integrated deeply with the team and solving the problem at heart rather than solving the open question. And then cross-functional collaboration as a core area: I do think that every researcher needs to fully understand what resources the team is working with, whether it be engineering, design, or any other blockers, to be able to come forward with the most effective set of recommendations.

And then we always have that overarching umbrella of leadership and teamwork, really looking for people who have a growth mindset, who are looking to help others succeed, who don’t necessarily see it as their world and their thing, but are really collectively looking for everybody to succeed together. I think that has been pretty key as we’ve scaled the culture of our team into a team that’s pretty collaborative, a team that’s looking to help each other, and a team where people aren’t competing. There’s no incentive for people on the team to compete; there’s actually an incentive for them to make each other better and learn from each other. So that’s been an exciting part of scaling from a cultural perspective within the research team.

Steve: I want to ask you to clarify, you used the phrase the problem at heart versus the open question.

Nizar: Absolutely.

Steve: Can you explain what that looks like for, what does that mean for any particular problem?

Nizar: One thing I’ve become really sensitive to, maybe too much so, is when I see a research plan that says our objective is to answer these five or six questions, and my take is that’s actually a step removed from what you’re going to do with the answers that you’re going to get. So I like to start with: what’s the perceived outcome? What’s the perceived objective? What are you looking to learn, and why, in terms of what’s actionable? And then take that a step backwards and say, okay, to get to that effectively, let’s now go into what questions we need to ask. And based on that, leverage the method that is most efficient and appropriate for what we’re trying to accomplish.

What I’ve seen a lot in the past is even your stakeholders think they’re asking you the right questions. How many people have been asked, can you create personas? Can you tell me the different types of users? And a researcher goes off, does this for a month or two, and then they come back and nobody knows how to use them. And to me, that’s the problem that I’m trying to avoid as much as possible and just saying, okay, you want personas. What are you going to do with them? What’s the decision you’re trying to make? And often coming to the conclusion that you don’t need that at all. What you need is something much more simplified. Or we could actually get a pulse check to start getting you some signal of the answers that you’re looking for that will help with that decision-making process. And then we can decide when to iterate or if it’s necessary to iterate.

With open questions, I find that there’s sometimes the danger of over-scoping research efforts for what you’re trying to do with them. It’s just that outcome of you show up with a deck that has 100 slides, but the team can only act on the first two. And so my question becomes, was this the best use of the researcher’s time versus trying to focus on those first two slides and then connecting that to a longer-term program that we can then create follow-ups on as we continue to learn throughout the process? So in a way, it’s forced efficiency and early hypotheses of how we connect to the impact before even starting off with prioritizing the effort.

Steve: I want to follow up something else that you said, you were describing a lot of the qualities that you’re looking for, the mindsets and the kind of abilities, you know, how do applicants demonstrate that information in your process?

Nizar: I could talk about that for a long, long time. So I generally don’t believe that these buckets are a pass or a fail. I don’t think it’s are you good at user-centered thinking or not. How I see it is everything kind of sits on a continuum. And what I’m trying to optimize for, for the level of seniority that this role will require specifically, is am I seeing enough of an ability to handle different situations effectively that then puts them in a position where they’re going to be able to know what they need to do regardless of what’s thrown at them.

So, for example, let’s say somebody is, we hear this a lot with the breaking apart of the tactical versus the foundational, where it’s like, I do this and not the other. And my question becomes why? Why create that separation between this form of research and the other if you’re able to tell a story around pretty much your ability to come in at multiple different stages of product development and say that I can help you across every stage and I know exactly how to do it. And I can help you across every sort of limitation that you have and I know how to do it. And I can help you address multiple different types of issues that we’re facing, whether they be in need of some qualitative research or something that’s more quantified or something that’s quick and dirty or something that just needs a brainstorming workshop and I’m able to just be flexible in where I integrate with the team.

So I know that was a long-winded answer that went all over the place, but it’s really hard to — the reason it’s hard to describe is, I really don’t think research is a good or bad, yes or no kind of domain in general. And what I’m really trying to optimize for as much as possible is, does the research applicant have the breadth to be able to tackle as many problems as possible? To me, that’s a much better predictor of seniority and success than somebody coming and saying, “I did this multi-country 12-month research project that was really complex logistically,” which is impressive in its own way. It’s great, but for me, it’s not what we’re trying to optimize for in general.

Steve: And so you’re looking at past experiences that the applicant can, I guess, describe to you, or those kinds of clues to the breadth.

Nizar: We look both at past experiences and we look at some of the hypotheticals as well. So we do have some scenario-based questions where we try to gauge some of the thought process. It’s the thing that I tell people that there’s no right or wrong answer. You’re just going to get a hypothetical and I just want to hear how you think about it. And I want to hear what are the different considerations that you take into account when you’re making your mind up about the best approach and what you’re going to do. How often are you coming in and having the hard conversations about what needs to be done versus steering the conversation in a completely different way versus just saying, “You know what? This isn’t worth the back and forth. Let me just do something quick and move forward.” So at the end of it, when we’re combining the hypothetical with the past experiences, I’m really looking for effectiveness and efficiency under the umbrella of strong and sound research.

Steve: Those are words that are sometimes seen as at odds with each other, but I think you’re talking about how they’re in support of each other, that effective and efficient doesn’t mean that you’re not sound, doesn’t mean that you’re not, like you said, solving the problem at heart versus the open question. That seems like a key mindset that you’re bringing to this.

Nizar: A hundred percent. And I hear that sentiment every now and then. I hear the sentiment of, “Oh, if you go too scrappy, you’re doing really terrible research.” Or, “It either has to be great research or it’s terrible research that is fast.” And I don’t agree with that mindset or that context. For me, it really depends around what you’re trying to learn, what your objectives are. If you’re trying to do something that is an extremely small pulse check, for example, you don’t need to boil the ocean.

I still remember earlier in my career, I joined a team and they just had no idea, they knew nothing about their users. Absolutely nothing. And I was telling them, “Do you have any hypotheses? Do you have any open questions? Do you have anything there?” And they’re like, “We just don’t know. We just know that nobody’s using this feature. That’s all we know.” And we look at our dashboards, we look at our metrics, we have a target addressable market of millions, and we have tens using it. So we don’t know why. And I just came in and I said, “Look, the best use of time right now for me is just to do some sort of a small single pulse check survey. One question, pretty much trying to understand the state of everything. Just for me to have context to get started, just give me some perspective. Am I planning to use this in roadmapping? Probably not, but I need some form of context from end users to be able to tell me, ‘Okay, I have an idea of what’s happening there, and I have an idea of the value add, I have an idea of why they’re churning, I have an idea of why maybe they’re not seeing some value.’ Let it be scrappy.”

And you get the pushback of, “Well, this is qualitative. You need to do in-depth interviews for that.” I’m like, “No, I don’t. I really don’t. I don’t need to invest 40, 80 hours just to get an idea of what’s going on if you can do this in 24 hours, and then take that as an entry point into something that’s more detailed, that’s more rigorous.” So for me, it just goes back into linking the amount of effort to the projected outcome and really just finding the thing that works for next steps. And in this case, we did end up actually needing to go very in-depth with foundational interviews and a full design sprint, and then going to concept evaluations and stack rank.

It ended up being a really complex process over maybe the course of a year that really turned around a product that wasn’t used, a product that had millions of users. But at the start of it, I did not have the luxury of saying, “I just need to go away and do in-depth interviews,” because the research domain says that qualitative is not allowed in a survey. So sometimes I think just breaking the rules in our domain is very okay, as long as you know why you’re breaking the rules and what you’re going to do with the insights that you have.

Steve: This research effort that you’re scoping at any point may not be, or probably isn’t, the only time you’re ever going to learn anything. And so, you know, as I’m taking that away from you, then I can sort of feel some of my own anxiety just ebbing away, like, of course, right? If you think of research as a longer-term thing, like, what’s the question we need to ask now? What’s the right amount of effort for right now? Okay, everybody, we’re not going to get everything. We’re not going to boil the ocean, as you said. There’s more to do, but here’s where we are right now. And so, yeah, that good versus bad research framing says we’re only going to do it once, and it’s kind of this monolith that’s either going to answer everything or not answer anything. And these gray areas you’re describing are, it’s a gentle reframe for me, I think, about where I sometimes feel anxious about trying to tamp down the commitment or the investment.

Nizar: As long as we have the right data point for the right decision that’s being made, if we’re coming in and saying, “We need to invest all of our engineering team on this one single customer satisfaction open-ended box,” I get an anxiety attack. I get it. But sometimes that’s not the decision that you’re making. Sometimes the decision — the pros and cons that you weigh, the cons of having something that’s scrappy and fast are justified when you look at the pros of being able to get ahead and then establish a research roadmap that actually gets you ahead of the product team. So that’s the consideration for me.

And to your point earlier, product development is iterative, and I think people forget that sometimes. People forget that even if you launch something, that team is still there, and that team will still continue to want to optimize it in some shape or form. So if anything, I feel researchers should take some comfort in that and saying that, “Okay, if I miss the boat now, how do I get ahead and say, ‘Okay, for the next iteration, for the next thing that’s happening, I’m able to get ahead and have some things ready in time?'” And acknowledge that the same way that product development is iterative, even the most foundational research efforts, you’ll end up having to iterate on in some capacity. I haven’t seen a world where a research deck is still relevant years later, and nobody has ever touched that topic again.

Of course, you want to minimize how often we redo work that’s already been done, but everything changes. Once you start having a user base, the kind of data that you have is different. In this case, when you have tens of users and you go into the thousands, the kind of feedback you’re starting to hear is already different. The usage data that you’re starting to get is different. You’re able to use telemetry a little bit more than when you had nobody. You start being able to triangulate in a way that you just weren’t able to earlier. So iterations are good, and that doesn’t mean don’t do really deep foundational generative efforts. They’re just a time and place to say, “This is my time to get scrappy, and this is my time to dive deeper into the topic.”

Steve: If I didn’t think about it too deeply, I might, you know, sort of have this reflex that says, well, when we know nothing, that’s when we have to learn everything, that the foundational work comes at the beginning. But you’ve got a number of examples where you’re coming in and seeing a big gap and saying, no, it’s not, this is not the boil the ocean time, it’s the quick win or the thing that we can act on or the scrappy thing. And no one believes that A is B. No one believes that the scrappy quick thing is, in fact, going to answer all the questions. But you’re helping take action, you know, within the constraints that are there.

Nizar: I’d say caveats. Tell people there are limitations to what I’m doing. We are aware of that. Every research method we do has limitations, and I’ve yet to come across any research study that has solved everything or is now claiming that we have learned everything about our entire user base or our entire feature area. And that’s the reason researchers continue to be in the same role or on the same team for years. There’s always a lot to uncover. And a lot of the time, just really weighing the cons of coming in early and saying, “You know what? I just started. I’ve been here for a week. Let me go disappear for three months or so.”

And I get it. There are ways that you can incorporate your cross-functional partners. But for the most part, especially as somebody’s building credibility, starting with some data is much more effective than starting with nothing and giving some form of, “Here are some next steps,” where often the next steps are research. When I did that pulse survey early on, the next steps were research. Now I needed to go and actually do in-depth interviews to learn more, but at least I had some litmus of, “What am I talking about? What is my script going to have?” So I’m not finding myself in a situation where I’m interviewing, even if it’s 10 end users, and being like, “Can you tell me anything? I don’t know where to start. The team doesn’t know where to start.” But I had something. And for me, the value of that effort, even if it just fed into the definition of a research template for the next steps, that was value-adding, and that saved me a lot of time.

Steve: I’m going to switch topics a little bit here and go back to something you said, I don’t know, before. And I just — maybe you can unpack this or just clarify it. I think you were saying that, you know, that you look at, for people on the research team, you look at number of references or citations of research work in the work product of other cross-functional teams. I’m saying this really poorly, but did I capture that at all?

Nizar: It’s a good summary, and I’ll also give it an asterisk and say, “Not as the only signal, but it’s an effective signal.”

Steve: Yeah. So I have a bias against that. And that’s, of course, coming from someone that doesn’t work inside an organization. So my bias is maybe just hypothetical, but — or just from conversations. And maybe that — maybe that asterisk is really, really important. I agree it’s a signal. I do worry about researchers either getting external pressure or pressuring themselves that this thing, which is essentially out of their control, whether somebody else does something or not, is kind of — is a measure of their worth. Where there’s lots of reasons why people don’t do things and don’t listen to things. And I think you’re talking so much about how to — how to prevent that from happening. Right? The right work at the right time, with the right collaboration, with the right understanding, and all that stuff being kind of scaled appropriately. But, you know, just having spent my career giving people stuff that we’ve agreed was going to be important. And then seeing all kinds of things happen and don’t happen. And to a certain point, there’s a certain amount of surrender, right? Like, I’m going to give you everything that we agreed you need and maybe more, but I can’t control what happens.

So I don’t know. I don’t want to frame this as a debate or anything like that. But I’m open to you telling me that, like, I’m wrong, that I’m framing that wrong. I’m just curious what — you know, how you think about this. It’s not the only signal, but how do you think about sort of how to use that signal or how we should all think about that signal of what somebody else does?

Nizar: The conversation is super valid. That’s where the asterisk comes in. And if anything, I always love the counter perspectives here as well. The reason that I added asterisks here, too, is exactly what you’re saying, that you can’t really control what somebody else does, where it ends up putting some emphasis is encouraging research teams to be very strategic in terms of where they’re prioritizing their time and how they have ownership over the product direction as well. But that doesn’t only go on the researcher. A lot of the conversations that have to take place as well do have to happen at the leadership level. And kind of talking about if we’re to say that the researcher also is to be held accountable for what’s going on there, what’s the collaboration model, and where are they coming in, and are they left out of being able to have that, or is the expectation set that there’s some form of path for them to do it?

And there are also multiple ways from my perspective to showcase that. I think when you look at the referencing, it’s as direct as it gets usually, but even that, it can be optimized. Sometimes you have to take it the other way. Don’t optimize just for making sure that your research is in docs. That’s not what we’re optimizing for either. But what are the ways in which, as a research team, that for better or worse needs to continue kind of driving the narrative of the value that we bring and connecting the dots to the different decisions that have been made because of the research and the leadership role that each researcher is taking and actually guiding the product roadmap? How could we make sure that we are being very intentional about collecting the evidence and documenting it and being sure that we’re telling our story in a way that does a service to the team members? And often you’ll find researchers in environments where that’s just really hard with their teams. That’s just not how their stakeholders are wired.

And when that happens, my question becomes, what’s the role of leadership and me in a lot of cases in streamlining that, but also what are other ways that are effective in gauging the success of that researcher that do not rely on that being the only mechanism? And that’s where that asterisk plays a huge role. And yeah, absolutely. There are some full quarters. We do performance reviews quarterly, which is pretty intense, and sometimes you don’t get to finish things that are meaningful in a quarter. So we want to give people the benefit of the doubt as well into how the research efforts bleed into the quarters after. But there are some quarters where you’re deep in the research, the team itself doesn’t even have a document, and there’s no way to say, “Hey, this is what’s happening.” But we look for other ways to continue connecting the dots there.

But for me, one thing that I do genuinely care about, and maybe it’s just from previous experiences in the past of seeing where research can get thrown under the bus sometimes, I think for the past many years, I’ve been very intentional about just telling the story of the ROI of the researcher themselves. So not the ROI of research, which I think sometimes gets confused of the ROI of the researcher. I find it that often, and of course it depends environment to environment, company to company, but I find it that often people don’t debate the value of research. They sometimes debate the value of the researcher doing the research. That’s the topic that comes up here and there, and I try to be as intentional as possible to position the researcher and position the organization to give the space for the researcher to be a product leader, not only a research delivery mechanism, if that makes sense.

And with that comes some of the expectations that end up changing. Fingers crossed it worked for me throughout my career. At the same time, I always want to acknowledge that when I say it works for me, there’s also a right time at the right place component of it, and it’s not always on the way that the research is conducted or what the researcher is doing.

Steve: Let’s just switch topics again. We haven’t talked at all about, you know, your overall trajectory. And it’d be great maybe to get a summary of how you found user research, what you started off doing, and some things that you did that kind of led you to this role. Maybe that will set the context that we haven’t talked about for what you have been sharing.

Nizar: Sure, yeah, I could take it many, many years back. So I actually went to the University of Jordan to study industrial engineering and I was one of the few people who actually cared to have an emphasis on human factors. I don’t know why, but I was always fascinated by the intersection of humans, computers, and business and psychology and all of these together, and I didn’t really know what you could do with it. And it was as early as undergrad that I thought that, okay, this domain seems to cover a lot of those areas. I graduated and worked at a company in Jordan under a title that back then was something along the lines of process engineer, but in reality it was more understand the inefficiencies in the process and how people are coming in and out of their day to day and how we can make it more efficient. So it had a big kind of research component, and I was aggressively reading about what are some of those programs that I could continue learning in that space, because where I lived, nobody knew what that was. That wasn’t the thing. You could say visual design, but you couldn’t really say user experience or UX research.

And moving on to grad school, I went to San Jose State for the master’s program there, and the beauty of that was just the amount of exposure that I had to a lot of different companies and different people who are doing some of the user research work, and it was pretty much a straight shot from there. I went into consulting in a user researcher role, went into a startup where I built up from the ground up into a research and design org. So I was managing research and design, moved on to Google, at YouTube specifically, where I spent about three years, and then when the opportunity came out at Snowflake, it was just too hard to say no to that opportunity. So moved on. I’ve been there for about almost three years now, which is crazy to think about.

Steve: Do you think of yourself as someone that has a superpower?

Nizar: It’s a humbling question, to be honest. The thing I take pride in is I’m always open to being wrong. I’m always open to challenging the status quo and being told that there’s a better way to do it. And I think where I take some of that pride is, a lot of the time, you hear people that you even look up to throughout your career, and then you get to a point where you’re like, “I kind of disagree. I see a different way.” And trying to challenge the status quo for something that could be better is just something that gets me pretty excited.

Is it a superpower? I don’t know. Maybe it hinders me at times, but at the same time, especially in the conversation that’s taking place around research right now, you start seeing a lot of the consistent perspectives of this is right, this is wrong, this is what you do, this is what you don’t. And I try to be very intentional in hearing what are those different perspectives and why are they seeing things differently and what works for me and how do I acknowledge that what works for me at the environments that I’m in may not work for somebody else in the environment that they’re in as well and give people the benefit of the doubt and keep running with what I’m doing.

Steve: There’s sort of two facets, I think. You started off saying that you’re okay being wrong yourself, but you’re also looking for when the conventional wisdom is wrong. Did I get that right? There’s sort of two aspects. It’s like you’re willing to forgo needing to be right, but you’re also embracing or curious about, hey, maybe something out there that’s established as right, the status quo, like you said. Maybe that’s wrong and you’re like challenging that.

Nizar: And think of it like they’re intertwined. Think of it as somebody who is a user researcher. For the most part, you’re kind of looking for best practices, you’re looking for the perspective, you’re looking for the voice of the crowd. They’re kind of intertwined a bit. And I think it’s a solid starting point. It’s much better than starting from zero. Learning from someone is always significantly better than just figuring it out on your own. I mentor upcoming researchers every now and then and I say, “If you can avoid being the first researcher out of grad school, I would avoid that.” But I want to acknowledge that not everybody has the luxury of picking and choosing especially their first job. So take it and learn on the job is better than not having anything.

But it becomes a starting point of, okay, we think this is the best practice. I guess then there’s a tough conversation of, does the best practice make sense? Does the best practice work for me and my approach? Does the best practice work for the environment that I’m within? And where do we continue optimizing it? And how do I continue doing the internal reflection, the internal research on what’s working in the processes that I’m establishing and what’s not? And how do we continue treating honestly my career trajectory as a product that you continue learning, iterating and hopefully making it better?

Often it leaves you at odds with what a lot of voices have in place, but it’s okay accepting that as well and being like there’s no reason for everybody to align on one topic. And it’s always fun. It’s always fascinating when you’re the one saying, “I want to look at things from both perspectives.” Jon Stewart came back on The Daily Show last week and he got a lot of hate because he was in between two sides. But from my perspective, these are the voices that often bring in a lot of reason and just say let’s just call out everything as it is and see how we can look inwards at how we can continue to be better. But it’s always fascinating because you’re going to get some pushback from the side that agrees on one thing and then pushback from the side that disagrees on the other if you’re optimizing in your own way.

Steve: Are you seeing patterns in the people that you’re mentoring in terms of what topics or questions you’re helping them with?

Nizar: I think the biggest one is there’s an obsession with the research as the end goal. I think that’s the one that’s just becoming more and more and more apparent. There’s different reasons that when somebody’s starting their career, it makes sense that they think, okay, I’m here to do research. There are some people that are more mid or later career where that’s what they’ve learned how to optimize throughout their career because that was their success criteria. So there’s various flavors of that same thing. But you often hear a lot about like I want my methodology to be, like I’m focused on my methodology or I want to do more foundational research or it’s very anchored on research as the end goal.

Even in our interview process, we interview a lot of amazing candidates with amazing resumes and as they’re presenting their case studies, they gloss over why they did the research or they gloss over what happened after it. But they take a lot of pride in the thing that they did, kind of the actions that they took as a researcher. I think that’s the biggest, for me, gap that I see between a lot of the conversations that I have and where I believe research should be positioned as more of a tool to drive decision-making rather than an end goal.

Steve: I don’t know a ton about how mentorship could or should work, but are there things that you are able to say or do in these interactions to help somebody shift their perspective to what you’re talking about?

Nizar: It depends on the relationship I have with the person too. So to be honest, that kind of dictates a lot of the conversations that happen and honestly, like how hard I push back. There are some people that I used to manage in the past who I’m very comfortable telling, “You’re just absolutely wrong and stop doing it the way you’re doing it and here’s how you can be more effective.” You can’t do that with somebody you barely know. And you try to nudge it in terms of like how do you expand your thought process beyond what you’re doing into why you’re doing it? And how do we kind of like reset your tone in terms of the perceived outcomes of the work that you’re doing?

And I do a lot of resume reviews and I think that’s a place where people seek feedback and I usually call out that a lot of resumes that I see for researchers read like job descriptions. And I try to tell them, “What’s your superpower? What’s your story?” When I read your resume and I see conduct tactical and strategic research, conduct qualitative and quantitative research, that reads like the job description that doesn’t give you an edge over other applicants, and I don’t have any context on why you’re doing it, what you’re able to do, or how you moved things forward. And of course, at the end of the day, the research in itself is core. That’s table stakes. Being a strong researcher with broad methodology and being able to tackle, again, different types of problems is core to the job. So we don’t want to hire a researcher who doesn’t know how to do research. But how does that researcher connect that great research to why they’re doing it and what impact it’s having is the biggest gap that I tend to see across resumes, some interviews, some mentoring calls, that I think there’s a continued opportunity there.

Steve: What didn’t we talk about yet today that you think we might want to cover?

Nizar: I can interview you.

Steve: I’m willing to try.

Nizar: The question I have for you is you have a new edition of a phenomenal book, so congratulations. It’s a book I hope every researcher has read, and I recommend every researcher read it as well. Ten years later, what are the areas that you’ve seen evolve or change?

Steve: Yeah, the context in which research takes place is totally different. You know, we didn’t have language around operations, for example, and operations as being separate. Like it took me a long time, even very recently, to sort of distinguish operations from logistics. And I was, in fact, I was resistant to the idea of research ops because it takes away some of what the researcher needs to do. You know, if you try to recruit participants, you understand the space just by going through that. And so I had this naive view.

Now I’m not even answering your question, but I used to have this naive view that like, oh, well, you know, recruiting is part of figuring out how this population thinks and works and how to work with them and so on. And that ops is going to take that away. And I think I’ve just only recently sort of started to understand that research operations is about supporting the organization to do research, not to take the burden of tactics and logistics away. It’s about sort of infrastructure and so on. So, you know, trying to have a more sophisticated conversation in the second edition about logistics and operations. Things that are really important from a legal and compliance perspective in research right now were just kind of, in the past, let’s just see if we can avoid legal finding out about this. I think, you know, like this podcast couldn’t really have existed 10 years ago. There were, well, I shouldn’t say that. I’ve been doing it for a long time. Maybe that’s the wrong number. I don’t know.

At some point, there were far fewer people who were doing what you’re doing, who are building teams, you know, bringing leadership and management to research. Researchers were abandoned or worked for a design manager or something like that. The idea that research could be a peer. I mean, all the things we started off talking about, that it could be a peer to another function and work proactively. Those were sort of ideas and aspirations, but you didn’t see that as much. So I think, you know, the profession has matured and there are more teams with leaders, with career ladders, and, you know, clear ideas of how to interview and what they’re looking for. As an in-house profession, it’s just much, much more mature.

And I was just thinking today about, you know, that phrase that Kate Towsey came up with, People Who Do Research. She came up with that term in 2018, as far as I was able to determine, and that’s fairly recent. But I feel like, oh, giving it a name. Like there are researchers, and you’ve mentioned a few times researchers and research; we kind of go back and forth on that. And like you said, there’s all sorts of customer contact going on, but creating a name that sort of says, here’s who researchers are, and here’s this other category of people that’s also doing research. Having that label, I think, clarifies a lot.

And yes, we have democratization debates. And it’s not like the problem is solved by giving it a name, but we’re clearly at a point where we can say there’s different types of research happening, which is a point that you’ve made. And there’s different people doing research, whatever we mean by that. And those are all sort of different considerations. So it’s not a solved issue, but it’s a much more clarified issue than it was 10 years ago. You know, and again, not really what you asked, but I think a lot of the fundamentals that I write about, like how to ask a question, how to ask a follow-up question, how to listen: you know, I’ve had 10 more years of doing that and 10 more years of teaching it. So I just have more examples and more stories and more clarification and more nuance. Those are the fundamentals, I think, of interviewing, but I think I can explain them better than I could before, because it’s just practice, practice, practice.

Nizar: This is great. You talked a lot about kind of the evolution of research and some of the things that are different. What’s the hot take about the world of user research that you find yourself disagreeing with?

Steve: Wow.

Nizar: Putting you on the spot there.

Steve: Yeah, no, that’s good. I mean, I feel like that Grandpa Simpson meme, you know, “old man yells at cloud,” whatever that is. I don’t know, just being my age and my grumpiness. And I guess I could justify that: the longer you live, the more hype cycles you go through. And I’m just reluctant to engage with hype stuff. So yeah, AI is a big hype topic. I’m amazed that I’m the one bringing it up in this conversation, because usually everybody else that I talk to has to bring it up right away. So thanks for making me do that. And sometimes I’m sort of resentful about it. It’s like, I don’t want to have a hot take on research, and I’m resentful about the overwhelming amount of hot takes about research.

The most recent episode of this podcast that I posted is all about Noam Segal’s hot take on research. And so no disrespect to him for that; he thought about it a lot. So I don’t know, my hot take is almost like an anti-hot take. Like, can everybody just chill out about AI, or about the end of everything? I think we have a lot of short-termism, or just immediacy: we see what’s right in front of us, and we overreact. I include myself in that, and that’s just human nature, I think. And you know, whatever the LinkedIn pundit info-cycle entertainment complex is, we all have to have an opinion about what’s happening with research right now. But we’re at an inflection point, and so we don’t know. And maybe it’s okay not to know, which is something that you’ve said a few different ways.

So I don’t know, my hot take is to be anti-hot take: it’s maybe okay. And maybe that’s just my privilege speaking. Like, you know, if I was younger, trying to make a certain kind of name or get a job, I might feel like I need to come in with an opinion about something. But there’s a little bit of peace and calmness that I would like to nurture within myself, to not react so much to the change around us. The world feels very dynamic and uncertain and complex and worrisome. And I would like it when the conversations that we have in our profession collectively sort of soothe that and not add to that.

Nizar: I love what you’re describing, and just to add to it a little bit: I’m hearing a lot of takes on how research is now fundamentally different, how it’s doing things wrong. There’s definitely an over-exaggerated sentiment behind what’s going on there. There’s a macroeconomic condition, and you hear a lot of overstatements across domains. I think when you’re in research, you just feel it more because it’s a lot of your circles. At the heart of it, a lot of different domains got hit pretty hard. If you ask many people, they’ll tell you that it’s their area that got hit the hardest. I hear the same from product managers. I hear the same from software engineers. Let’s not even get started with recruiting and some of the operational support.

At the same time, I see it as always a good opportunity to just reflect on what we’re doing, what works and what doesn’t, and we continue to iterate. It doesn’t need a big “research is dying” kind of header to encourage some of the discussions and conversations about how we continue to evolve. It wasn’t too long ago that every researcher was called a usability engineer, and things will continue to evolve and things will continue to change, and that’s okay. This is par for the course. I’m just excited for the continued trajectory of researchers becoming key drivers and business leaders who represent users as their core mission. But it takes a few steps, takes a few optimizations, and I think it’s just part of the exciting journey of a domain that’s still relatively young in the grand scheme of things, at least in the tech world, compared to the other areas that have been around as fundamental to the product development process.

Steve: I think that’s us ending on a high note. Thank you so much for a great conversation, turning the tables a little bit and sharing so much. It was lovely to get to chat with you.

Nizar: I really appreciate it. It was a wonderful chat, and thank you so much.

Steve: There you go. That’s our episode. If you made it all the way to the very end, give yourself a hearty pat on the back for listening. Please spread the word about Dollars to Donuts. You can find Dollars to Donuts in most of the places that you find podcasts. You can raise awareness even more by reviewing the show on Apple Podcasts or wherever it is that you’re finding it. Check out Portigal.com/podcast to find all the episodes, including show notes and transcripts. Our theme music is by Bruce Todd.

About Steve