36. Noam Segal returns

This episode of Dollars to Donuts features a return visit from Noam Segal, now a Senior Research Manager at Upwork.

AI will help us see opportunities for research that we haven’t seen. It will help us settle a bunch of debates that maybe we’ve struggled to settle before. It will help us to connect with more users, more customers, more clients, whatever you call them, from all over the world in a way which vastly improves how equitably and how inclusively we build technology products, which is something that we’ve struggled with traditionally, if we’re being honest here. – Noam Segal

Show Links

Help other people find Dollars to Donuts by leaving a review on Apple Podcasts.

Transcript

Steve Portigal: Welcome to Dollars to Donuts, the podcast where I talk with the people who lead user research in their organization.

I went to the dentist recently for my regular teeth cleaning. I was in the chair while the hygienist was working away. This obviously wasn't the best situation to ask a question, but I had a moment of curiosity and I found a chance between implements in my mouth. I should say that at this dentist, their cleaning process is to first go over your teeth with something called an ultrasonic scaler. I had assumed this was just like an industrial strength water pick, or like a tightly focused pressure washer for the mouth. After that, they follow up with a metal scraping pick. So during the metal scraping pick portion, I asked the hygienist, "Does the water soften it?" I was wondering if the first stage softens up whatever gets removed by this mechanical pick. Somehow my weird question prompted her to give me a 101 lesson on how teeth cleaning works, what is being cleaned and how the tools are used to accomplish that. Anyway, she starts off by telling me that the water is just to cool the cleaning head. The water isn't doing the cleaning. There's a vibrating cleaning head that does that work. I was very excited to learn this because I had the entirely wrong mental model. I had assumed that this device was just water and I hadn't ever perceived any mechanical tip. Of course, I've never seen what this device looks like, other than when it's coming right at my face when I'm the patient. And I had made all these assumptions based on what I experienced from being in that role.

It was a lovely reminder about how we build mental models based on incomplete information, based on our role or interaction and how powerful those mental models are. And of course, this was also a reminder of the power of asking questions, where even this simple question in non-ideal circumstances led to a lot of information that really changed how I understood a process that I was involved in.

It was a great reminder about one aspect of why I do this work and some of the process that makes it interesting and insightful. Speaking of interesting and insightful, we'll get to my guest, Noam Segal, in a few minutes, but I wanted to make sure you know that I recently released a second edition of my book, "Interviewing Users." It's bigger and better. Two new chapters, a lot of updated content, new examples, new guest essays, and more. It's the result of ten more years of me working as a researcher and teaching other people, as well as the changes that have happened in that time.

As part of the “book tour,” I’ve had a lot of great conversations about user research and interviewing users, and I want to share an excerpt from my discussion with Larry Swanson that was part of his podcast, Content Strategy Insights. Let’s go to that now.

Larry Swanson: It also reminds me, as you’re talking about that, it’s like you show up at a place like in the old days, you drive up and you’re in the car with a team, and that’s a good reminder that this is like a business activity. In fact, you open the book with a chapter about business and the business intent of your interviews, and I also like that you close the book with a chapter on impact, which I assume is about the measurement and the assessment of that satisfying that business intent. Was that bookending intentional, or am I just reading into that?

Steve: This is where I just laugh confidently and say, “Oh, of course, you saw my organizing scheme.” I hadn’t thought about it as bookending, which that’s a little bit of nice reflecting back. In some ways, I think I was just sort of following a chronology, like why are we doing this, how do we do it, and then what happens with that? So no, but sure.

Larry: Yeah, sorry, I didn’t mean to project on that. But anyhow, that’s sort of the — maybe just focusing on the business part of it, because I think that’s something that’s come to the fore in the content design world, and particularly the last couple years. I think it might have to do with the sort of economic environment we’re in, but also even before that, there were people talking about increasing concern with the ROI of our work and alignment with business values, and maybe we’re focusing too much on the customer and not balancing that. But how do you balance or kind of plant your business intent in your head as you go into an interviewing project?

Steve: I think it kind of — maybe it’s like a sine wave where it kind of comes in and out. We were just talking about transitioning into talking to Marnie, a hypothetical person, for 30 minutes. I really want people’s business intent to be absent during that interview, so that’s maybe the lower part of a curve. But leading up to that, who are we going to talk to, what are we going to talk to them about, who’s going to come with us? That’s very much rooted in — I don’t know why I made up this metaphor of the sine wave, but we’re very highly indexed on the business aspect of it.

We designed this project to address some context that we see in the business, either a request or an opportunity that we proactively identify. So we think about what decisions have to be made, what knowledge gaps are there, what’s being produced, and what will we need to help inform decisions between different paths kind of coming up.

I talk in the book about a business opportunity or a business question and a research question. So what do we as an organization — what decisions or tasks are kind of coming up for us? So what do we have to do? We want to launch a new X. We're revising our queue. We need to make sure that people doing these and these things have this kind of information. That's about us. Then from that, you can produce a research question. We need to learn from other people, our users, our customers, people downstream from them, whatever that is. We need to learn from them this information so that we can then make an intelligent response to this business challenge that we're faced with. So all the planning, all the logistics, all the tactics, what method are we going to use, what sample are we going to create, what questions are we going to ask, what collateral are we going to produce to evaluate or to use as stimulus or prompting? All of that is coming from what the business need is and how we can go at it. Yes, there still is that sine wave. So then we set that aside to talk to Marnie, to talk to everybody. We really embrace them. We have all this data. We have to make sense of this data. And then here, I think we sort of straddle a little bit because you're going to answer the questions you started out with. I think if you do a reasonable job, you're going to have a point of view about all the things that you wanted to learn about. But you always learn something that you didn't know that you didn't know beforehand.

And I think this goes to the impact piece. This goes to sort of the business thing that's behind all this. What do you do with what we didn't know that we didn't know? I want there to be this universal truth like, oh, if you just show people the real opportunity, then they'll embrace it. And then everybody makes a billion dollars and the product is successful. I think of that principle from improv, of "yes, and." I think we have to meet our brief. We're asked to have a perspective on something. Part of the politics or the compassion way of having impact is to not leave our teammates and stakeholders in the lurch.

So we have these questions. We have answers to these questions. And also, we feel like there are some other questions that we should have been asking. We want to challenge how we framed this business question to begin with. We see there's new opportunities. We see there's insights here that other teams outside the scope of this initiative can benefit from.

There's all sorts of other stuff that you get. And I think it behooves us to be kind about how we bring that up, because no one necessarily wants a project or a thing to think about that they didn't ask for. So how do you sort of find the learning-ready moment or create that moment or find the advocate that can utilize the more that you learn, so it can have even more kind of impact on the business? That's not a single moment. That's an ongoing effort, part of the dynamic that you have with the rest of the organization.

Again, that was me in conversation with Larry Swanson on the Content Strategy Insights podcast. Check out the whole episode. And if you haven't got the second edition of Interviewing Users yet, I encourage you to check it out. If you use the offer code donuts, that's D O N U T S, you can get a 10 percent discount from Rosenfeld Media. You can also check out portigal.com/services to read more about the consulting work that I do for teams and organizations.

But now let's get to my conversation with Noam Segal. He's a senior research manager at Upwork, and he's returning as a guest after four years. You can check out the original episode for a lot more about Noam's background. Well, Noam, thank you for coming back to the podcast. It's great to chat with you again.

Noam Segal: It’s absolutely my pleasure, Steve. Great to see you and great to be here.

Steve: Yes, if you are listening, we can see each other, but you can’t see us.
So that's the magic of technology, although Noam is wearing a shirt that says CatGPT on it. So we'll see if we're going to get into that or not.

Noam: I do love silly t-shirts. I just ordered a few more silly t-shirts yesterday. My partner is not very happy about that particular aspect of who I am. But, you know, it is what it is and you get what you get.

Steve: Right. You got to love all of you.

Noam: Yeah.

Steve: So that's an interesting place to start. Let's loop back to maybe a more normal discussion starter. We spoke for this podcast something like four years ago, early part of 2020. So, you know, I guess maybe a good place to start this conversation, besides T-shirts and so on, is what have you been up to professionally in the intervening years?

Noam: A lot has happened. I can tell you that, quite a lot, given it's not that long a time period in the grand scheme of things.

Steve: Yes. Mm hmm. Right.

Noam: When we chatted last, I was working at Wealthfront, a wonderful financial technology company, and I was head of UX research there at the time.

Steve: Right. Yes.

Noam: I left Wealthfront for a very particular opportunity within Twitter, now X, because I was very interested in contributing to the health, so to speak, of our public conversation. And I had an opportunity to join what was known at Twitter, now X, as the health research team. But we don't mean health as in physical or mental health. We mean indeed health as in the health of the public conversation. In other companies, these types of teams are called integrity or trust and safety, et cetera. And we were dealing with everything to do with things like misinformation and disinformation, privacy, account security, and all sorts of other trust and safety related issues. Sadly, a few months, really, or less than a year after I joined, Elon Musk took over the company. And one of the first layoffs that happened at the company was of basically the entire research team. And so I left before the layoffs, but that was the situation there. And I'd love to talk more about what that means in terms of how we build technology, et cetera. We can jump into that.

From Twitter, now X, I moved to Meta, and I joined the team working on Facebook feed, which some people might view as the kind of front page of Facebook, or even of the internet for some people. It's a product used by billions of people daily. And it was a very interesting experience to work on both the front end of the feed and the back end as well, so to speak. So that was a very interesting experience. And in addition, I was also playing a role in what we were calling Facebook Research's center of excellence or centers of excellence, where we were trying to improve our methodologies, our research thinking, our skills and knowledge, kind of working on how we work, which is very related to what we spoke of in the last podcast we did together a few years ago, when we talked all about research methodology, et cetera. So that was an interesting experience.

But in April of 2023, along with a couple of tens of thousands of other people, I was laid off from Meta, as was I think approximately half of the research team at Facebook at the time. And several weeks later, I joined Upwork, which is where I work now. Upwork, for those who don't know, is, briefly, a marketplace for clients looking to hire people for all sorts of jobs and freelancers who are looking to work, primarily in kind of the knowledge worker space, I would say. Upwork also caters to enterprises who are looking to leverage Upwork as a way to, you know, augment their workforce and hire freelancers or people for full-time positions as well. And at Upwork, I'm a senior research manager. I focus on the core experiences within the product, which includes the marketplace for clients and freelancers, in addition to everything to do with payments and taxes and work management and trust and safety, which I'm very happy to still be involved in. It's a topic I care a lot about.

Steve: For Twitter, because you were particularly interested in that health, that trust and safety aspect of it, I guess I want to ask why, what is it about that part of designing things for people to use that as a researcher or as a person that it’s something that you strongly connect to?

Noam: Yeah, I think we have a set of societal ills, let’s call it, very troubling societal ills that I think we need to address urgently and with great care and with great responsibility. And one of those societal ills is the evolution of public conversation, of how we interact with each other as people and how hateful and nasty and unkind we can be to each other. And how much information put out there online is either inaccurate or completely false.

This really came to be more salient in my mind during the 2016 elections to the US presidency. But I think it's become even more salient ever since for multiple reasons, including the incredible and tragic rise in antisemitism in the world over recent years and all sorts of information running around out there on the interwebs that is, again, factually incorrect around all sorts of topics. Election-related, related to certain geographical regions, to certain groups, et cetera. And so this is something I care deeply about, just given my personal background, just given what I'm observing in society. When we last had a conversation, I was at Wealthfront in the fintech space, and I recall a case happening with another company, which really shocked all of us to our core. I'm not going to name the company, but it's another fintech company. A young person tried to use this other company to make certain types of investments and trades, but he was not well versed in how that world works, how those trades work, what options are and how to use them. And he believed that he had lost an incredible amount of money that he did not have and wasn't able to lose. And it brought him to enough depths of despair that he ended up taking his own life. And that to me was just one story of many that made it incredibly clear that we need to be responsible and ethical in how we build technology products. And that few things could be more important than working on trust and safety. So yeah, it's definitely an area I'm passionate about.

And we’re recording this a day after yet another senate hearing with all of the heads of different social media companies who were faced with difficult facts about the effects that they’ve had on society and on families who lost their loved ones and other incredibly tragic stories because of the way they built their platforms, because of things they ignored. And I think research can play an absolutely critical role in building trustworthy ethical experiences, responsible experiences that really matter in this world, probably more than anything else I could think of. So that’s the long answer to your question.

Steve: What kind of information can researchers provide that can feed into situations like the ones that you're describing?

Noam: It's a question of what sort of research we should be doing or not be doing, and at what level, at what altitude. Imagine if that company had put more effort into, first of all, age gating the platform and ensuring that people have the knowledge and the skill to conduct certain trades, but beyond that, into the usability of the platform. Going back to the basics, which we don't do enough of, I would suggest, which is just making sure that the information one is seeing is clear, you know, and not open to interpretations that could have incredibly tragic consequences. Like thinking you lost $700,000, I think that was the number, when that was in fact not the case at all. So for me, it goes back to those basics. Research can inform all sorts of more nuanced reactions than the one we're seeing.

Another thing that happened this week while we’re recording, which demonstrates what happens when you let go of your entire trust and safety team, including researchers, was that Taylor Swift, the incredible pop singer, artist extraordinaire, she was facing something incredibly tough to face online, whether you’re a celebrity or not, which was AI-generated nude images of her, fake images obviously. These were all generated by AI such that if you searched for her name, for Taylor Swift’s name, on particular social platforms, you would see those AI-generated images and perhaps believe, because they were very realistic, that these were in fact images of Taylor Swift when they were not. The solution this company came up with was to remove any search results for the terms Taylor Swift or Taylor or Swift or any combination of her first name and last name, which is moronic. I’m not sure what adjective to use. It’s incredibly aggressive, and I think as technologists we can do a lot better than cancel an entire search query because of that sort of thing happening. I think one thing for sure is that there’s no doubt there is a need for trust and safety professionals. There is a need for trust and safety researchers. We know how to inform the responsible and ethical building of these sorts of products and how to address these issues in much more nuanced and rational ways with much better outcomes. I mean, that seems pretty obvious, but it’s clearly not obvious to some of the people leading some of these companies. I hope that changes, and I’m very proud that at Upwork we do have these teams and we are working on these things. We care deeply about the trust and safety of the people we serve.

Steve: I mean, you’re describing a failure with Taylor Swift AI images that there’s, you know, I guess the jargon is bad actors. People are behaving in a way that’s harmful. And when that happens, when there’s a system that can be exploited or manipulated or used to cause harm, you know, I think you’re identifying like there’s a gap. The system can be used that way. But you’re saying also that without researchers, companies are not as well set up to respond to those malicious behaviors.

Noam: Yeah.

Steve: And I’m, I guess I’m just looking to have the dots connected for me a little bit more like, but you can see sort of the failure of the systems and the failure of the humans that are the malicious users. But in that scenario, or kind of analogous ones, how do researchers serve to either prevent or, you know, mitigate those kinds of malicious uses?

Noam: So there are a few answers here. One example would be that at X and other companies, some of the research we did, and that some companies are still doing, goes into supporting content moderators and support agents and other such people who are reviewing this type of content. And the research informs building tools so they can get to those problems faster and eliminate them and get rid of that content in more efficient and more effective ways. So for the time being, as you probably know, there are often humans in the loop here reviewing content. They're using certain tools to do so. Those tools make them better at their job, and building those tools requires research. So that's one example I would suggest.

Another example is that in certain companies that, again, care more about these trust and safety issues, research informs providing users with tools that enable them to control their experience. Whether it's blocking certain people or removing certain things from their experience or a bunch of other things that we can do. But ultimately, some platforms choose to give people more agency and more control over their experience, and research has heavily informed those sorts of tools. And you end up with an experience that is catered to your needs and what you're willing to see and what you'd prefer not to see. And I think a final example is that even though a lot of us researchers think of ourselves as mostly informing the user-facing user experience, you know, the actual designs that people end up seeing when they use a product, for example, Facebook's feed, several researchers in our field work more on the back end of things, helping companies sharpen and calibrate their algorithms such that the content that shows up for users makes more sense. We had that at Twitter, now X. We had that at Meta. Most companies that have any sort of recommendation systems and search systems and other such systems, they're doing a lot of research on what to showcase to users and what sorts of underlying taxonomies make sense and various tagging systems. All sorts of inputs and parameters that go into these models and adjust these models.

To give an even more specific example, in the realm of AI, we have all sorts of parameters, right? One of those parameters is called temperature, and when you adjust the temperature parameter, it sort of influences how creative versus how fixed by nature the algorithm responds to things, right? Like, how much it kind of thinks out of the box, so to speak, versus not. When you change the temperature of an AI-based tool, that of course influences how people experience it, right? And how they experience maybe how empathetic that experience feels or how aggressive it feels or how insulting it feels and so forth and so forth. And we need a lot of research going into these things to understand how tweaking all of these parameters affects how people perceive these tools, these technologies, these experiences that we’re building. So those are just some of the ways in which research, I think, can inform these topics.
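
To illustrate the temperature idea Noam describes, here is a minimal sketch of the standard temperature-scaled softmax sampling used by most generative language models: raw model scores are divided by the temperature before being turned into probabilities, so low temperatures make the top option dominate and high temperatures flatten the distribution. The function name, candidate words, and scores below are illustrative assumptions, not taken from any product discussed in this episode.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick one option from raw model scores (logits), scaled by temperature.

    Lower temperature: sharper distribution, more predictable choices.
    Higher temperature: flatter distribution, more varied ("creative") choices.
    """
    scaled = [score / temperature for score in logits]
    # Softmax with the usual max-subtraction for numerical stability.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a single index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy example: three candidate "next words" with raw scores.
candidates = ["calm", "quirky", "blunt"]
logits = [2.0, 1.0, 0.5]

low_temp = [candidates[sample_with_temperature(logits, 0.2)] for _ in range(5)]
high_temp = [candidates[sample_with_temperature(logits, 2.0)] for _ in range(5)]
print("temperature 0.2:", low_temp)   # almost always "calm"
print("temperature 2.0:", high_temp)  # a much more varied mix
```

The point Noam is making is that a single back-end parameter like this changes the felt character of the responses, which is exactly the kind of effect that needs research to understand.
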
Steve: I don't know. You used the word health kind of early on here. There's a quality to the experience that we have with these tools, these platforms, separate from bad actors and abuse and misinformation, disinformation. There are research questions to just set the tone or kind of create the baseline experience. It sounds like that's what the — if you're working on the feed at Facebook, if you're thinking about that algorithm, you're using research to just create a — ideally in the best situation, a healthy versus unhealthy experience. Just — I think there's research that talks about, oh, when you compare yourself to others, if you see positive messages, you react this way. If you see negative messages, you react this way. So you're making me realize that there are these kinds of questions around sort of the healthfulness of the experience. I think I locked in on sort of malicious behavior, bad actors, exploitation and so on. But I think I'm hearing from you that there's just a baseline to it, like what's it like to go on — you know, I mean, what's it like to go on LinkedIn every day when people are being laid off or when people are trying to get your attention or when people are performing, as people do on all these platforms. There is an experience that research can help understand and inform fine-tuning of algorithms and sort of what's shown to people and how in order to create the desired experience.

Noam: Absolutely. Trust and safety is an incredibly complex space. It's very layered. To your point, you can create more trustworthy and safe experiences if you stop bad actors from even entering the experience in the first place. And again, at X, and I imagine at other companies as well, part of what we did on the research side was inform things like account security.

How do we help people secure their accounts, and how do we make it harder for bad actors to open accounts when their intentions are malicious? So you can create more trustworthy platforms by stopping bad actors at that stage. And then there are more lines of defense, and again, research can inform each and every one of those lines of defense to make sure that the ultimate, the end experience for each and every user is a healthy one, is a trustworthy one. And I really don't think there's any stage where research can't have incredibly meaningful impact. And as we lean in even more to AI and other incredibly advanced and complex systems that to many of us are a weird and wonderful black box that we simply do not understand, I just really, really hope that we increase our investment in trust and safety exponentially, because if we don't, I really think the results will be horrific. And it's our responsibility as insight gathering functions, as researchers, whatever you want to call it, to take ownership of this, to advocate for this and to make sure we're doing this in a way that matches the incredible evolution and development of these platforms. It's just incredible to witness.

Steve: If we were to go into the future and write the history of, I guess I’ll just call it trust and safety user research, what era are we in right now for that as a practice or an adoption?

Noam: That's a tough one. That's a tough question. I think the only response I have for you now, but let's talk again in four years or so, is that trust and safety, and you mentioned bad actors earlier, is in a sense always this ongoing battle between the forces of good, so to speak, and the forces of evil, each trying to catch up to and match the other's capabilities and then beat the other side with even better capabilities. I think what we're witnessing now with AI-based systems is that the pace of innovation, the pace at which they are evolving and learning, is shocking and hard to comprehend. It's really, really hard to comprehend. Now, that's not to say that it's not going to take a long time before some of these systems are fully incorporated in our lives.

We've been talking about self-driving cars for a very long time, and they are absolutely out there right now in the streets of San Francisco and maybe Phoenix, Arizona, and maybe a few other cities doing their thing and learning how to do their thing. But I think it's going to be quite a while before every single vehicle on the road is a self-driving car. But that said, these systems are just getting more and more complicated. I think our ability to understand them, it's getting very difficult. We have to figure out what tools we need to develop in order to catch up, in order for the forces of good, so to speak, to match the forces of evil. And we also need to remember that everything these systems are learning, they're learning from us. And sadly, human history is riddled with terrible acts and a long list of biases and isms, racism and sexism and ageism and everything else. So these AI systems are sadly learning a lot of bad things from us and implementing them, so to speak. So again, we have a great responsibility to be better because, a little bit similar to a child, AI systems are learning from what we are generating. So we kind of have to be a role model to AI and we have to make sure that we're leveraging AI, maybe somewhat ironically, to deal with issues created by the incredible development of this technology. So I hope that sort of answers the question.

Steve: We know as researchers, right, any answer that doesn't answer your question reveals the flaw in the question. And my flaw is that I asked you to decouple user research for trust and safety from everything else. And I think you answered in a way that says, hey, this stuff is all connected. The problems, society at large, the technology, and the building of things are all connected and research is a player in that. So, yeah, you gave a bigger picture answer to my attempt to sort of segment things out. I think we're going to come back to AI in a bit, but I wanted to ask you, in addition to sort of trust and safety that we've talked about over the four years and this issue of building responsibly that you've highlighted, are there other things that you have seen or observed about our field that you want to use this time to reflect on?

Noam: Yeah, absolutely. I think, as I mentioned, because this happened to me as well, we’ve seen a large number of tech layoffs and certainly for research teams, but not only, of course. We’ve seen reorgs happen, major reorgs. Because, I mean, reorgs are a reality in tech, everyone who’s worked in tech knows this, but we’ve seen some major reorganizations.

And in fact, we've seen entire research teams shut down, including the example I gave earlier of the team at Twitter, now X. And as part of that, we've seen some incredibly thought-provoking articles come out. And I'm sure you've read some of these. One of them was from Judd Antin, a former research leader at Airbnb; he was my skip-level manager, and he wrote an article about how the UX research reckoning is here. Another incredibly interesting article, by Dave Hora, was around the waves of research practice. Jared Spool wrote an article about how strategic UX research is the next thing. And I think that what all of these articles had in common was some sort of discussion on the value that insight-gathering functions or research functions bring to the table. And you might not be surprised by this, but I have a hot take for you on this that I would be happy to discuss.

Steve: Bring it on. Hot take.

Noam: Hot take time.

Steve: I’m ready.

Noam: Are you ready for this? So here's the thing. If we stick to the UX research reckoning framing, I'm a bit of a stickler for words. I believe that the relevant definition for reckoning that Judd meant to reference is the avenging or punishing of past mistakes or misdeeds. So basically, as UX researchers, we made some mistakes, we made some misdeeds, and now we are being punished for it by being laid off. And again, the broader point in that article, I think, is what's the value we bring as researchers? And I am here to say that although I agree that we've made mistakes, everything that's happened has very little, if anything, to do with value, with the value that we bring. And I think it has everything to do with valuation, which is a very different thing. If I take Meta as an example, Meta was going through a tough time as a company, spending a whole lot of money on AR, VR, and other capabilities. The stock was at one of its lowest points in recent years, if not in the history of the company. And so Mark Zuckerberg announced a year of efficiency. And part of his idea of efficiency was to lay off about half of the research organization. And we have to ask ourselves, is that because researchers did not bring value to the organization? Again, I would suggest not. I would suggest that these days at Upwork, and in every company I've been part of, I've seen some incredible value brought forward by researchers. Insights that can make a huge difference to everything from the user experience to the strategy, to use Jared Spool's and others' terms.

But there are a couple of problems. The first problem is a problem of attribution. How can you calculate the return on investment of research? How do you know, and how can you record and document, which decisions and which things were influenced by research and which weren't? If I'm an engineer or a designer and I'm working within Jira or Linear or whatever platform you're using to manage your software development, then I have some sort of ticket. I have some sort of task. I write 10 lines of code. Everyone knows those 10 lines of code are mine, or mine and other people's. Everyone knows what those lines of code translate into in the experience. And so the ownership of what that experience looks like, from design to engineering, is clear, because it's clear who made the Figma and it's clear who wrote the code. And everything is incredibly accurately documented. When it comes to research, when it comes to knowledge, you know, research is circulated in all sorts of ways, right? From Slack channels to presentations to a variety of meetings and one-on-one get-togethers with cross-functional partners. And in all of those meetings and all of those interactions, research is coming through in some way. But it's incredibly fuzzy and unclear how that translates into impacts on the products. That doesn't mean research doesn't have value. It means it's hard to measure the value.

And then one more thing that I think is going on, which you probably know very well as one of the most knowledgeable people on the topic of interviews that I know, is what happens when I’m responding to your question? In this case, maybe you have some questions about what insights have we learned? What happens as I’m giving you a response? What are you doing? Make a guess.

Steve: I’m thinking about my next question.

Noam: You are thinking about your next question. It's so hard to avoid that tendency. And I think in many cases, product managers, product leaders and other cross-functional partners of research, they're taking in the research, but they're just thinking about their next question. And to be fair, I think one more thing that's going on here is that we as researchers do not understand the feeling of being held accountable for certain metrics, and for millions, if not billions, of dollars in revenue that can be moved one way or the other by the quality of what we choose to build and what we choose not to build, and the roadmaps we have, the strategy we have, et cetera. Usually it's product leaders who are accountable for that. And we're not. And so the pressure is on them. And so as they take in our insights, they can't help but just think their own thoughts and think about their vision and maybe ignore certain things that we share. And then business leaders, ultimately, what do they care about? Again, valuation. The stock. That's just how it works, which is why I said in the beginning that I don't think this is about value at all. I think it's about valuation. I think business leaders are optimizing their business for their valuation, for their stock price. They're not laying off researchers because we didn't deliver value or because we weren't strategic enough. They're laying off researchers and many other people because that's one of a few ways to become more efficient, to look good in front of your shareholders. It's not such a complicated game. You know, we're doing this interview a day after a particular company started offering dividends to its shareholders, and that had a very expected effect in the market on that stock. It just went up quite a bit. That's how the game works. Those are the dynamics of the market.

And so we're in this situation where, again, I'm not saying we haven't made mistakes. I think Judd, for example, absolutely had a point when he discussed the different levels of research and the fact that we're making a mistake by looking at usability as some sort of basic, tactical type of research that only junior researchers should do and that we shouldn't be focused on, and that we should only be looking at higher levels and higher altitudes of research. I couldn't agree more. I absolutely agree with Judd on that. But this basic premise of research not delivering value I think is incredibly problematic. And I don't think it's correct. I don't think we need to move into some third or fourth or whatever wave of research. I just don't see that personally. I think many of us have already been in wave one and wave two and wave three of research. We've already been doing strategic research. We've already been affecting the business level, the product level, the design level. We've already been conducting all sorts of research from usability to incredibly foundational, generative research. And I think we're being very, very hard on ourselves. And I think we need to cut ourselves a little bit of slack. Just a little bit.

Steve: I mean, I'm all about being kind to ourselves and not blaming ourselves for things that are beyond our control. We're all susceptible to that and it's hard to kind of watch that going on collectively. But when you're in a situation where there is, I don't know, a misalignment of values, like what, you know, like you said, value versus valuation. When that misalignment, that's my word, not yours, when that exists, we can cut ourselves slack, but that's not going to change that gap. I don't want to say to you like, well, here's, you know, you just outlined something systemic, deeply rooted, the nature of capitalism, it goes all the way up. How do we fix that? I guess I don't, I think that's not a fair question, although, you know, take a shot if you have a hot take there. Are there mindset changes or incremental steps or, you know, things that you've seen research teams do that acknowledge to some extent the difference between "we're not bringing value" and "their concern is about something else," and how do we kind of meet them where they're at?

Noam: So look, Steve, that’s an incredibly fair question. And I do want to be crisp about the fact that, yes, we need to be doing something. Something needs to change even if I view the problem differently. But before I get to that, just to reiterate, we as researchers know very well that it’s absolutely critical to identify the problem and to identify the correct problem at the right level. So before I get to what we should do, I just want to highlight the fact that in my view the issue here is that some of us have misidentified the problem, in my opinion. And we need to be tackling the actual problem.

And just to get to that, let me pivot to kind of the second topic that we did cover last time and that I want to cover again today. We did talk in our original conversation about research methods and how we do research. And even though I believe we've brought a lot of value to the organizations that we work in as insight gathering functions, I do absolutely believe that given the broad evolution of the landscape we operate in, we do need to rethink how we operate. Not because we haven't delivered value, but because the ways in which we can deliver value are rapidly changing. And I think we can now sort of extend ourselves.

And I was very influenced by a book titled Multipliers, not sure if you've read it. But the basic idea of it is that there are employees within any company who are multipliers in the sense that they don't just do great work, they make everyone else's work even better. They level up everyone around them and they create these situations where they define incredible opportunities and they liberate people around them to get to those opportunities and to make the most out of them. They create a certain climate, which is a comfortable climate for innovation, but at the same time an intense climate where a lot of incredible things can happen. What I'm getting at, and this is probably not surprising to the people listening to this, is that the era of AI is upon us and I think it's incredibly important to acknowledge the ways in which we can extend our work and ourselves with AI tools. So I know that my mind has moved a little bit from methods, so to speak, to leveraging AI to use similar methods but at a scale that we've never experienced before and we've never been able to offer before to our partners.

Yeah, I mean, I think there are certain paradigms in our industry that are changing and perhaps AI is even eradicating those paradigms and rendering them useless. I mean, if it’s okay, one recent example I have is that we had this paradigm that we need to make a tough choice. We’ve talked about this, you and I, a little bit. We have to make a tough choice between gathering qualitative data at small scales, which can often be okay, by the way, unless you’re developing a very complex product or unless you want to make sure that trust and safety is in the center of everything you do and then maybe you need a little bit more scale and you just couldn’t get it because you didn’t have the people to reach that scale of interviews or qualitative research. Or, of course, the other choice you could make was to gather quantitative data at any scale you like as long as you can afford it, namely by sending out surveys to hundreds or thousands of people. The issue is survey data is shallow data or thin data or whatever you want to call it, whereas I believe it was Sam Ladner who coined the term “thick data” for qualitative data. And sometimes you need that thick data and you need it at a scale that we were never able to reach before. And AI enables you to do that.

I’ve personally witnessed tools, one of them being Genway, which are completely revolutionizing the way we conduct research. I’ve seen existing research tools, Sprig would be a good example, Lookback, there’s so many incredible tools that have incorporated AI into their workflows. And they are making paradigms like the one I mentioned, this choice between thick and thin data, they’re making them irrelevant, absolutely irrelevant. Which is very interesting to me. And it ties to this idea of multipliers, this idea in this book I love. Because AI research tools, like the ones I mentioned and so many more that we could talk about all day, they enable us, in a sense, to be multipliers. They liberate us, in a sense, to do a lot more than we could ever do before. And hopefully that translates into us enabling our cross-functional partners and the teams we work in to deliver their best thinking and their best work as well. So that’s, I think, where our field is going in a nutshell.

Steve: Can you describe, with maybe a little bit of specificity, a work process or set of work tasks that a researcher might go through where AI tools like the ones you're describing come in? What are they doing, what's kind of coming to them, and, you know, what does that process look like when it's AI enabled?

Noam: I can give a couple of examples. The first example, if I think of a tool like Genway, an interview tool, is that interviewing is tough, as you know well. You've written what I consider, and many people in our industry consider, kind of the Bible of interviewing people. No offense to the actual Bible. And as someone who's written one of the primary guides to how to interview people, I think you appreciate more than others how complex being an interviewer can be. It's something that you can learn over years and years of training and mentorship and still not nail some pretty critical aspects of interviewing. For example, asking the right, the best, the ideal follow-up question, and actually listening to what's being told to you actively, rather than thinking about that follow-up question all the time, because listening is what enables you to ask a good follow-up question. Systems like these can train on an unlimited number of past interviews and an unlimited number of texts like your book, and learn from all of that how to conduct the best possible interviews, right? And with these types of abilities to learn and then apply that learning in an interview situation, I believe it's fair to say it would be technically impossible for any researcher to achieve that level of learning in a matter of hours or days or weeks, or months at the most, rather than years.

You know, one of my hot takes, I hope the audience doesn’t kill me for this, is that David Letterman interviewed people for many, many years. And I personally think David Letterman is a horrible interviewer. I never understood why he asked the questions that he did, and everything about his interview style is very, very odd to me. But putting that aside, interviewing is a very complex skill. None of us can really ever witness how other people do it. And we all have to spend years of practice learning how to become better interviewers, which is a deceptively difficult skill to build. And these AI tools are coming in and, at least in theory, can learn all of that shockingly quickly. That’s one example, and I’m very curious to see how the research community responds to these types of tools and uses that, and what issues they do find in the quality of these types of interviews and how they can be improved.

The second example is that I recall from even my undergrad psychology studies, not to mention my graduate studies, that our ability to hold information in our brain is quite limited. And so even when we're synthesizing five interviews, not to mention 500, because sometimes you need 500, it's very, very challenging. If you do five 30-minute in-depth interviews with people, organizing your thoughts and synthesizing those interviews has never been a trivial task. And I think there are a large number of biases and other issues and strange heuristics that we use to synthesize information that might not lead to the optimal outcome, an outcome that's as objective and as accurate a representation of the entirety of those interviews, and of how they interact with each other, as we would want it to be. One particular task that generative AI and AI in general is very good at is summarizing and synthesizing information. And especially as we collect more information, that becomes a lot more relevant and even critical, I'd suggest.

When we entered the big data era, we needed to develop a bunch of tools. You know, so many companies came out of that era building tools that enabled us to analyze and very easily visualize in beautiful dashboards what those data are telling us. Now, we can also start collecting qualitative data at unimaginable scales. And not just qualitative or quantitative data, because I think that distinction is going to matter less and less as time goes by, but more importantly, we will become so much closer to the people we serve, our users, our customers. I think we talked about this in the previous podcast, about diary studies and how they used to be physically sent to people's mailboxes, right? And so you as a researcher had to plan your studies, send out an actual diary, have people log their entries into it, and then they would have to send it back, and then you would have to very manually look into those entries. And obviously that takes a very long time. These days, and especially with the support of AI tools, you can be in touch with the people you serve all the time, as much as you want and as much as they want, and you can both collect data and synthesize data and even communicate those insights at a pace that's hard to even fathom, for me at least. But it's very immediate. Can you say very immediate or can you not modify the word immediate? Is something either immediate or not? Okay, I don't know. I'm just afraid of my mother and what she might say here about my grammatical choices. But anyway, yeah, yeah. But I think that's what matters the most.

Steve: We’ll ask her to fast forward over this part.

Noam: People’s schedules don’t really matter anymore because they can choose to interact with AI, for example, whenever they want to. And it can be in context in real time. And then AI can immediately synthesize those learnings. And it can immediately improve the way it collects insights based on that interaction and all previous interactions. I was thinking about this a lot in this framing of multipliers. Like, who is the multiplier in this context?

Steve: What does this hold for researchers? There's research, and I think you're describing a really audacious vision for what research will be, which I think speaks to the point about valuation versus value. But researchers, which currently refers to humans, what's your vision for that, or your anticipation for that?

Noam: Is it the AI? Is it the researchers? Like, who is multiplying whom? But what I do think is that, well, first of all, I’m a techno-optimist or whatever you want to call it. Even with everything that’s happened, even with all of the tragedies and the negative aspects of technology that we’ve discussed in this conversation and others, I am still at heart a techno-optimist. And so my deep belief is that certainly for the foreseeable future, if not beyond, AI will become a valuable extension of ourselves and our work. And I do believe that even if we don’t have to deliver more value necessarily than we already are, even if our value just goes underappreciated and there’s nothing terribly wrong with how we’ve approached things, I still think that AI will augment our work, will amplify our work, will enable us to really invite the teams we work with to do their best work ever.

Because AI will help us see opportunities for research that we haven’t seen. It will help us settle a bunch of debates that maybe we’ve struggled to settle before. It will help us to connect with more users, more customers, more clients, whatever you call them, from all over the world in a way which vastly improves how equitably and how inclusively we build technology products, which is something that we’ve struggled with traditionally, if we’re being honest here.

A very clear example of that is that an AI can speak a bunch of different languages and connect with people across all time zones and languages, and even mimic certain characteristics of a person so they feel more comfortable in that context. So for multiple reasons, I feel like if we’re at all concerned about getting buy-in from our partners, if we’re concerned about the value we bring, the impact we have, I definitely think AI tools can really improve our chances of getting to where we want to be. And I think it’s going to be a very long time, if ever, before these tools replace us as researchers. You know, the reason I chose to be a researcher, the reason I chose to be a psychologist is because of the incredible complexity of the human mind.

You know, the people listening to this can't see this, but for my friends who are physicists, for example, I'm holding up my phone right now in front of the camera, and if I drop my phone onto my desk or onto my floor, it's a very easy calculation for physicists to say how quickly the phone will hit the desk, what energy there will be when it hits the desk, and what the chances are that the phone will break given the speed of its fall. Physics is a beautiful thing, but it's also a fairly reliable scientific practice.
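
For anyone curious just how short that calculation is, here is a minimal sketch with assumed, illustrative values for the phone's mass and the drop height; nothing in it comes from the conversation itself.

```python
import math

# Back-of-the-envelope kinematics for the dropped-phone example.
# Assumed, illustrative values: a ~0.2 kg phone falling ~1 m onto a desk.
g = 9.81        # gravitational acceleration, m/s^2
height = 1.0    # drop height in meters (assumption)
mass = 0.2      # phone mass in kilograms (assumption)

fall_time = math.sqrt(2 * height / g)   # t = sqrt(2h / g)
impact_speed = g * fall_time            # v = g * t
impact_energy = mass * g * height       # E = m * g * h

print(f"fall time:     {fall_time:.2f} s")       # about 0.45 s
print(f"impact speed:  {impact_speed:.2f} m/s")  # about 4.4 m/s
print(f"impact energy: {impact_energy:.2f} J")   # about 2.0 J
```

Whether the screen actually cracks depends on materials and how it lands, which is where the tidy physics ends and the fuzziness Noam contrasts it with begins.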

You know, there are rules in physics, and they’re fairly clear. And even though physicists sometimes, on occasion, like to look down upon people like me with a PhD in psychology and a background in psychology, I think that in many ways, the field of psychology and other fields that deal with the human experience, with the human mind, they are so much fuzzier and so much more complex. And I’m saying this because I think many, many other professions will be replaced by some form of AI before researchers ever are. And that’s because of this complexity, this fuzziness that’s hard to capture. I think in many ways, our field is incredibly technical, but in other ways, our field is not technical at all.

There’s a lot of art to it, and there are all sorts of different aspects to it. It’s a lot easier for an AI to generate a piece of code or to generate a contract or to read a mammogram or an MRI and identify something, rather than talking to another human being and understanding them deeply. I think that’s a lot more complicated. So I’m not too concerned about the research fields, but we’ll wait and see, I guess.

Steve: So as we kind of head towards wrapping up, you know, since I’ve known of you and known you, I’ve always seen you doing different things, I guess, to be involved with the community of user research and what’s that look like for you now?

Noam: I, like many people in our profession, have definitely been on a journey. And if I’m being honest with myself and the audience, it’s been a very challenging few years, certainly for me. And I know for many others out there, whether it’s COVID and layoffs and a bunch of other personal events and things in life that just happen to us. And that’s part of the reason why I decided, and I know quite a few other people in our community decided to do this, to pursue coaching, among other things. I took a certification in coaching. I think even with my background in psychology, I felt like there was so much more to learn in this realm. I think that it’s always been important for me to support people in our community, and I wanted to do that even better.

And so one of the things I'm doing these days, to a limited extent, is some coaching, not just for UX researchers or UX professionals, but people in tech in general. And then I have some thoughts for the near future around sharing some of that coaching and ideas in other ways. In addition to continuing to teach in all sorts of ways, I'm still teaching at a bunch of different institutions and planning to restart some of my teaching on Maven, which is a wonderful platform for learning all sorts of things. I think the general trend in my life and career right now, and maybe this will resonate with people, is that it's definitely been a challenging few years, and it feels like maybe I'm now coming out of it a little bit, ready to take on certain other challenges in addition to my role at Upwork, et cetera. And I know that I really want to be there to support our community in particular as we all go through rather challenging times.

So I just invite anyone who wants to get in touch to message me on LinkedIn or email me or get in touch in any way that works for you, and I'd be happy to chat and help. And then I also thought about, and we'll see where this goes, but as you can tell, these topics of AI and where our field is going and how it's evolving, that matters a lot to me and I'm thinking about it constantly and want to be part of this evolution, if not revolution, in how we work. And so I'd love to have conversations similar to this, whether out in public or privately, around these topics to continue to understand them. And I'm just looking forward to seeing what is next for our industry. I feel like when we spoke a few years ago, I think we had a solid sense of what's to come. And I think in many ways we discussed things that did end up manifesting in some way or another. But in this conversation, Steve, I don't know what we'll be talking about in four years, if you give me the opportunity to talk to you again. And I can't decide if that's exciting or incredibly anxiety-provoking. So I don't know. Why not both? Or if I'm going to be an optimist, or say I'm an optimist, then I'll choose to be an optimist and say, maybe that's exciting.

Maybe it’s exciting that I really don’t know what’s coming down the line. But I do know that I want to thank you again so much for taking the time to do this. It’s always such a pleasure to talk to you. So thank you for giving me the opportunity.

Steve: That’s my line, man. I’m saying thank you.

Noam: Well, for me, it's a special treat. Maybe I can share also with whoever's listening to this that we did get a chance to meet in person finally, not that long ago. And that was even more of a treat. And I really do hope that our community of researchers can get together more often moving forward and meet up and discuss all of these issues.

Steve: These are some really encouraging, I think, provocative things to think about and some really positive and encouraging sentiments for everyone. And there will be show notes that go with this podcast as always, and so the stuff that you've mentioned, Noam, and ways to get in touch with you, we'll put that all in there so people can connect with you, if by some chance they aren't already connected with you. So yeah, I'll flip it back as well and say thank you for taking the time and for thinking so deeply about this and sharing with everybody. It was lovely to have the chance to revisit with you and kind of catch up on some of these topics after a few years, and I look forward to doing this again four years from now, if not sooner.

Noam: Can’t wait. Adding it to my calendar right now.

Steve: All right.

Noam: Cheers, Steve. Have a good rest of the day.

Steve: Cheers.

Well, that’s another episode in the can. Thanks for listening. Tell everyone about Dollars to Donuts and give us a review on Apple Podcasts. You can find Dollars to Donuts in most of the places that you find podcasts. Or visit portigal.com/podcast to get all the episodes with show notes and transcripts. Our theme music is by Bruce Todd.

About Steve