[S1E6] Katie Deloso: “Don’t sell them, just listen to them”

The dangers of feature bloat, running impactful UX research calls, and how to make sure that users – new and old – see the value in using your product

Listen to the podcast here.

Find out more about Katie here: https://www.knurture.com/


Episode transcript:

[[Season intro]]

[[Episode intro]]

What does it actually take to get your user experience from “meh” to great? How can you consistently gather useful feedback and translate it into features that make your users stay? Where do sales and marketing come in – and how can you continue reinforcing your product’s value during onboarding? All of that – in this episode.


{{INTERVIEW}}

Katie Deloso   

Hi, I'm Katie, I design products. I've worked with 250 software companies over the last 15 years on designing their product experiences, doing everything from their user research to the actual design of the product itself.


[[BREAKOUT]]

During our initial conversation, Katie said something that was really interesting – that running UX research is actually not that hard. Which is not what I expected to hear, to be honest. But, of course, there’s a little more to the story…

Ekaterina (Sam) Howard   

You mentioned that UX research is actually easy, like you can see when something is broken. But if that's so simple, why are there so many products with terrible UX?

Katie Deloso  

Well, it's a combination of things, right? 

So I think a lot of companies don't end up doing the research, or they do a half-baked version of research, where it's more of a sales call: they're trying to use it as a discovery technique, they're asking a bunch of questions, and they're leading the witness, saying, you know, "Do you really like this feature that we have here? It's really shiny, would you like it?"

And so they come back with findings like, "They love the feature!" Well, if you position something in such a way that you're leading the witness and saying "Would you like it?" people tend to be agreeable, they go "Yeah, of course!" But they aren't being asked in that moment to take their wallet out and pay for it.

So, you know, as I said, we're agreeable by nature, and so people generally say yes. But when the time actually comes to get out their credit card, or to try to push it through with their boss, they can't do it.

So there's that one piece of it: they're not really doing what I would consider to be UX research, they're doing some variation on the "get out of the building" exercise from Lean Startup.

And that causes people to get mixed signals. And then they say research doesn't work: "We did it, we got a bunch of data, and it led us in the wrong direction. So we're just gonna go with our gut."

That's one side of it.

The other side is, they either have trouble recruiting for research, or they don't want to use up too much of their social capital on research. 

So they end up having a really hard time with getting people on the phone and actually walking them through the product. 

So a lot of times, I'll talk to people about generating that group of people that they can use over and over again, who are really invested and excited about helping to improve the product experience. So it kind of becomes this next thing. 

So when you're actually watching somebody use an interface, it's very clear what works and what doesn't. But the process of getting them to that point is actually where I think most research endeavors fail.


Ekaterina (Sam) Howard    

That makes sense. Okay. So let's start with the discovery call / UX research call mash-up.


Katie Deloso   

Yeah.


Ekaterina (Sam) Howard   

Sounds like a bad idea. So can you give me some examples of good questions versus bad questions, because I see this fall apart spectacularly as well?

Katie Deloso 

Yeah. I'm trying to think of some really good, bad questions.

It's more about listening and remaining neutral, more than anything. 

So a lot of times, the questions that I'll try to do on the discovery side are more jobs to be done inspired questions where I'm asking them more about what they actually do in their day-to-day life:

  • “So tell me a little bit more about how you would solve this problem today?”

  • “Can you walk me through your process, even show me your screen and tell me how you would do this using the tools that you already have right now or using our tool?”

Or, you know, there are variations on that question. Versus "Hypothetically, if you had something that did this particular thing, can you tell me a little bit about how you might go about using it and how that would change your life?"

When you get into this hypothetical world, where you're asking people about a new feature or a new product, something that they haven't actually used or done before, they tend to be overly optimistic about how it's going to help them accomplish something or do something.

And the agreeableness factor added into all of this means that it's not that people are trying to mislead you, it's just that they imagine a future maybe a little bit differently from the way that you're talking about it. And you two are picturing totally different things. Or they end up saying things like "That sounds amazing," and they'll go down a path of how that might actually change things. But it's not enough to actually move the needle for them.


Ekaterina (Sam) Howard  

Okay, so is there a difference - and we're probably splitting hairs at this point - but is there a difference between the "hypothetically" or "describe your ideal future," for the UX piece?

Katie Deloso   

I think the description of the ideal future is fine, as long as it's taken with a grain of salt, right?

In asking that question, it's more about understanding why they want something to fix a particular problem, rather than about the solution they're offering itself.

So I think if you ask that question, what you're really looking for is what problem they're trying to solve; it's a roundabout way of asking what their brighter future would look like.


Ekaterina (Sam) Howard  

That's still not helping with the UX, though – like the actual features, the nitty-gritty of it?


Katie Deloso   

Generally, no, unless they say I'm using a tool that already does these five things and this is the limitation of that tool that I'm running into right now. 

That's helpful, that's useful, because they're actually doing something and trying to accomplish something. I mean, sure, all of these things can be used as some sort of signal, I would just put a lower weight on something that's all hypothetical, not based in actual behaviors.


Ekaterina (Sam) Howard  

Okay, and what about letting them play with a prototype versus walking through?


Katie Deloso  

Let them play with it. 

And the reason why is, first of all, most of the time, you're not going to be there in the real world when they're using the application. So it's helpful to see what they would do on their own, what problems they'd run into, what they don't understand, and how they would actually go through that workflow without you guiding them.

So a lot of times, I'll just sit back and watch somebody actually use the prototype, so I'll see what they're trying to do. And then I'll jump in and ask them questions if, for example, they're running into a problem, and they'll usually narrate what they were trying to accomplish.

So basically, have them talk out loud and think out loud as they're using the prototype. You'll see where they stumble, you'll hear them have those hiccups, and that's when it becomes really clear where the issues are in the experience overall.


Ekaterina (Sam) Howard  

So what are some of the questions that you've asked [them] in the moment?


Katie Deloso   

That's a little tougher, because it usually is very specific, because I'll try to do that initial setup with them of understanding what they're doing today. 

And I'll usually try to connect something that they're doing right now to the prototype that they're using, and ask them a little bit about what they're trying to accomplish. 

Or I'll ask them what they thought something should do if it didn't behave in the way that they expected, or what they were trying to get done in clicking on that or using that particular dropdown. Maybe they open it up and they don't select an option; maybe there's an option missing that they really need in order to filter a large table, for example.

Generally, the biggest things that I see happen over and over again, are problems with the perception of the value that this product is providing. So they don't fully understand why [choose] this product over going back to my current tool set that I'm already really familiar with. “I use Google Sheets every day. Why would I use this tool over using Google Sheets?” – for example.

And if they're able to articulate that at the end of the research session, I think that that's a very big win. It's bridging that gap between the product and the marketing so that you're able to say like, great, marketing sold me this or sales sold me this, the product actually reinforces it, so they can look at it at the end of the day and say, "Yeah, this is what I was sold, it does the job." 

So that's the biggest miss that I end up seeing over and over again, is just that marketing or sales sometimes sells them an idea, a brighter future that they think they're going to get. 

And then the product is expected to deliver on those expectations and oftentimes it does, it just doesn't tell you that it did it. So it's that moment where you need to reinforce and celebrate that win that most products miss.


Ekaterina (Sam) Howard   

And I suspect you're not just talking about like confetti when you send an email or just the celebration screen.


Katie Deloso   

No! I mean, celebration screens are amazing. And for some brands, and some products, that works spectacularly.

A lot of that actually gets done through the tiny pieces of text throughout the interface, where it's just reinforcing the value of what you're doing and explaining why.

Why am I asking you to answer these seven questions? If I tell you why, you're more likely to actually do this.

There's that old Xerox copier example that I feel like every Organizational Behavior student has heard the story of. There was a study where there were like 15 people in line, and if somebody just cuts to the front and tries to use the machine, people are upset, they're offended. But if they give any excuse – it doesn't matter how bad the excuse is – they're okay. They're more okay with that person cutting in line.

Same idea for an application – we need to explain why we're asking for some of these things. Otherwise, people feel like, "Why are you trying to get all this information out of me? Why do I have to go through all these different steps?"

I know I'm taking us on a tangent here. I've recently been debating this online with people, because I posted something along the lines of "More steps in an onboarding process are actually okay, as long as there's a reason why you're asking somebody to do this. You're not going to see increased drop-off, you're not going to see people bailing out of the process. It actually helps both the end user, because they're getting a more customized application and more customized notifications from you, and the team, because they understand their customers. It's a win-win."


Ekaterina (Sam) Howard   

Yes. So what's the problem?

Katie Deloso   

Well, the problem is this myth of the click, which is “More clicks equals more drop offs,” and Andrew Chen actually just posted about this. 

So I didn't realize my post was timely, and people were starting to compare the two ideas. "More clicks results in more drop-offs" – well, sure, if you're just asking them to do things without giving them a reason, or a brighter future, as to why. Why are they doing this? Why are they taking these steps? Sure, if I have to fill out 15 pieces of personal information with no reason why, I'm not going to do it, and I'm going to drop out of the process.

That doesn't mean more clicks equals more drop-off; it means more clicks without a reason or a motivating factor behind them result in more drop-offs. Whereas on the other end of the spectrum, people feel like the actions they're taking are actually helping them along their path in this application, like maybe it's going to be [helpful].

I'm working on an application that gives advice to somebody who's getting a mortgage. And what they found was that the more questions you actually ask somebody about their situation, the more they feel like they're going to get customized advice back from you.



[[BREAKOUT]]

If your product has more than one use case, this is probably going to be relevant. Personally, I get very frustrated if I need to figure out my use case without any support.

So what does it take to create a relevant onboarding experience that saves your new users' time and gets them to the value?

That’s what we’re going to discuss – plus, we’ll talk about a situation where you’re building an all-in-one solution… end up with a Frankenapp… and now need to course-correct to make your product sellable again.



Ekaterina (Sam) Howard   

What I see happen a lot is... yes, people eventually get to the value. But it takes them so long and there are so many wrong turns that by the time they get there, they're just like, "Oh, my God, and you couldn't have helped me get there, like, five hours ago."


Katie Deloso  

Yep. And this is because the marketing team has been really great at messaging: they know what resonates, they've tested all these different alternatives. So they've got this messaging set. And then something happens where you click "Log in," and it gets thrown across the fence to the product team.

And the product team kind of adds onboarding at the last minute, because they're busy building features, and they're like, “Great, well, we just need to like, you know, get people to set things up.”

And it becomes a very tactical thing, when it should really be a blend between product and marketing, where that messaging is being pulled in from the marketing side and matched up with what you want them to do in the product to get the most value out of it.


Ekaterina (Sam) Howard  

Okay, now, I would really love to see that picture of an ideal world where marketing and product work together on making this happen. Yeah, tell me more.


Katie Deloso   

I mean, it's tough, right? 

And so a lot of times, when I get pulled into these kinds of things, I'll go talk to marketing, I'll go talk to sales, and ask what works, what doesn't work, what messaging really resonates, what people expect. And then I'll talk to product and say, "What do people need to do in order to get the application to work in the first..." – I call it the first 14, because it's really the first 14 minutes, 14 hours, 14 days.

If you make it past 14 days, and they aren't set up, you're toast, right? 

Even with large applications, you start losing people at that two week mark. 

So it's about understanding both perspectives and then trying to pull them together into a single [one]. All you're doing is looking at what they're being sold, what the future looks like for them, and pulling that together with the actions they're being asked to do.

Generally, they're in some level of alignment, and all you have to do is pull them together. 

The bigger problem is when they're not aligned. But generally, they're pretty close. So it's really just about tweaking it to the point that the messaging from the marketing and sales side gets reflected back to them, rather than just very...


Ekaterina (Sam) Howard 

Can you give us some examples?

Katie Deloso  

Well, there's a really good application that does [it]. Not sure about the marketing to product side, but there's a really great app called Simple.

Talking about more clicks, this is an intense onboarding. 

Simple is an intermittent fasting app. And they ask you, I think, upwards of 30 questions in their onboarding process. However, as they are asking these questions, they're reinforcing the value you're going to get out of the application at each step. 

So they'll ask you things like, "Are you here to lose weight? Are you here to feel better?" 

So I think I answered yes, I want to lose weight. 

And the subsequent questions they ask are, "What's your weight today? What's your goal weight? Do you have a special event coming up where you would like to know approximately what your weight might be?"

And so you answer a couple of questions, and they're able to tell you that you'll meet your goal weight by this date in a couple of months, and that by your vacation in March, they think you'll have dropped eight pounds.

And then they'll say something like, you know, "That's the 10% mark, amazing!" 

10% weight loss has all of these amazing health benefits, even if you're not at your goal weight.

So they're able to kind of keep you moving along based on some of your earlier answers. 

Now, I'm sure if I had answered that, instead of losing weight, my goal is to [get] mental clarity, they might walk through "What time of day do you tend to feel a bit sluggish?" and then subsequently walk you through why. They'll have you answer some questions around that, so that they can reinforce that value back to you again.

So while it looks like a lot of questions, you're kind of getting bumped every time and you're feeling like, yeah, this app is getting customized and wow, look at all these great things that I'm going to do. 

So they've done a really good job of pulling in the promise that intermittent fasting is basically going to change your life. And they take it through, and you start imagining what your life is going to be like, and how it's gonna make your life better, through that onboarding process.


Ekaterina (Sam) Howard   

What would that look like for B2B?


Katie Deloso  

So for B2B, you are being sold something that's going to do something for you, right? 

Generally, you're going to be faster, you're going to have new information, you're going to get some sort of superpower in your day to day job. 

I'm trying to think of a really solid example of one that I've worked on, that maybe I can abstract out a little bit.

So let's take Figma, Figma is a great example. So one of the things that they actually do show in their onboarding flow reflects their value proposition that you can work together with developers and other designers in real time.

I actually just looked at their onboarding flow recently, so this is not like me magically having knowledge of all onboarding flows in the world.

But one of the first things the first couple of screens show: they'll actually ask you questions about who you want to collaborate with, and they'll show cursors on top of the screen, in an abstracted way, to show you're gonna be working with all these people in real time. For a designer, [it] is so exciting, because, you know, in all the old tools like Photoshop and Sketch, you were working by yourself – you used to have a repository on GitHub where you were checking designs in and out.

I feel like it doesn't feel as revolutionary today as it did a couple of years ago, but they're still pulling in that value proposition of "now you guys get to work together in a more collaborative fashion."

I think they've done a really good job of pulling that value prop into their onboarding. So it's basically: say what your competitive differentiator is, for the most part, and then pull it through the onboarding – show it in the visuals, show it in the kinds of questions that you're asking – and it develops this cohesive theme for the application that should reflect what you're selling.

Ekaterina (Sam) Howard   

Okay, a sub-case, for everybody out there whose product is an all-in-one, complete solution that will solve all your problems.

Katie Deloso   

Don't get me started on “Swiss army knife.”


Ekaterina (Sam) Howard   

No, please, please do.


Katie Deloso 

So here's the thing, I feel like I hear that all the time: "We're the Swiss army knife of [industry] [category]."

For example, there are tons of these in social media management that have lived and died over the course of the years. Because if you do everything, you're bloating your software, which dilutes the value proposition, making it a lot harder for me to tell people what you guys do.

So I often think of it like, when I am looking at my toolkit, how do I know I want to pick your tool over all these other tools that I already have? 

If you do everything for everyone, it's really hard to know. Like, when do you pull out a Swiss army knife versus a screwdriver, versus all the other tools that are inside of the Swiss army knife? It's really hard to know what is the use case for this thing.

So first of all, it makes it harder to know what to sell because you don't understand when you would use this in your everyday life.

It makes it harder to support, because you have to have a huge codebase – you're doing all these different things and all these different features without a particular focus. And then a lot of times you end up in a case I call a Frankenapp, where they've bolted on all of these different pieces, and now you have this kind of zombie monster thing that is really, really hard to navigate. Because they kind of bolt these things on – they throw one in a sub-menu, throw another in this menu over here – and then people who came in for that one little tiny feature can't find it and have no idea how to use it. And it's not being supported, because there are 15 other features that are getting more support.

And what I think most companies find is that only a handful of features end up getting used in those cases, when they actually are tracking usage.

They're like, "Wow, these three features are the ones that get used over and over again," rather than the trailing list of 10 other things that were added on for the sake of parity with other applications, or because "I heard once in a sales meeting that somebody wants this, and so we built it," or because it was built for a huge enterprise customer to close the deal.

All those things end up happening. 

And it just becomes unsustainable in the long term for most companies, unless they raise tons and tons of venture money and have huge teams that are really able to support it. But even then, I rarely – maybe it's just the nature of what I do – I rarely end up seeing those cases, because I'm usually the person who gets called in to fix the bolted-on architecture mess.

Ekaterina (Sam) Howard   

Okay, so when somebody says "Frankenapp," I hear HubSpot. But you're very right. It's a very different scenario – HubSpot can do this because it's HubSpot. But tell me a bit more about the cases where this is a mess, that's not working, and you need to fix it. How do companies realize that this is a Frankenapp?


Katie Deloso   

At least what I end up hearing is things like, "Yeah, our customers don't know what we do. They can't articulate our value proposition, they can't explain why our software over anything else."

You hear that on the marketing and sales side. On the product side, it's, "I think we just really need to fix our navigation here, because, you know, people come in, and they don't know where to go, and they don't know what to do. Or maybe we need a better dashboard – we think a dashboard might be the fix for this, because we can just tell them where to go next."

Those are usually the types of things that are the leading indicators that there's just a lot going on here.

And then on the technical side, what ends up happening is they're looking at their team, they're looking at the structure, and they're like, "Who's gonna support this feature we built six months ago? Who's going to support this feature we built a year ago?" They're still getting bugs and complaints about those. And now they have to prioritize: are we going to work on a new feature, or a feature that's being used really heavily, or are we going to support this lesser-used feature that 1% of our customer base is using and that we're getting some bugs on?

It becomes a juggling act for the team. And I think that usually is where most people end up realizing it. 

But I end up hearing it on the product side, usually before that, which is just: people can't find what's going on. They can't navigate this thing. Or they can't find this particular thing.

It's like "I need to do this workflow and it's all chopped up in a million different places," because it's just really hard to do some things really, really well. 

Back in the day it wasn't everything-in-one: they started with pieces, and those pieces grew over time, and then it became a more comprehensive solution. And if you have the funding to do that, and you have the team to do that, great! I'm not saying don't go in that direction.

I'm just saying, for most companies who don't have gigantic teams, don't have gigantic amounts of funding, which, let's be real, funding has gotten a lot harder lately, focusing on a core feature set is really phenomenally helpful, rather than trying to do everything for everyone.


Ekaterina (Sam) Howard  

Okay, but how do I expand into new markets if I don't have new features?


Katie Deloso   

I think in that case, it's fine. I'm just saying don't go after 50. 

It's when people go after the idea of an all-in-one solution as their value proposition that you end up in a really, really scary situation. I think it's fine if you're going after a new market, or you're going after expanding your customer base, and you're saying, "Great, maybe we focused previously on a particular vertical or a particular industry. Now we want to focus on this new industry over here, where there's going to be expansion in the future."

So that makes complete sense. 

It's just about going in with your eyes open, and really limiting it down to what's going to differentiate you from everything else that's out there, because software's gotten a lot easier to build today, and there are people creating these niche applications everywhere.

So how are you going to compete with those in that particular market, or in a particular category, what's going to be the differentiating feature that really wows them and gets them excited? 

I think that's where they should focus their efforts, rather than on the trailing list of small things that people say they want but that maybe don't actually move the needle for them.


Ekaterina (Sam) Howard   

So kind of to follow up on that. If the goal is not to expand and cover adjacent markets, if the goal is to be really truly amazing at this core set of features – isn't there this assumption that if you're not adding features, you're gonna lose people? Do you see that?

Katie Deloso   

I think you see that. I think it's about choosing those battles wisely.

The thing that I'm cautioning against is the natural inclination to run and build as many things as possible, and to build them in a very half-assed manner. 

Like, I think what I end up seeing is: "Somebody asked me for this thing, we made it in a week, we released it." 

Well, have you looked at it in a year? No. Has anybody? 

Those types of scenarios are when Frankenapps are really built. 

If you're going into it with "this feature is going to really differentiate us from the rest of the market; it's something that our customers need – we're hearing this over and over again – something that accomplishes this particular thing," then that feature probably should be built and thought about in a way that doesn't just check the box, but actually solves a unique problem in a way that sets you apart from the rest of the market.

So it's more about showcasing that you understand your customers, and you're going to be able to solve something for them, rather than like, "We check the box, we have this feature, this feature this feature and this feature. Great, done."


Ekaterina (Sam) Howard   

Okay, so it's almost like building for a comparison table versus building for the customer.


Katie Deloso  

Yeah, basically.


Ekaterina (Sam) Howard   

Yikes. That sounds like a huge red flag when you realize you're building for a comparison table. And I'm assuming this is not a UX best practice right? 


Katie Deloso  

No. 


Ekaterina (Sam) Howard   

Yeah, I guessed correctly.


Katie Deloso  

Building for a comparison table, it's basically like, this customer went, and they looked at this product, and they compared it to us. And this product did these four things. And we didn't do these four things. So we should do the same four things that that product has.

That doesn't get to the heart of what they were trying to do, or why they were trying to do it. Maybe they didn't need all four of those things. 

Maybe it was one thing that they really needed in order to be successful or to consider switching from their current toolset to yours. 

And it's understanding the underlying motivation behind what they're really saying when they say "This tool over here has four things that you don't" that helps you really focus in on which features should be built and which ones should maybe be put in the backlog for the future.

And maybe, if you hear more about it and get more details about it, maybe that makes sense to go after. 

But I feel like too many people run with this idea of like, we have to have parity with every single competitor in the market. 

And that's just really hard to do – and to do it really, really well.

Ekaterina (Sam) Howard  

Okay, so kind of sidetracking, but who should own those questions? Sales will probably hear about this, but it sounds like there's a little more nuance than, you know, "These four things need to happen for me to be able to sell, come on, people."

Katie Deloso   

I mean, this is the perpetual problem, right? 

This is why I sometimes joke that I feel like I'm a therapist. Because sales would come in and say, "I can't close this big deal until we have these four things." 

And product will come back and say, "These eight things aren't feasible." 

And then at the end, it'll get dumped on marketing: "Marketing, go market this." So everybody's doing their own thing.

So it's hard, it's really hard to pull these different, competing motivations together. 

Product has limitations on what can be done in a certain amount of time. 

Sales wants to close that deal. 

And marketing wants to pull in qualified leads, but sometimes they're being handed things after the fact – people are kind of tossing it over and saying, "Great, can you market that?"

So it becomes this cycle. The best teams that I've seen work together on these kinds of problems: sales is able to either dig at that motivation, or they're able to really nicely hand those people off to somebody on the product team who can dig in and understand a little bit more about what's going on.

And then, of course, keeping the rest of the team in the loop as to what's happening so that planning is actually happening and they're not just releasing a bunch of stuff in order to close that deal.


Ekaterina (Sam) Howard 

So when you see those teams work together, is it a culture thing? Is it an organizational structure thing? Is there some other secret ingredient that people need to know about?


Katie Deloso   

No, I mean, it's a combination of culture and structure.

And I think it's more culture than anything: it's an openness to digging in and understanding what those real problems are, to asking those kinds of questions, and to taking kind of an experimental approach, where they're more like a scientist than a producer of things, if that makes sense.

So they're trying to understand the why and test hypotheses rather than, let's just get stuff cranking out the door. 

I hear a lot of "They asked for it, we'll just release it. And we'll see how it does." 

I call that a bad iteration. Because they're like, we'll just iterate, we'll launch it and we'll iterate on it. So it's this idea of like, launch it, and we'll see. 

Without a true hypothesis behind what we're going to learn, or a thought as to why it could be successful. It's just, "Somebody told me to do this, and I wanted to see how it does."

Ekaterina (Sam) Howard 

In an ideal world, what does that new feature testing look like? First of all, what does a good hypothesis look like?


Katie Deloso   

A good hypothesis would basically be being able to say, "I think my customers need to do something, and I think I've solved it by doing it in this way, and I think it'll create this amount of value for them" – and then seeing: does it actually create that value? And does it actually get people excited and in the door on the marketing and sales side of things? What was the second part of your question? I forgot.


Ekaterina (Sam) Howard   

Okay, so we have a hypothesis. Now we build a feature and we're launching it – how do we measure it, and how do we decide whether and when we need to iterate?


Katie Deloso  

Yep. So you launched the feature, and there are a couple of ways that people approach this. This is a little outside of my typical area of expertise. 

I know a lot of companies will actually monitor usage on it. If they're using a product like Amplitude or something along those lines, they're able to segment and say which of our paying customers are using these features, and how often.

And then you'll also get signals on the sales and marketing side, when it comes time for renewal as to which features are most important to them. 

A lot of times, you'll also see this in their paywall structure. 

So if there are paywalls around a certain feature, you can also see if that's the reason why somebody might upgrade and go down that upgrade path. That's also a really good indicator that there's value there, because they're willing to pay more for it.
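For illustration, here's a minimal sketch of what that kind of feature-usage instrumentation might look like with Amplitude's browser SDK – the event name, property names, and plan values are hypothetical, just to show the shape of it:

```typescript
// Minimal sketch: tracking feature usage so it can later be segmented
// by plan. Event and property names here are made up for illustration.
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_AMPLITUDE_API_KEY');

// Record the user's plan as a user property, so usage can be segmented
// into paying vs. free customers later.
const identify = new amplitude.Identify().set('plan', 'paid');
amplitude.identify(identify);

// Fire one event per meaningful feature interaction, not per click.
export function trackFeatureUsed(feature: string): void {
  amplitude.track('Feature Used', { feature });
}

// e.g. trackFeatureUsed('bulk-export') when an export completes.
```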

Ekaterina (Sam) Howard   

So another thing that I'm curious about is deciding when it's enough research.


Katie Deloso   

Ooooh, that's a hard one!


Ekaterina (Sam) Howard   

Is there ever enough research?



Katie Deloso

So it's funny, because it feels like this very difficult-to-find thing. But it becomes very clear when you're hearing the same things over and over again.

And that's usually how you know there's been enough research. It generally happens much faster than most people think – people think you need hundreds and hundreds of interviews to get to that level.

On the qualitative side, we're not looking for statistical significance, we're looking for those motivating factors, the whys. 

And if somebody runs into a usability problem, my theory is, if one person runs into it, more people are going to run into it. Let's try to fix it rather than just leaving it out there. 

But generally, what happens is you're looking at your research results and you're like, "Yeah, I'm hearing the same three things." It starts to happen on every call: I'm hearing the same three things over and over again.

Then you know you've got enough qualitative research to run with, at least for the short term. I'm a big fan of keeping that testing process going, especially as you're building more and more of your product over time, so building that base is really helpful. 

So you can kind of consistently run research, more often at the beginning as you're building kind of initial versions of a new feature, and then you can kind of taper off over time.

Ekaterina (Sam) Howard   

So there's the feature feedback, right? There's the product, and then there is planning for the next set of features, or like the general product direction. Again, in this magical ideal world where none of the restrictions we have exist, if you could do anything for a startup, what would that look like?

Katie Deloso   

It depends on the company. And it also depends on what they're doing. 

So let's say you're building feature after feature after feature. Which, you know, is pretty rare. Most companies will sprint: they'll build a bunch of new features, then they move into marketing, sales, and support mode. And then they come back and do more – they learn, they hear from sales and marketing what is resonating and what's not, and they'll come back and say, "Great, we have a new feature we want to release," or "We want to expand into this other market, so we're going to try to release a new product or feature set."

So during those periods of more intense product development, I'd love to get a couple of calls in each week. It's not something that's crazy – a handful of calls, 3 to maybe 10. If there's a lot of uncertainty, maybe that 10-ish mark is helpful, but generally I try to stay in the range of 5 to 10 interviews for a particular feature set, because that's really all you need to get a good indication of what's working and what's not working, and then keep running that continuously as you're designing out new things.

If you're not actively designing new product features, then once a month, just go interview a couple people on their use of your product, see what their issue is... 

You know, product teams are usually working on something, so it's just a matter of deciding on where to focus, and what you want to learn. 

In an ideal world, they build a repository of people that they can reach out to – even a database. They know who those people are, they generally know what their interests are, because they may have different types of customers who are more interested in different types of features, and they can call on them when needed.

That would be an ideal-world scenario. 

One thing I've been particularly interested in is whether anybody has created a really fantastic research database. This has been the holy grail for me for a long time – is anybody doing anything interesting around putting their findings in a single repository and tagging them in an interesting way?

I know people have built this on Notion and in Google Drive. And I haven't seen one that's amazing yet. But maybe with AI, things will get more interesting, we'll see.

Ekaterina (Sam) Howard   

So have you seen examples of Google Drive based databases that actually get used?

Katie Deloso   

No, that's what I'm saying. I'm saying that they're awful.


Ekaterina (Sam) Howard   

Yes!


Katie Deloso  

And so people are like, digging for this magical insight they can't find because it's just data everywhere. 

And so it usually ends up living in the researcher's head, right, and they become this resource of information where people are like, "Well, what do you think about this?" and then they have to try to pull from what they remember – which, you know, has all kinds of biases in it too, like recency bias and all kinds of stuff.

So that becomes tricky as well. I don't know if I've seen anybody who's cracked this nut yet. But maybe it's out there and I just don't know about it.
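If you were to roll your own, even a simple tagged schema goes a long way. Here's a minimal, hypothetical sketch of what one findings record might look like – all field names are made up for illustration:

```typescript
// A minimal sketch of a tagged research-findings record for a homegrown
// repository. Every field name here is hypothetical.
interface ResearchFinding {
  id: string;
  date: string;             // ISO date of the session
  participant: string;      // anonymized label, e.g. "P-014"
  quote: string;            // verbatim quote or observation
  tags: string[];           // e.g. ["onboarding", "pricing", "usability"]
  severity?: 'low' | 'medium' | 'high';
  sourceSession: string;    // ID or link to the recording/notes
}

// Tagging is what makes the repository searchable later:
function findByTag(findings: ResearchFinding[], tag: string): ResearchFinding[] {
  return findings.filter((f) => f.tags.includes(tag));
}
```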

[[BREAKOUT]]

As it happens, we actually stumbled on a pretty neat example of a research repository living in a Slack channel of all places – I’ll link to it in the transcript.

If you have an example of a research repository that helps everyone build better hypotheses and make more informed decisions, please reach out and share it with us!

Coming up next: talking about a project that included 120 user research sessions – and why it was actually less overwhelming than it sounds. 



Ekaterina (Sam) Howard  

Okay, can we talk about Quick Sprout? 


Katie Deloso   

Sure.


Ekaterina (Sam) Howard  

Over 120 user research sessions, why so many?!


Katie Deloso   

This was over the course of a long time – I worked on Quick Sprout for many months. So if you average, let's say, 5 to 10 a week, that's only like three months' worth of interviews. And they were also developing a brand new product at the time.

So this was something that was not in existence – it was in Neil's head – and we were basically moving from stuff that was completely amorphous. So a lot of that upfront work was understanding the customer base, understanding who's using this, who we're actually creating this for, and then taking it all the way through to an actual product.

A long period of time, from very fuzzy to something that's in focus. So that's why so many. Those aren't unique people, either – some of these are sessions where the same person comes back a couple of weeks later, a month or two later, and we're following up on something that we've talked about previously.

Ekaterina (Sam) Howard  

When you do research interviews, or user research sessions, what is the point at which you can no longer be this human database walking around that remembers all of the data points and can spit out an answer? When does it become overwhelming?

Katie Deloso  

Quickly.

But the good news is, because I'm also designing a lot of times, those research findings are getting translated into design hypotheses. 

So it's basically getting moved from my brain – you know, a list of findings and notes – into, "Well, here's our hypothesis. Let's design it out. And then let's test it."

So it becomes an iterative process where the research is getting rolled into design, and then design is being tested and it keeps getting iterated on and on and on.

It's not like this big snowball of 120 research sessions. It's: we did 10 sessions, we learned this, great, we designed this. All right, we showed this prototype that was designed, people started to use it, now we have a bunch of new hypotheses. Go back to designing and iterating on the design, and it just becomes this kind of cycle.

It's not like this big research study where we interview hundreds of people, we synthesize data findings. I know large companies do that kind of thing, but we tend to take the scrappy approach to it.


Ekaterina (Sam) Howard 

Fair. Okay, so with UX design, and then also marketing and sales happening at some point, once it's time to launch, how does working together with sales and marketing play out?

Katie Deloso  

It's interesting, because I usually come in as a consultant, so I'll have a little bit more latitude to go talk to sales and marketing, as long as I ask permission. So it really is unique to each team. What I'll try to do is work with them to understand sales: this is how we're demoing, or this is a big part of our pitch or our initial deck, and I'll need to make sure that the product sells itself in those meetings.

So if I understand what they're going to demo, I can also think about how the product experience can reflect what they're demoing and reflect the value proposition, like we talked about earlier, where the product is showing off its value.

Sales also often plays a huge role in that, and thinking through how the product works in the demo process – or, even if it's just sitting in a static sales doc, how that looks – is really helpful and draws the two together.

It's very similar for marketing: a lot of times marketing is thinking about updating the website, updating collateral, all that kind of stuff. So it's thinking about how the product can be shown, either in little snippets or even in an open demo – that's become trendy lately, keeping an open version of the product so people can play with it on the site – and how those experiences might work. It's helpful for both teams: marketing is able to showcase the value proposition, and on the product side, we're able to reflect that idea too.

Ekaterina (Sam) Howard   

So open demos - is there a case where it's a bad idea to let people play with the product?

Katie Deloso   

I mean, if it's hard to use.

I'm sure there are bad cases. And this is relatively new. I've been watching this right now and seeing what people are saying and I'm kind of watching this debate to see how it plays out.

I've seen people reporting that it's working really well for them to do open demos. I assume that they had a really solid product experience, that the product speaks for itself and sells itself.

I think if the product does that, then an open demo is probably fantastic.

In cases where the product is either hard to use, or doesn't sell itself and explain why it's valuable, you might lose out with an open demo. But at the same time, if you get those people to sign up for a free trial instead, they're the same people who are going to be your tire kickers.

They sign up. They play with it for two minutes. And they're like, "Nah, not what I thought it was." And they instantly churn. 

So I'm guessing people are seeing higher activation rates because they're weeding out the tire kickers at the very beginning of the conversion funnel. So they're probably like, "Our metrics look great on activation," because they've culled down the people who make it through to the conversion point.


Ekaterina (Sam) Howard   

Okay, so can we unpack what "the product that sells itself" might mean?


Katie Deloso   

So earlier, when I was talking a bit about reinforcing value, “a product that sells itself” does a couple of things. 

First, it delivers on the promises that it made. 

So sales and marketing promise things, and the product actually does those things, and does them pretty well.

The second is that it shows you that it is doing this. 

So a really good example of this, actually... I was just talking to somebody about this the other day. And they were saying, "Our load times are insanely fast, so we can take in all this data and spit it back out at you. The results are out in seconds – milliseconds." And the feedback that they'd gotten was, "Well, it didn't seem like it was a lot of work – you went from point A to Z very quickly here."

So I actually advocated for giving a very brief loader in the middle, even though it didn't technically take that long to load and while it's spinning, it's explaining what it's doing on the backend in order to generate this amazing result for them.

So in the process of loading, it's selling itself to you – it's explaining why it's valuable to you.

I think that's a more concrete example of a product that's actually selling itself. Those loaders are actually really valuable. The first time I implemented one, we did have a real load time: it was an AI application that was doing a lot of work on the backend, and it was gonna take 30 seconds to get from the previous screen to the next screen.

So in that case the loader was actually hiding how long it was taking to process, while explaining that there's this very complex process happening behind the scenes.
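As an illustration, here's a minimal React sketch of that pattern – a loader that stays up for a minimum readable time and narrates what's happening, even if the result comes back almost instantly. The component name, prop, and copy are all hypothetical:

```typescript
// Minimal sketch of a value-reinforcing loader: hold the loading state
// for a minimum readable duration and explain the work being done.
// All names and copy here are hypothetical.
import { useEffect, useState } from 'react';

const MIN_LOADER_MS = 1200; // long enough to read one message

export function ValueLoader({ fetchResult }: { fetchResult: () => Promise<string> }) {
  const [result, setResult] = useState<string | null>(null);

  useEffect(() => {
    const start = Date.now();
    fetchResult().then((r) => {
      // Even if the call returns in milliseconds, keep the loader up
      // until the minimum display time has passed.
      const remaining = Math.max(0, MIN_LOADER_MS - (Date.now() - start));
      setTimeout(() => setResult(r), remaining);
    });
  }, [fetchResult]);

  if (result === null) {
    return <p>Analyzing thousands of data points to build your report…</p>;
  }
  return <p>{result}</p>;
}
```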

Ekaterina (Sam) Howard

Are there some other ways you could do that?

Katie Deloso   

So there are simple things like reinforcing value.

A really straightforward example of this is, when you check out on Instacart, they'll tell you how many hours you've saved by not going to the store today. 

That's an immediate reflection of value. 

If it's a time saver, that one is super straightforward, right? 

There are other ways [where] the application just basically tells you [what] it's doing: it says, "I'm doing my job – this is the job I told you I'm gonna do, and here's how, here's me succeeding at it." That's a very clear way to do it.

I'm trying to think of some other really good examples of this. It's usually done, again, in those small bits of text throughout the application. So success states are a great example of where you can inject this value.

It could even be putting in little timers – little estimated amounts of time that something's going to take – just so they can kind of mentally go, "Wow, so it can take me two minutes here! It used to take me longer to go through this process by hand!"

So it really depends on what the value prop is that sales and marketing are selling, and what the value of the product is overall – what job it's solving for – and then reflecting that either in the copy or at different stages in the application.


[[BREAKOUT]]

All of that makes sense – but what is the best way to show off your product’s value without giving it away? As you might imagine, there is no definitive answer. 

Instead, Katie will share some ideas on how to think about choosing a pricing model and paywalling – or not – access to all of your product features.



Ekaterina (Sam) Howard   

If you're in a freemium model, are there mental shortcuts or frameworks that would be helpful for founders who are thinking, "Do I even keep those people, those freeloaders that are not converting on my freemium plan, or do I just accept that for every 1 paid customer, I'll have 50 of those?" And what are the tradeoffs?


Katie Deloso   

Yeah, that's a tough question. 

I think one part of it is: how much are those free customers actually costing you at the end of the day?

I think with the rise of some of the AI applications right now, those free customers can become expensive fast, really, really fast. 

Each call is costing you money. Multiply that out by thousands of potential freeloaders, and that's something that can get wild very quickly.

And so I think it's about weighing the cost of having these free customers versus the chance that your upsell may capture them at some point, even if they've been around for five years. 

You know, if it's not costing you much, and you can keep marketing to them, and marketing to them isn't really a huge suck on your resources, why not?

There's kind of like a why not question. 

But if they become a problem from a support perspective – they're constantly asking for help... I was recently reading somebody's post saying that their free customers were 90% of their support requests. That's insane, right? They're not paying, it seems like they're unlikely to convert, and they're sucking resources out of the company in support requests and all this other stuff.

So if they're not a drain on resources, I don't have a strong opinion as to why not. 

But I think in most cases, they end up having a cost to them. So it's really dependent on how expensive they are at the end of the day.

Ekaterina (Sam) Howard   

Can you have product-led growth if you're not letting people into your product?

Katie Deloso   

I think that's an interesting question.

I'm not saying no to all free customers, I just think they should have less cost associated with them. We don't want to have somebody sitting here for five years.

That's why a lot of companies will timebox their free trials to, say, two weeks, rather than just having a free version with paywalls where the free version lives on forever.

I think that's why so many companies do it that way because otherwise, you could have somebody say, "Hey, we're using a free feature for forever." 

So it's more about how you're structuring that free plan. Are you creating paywalls where they're eventually going to hit a usage limit – and that happens fairly quickly?

And the AI application example I gave you earlier [is] probably one of those. I see a ton of companies doing "You get five credits; you run out of credits, that's it." Or they do a 14-day free trial and try to upsell them from there.
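For what it's worth, the mechanics of that credit-style paywall are simple. A minimal sketch, with hypothetical names throughout:

```typescript
// Minimal sketch of a usage-credit paywall: "you get five credits,
// you run out of credits, that's it." All names are hypothetical.
interface Account {
  plan: 'free' | 'paid';
  creditsRemaining: number;
}

// Returns true if the action may run; free-plan usage is metered.
function consumeCredit(account: Account): boolean {
  if (account.plan === 'paid') return true;        // paid users aren't metered
  if (account.creditsRemaining <= 0) return false; // paywall hit: prompt upgrade
  account.creditsRemaining -= 1;
  return true;
}

// Usage: gate each expensive call (e.g. an AI request) behind the check,
// and show the upgrade prompt when consumeCredit() returns false.
```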


Ekaterina (Sam) Howard   

So if we're doing a free trial, do you have any tips on how to think about the feature breakdown – paywalls on certain features versus keeping everything available and doing this reverse thing, where you end up getting downgraded if you don't start paying?

Katie Deloso   

I don't have a strong opinion on these, honestly.

I've seen it work really well for some companies where you open up all the premium features, and then you take them away. 

And in cases where the upgrade features are harder to explain, that makes a lot of sense, you get them hooked on the feature set, and then you take it away. 

I've also seen the opposite work really well, where there are certain features that people really want. 

A great example of this would be a tool like CrunchBase or Apollo, where one of those features might be exporting a list of leads. 

They probably found people were signing up for trials, exporting the leads, and leaving. So the upgrade feature works really well in those scenarios, because otherwise you could try to use the free version for as much as you could and not end up converting at all. So when that upgrade is really, really clear, then it makes sense to start free with "Here are some upgrade features that you get when you start paying."

So I think either works, but it just depends.


Ekaterina (Sam) Howard  

So I'm hearing one more time yet again, as in all the conversations I've had so far, it depends on the customer.


Katie Deloso 

I know, I'm such a consultant, right?

Ekaterina (Sam) Howard   

Yeah, no – like, the whole point is to not have those shortcuts [that] are going to be one-size-fits-all.


Katie Deloso  

Right. I mean, it's really not. I rarely have a “This is the rule, stick to it” kind of a thing.


Ekaterina (Sam) Howard 

Well, when you do have them, are there any rules that people really do need to stick to?


Katie Deloso  

Research your customers!


Ekaterina (Sam) Howard   

Something that nobody wants to hear ever.


Katie Deloso   

I feel like a broken record. I feel like people are always like "What's the one thing?" 

Talk to your customers. And not just talk to them – show them something, get them to use it.

Because you'll learn so much from it, and it's generally so low-cost to do. It's not a lot of time – we're talking like an hour or two a week. It's not crazy expensive.

And I find that the insights from that are just so helpful in making decisions.


Ekaterina (Sam) Howard  

Okay, to convince the people who are like, "No, we're just not doing this, we're not talking to our customers" – what is the most spectacular Titanic-level product gotcha that you've been able to find and save the day with?


Katie Deloso   

Oh, Titanic-level gotcha. 

I mean, there have been entire application ideas that have been killed and scrapped because they weren't valuable. 

You know, they were going after a brand new feature set, and they decided, "Nope, this feature set is not actually valuable at all – we're going to completely go in a different direction here, we'll completely shift to something else." Or they completely shift markets, even: if the company is really early, deciding that the target market we thought was our ICP is not the ICP, and we need to go look at a completely different market for this product.

Both of those have happened.

So I think those are the monumental shifts that happen.

They're rare, though, honestly. It's usually fine-tuning, and that fine-tuning is often what makes the difference for so many companies – it kind of makes everything click and makes everything come together.

So it's usually not “We scrapped our product, or we went to go find a brand new market.” 

Generally, in those cases, the companies are very, very, very early. Usually, with more established companies, it's realizing that there's just a nuance there – a small tweak in the feature set, or even in just how the feature set is being perceived by the customers – that makes the biggest difference.


Ekaterina (Sam) Howard  

What does it look like when it finally clicks?


Katie Deloso   

It becomes easy for somebody to describe. So at the end of a research session, "Awesome. How would you explain this to a coworker or a friend?" I like to think of it like the bar test, "How would you shout this out to somebody in a bar that's really loud and crowded?”

I call it the "yellable why" – why I'm using this product.

So if you can say in a couple of words, why this is going to change your life or make your job easier, that's when you've found it. 

Or – and this is the really exciting thing – at the end of the research session, people sometimes will be like, "Is this a real product? Where can I go buy this?" If you're not talking to a current customer, you kind of know you've found something interesting if they want to go to the website and pay for it themselves. And that's happened a couple of times. That's kind of when you know we're onto something here – they're ready to take out their credit card.


Ekaterina (Sam) Howard   

But at the same time, this is not a research session, when you're asking them leading questions like "Wouldn't you buy this?"


Katie Deloso  

Nope. They're just playing with the prototype. 

So they're playing with the product, and basically at the end of all these calls, I always say, "Are there any other questions or anything I can help you with?" to try to wrap things up.

And so sometimes this comes up as a question of, "Where can I go get this thing?"


Ekaterina (Sam) Howard 

Unprompted. That's when you know you've made it.


Katie Deloso   

Yeah. It happens more often than you would actually imagine – I don't even know the numbers on it. If the product is really selling itself and reflecting that value, it does actually happen. Especially if they're from a general pool of people who don't have experience with the product – typically that's a much earlier product that maybe doesn't have an existing customer base.


Ekaterina (Sam) Howard 

Anything that you want everybody to know, in addition to the "do the research" part?


Katie Deloso  

Do the research, people. No, I know, it's so much more complex than it sounds. 

Yeah, obviously do the research. And I think people blow this up into a big thing – they think it's focus groups, where they have to rent a room and have people and moderators and stuff. There are much scrappier ways to do research. And anything helps, as long as you're not trying to sell people during the research session.

Don't sell them.

Let them talk.


Ekaterina (Sam) Howard   

It can be hard.


Katie Deloso   

It's so hard. It's really hard. Especially if you're a founder and you have a sales background – you just want to, I know you do. So the biggest thing that I caution against is: don't sell them, just listen to them.

<<Episode outro>>

If you’ve been listening to other episodes of “You’re Doing Growth Wrong,” you may have noticed that most of the conversations touch on the importance of listening to your prospects and customers. And – just as importantly – being open to not knowing what will come out of this conversation.

In my own conversations with early-stage founders I’ve noticed that they really crave certainty. Especially if they don’t have a background in marketing.

They ask questions like:

  • What’s the best CTA?

  • What would be a perfect email subject line?

  • What are some examples of the best B2B SaaS website copy?

  • Should I lead with results?


There is no best CTA. There's no "perfect" email subject line. The "best" B2B SaaS copy is the copy that performs – regardless of what copywriters / designers / other consultants think about it. And outside of best practices – which, to be clear, are table stakes – the only way to figure out what to lead with is to… yes! Do the research. Interpret the results. Build your hypotheses and launch.

Fortunately, you don’t have to reinvent the wheel when you do that.

I hope that this seasons’ conversations will help you avoid some of the most common mistakes I see founders and marketers make when they search for certainty instead of answers.


<<Season 2 – preview>>

To be honest, I didn’t know what I was getting into when I started recording these conversations. So thank you for listening – and thank you to the wonderful folks who joined me on this adventure!

In the next season of “You’re Doing Growth Wrong,” I’ll continue talking to consultants working with B2B SaaS startups – this time, with a twist. We’re going to do deep dives into the “how” of positioning, messaging and GTM. With examples from their work or from outstanding companies that are definitely doing it right.

Subscribe to my newsletter to get the first episode in your inbox once it’s released – some time in the fall of 2024.

See you in season 2.
